1. 21 Mar, 2018 1 commit
    • Rename 'smcc' to 'smccc' · 085e80ec
      Antonio Nino Diaz authored
      
      
      When the source code says 'SMCC' it is talking about the SMC Calling
      Convention. The correct acronym is SMCCC. This affects a few definitions
      and file names.
      
      Some files have been renamed (smcc.h, smcc_helpers.h and smcc_macros.S),
      but the old files have been kept for compatibility; they include the
      new ones with an ERROR_DEPRECATED guard.
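
      As a rough sketch (not the exact file contents), such a compatibility
      shim can be as simple as an old header that forwards to the new one:

        #ifndef SMCC_H
        #define SMCC_H

        #if ERROR_DEPRECATED
        #error "smcc.h is deprecated. Include smccc.h instead."
        #endif

        #include <smccc.h>

        #endif /* SMCC_H */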
      
      Change-Id: I78f94052a502436fdd97ca32c0fe86bd58173f2f
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
  2. 28 Feb, 2018 1 commit
  3. 27 Feb, 2018 1 commit
    • Add comments about mismatched TCR_ELx and xlat tables · 883d1b5d
      Antonio Nino Diaz authored
      
      
      When the MMU is enabled and the translation tables are mapped, data
      read/writes to the translation tables are made using the attributes
      specified in the translation tables themselves. However, the MMU
      performs table walks with the attributes specified in TCR_ELx. They are
      completely independent, so special care has to be taken to make sure
      that they are the same.
      
      This has to be done manually because it is not practical to have a test
      in the code. Such a test would need to know the virtual memory region
      that contains the translation tables and check that for all of the
      tables the attributes match the ones in TCR_ELx. As the tables may not
      even be mapped at all, this isn't a test that can be made generic.
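
      As a hedged illustration of what keeping them the same means in
      practice (the accessor names and macros below are assumptions, not
      necessarily the ones in the codebase): if the translation tables are
      mapped as Normal, Inner/Outer Write-Back Write-Allocate, Inner
      Shareable memory, then TCR_ELx must describe the table walks with the
      same attributes.

        /* TCR_EL1 walk attributes for TTBR0, encodings per the ARM ARM:
         * IRGN0[9:8] = 0b01 (Normal WBWA), ORGN0[11:10] = 0b01 (Normal WBWA),
         * SH0[13:12] = 0b11 (Inner Shareable). */
        #define TCR_IRGN0_WBWA           (0x1u << 8)
        #define TCR_ORGN0_WBWA           (0x1u << 10)
        #define TCR_SH0_INNER_SHAREABLE  (0x3u << 12)

        unsigned long tcr = read_tcr_el1();               /* assumed accessor */
        tcr |= TCR_IRGN0_WBWA | TCR_ORGN0_WBWA | TCR_SH0_INNER_SHAREABLE;
        write_tcr_el1(tcr);                               /* assumed accessor */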
      
      The flags used by enable_mmu_xxx() have been moved to the same header
      where the functions are.
      
      Also, some comments in the linker scripts related to the translation
      tables have been fixed.
      
      Change-Id: I1754768bffdae75f53561b1c4a5baf043b45a304
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
  4. 26 Feb, 2018 2 commits
    • BL1: Deprecate the `bl1_init_bl2_mem_layout()` API · 101d01e2
      Soby Mathew authored
      
      
      The `bl1_init_bl2_mem_layout()` API is now deprecated. The default weak
      implementation of `bl1_plat_handle_post_image_load()` calculates the
      BL2 memory layout and passes it in x1 (r1), which ensures
      compatibility with the deprecated API.
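
      A minimal sketch of such a weak default, assuming the usual TF-A types
      (meminfo_t, image_desc_t); the details of the real implementation
      differ:

        #pragma weak bl1_plat_handle_post_image_load

        int bl1_plat_handle_post_image_load(unsigned int image_id)
        {
                static meminfo_t bl2_mem_layout;
                image_desc_t *desc;

                if (image_id != BL2_IMAGE_ID)
                        return 0;

                desc = bl1_plat_get_image_desc(BL2_IMAGE_ID);
                assert(desc != NULL);

                /* Derive the memory layout BL2 may use (details omitted). */

                /* Pass its address to BL2 in arg1, i.e. x1 (r1 on AArch32). */
                desc->ep_info.args.arg1 = (uintptr_t)&bl2_mem_layout;

                return 0;
        }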
      
      Change-Id: Id44bdc1f572dc42ee6ceef4036b3a46803689315
      Signed-off-by: Soby Mathew <soby.mathew@arm.com>
    • Add image_id to bl1_plat_handle_post/pre_image_load() · 566034fc
      Soby Mathew authored
      
      
      This patch adds an argument to the bl1_plat_post/pre_image_load() APIs
      to make them more future-proof. The default implementations of these
      are moved to the `plat_bl1_common.c` file.
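
      Per the title of this commit, the hooks now take the image id, so the
      prototypes look like this (sketch):

        int bl1_plat_handle_pre_image_load(unsigned int image_id);
        int bl1_plat_handle_post_image_load(unsigned int image_id);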
      
      These APIs are now invoked appropriately in the FWU code path, before
      and after image loading by BL1, and are no longer restricted to
      LOAD_IMAGE_V2.
      
      The patch also reorganizes some common platform files. The previous
      `plat_bl2_el3_common.c` and `platform_helpers_default.c` files are
      merged into a new `plat_bl_common.c` file.
      
      NOTE: The addition of an argument to the above mentioned platform APIs
      is not expected to have a great impact because these APIs were only
      recently added and are unlikely to be used.
      
      Change-Id: I0519caaee0f774dd33638ff63a2e597ea178c453
      Signed-off-by: Soby Mathew <soby.mathew@arm.com>
  5. 21 Feb, 2018 1 commit
  6. 02 Feb, 2018 1 commit
  7. 01 Feb, 2018 1 commit
  8. 18 Jan, 2018 1 commit
  9. 29 Nov, 2017 1 commit
    • Replace magic numbers in linkerscripts by PAGE_SIZE · a2aedac2
      Antonio Nino Diaz authored
      
      
      When defining different sections in linker scripts, they need to be
      aligned to multiples of the page size. In most linker scripts this is
      done by aligning to the hardcoded value 4096 instead of PAGE_SIZE.
      
      This may be confusing when looking through the codebase, as 4096 is
      also used in some places that aren't meant to be a multiple of the
      page size.
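
      In a linker script (which is run through the C preprocessor) this
      simply means, for example:

        /* Before: magic number */
        . = ALIGN(4096);

        /* After: the intent is explicit */
        . = ALIGN(PAGE_SIZE);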
      
      Change-Id: I36c6f461c7782437a58d13d37ec8b822a1663ec1
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
  10. 12 Jul, 2017 1 commit
    • Fix order of #includes · 2a4b4b71
      Isla Mitchell authored
      
      
      This fix modifies the order of system includes to meet the ARM TF coding
      standard. There are some exceptions, made in order to retain header
      groupings, minimise changes to imported headers, and handle headers
      placed within #if and #ifndef statements.
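
      For illustration, the convention is simply to keep each group of
      includes alphabetically ordered, e.g. (file names are only examples):

        #include <arch.h>
        #include <arch_helpers.h>
        #include <assert.h>
        #include <debug.h>
        #include <platform_def.h>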
      
      Change-Id: I65085a142ba6a83792b26efb47df1329153f1624
      Signed-off-by: Isla Mitchell <isla.mitchell@arm.com>
  11. 23 Jun, 2017 3 commits
  12. 21 Jun, 2017 2 commits
    • Fully initialise essential control registers · 18f2efd6
      David Cunado authored
      
      
      This patch updates the el3_arch_init_common macro so that it fully
      initialises essential control registers rather than relying on hardware
      to set the reset values.
      
      The context management functions are also updated to fully initialise
      the appropriate control registers when initialising the non-secure and
      secure context structures and when preparing to leave EL3 for a lower
      EL.
      
      This gives better alignment with the ARM ARM, which states that software
      must initialise RES0 and RES1 fields with 0 / 1.
      
      This patch also corrects the following typos:
      
      "NASCR definitions" -> "NSACR definitions"
      
      Change-Id: Ia8940b8351dc27bc09e2138b011e249655041cfc
      Signed-off-by: David Cunado <david.cunado@arm.com>
    • Fix issues in FWU code · ee05ae16
      Soby Mathew authored
      
      
      This patch fixes the following issues in Firmware Update (FWU) code:
      
      1. The FWU layer maintains a list of loaded image ids. While
         checking for image overlaps, INVALID_IMAGE_IDs were not being
         skipped. The patch adds code to skip INVALID_IMAGE_IDs.
      
      2. While resetting the state corresponding to an image, the code
         now resets the memory used by the image only if the image was
         previously copied via the IMAGE_COPY SMC. This prevents the
         invalid zeroing of memory for images that are not copied but are
         directly authenticated via the IMAGE_AUTH SMC.
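
      The check described in item 2 could look roughly like this sketch
      (field names follow TF-A's image_desc_t, but the logic is illustrative,
      not the actual patch):

        /* Only clear the image memory if data was actually copied into it
         * via FWU_SMC_IMAGE_COPY; directly authenticated images are left
         * untouched. */
        if ((image_desc->state == IMAGE_STATE_COPIED) ||
            (image_desc->state == IMAGE_STATE_COPYING)) {
                zeromem((void *)image_desc->image_info.image_base,
                        image_desc->copied_size);
        }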
      
      Change-Id: Idf18e69bcba7259411c88807bd0347d59d9afb8f
      Signed-off-by: Soby Mathew <soby.mathew@arm.com>
  13. 01 Jun, 2017 2 commits
    • FWU: Introduce FWU_SMC_IMAGE_RESET · 9d6fc3c3
      Antonio Nino Diaz authored
      
      
      This SMC serves as a means for the image loading state machine to go
      from the COPYING, COPIED or AUTHENTICATED states to the RESET state.
      Previously, this was only done when the authentication of an image
      failed or when the execution of the image finished.
      
      Documentation updated.
      
      Change-Id: Ida6d4c65017f83ae5e27465ec36f54499c6534d9
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
    • FWU: Check for overlaps when loading images · 128daee2
      Antonio Nino Diaz authored
      
      
      Added checks to FWU_SMC_IMAGE_COPY to prevent loading data into a
      memory region where another image's data is already loaded.
      
      Without this check, if two images are configured to be loaded in
      overlapping memory regions, one of them can be loaded and
      authenticated, and the copy function is still able to load data from
      the second image on top of the first one. Since the first image is
      still in the authenticated state, it can be executed, which could lead
      to the execution of arbitrary, unauthenticated code from the second
      image.
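
      The rejected condition is the standard interval-overlap test; a sketch
      (names are illustrative, and the end addresses are assumed to have
      already been checked for overflow):

        /* Two regions [base, base + size) overlap if each one starts
         * before the other one ends. */
        static int regions_overlap(uintptr_t base1, size_t size1,
                                   uintptr_t base2, size_t size2)
        {
                return (base1 < (base2 + size2)) && (base2 < (base1 + size1));
        }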
      
      Firmware update documentation updated.
      
      Change-Id: Ib6871e569794c8e610a5ea59fe162ff5dcec526c
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
  14. 15 May, 2017 1 commit
  15. 12 May, 2017 1 commit
    • AArch32: Rework SMC context save and restore mechanism · b6285d64
      Soby Mathew authored
      
      
      The current SMC context data structure `smc_ctx_t` and related helpers
      are optimized for the case when an SMC call does not result in a world
      switch. This was the case for the SP_MIN and BL1 cold boot flows. But
      the firmware update use case requires a world switch as a result of an
      SMC, and the current SMC context helpers were not helping very much in
      this regard. Therefore this patch makes the following changes to
      improve this:
      
      1. Add the monitor stack pointer, `sp_mon`, to `smc_ctx_t`
      
      The C runtime stack pointer in monitor mode, `sp_mon`, is added to the
      SMC context, and the `smc_ctx_t` pointer is cached in `sp_mon` prior
      to exit from Monitor mode. This makes it easier to retrieve the
      context when the next SMC call happens. As a result of this change,
      the SMC context helpers no longer depend on the stack to save and
      restore the register state.
      
      This aligns it with the context save and restore mechanism in AArch64.
      
      2. Add SCR in `smc_ctx_t`
      
      Adding the SCR register to `smc_ctx_t` makes it easier to manage this
      register state when switching between the non-secure and secure worlds
      as a result of an SMC call.
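
      Put together, the reworked context looks roughly like this (the field
      list is abridged and illustrative, not the exact structure):

        typedef struct smc_ctx {
                u_register_t r0;
                /* ... r1-r12, banked lr/spsr for the other modes ... */
                u_register_t lr_mon;
                u_register_t spsr_mon;
                u_register_t scr;     /* (2) cached SCR for world switches */
                u_register_t sp_mon;  /* (1) C runtime stack pointer in
                                         Monitor mode */
        } smc_ctx_t;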
      
      Change-Id: I5e12a7056107c1701b457b8f7363fdbf892230bf
      Signed-off-by: Soby Mathew <soby.mathew@arm.com>
      Signed-off-by: dp-arm <dimitris.papastamos@arm.com>
  16. 03 May, 2017 1 commit
  17. 02 May, 2017 1 commit
  18. 20 Apr, 2017 2 commits
    • Control inclusion of helper code used for asserts · aa61368e
      Antonio Nino Diaz authored
      
      
      Many asserts depend on code that is conditionally compiled based on the
      DEBUG define. This patch modifies the conditional inclusion of such code
      so that it is based on the ENABLE_ASSERTIONS build option.
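
      In other words, guards of the following form change (the helper name
      below is purely illustrative):

        /* Before: helper only built into DEBUG builds. */
        #if DEBUG
        static int addr_is_mapped(uintptr_t addr);
        #endif

        /* After: tied to the same option that controls assert() itself. */
        #if ENABLE_ASSERTIONS
        static int addr_is_mapped(uintptr_t addr);
        #endif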
      
      Change-Id: I6406674788aa7e1ad7c23d86ce94482ad3c382bd
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
    • tspd:FWU:Fix usage of SMC_RET0 · 7a317a70
      Antonio Nino Diaz authored
      
      
      SMC_RET0 should only be used when the SMC code works as a function that
      returns void. If the code of the SMC uses SMC_RET1 to return a value to
      signify success and doesn't return anything in case of an error (or the
      other way around), SMC_RET1 should always be used to return clearly
      identifiable values.
      
      This patch fixes two cases in which the code used SMC_RET0 instead of
      SMC_RET1.
      
      It also introduces the define SMC_OK, to be used when an SMC must
      return a value indicating success, in the same way that SMC_UNK is
      used in case of failure.
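
      A typical handler therefore ends up looking like this sketch:

        if (rc != 0)
                SMC_RET1(handle, SMC_UNK);   /* clearly identifiable failure */

        SMC_RET1(handle, SMC_OK);            /* clearly identifiable success */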
      
      Change-Id: Ie4278b51559e4262aced13bbde4e844023270582
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
  19. 31 Mar, 2017 2 commits
    • Add support for GCC stack protection · 51faada7
      Douglas Raillard authored
      
      
      Introduce new build option ENABLE_STACK_PROTECTOR. It enables
      compilation of all BL images with one of the GCC -fstack-protector-*
      options.
      
      A new platform function plat_get_stack_protector_canary() is introduced.
      It returns a value that is used to initialize the canary for stack
      corruption detection. Returning a random value will prevent an attacker
      from predicting the value and greatly increase the effectiveness of the
      protection.
      
      A message is printed at the ERROR level when a stack corruption is
      detected.
      
      To be effective, the global data must be stored at an address
      lower than the base of the stacks. Failure to do so would allow an
      attacker to overwrite the canary as part of an attack which would void
      the protection.
      
      The FVP implementation of plat_get_stack_protector_canary() is weak, as
      there is no real source of entropy on the FVP. It therefore relies on a
      timer's value, which could be predictable.
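
      A platform port with a real entropy source would implement the hook
      along these lines (sketch; the RNG call is hypothetical):

        u_register_t plat_get_stack_protector_canary(void)
        {
                /* Ideally a truly random value; the FVP falls back to a
                 * timer read, which is predictable. */
                return plat_get_random_value();
        }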
      
      Change-Id: Icaaee96392733b721fa7c86a81d03660d3c1bc06
      Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
    • Flush console where necessary · 0b32628e
      Antonio Nino Diaz authored
      
      
      Call console_flush() before execution either terminates or leaves an
      exception level.
      
      Fixes: ARM-software/tf-issues#123
      
      Change-Id: I64eeb92effb039f76937ce89f877b68e355588e3
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
  20. 20 Mar, 2017 1 commit
  21. 06 Feb, 2017 1 commit
    • Introduce unified API to zero memory · 308d359b
      Douglas Raillard authored
      
      
      Introduce the zeromem_dczva function on AArch64, which can handle
      unaligned addresses and makes use of the DC ZVA instruction to zero a
      whole block at a time. This zeroing takes place directly in the cache
      to speed it up without doing external memory accesses.
      
      Remove the zeromem16 function on AArch64 and replace it with an alias to
      zeromem. This zeromem16 function is now deprecated.
      
      Remove the 16-byte alignment constraint on __BSS_START__ in
      firmware-design.md as it is no longer mandatory (it used to comply
      with zeromem16 requirements).
      
      Change the 16-byte alignment constraints in SP_MIN's linker script to
      an 8-byte alignment constraint, as the AArch32 zeromem implementation
      is now more efficient on 8-byte aligned addresses.
      
      Introduce zero_normalmem and zeromem helpers in platform agnostic header
      that are implemented this way:
      * AArch32:
      	* zero_normalmem: zero using usual data access
      	* zeromem: alias for zero_normalmem
      * AArch64:
      	* zero_normalmem: zero normal memory  using DC ZVA instruction
      	                  (needs MMU enabled)
      	* zeromem: zero using usual data access
      
      Usage guidelines: in most cases, zero_normalmem should be preferred.
      
      There are 2 scenarios where zeromem (or memset) must be used instead:
      * Code that must run with MMU disabled (which means all memory is
        considered device memory for data accesses).
      * Code that fills device memory with null bytes.
      
      Optionally, the following rule can be applied if performance is
      important:
      * Code zeroing small areas (few bytes) that are not secrets should use
        memset to take advantage of compiler optimizations.
      
        Note: Code zeroing security-related critical information should use
        zero_normalmem/zeromem instead of memset to avoid removal by
        compilers' optimizations in some cases or misbehaving versions of GCC.
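
      Applying these guidelines, calls end up looking like the following
      (the buffer names are purely illustrative):

        zero_normalmem(&bl_params, sizeof(bl_params)); /* normal memory, MMU on */
        zeromem(shared_dev_buf, DEV_BUF_SIZE);         /* device memory / MMU off */
        memset(&tmp, 0, sizeof(tmp));                  /* small and non-secret */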
      
      Fixes ARM-software/tf-issues#408
      
      Change-Id: Iafd9663fc1070413c3e1904e54091cf60effaa82
      Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
  22. 30 Jan, 2017 1 commit
    • Report errata workaround status to console · 10bcd761
      Jeenu Viswambharan authored
      
      
      The errata reporting policy is as follows:
      
        - If an errata workaround is enabled:
      
          - If it applies (i.e. the CPU is affected by the errata), an INFO
            message is printed, confirming that the errata workaround has been
            applied.
      
          - If it does not apply, a VERBOSE message is printed, confirming
            that the errata workaround has been skipped.
      
        - If an errata workaround is not enabled, but would have applied had
          it been, a WARN message is printed, alerting that the errata
          workaround is missing.
      
      The CPU errata messages are printed by both BL1 (primary CPU only) and
      runtime firmware on debug builds, once for each CPU/errata combination.
      
      Relevant output from Juno r1 console when ARM Trusted Firmware is built
      with PLAT=juno LOG_LEVEL=50 DEBUG=1:
      
        VERBOSE: BL1: cortex_a57: errata workaround for 806969 was not applied
        VERBOSE: BL1: cortex_a57: errata workaround for 813420 was not applied
        INFO:    BL1: cortex_a57: errata workaround for disable_ldnp_overread was applied
        WARNING: BL1: cortex_a57: errata workaround for 826974 was missing!
        WARNING: BL1: cortex_a57: errata workaround for 826977 was missing!
        WARNING: BL1: cortex_a57: errata workaround for 828024 was missing!
        WARNING: BL1: cortex_a57: errata workaround for 829520 was missing!
        WARNING: BL1: cortex_a57: errata workaround for 833471 was missing!
        ...
        VERBOSE: BL31: cortex_a57: errata workaround for 806969 was not applied
        VERBOSE: BL31: cortex_a57: errata workaround for 813420 was not applied
        INFO:    BL31: cortex_a57: errata workaround for disable_ldnp_overread was applied
        WARNING: BL31: cortex_a57: errata workaround for 826974 was missing!
        WARNING: BL31: cortex_a57: errata workaround for 826977 was missing!
        WARNING: BL31: cortex_a57: errata workaround for 828024 was missing!
        WARNING: BL31: cortex_a57: errata workaround for 829520 was missing!
        WARNING: BL31: cortex_a57: errata workaround for 833471 was missing!
        ...
        VERBOSE: BL31: cortex_a53: errata workaround for 826319 was not applied
        INFO:    BL31: cortex_a53: errata workaround for disable_non_temporal_hint was applied
      
      Also update documentation.
      
      Change-Id: Iccf059d3348adb876ca121cdf5207bdbbacf2aba
      Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
  23. 20 Dec, 2016 4 commits
    • Fix integer overflows in BL1 FWU code · 949a52d2
      Sandrine Bailleux authored
      
      
      Before adding a base address and a size to compute the end
      address of an image to copy or authenticate, check that this
      won't result in an integer overflow. If it does, the input
      arguments are considered invalid.
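
      The check itself is the usual pattern of validating before adding
      (sketch; the exact error code is illustrative):

        /* Reject the request if base_addr + image_size would wrap around. */
        if (image_size > (UINTPTR_MAX - base_addr))
                return -ENOMEM;

        end_addr = base_addr + image_size;   /* now known not to overflow */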
      
      As a result, bl1_plat_mem_check() can now safely assume that the
      end address (computed as the sum of the base address and size
      of the memory region) doesn't overflow, as the validation is
      done upfront in bl1_fwu_image_copy/auth(). A debug assertion
      has been added nonetheless in the ARM implementation in order
      to help catch such problems, should bl1_plat_mem_check()
      be called in a different context in the future.
      
      Fixes TFV-1: Malformed Firmware Update SMC can result in copy
      of unexpectedly large data into secure memory
      
      Change-Id: I8b8f8dd4c8777705722c7bd0e8b57addcba07e25
      Signed-off-by: Sandrine Bailleux <sandrine.bailleux@arm.com>
      Signed-off-by: Dan Handley <dan.handley@arm.com>
    • Add some debug assertions in BL1 FWU copy code · 1bfb7068
      Sandrine Bailleux authored
      
      
      These debug assertions sanity check the state of the internal
      FWU state machine data when resuming an incomplete image copy
      operation.
      
      Change-Id: I38a125b0073658c3e2b4b1bdc623ec221741f43e
      Signed-off-by: Sandrine Bailleux <sandrine.bailleux@arm.com>
    • bl1_fwu_image_copy() refactoring · b38a9e5c
      Sandrine Bailleux authored
      
      
      This patch refactors the code of the function handling a FWU_AUTH_COPY
      SMC in BL1. All input validation has been moved upfront so it is now
      shared between the RESET and COPYING states.
      
      Change-Id: I6a86576b9ce3243c401c2474fe06f06687a70e2f
      Signed-off-by: Sandrine Bailleux <sandrine.bailleux@arm.com>
      Signed-off-by: Dan Handley <dan.handley@arm.com>
    • Minor refactoring of BL1 FWU code · 9f1489e4
      Sandrine Bailleux authored
      
      
      This patch introduces no functional change; it just changes
      the serial console output.
      
       - Improve accuracy of error messages by decoupling some
         error cases;
      
       - Improve comments;
      
       - Move declaration of 'mem_layout' local variable closer to
         where it is used and make it const;
      
       - Rename a local variable to clarify whether it is a source
         or a destination address (base_addr -> dest_addr).
      
      Change-Id: I349fcf053e233f316310892211d49e35ef2c39d9
      Signed-off-by: Sandrine Bailleux <sandrine.bailleux@arm.com>
      Signed-off-by: Dan Handley <dan.handley@arm.com>
  24. 14 Dec, 2016 1 commit
  25. 05 Dec, 2016 1 commit
    • Define and use no_ret macro where no return is expected · a806dad5
      Jeenu Viswambharan authored
      
      
      There are many instances in ARM Trusted Firmware where control is
      transferred to functions from which return isn't expected. Such jumps
      are made using the 'bl' instruction to provide the callee with the
      location from which it was jumped to. Additionally, debuggers infer the
      caller by examining where the 'lr' register points. If a 'bl' of the
      nature described above falls at the end of an assembly function, 'lr'
      will be left pointing to a location outside of the function range. This
      misleads the debugger's back trace.
      
      This patch defines a 'no_ret' macro to be used when jumping to functions
      from which return isn't expected. The macro ensures that the 'bl'
      instruction is used for the jump and, for debug builds, places a 'nop'
      instruction immediately thereafter (unless instructed otherwise) so as
      to leave 'lr' pointing within the function range.
      
      Change-Id: Ib34c69fc09197cfd57bc06e147cc8252910e01b0
      Co-authored-by: Douglas Raillard <douglas.raillard@arm.com>
      Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
  26. 23 Sep, 2016 1 commit
    • AArch32: Fix detection of virtualization support · fabf3017
      Yatharth Kochar authored
      The Virtualization field in the ID_PFR1 register has only 2
      valid values (0 or 1), but it was incorrectly checked against an
      unrelated value tied to the SPSR register instead.
      
      This patch fixes the detection of virtualization support by
      using the valid values in BL1 context management code.
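
      A valid check reads the Virtualization field, bits [15:12] of ID_PFR1
      (sketch; the register accessor name is an assumption):

        #define ID_PFR1_VIRTEXT_SHIFT   12
        #define ID_PFR1_VIRTEXT_MASK    0xfu

        unsigned int virt_ext = (read_id_pfr1() >> ID_PFR1_VIRTEXT_SHIFT) &
                                ID_PFR1_VIRTEXT_MASK;
        if (virt_ext != 0U) {
                /* Virtualization extensions (Hyp mode) are implemented. */
        }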
      
      Change-Id: If12592e343770e1da90f0f5fecf0a3376047ac29
  27. 21 Sep, 2016 1 commit
    • AArch32: Add generic changes in BL1 · f3b4914b
      Yatharth Kochar authored
      This patch adds generic changes in BL1 to support AArch32 state.
      New AArch32 specific assembly/C files are introduced and
      some files are moved to AArch32/64 specific folders.
      BL1 for AArch64 is refactored but functionally identical.
      BL1 executes in Secure Monitor mode in AArch32 state.
      
      NOTE: BL1 in AArch32 state ONLY handles BL1_RUN_IMAGE SMC.
      
      Change-Id: I6e2296374c7efbf3cf2aa1a0ce8de0732d8c98a5
  28. 20 Sep, 2016 1 commit
    • Changes for new version of image loading in BL1/BL2 · 42019bf4
      Yatharth Kochar authored
      This patch adds changes in BL1 & BL2 to use the new version
      of image loading to load the BL images.
      
      Following are the changes in BL1:
        -Use new version of load_auth_image() to load BL2
        -Modified `bl1_init_bl2_mem_layout()` to remove the use of
         `reserve_mem()` and to calculate `bl2_mem_layout`.
         The `bl2_mem_layout` calculation now assumes that BL1 RW
         data is at the top of bl1_mem_layout, which is more
         restrictive than the previous BL1 behaviour.
      
      Following are the changes in BL2:
        -`bl2_main.c` is refactored and all the functions
         for loading BLxx images are now moved to `bl2_image_load.c`.
         `bl2_main.c` now calls a top-level `bl2_load_images()` to
         load all the images that are applicable in BL2.
        -Added new file `bl2_image_load_v2.c` that uses new version
         of image loading to load the BL images in BL2.
      
      All the above changes are conditionally compiled using the
      `LOAD_IMAGE_V2` flag.
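
      Conceptually, the BL2 entry point selects the path at build time
      (a fragment, not the actual code):

        #if LOAD_IMAGE_V2
                next_bl_ep_info = bl2_load_images();  /* new loading framework */
        #else
                /* ... existing image loading path ... */
        #endif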
      
      Change-Id: Ic6dcde5a484495bdc05526d9121c59fa50c1bf23
  29. 22 Aug, 2016 1 commit
    • Remove looping around `plat_report_exception` · 5bbc451e
      Yatharth Kochar authored
      This patch removes the tight loop that calls `plat_report_exception`
      for unhandled exceptions in AArch64 state.
      The new behaviour is to call `plat_report_exception` only once,
      followed by a call to `plat_panic_handler`.
      This allows platforms to take platform-specific action when
      there is an unhandled exception, instead of always spinning
      in a tight loop.
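
      In pseudo-C, the unhandled exception path now reduces to (sketch):

        plat_report_exception(exception_type);  /* report exactly once */
        plat_panic_handler();                   /* platform decides; does
                                                   not return */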
      
      Note: This is a subtle break in behaviour for platforms that
            expect `plat_report_exception` to be continuously executed
            when there is an unhandled exception.
      
      Change-Id: Ie2453804b9b7caf9b010ee73e1a90eeb8384e4e8
  30. 09 Aug, 2016 1 commit