1. 22 May, 2018 1 commit
  2. 07 Apr, 2018 2 commits
    • fix instruction address range limitation · b4ad9768
      Jiafei Pan authored
      
      
      The adr instruction requires the label's offset from the address of
      the instruction to be within the range +/-1MB. If the option
      "BL2_IN_XIP_MEM" is set to '1', in some cases BL2's RW memory will
      not be within +/-1MB of BL2's RO memory region, so we need to use
      the ldr instruction to cover this case.
      Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com>
      b4ad9768
    • Add support for BL2 in XIP memory · 7d173fc5
      Jiafei Pan authored
      
      
      In some use-cases BL2 will be stored in eXecute In Place (XIP) memory,
      like BL1. In these use-cases, it is necessary to initialize the RW sections
      in RAM, while leaving the RO sections in place. This patch enables this
      use-case with a new build option, BL2_IN_XIP_MEM. For now, this option
      is only supported when BL2_AT_EL3 is 1.
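
      As an illustration only, initialising the RW sections for an image that
      executes in place might look like the sketch below; the linker-symbol
      names are hypothetical, not necessarily the ones used by this patch.

        #include <string.h>

        /* Hypothetical linker symbols: data load address (XIP flash), data run
         * address (RAM) and the BSS bounds. */
        extern char __DATA_ROM_START__[], __DATA_RAM_START__[], __DATA_RAM_END__[];
        extern char __BSS_START__[], __BSS_END__[];

        static void bl2_xip_init_rw(void)
        {
                /* Copy initialised data from XIP memory to its RAM run address. */
                memcpy(__DATA_RAM_START__, __DATA_ROM_START__,
                       (size_t)(__DATA_RAM_END__ - __DATA_RAM_START__));

                /* Zero the BSS so the C runtime starts from a known state. */
                memset(__BSS_START__, 0, (size_t)(__BSS_END__ - __BSS_START__));
        }
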
      Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com>
      7d173fc5
  3. 18 Jan, 2018 1 commit
    • bl2-el3: Add BL2_EL3 image · b1d27b48
      Roberto Vargas authored
      
      
      This patch enables BL2 to execute at the highest exception level
      without any dependency on TF BL1. This enables platforms which already
      have a non-TF Boot ROM to directly load and execute BL2 and subsequent BL
      stages without the need for BL1.  This is not currently possible because
      BL2 executes at S-EL1 and cannot jump straight to EL3.
      
      Change-Id: Ief1efca4598560b1b8c8e61fbe26d1f44e929d69
      Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
      b1d27b48
  4. 11 Jan, 2018 1 commit
    • Workaround for CVE-2017-5715 on Cortex A57 and A72 · f62ad322
      Dimitris Papastamos authored
      
      
      Invalidate the Branch Target Buffer (BTB) on entry to EL3 by disabling
      and enabling the MMU.  To achieve this without performing any branch
      instruction, a per-cpu vbar is installed which executes the workaround
      and then branches off to the corresponding vector entry in the main
      vector table.  A side effect of this change is that the main vbar is
      configured before any reset handling.  This is to allow the per-cpu
      reset function to override the vbar setting.
      
      This workaround is enabled by default on the affected CPUs.
      
      Change-Id: I97788d38463a5840a410e3cea85ed297a1678265
      Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
      f62ad322
  5. 30 Nov, 2017 1 commit
    • Enable SVE for Non-secure world · 1a853370
      David Cunado authored
      
      
      This patch adds a new build option, ENABLE_SVE_FOR_NS, which, when set
      to one, makes EL3 check whether the Scalable Vector Extension (SVE) is
      implemented when entering and exiting the Non-secure world.
      
      If SVE is implemented, EL3 will do the following:
      
      - Entry to Non-secure world: SIMD, FP and SVE functionality is enabled.
      
      - Exit from Non-secure world: SIMD, FP and SVE functionality is
        disabled. As the SIMD and FP registers are part of the SVE Z-registers,
        any use of SIMD / FP functionality would corrupt the SVE registers.
      
      The build option default is 1. The SVE functionality is only supported
      on AArch64, so the build option is set to zero when the target
      architecture is AArch32.
      
      This build option is not compatible with CTX_INCLUDE_FPREGS - an
      assert will be raised on platforms where SVE is implemented and both
      ENABLE_SVE_FOR_NS and CTX_INCLUDE_FPREGS are set to 1.
      
      Also note this change prevents secure world use of FP&SIMD registers on
      SVE-enabled platforms. Existing Secure-EL1 Payloads will not work on
      such platforms unless ENABLE_SVE_FOR_NS is set to 0.
      
      Additionally, on the first entry into the Non-secure world the SVE
      functionality is enabled and the SVE Z-register length is set to the
      maximum size allowed by the architecture. This includes the use case
      where EL2 is implemented but not used.
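      As a rough, hedged sketch of the entry-side handling (the register
      accessors below are invented for illustration; the field and bit
      positions follow the Arm architecture definitions):

        #include <stdbool.h>
        #include <stdint.h>

        /* Hypothetical system-register accessors. */
        uint64_t read_id_aa64pfr0_el1(void);
        uint64_t read_cptr_el3(void);
        void write_cptr_el3(uint64_t val);
        void write_zcr_el3(uint64_t val);

        #define ID_AA64PFR0_SVE_SHIFT   32U
        #define ID_AA64PFR0_SVE_MASK    0xfULL
        #define CPTR_EZ_BIT             (1ULL << 8)   /* 1: do not trap SVE at EL3 */
        #define CPTR_TFP_BIT            (1ULL << 10)  /* 1: trap FP/SIMD at EL3    */
        #define ZCR_EL3_LEN_MAX         0xfULL        /* request the maximum VL    */

        static bool sve_supported(void)
        {
                return ((read_id_aa64pfr0_el1() >> ID_AA64PFR0_SVE_SHIFT) &
                        ID_AA64PFR0_SVE_MASK) != 0ULL;
        }

        /* On entry to the Non-secure world: enable FP, SIMD and SVE. */
        void sve_enable_ns(void)
        {
                if (!sve_supported())
                        return;

                write_cptr_el3((read_cptr_el3() | CPTR_EZ_BIT) & ~CPTR_TFP_BIT);
                /* Let the Z-register length default to the architecture maximum. */
                write_zcr_el3(ZCR_EL3_LEN_MAX);
        }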
      
      Change-Id: Ie2d733ddaba0b9bef1d7c9765503155188fe7dae
      Signed-off-by: David Cunado <david.cunado@arm.com>
      1a853370
  6. 20 Nov, 2017 1 commit
    • Refactor Statistical Profiling Extensions implementation · 281a08cc
      Dimitris Papastamos authored
      
      
      Factor out SPE operations into a separate file.  Use the publish-subscribe
      framework to drain the SPE buffers before entering the secure world.
      Additionally, enable SPE before entering the normal world.
      
      A side effect of this change is that the profiling buffers are now
      only drained when a transition from normal world to secure world
      happens.  Previously they were drained also on return from secure
      world, which is unnecessary as SPE is not supported in S-EL1.
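      As a hedged sketch of such a hook (the registration macro and event name
      below mimic a publish-subscribe framework but are assumptions, not quotes
      from the patch):

        /* Simplified stand-in for a pubsub registration macro. */
        #define SUBSCRIBE_TO_EVENT(event, handler) \
                void *(*__pubsub_##event##_##handler)(const void *) = (handler)

        static void *spe_drain_buffers_hook(const void *arg)
        {
                (void)arg;
                /*
                 * Drain the SPE buffer before switching to the secure world:
                 * "psb csync" (encoded as hint #17) ensures all profiling data
                 * is written out, and the dsb makes those writes visible before
                 * the translation regime changes.
                 */
                __asm__ volatile("hint #17\n\tdsb nsh\n\t" ::: "memory");
                return (void *)0;
        }

        SUBSCRIBE_TO_EVENT(cm_entering_secure_world, spe_drain_buffers_hook);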
      
      Change-Id: I17582c689b4b525770dbb6db098b3a0b5777b70a
      Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
      281a08cc
  7. 22 Jun, 2017 1 commit
    • aarch64: Enable Statistical Profiling Extensions for lower ELs · d832aee9
      dp-arm authored
      
      
      SPE is only supported in non-secure state.  Accesses to SPE specific
      registers from S-EL1 will trap to EL3.  During a world switch, before
      `TTBR` is modified, the SPE profiling buffers are drained.  This is to
      avoid a potential invalid memory access in S-EL1.
      
      SPE is architecturally specified only for AArch64.
      
      Change-Id: I04a96427d9f9d586c331913d815fdc726855f6b0
      Signed-off-by: dp-arm <dimitris.papastamos@arm.com>
      d832aee9
  8. 21 Jun, 2017 1 commit
    • Fully initialise essential control registers · 18f2efd6
      David Cunado authored
      
      
      This patch updates the el3_arch_init_common macro so that it fully
      initialises essential control registers rather than relying on hardware
      to set the reset values.
      
      The context management functions are also updated to fully initialise
      the appropriate control registers when initialising the non-secure and
      secure context structures and when preparing to leave EL3 for a lower
      EL.
      
      This gives better alignment with the ARM ARM, which states that software
      must initialise RES0 and RES1 fields with 0 / 1.
      
      This patch also corrects the following typos:
      
      "NASCR definitions" -> "NSACR definitions"
      
      Change-Id: Ia8940b8351dc27bc09e2138b011e249655041cfc
      Signed-off-by: David Cunado <david.cunado@arm.com>
      18f2efd6
  9. 03 May, 2017 1 commit
  10. 31 Mar, 2017 1 commit
    • Add support for GCC stack protection · 51faada7
      Douglas Raillard authored
      
      
      Introduce a new build option, ENABLE_STACK_PROTECTOR. It enables
      compilation of all BL images with one of the GCC -fstack-protector-*
      options.
      
      A new platform function plat_get_stack_protector_canary() is introduced.
      It returns a value that is used to initialize the canary for stack
      corruption detection. Returning a random value will prevent an attacker
      from predicting the value and greatly increase the effectiveness of the
      protection.
      
      A message is printed at the ERROR level when a stack corruption is
      detected.
      
      To be effective, the global data must be stored at an address
      lower than the base of the stacks. Failure to do so would allow an
      attacker to overwrite the canary as part of an attack which would void
      the protection.
      
      The FVP implementation of plat_get_stack_protector_canary() is weak, as
      there is no real source of entropy on the FVP. It therefore relies on a
      timer's value, which could be predictable.
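
      A hedged sketch of a platform hook in that spirit (the counter read is
      just one possible, predictable entropy stand-in, as on the FVP):

        #include <stdint.h>

        typedef uintptr_t u_register_t;   /* stand-in for the TF register type */

        static inline uint64_t read_counter(void)
        {
                uint64_t cnt;

                /* Generic timer physical count; any real entropy source is better. */
                __asm__ volatile("mrs %0, cntpct_el0" : "=r"(cnt));
                return cnt;
        }

        /* Seed for the stack-protector canary; weak so platforms can override it. */
        __attribute__((weak)) u_register_t plat_get_stack_protector_canary(void)
        {
                return (u_register_t)(read_counter() ^ 0x5ca1ab1edeadbeefULL);
        }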
      
      Change-Id: Icaaee96392733b721fa7c86a81d03660d3c1bc06
      Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
      51faada7
  11. 15 Feb, 2017 1 commit
    • Disable secure self-hosted debug via MDCR_EL3/SDCR · 85e93ba0
      dp-arm authored
      
      
      Trusted Firmware currently has no support for secure self-hosted
      debug.  To avoid unexpected exceptions, disable software debug
      exceptions, other than software breakpoint instruction exceptions,
      from all exception levels in secure state.  This applies to both
      AArch32 and AArch64 EL3 initialization.
      
      Change-Id: Id097e54a6bbcd0ca6a2be930df5d860d8d09e777
      Signed-off-by: dp-arm <dimitris.papastamos@arm.com>
      85e93ba0
  12. 06 Feb, 2017 1 commit
    • Introduce unified API to zero memory · 308d359b
      Douglas Raillard authored
      
      
      Introduce a zeromem_dczva function on AArch64 that can handle unaligned
      addresses and makes use of the DC ZVA instruction to zero a whole block
      at a time. This zeroing takes place directly in the cache to speed it up,
      without doing external memory accesses.
      
      Remove the zeromem16 function on AArch64 and replace it with an alias to
      zeromem. This zeromem16 function is now deprecated.
      
      Remove the 16-byte alignment constraint on __BSS_START__ in
      firmware-design.md as it is no longer mandatory (it used to comply
      with zeromem16 requirements).
      
      Change the 16-byte alignment constraints in SP min's linker script to an
      8-byte alignment constraint, as the AArch32 zeromem implementation is now
      more efficient on 8-byte aligned addresses.
      
      Introduce zero_normalmem and zeromem helpers in platform agnostic header
      that are implemented this way:
      * AArch32:
      	* zero_normalmem: zero using usual data access
      	* zeromem: alias for zero_normalmem
      * AArch64:
      	* zero_normalmem: zero normal memory  using DC ZVA instruction
      	                  (needs MMU enabled)
      	* zeromem: zero using usual data access
      
      Usage guidelines: in most cases, zero_normalmem should be preferred.
      
      There are 2 scenarios where zeromem (or memset) must be used instead:
      * Code that must run with MMU disabled (which means all memory is
        considered device memory for data accesses).
      * Code that fills device memory with null bytes.
      
      Optionally, the following rule can be applied if performance is
      important:
      * Code zeroing small areas (few bytes) that are not secrets should use
        memset to take advantage of compiler optimizations.
      
        Note: Code zeroing security-related critical information should use
        zero_normalmem/zeromem instead of memset to avoid removal by
        compilers' optimizations in some cases or misbehaving versions of GCC.
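      An illustrative (not prescriptive) example of applying these guidelines;
      the prototypes are written out here for a self-contained sketch and may
      differ from the real headers:

        #include <stddef.h>
        #include <string.h>

        void zero_normalmem(void *mem, size_t length);
        void zeromem(void *mem, size_t length);

        void scrub_session_key(void *key, size_t len, int mmu_enabled)
        {
                if (mmu_enabled) {
                        /* Normal, cacheable memory with the MMU on: fastest path. */
                        zero_normalmem(key, len);
                } else {
                        /*
                         * With the MMU off all data accesses are device-type, so
                         * the DC ZVA based path must not be used.
                         */
                        zeromem(key, len);
                }
        }

        void clear_scratch_counter(unsigned int *counter)
        {
                /* Small, non-secret area: plain memset lets the compiler optimise. */
                memset(counter, 0, sizeof(*counter));
        }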
      
      Fixes ARM-software/tf-issues#408
      
      Change-Id: Iafd9663fc1070413c3e1904e54091cf60effaa82
      Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
      308d359b
  13. 23 Jan, 2017 1 commit
    • Use #ifdef for IMAGE_BL* instead of #if · 3d8256b2
      Masahiro Yamada authored
      
      
      One nasty part of ATF is that some boolean macros are always defined
      as 1 or 0, while the rest of them are only defined under certain
      conditions.
      
      For the former group, "#if FOO" or "#if !FOO" must be used because
      "#ifdef FOO" is always true.  (Options passed by $(call add_define,)
      are such cases.)
      
      For the latter, "#ifdef FOO" or "#ifndef FOO" should be used because
      checking the value of an undefined macro is strange.
      
      Here, IMAGE_BL* is handled by make_helpers/build_macro.mk as
      follows:
      
        $(eval IMAGE := IMAGE_BL$(call uppercase,$(3)))
      
        $(OBJ): $(2)
                @echo "  CC      $$<"
                $$(Q)$$(CC) $$(TF_CFLAGS) $$(CFLAGS) -D$(IMAGE) -c $$< -o $$@
      
      This means IMAGE_BL* is defined when building the corresponding
      image, but *undefined* for the other images.
      
      So, IMAGE_BL* belongs to the latter group where we should use #ifdef
      or #ifndef.
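      For instance (an illustrative snippet, not taken from the tree):

        /* ENABLE_FOO is always defined to 0 or 1 via $(call add_define,...):
         * test its value. */
        #if ENABLE_FOO
        static void foo_init(void) { /* ... */ }
        #endif

        /* IMAGE_BL31 is only defined when building BL31: test its existence. */
        #ifdef IMAGE_BL31
        static void bl31_only_helper(void) { /* ... */ }
        #endif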
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      3d8256b2
  14. 09 Nov, 2016 1 commit
    • Reset debug registers MDCR_EL3/SDCR and MDCR_EL2/HDCR · 495f3d3c
      David Cunado authored
      
      
      In order to avoid unexpected traps into EL3/MON mode, this patch
      resets the debug registers, MDCR_EL3 and MDCR_EL2 for AArch64,
      and SDCR and HDCR for AArch32.
      
      MDCR_EL3/SDCR is zeroed when EL3/MON mode is entered, at the
      start of BL1 and BL31/SP_MIN.
      
      For MDCR_EL2/HDCR, this patch zeroes the bits whose values are
      architecturally UNKNOWN on reset. This is done when
      exiting from EL3/MON mode, but only on platforms that support
      EL2/HYP mode yet choose to exit to EL1/SVC mode.
      
      Fixes ARM-software/tf-issues#430
      
      Change-Id: Idb992232163c072faa08892251b5626ae4c3a5b6
      Signed-off-by: David Cunado <david.cunado@arm.com>
      495f3d3c
  15. 19 Jul, 2016 1 commit
    • Rearrange assembly helper macros · 738b1fd7
      Soby Mathew authored
      This patch moves assembler macros which are not architecture specific
      to a new file `asm_macros_common.S` and moves `el3_common_macros.S`
      into the `aarch64`-specific folder.
      
      Change-Id: I444a1ee3346597bf26a8b827480cd9640b38c826
      738b1fd7
  16. 07 Apr, 2016 1 commit
    • Enable SCR_EL3.SIF bit · 99e58f9e
      Soby Mathew authored
      This patch enables the SCR_EL3.SIF (Secure Instruction Fetch) bit in BL1 and
      BL31 common architectural setup code. When in secure state, this disables
      instruction fetches from Non-secure memory.
      
      NOTE: THIS COULD BREAK PLATFORMS THAT HAVE SECURE WORLD CODE EXECUTING FROM
      NON-SECURE MEMORY, BUT SUCH A CONFIGURATION IS CONSIDERED UNLIKELY AND IS A
      SERIOUS SECURITY RISK.
      
      Fixes ARM-Software/tf-issues#372
      
      Change-Id: I684e84b8d523c3b246e9a5fabfa085b6405df319
      99e58f9e
  17. 30 Mar, 2016 1 commit
    • Enable asynchronous abort exceptions during boot · adb4fcfb
      Gerald Lejeune authored
      
      
      Asynchronous abort exceptions generated by the platform during cold boot are
      not taken in EL3 unless SCR_EL3.EA is set.
      
      Therefore the EA bit is set along with the RES1 bits in the early BL1 and BL31
      architecture initialisation. Further write accesses to SCR_EL3 preserve these
      bits during cold boot.
      
      A build flag controls the SCR_EL3.EA value, to choose whether asynchronous
      abort exceptions remain trapped by EL3 after cold boot or not.
      
      For further reference, SError interrupts are also known as asynchronous
      external aborts.
      
      On Cortex-A53 revisions below r0p2, asynchronous abort exceptions are taken in
      EL3 regardless of the SCR_EL3.EA value.
      
      Fixes arm-software/tf-issues#368
      Signed-off-by: Gerald Lejeune <gerald.lejeune@st.com>
      adb4fcfb
  18. 14 Mar, 2016 1 commit
    • Remove all non-configurable dead loops · 1c3ea103
      Antonio Nino Diaz authored
      Added a new platform porting function plat_panic_handler, to allow
      platforms to handle unexpected error situations. It must be
      implemented in assembly as it may be called before the C environment
      is initialized. A default implementation is provided, which simply
      spins.
      
      Corrected all dead loops in generic code to call this function
      instead. This includes the dead loop that occurs at the end of the
      call to panic().
      
      All unnecessary wfis from bl32/tsp/aarch64/tsp_exceptions.S have
      been removed.
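      On the C side, the change conceptually amounts to the sketch below (the
      panic() body is reduced to the relevant part; exact spellings assumed):

        #include <stdnoreturn.h>

        /* Implemented in assembly by the platform; the default simply spins. */
        noreturn void plat_panic_handler(void);

        noreturn void panic(void)
        {
                /* ... error reporting elided ... */

                /* Previously an unconditional dead loop; now defer to the platform. */
                plat_panic_handler();
        }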
      
      Change-Id: I67cb85f6112fa8e77bd62f5718efcef4173d8134
      1c3ea103
  19. 07 Mar, 2016 1 commit
    • Initialize secondary CPUs during cold boot · 4e85e4fd
      Antonio Nino Diaz authored
      The previous reset code in BL1 performed the following steps in
      order:
      
      1. Warm/Cold boot detection.
         If it's a warm boot, jump to warm boot entrypoint.
      
      2. Primary/Secondary CPU detection.
         If it's a secondary CPU, jump to plat_secondary_cold_boot_setup(),
         which doesn't return.
      
      3. CPU initialisations (cache, TLB...).
      
      4. Memory and C runtime initialization.
      
      For a secondary CPU, steps 3 and 4 are never reached. This shouldn't
      be a problem in most cases, since current implementations of
      plat_secondary_cold_boot_setup() either panic or power down the
      secondary CPUs.
      
      The main concern is the lack of secondary CPU initialization when
      bare metal EL3 payloads are used in case they don't take care of this
      initialisation themselves.
      
      This patch moves the detection of primary/secondary CPU after step 3
      so that the CPU initialisations are performed per-CPU, while the
      memory and the C runtime initialisation are only performed on the
      primary CPU. The diagrams used in the ARM Trusted Firmware Reset
      Design documentation file have been updated to reflect the new boot
      flow.
      
      Platform ports might be affected by this patch depending on the
      behaviour of plat_secondary_cold_boot_setup(), as the state of the
      platform when entering this function will be different.
      
      Fixes ARM-software/tf-issues#342
      
      Change-Id: Icbf4a0ee2a3e5b856030064472f9fa6696f2eb9e
      4e85e4fd
  20. 14 Dec, 2015 1 commit
  21. 09 Dec, 2015 1 commit
    • Add descriptor based image management support in BL1 · 7baff11f
      Yatharth Kochar authored
      As of now, BL1 loads and executes BL2 based on hard-coded information
      provided in BL1. But due to the addition of support for the upcoming
      Firmware Update feature, BL1 now requires a more flexible approach to
      load and run different images using information provided by the platform.
      
      This patch adds a new mechanism to load and execute images based on
      platform-provided image IDs. BL1 now queries the platform to fetch
      the image ID of the next image to be loaded and executed. In order
      to achieve this, a new struct, image_desc_t, was added which holds the
      information about images, such as: ep_info and image_info.
      
      This patch introduces the following platform porting functions:
      
      unsigned int bl1_plat_get_next_image_id(void);
      	This is used to identify the next image to be loaded
      	and executed by BL1.
      
      struct image_desc *bl1_plat_get_image_desc(unsigned int image_id);
      	This is used to retrieve the image_desc for given image_id.
      
      void bl1_plat_set_ep_info(unsigned int image_id,
      struct entry_point_info *ep_info);
      	This function allows platforms to update ep_info for given
      	image_id.
      
      The plat_bl1_common.c file provides default weak implementations of
      all the above functions: `bl1_plat_get_image_desc()` always returns the
      BL2 image descriptor, `bl1_plat_get_next_image_id()` always returns the
      BL2 image ID, and `bl1_plat_set_ep_info()` is empty and just returns.
      These functions get compiled into all BL1 platforms by default.
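      A hedged sketch of what such default weak implementations might look like
      (BL2_IMAGE_ID, the descriptor variable and the struct layout are
      illustrative stand-ins):

        #include <stddef.h>

        struct entry_point_info;                      /* opaque for this sketch */
        struct image_desc { unsigned int image_id; /* ep_info, image_info, ... */ };

        #define BL2_IMAGE_ID    1U

        static struct image_desc bl2_img_desc = { .image_id = BL2_IMAGE_ID };

        /* Default: BL1 only ever hands over to BL2. */
        #pragma weak bl1_plat_get_next_image_id
        unsigned int bl1_plat_get_next_image_id(void)
        {
                return BL2_IMAGE_ID;
        }

        #pragma weak bl1_plat_get_image_desc
        struct image_desc *bl1_plat_get_image_desc(unsigned int image_id)
        {
                return (image_id == BL2_IMAGE_ID) ? &bl2_img_desc : NULL;
        }

        /* Default: nothing to patch in the entry point info. */
        #pragma weak bl1_plat_set_ep_info
        void bl1_plat_set_ep_info(unsigned int image_id,
                                  struct entry_point_info *ep_info)
        {
                (void)image_id;
                (void)ep_info;
        }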
      
      Platform setup in BL1, using `bl1_platform_setup()`, is now done
      _after_ the initialization of authentication module. This change
      provides the opportunity to use authentication while doing the
      platform setup in BL1.
      
      In order to store the secure/non-secure context, BL31 uses percpu_data[]
      to store a context pointer for each core. In the case of BL1, only the
      primary CPU will be active, hence percpu_data[] is not required to
      store the context pointer.
      
      This patch introduces bl1_cpu_context[] and bl1_cpu_context_ptr[] to
      store the contexts and context pointers respectively. It also
      re-defines cm_get_context() and cm_set_context() for BL1 in
      bl1/bl1_context_mgmt.c.
      
      BL1 now follows the BL31 pattern of using SP_EL0 for the C runtime
      environment, to support resuming execution from a previously saved
      context.
      
      NOTE: THE `bl1_plat_set_bl2_ep_info()` PLATFORM PORTING FUNCTION IS
            NO LONGER CALLED BY BL1 COMMON CODE. PLATFORMS THAT OVERRIDE
            THIS FUNCTION MAY NEED TO IMPLEMENT `bl1_plat_set_ep_info()`
            INSTEAD TO MAINTAIN EXISTING BEHAVIOUR.
      
      Change-Id: Ieee4c124b951c2e9bc1c1013fa2073221195d881
      7baff11f
  22. 14 Sep, 2015 1 commit
    • Make generic code work in presence of system caches · 54dc71e7
      Achin Gupta authored
      On the ARMv8 architecture, cache maintenance operations by set/way on the last
      level of integrated cache do not affect the system cache. This means that such a
      flush or clean operation could result in the data being pushed out to the system
      cache rather than main memory. Another CPU could access this data before it
      enables its data cache or MMU. Such accesses could be serviced from main
      memory instead of the system cache. If the data in the system cache has not yet
      been flushed or evicted to main memory then there could be a loss of
      coherency. The only mechanism to guarantee that main memory will be updated
      is to use cache maintenance operations to the PoC by MVA (see section D3.4.11,
      System level caches, of the ARMv8-A Reference Manual, Issue A.g / ARM DDI 0487A.G).
      
      This patch removes the reliance of Trusted Firmware on flush by set/way
      operations to ensure visibility of data in main memory. Cache maintenance
      operations by MVA are now used instead. The changes fall into the following
      broad categories:
      
      1. The RW areas of BL2/BL31/BL32 are invalidated by MVA before the C runtime is
         initialised. This ensures that any stale cache lines at any level of cache
         are removed.
      
      2. Updates to global data in runtime firmware (BL31) by the primary CPU are made
         visible to secondary CPUs using a cache clean operation by MVA (see the
         sketch after this list).
      
      3. Cache maintenance by set/way operations are only used prior to power down.
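
      As an illustration of category 2 (the helper name and prototype below are
      assumptions in the style of TF helpers, not quotes from the patch):

        #include <stddef.h>
        #include <stdint.h>

        /* Assumed MVA-based maintenance helper: clean [addr, addr + size) to PoC. */
        void clean_dcache_range(uintptr_t addr, size_t size);

        static uint64_t hold_entrypoints[8];   /* example of BL31 global data */

        void publish_entrypoint(unsigned int cpu, uint64_t ep)
        {
                hold_entrypoints[cpu] = ep;

                /*
                 * Clean by MVA to the PoC so a secondary CPU that still has its
                 * data cache/MMU off reads the update from main memory, even if
                 * a system-level cache sits between the CPUs and memory.
                 */
                clean_dcache_range((uintptr_t)&hold_entrypoints[cpu],
                                   sizeof(hold_entrypoints[cpu]));
        }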
      
      NOTE: NON-UPSTREAM TRUSTED FIRMWARE CODE SHOULD MAKE EQUIVALENT CHANGES IN
      ORDER TO FUNCTION CORRECTLY ON PLATFORMS WITH SUPPORT FOR SYSTEM CACHES.
      
      Fixes ARM-software/tf-issues#205
      
      Change-Id: I64f1b398de0432813a0e0881d70f8337681f6e9a
      54dc71e7
  23. 13 Aug, 2015 2 commits
    • PSCI: Add documentation and fix plat_is_my_cpu_primary() · 58523c07
      Soby Mathew authored
      This patch adds the necessary documentation updates to porting_guide.md
      for the changes in the platform interface mandated as a result of the new
      PSCI Topology and power state management frameworks. It also adds a
      new document `platform-migration-guide.md` to aid the migration of existing
      platform ports to the new API.
      
      The patch fixes the implementation and callers of
      plat_is_my_cpu_primary() to use w0 as the return parameter as implied by
      the function signature rather than x0 which was used previously.
      
      Change-Id: Ic11e73019188c8ba2bd64c47e1729ff5acdcdd5b
      58523c07
    • PSCI: Migrate TF to the new platform API and CM helpers · 85a181ce
      Soby Mathew authored
      This patch migrates the rest of Trusted Firmware, excluding the Secure Payload
      and the dispatchers, to the new platform and context management API. The per-cpu
      data framework APIs which took MPIDRs as their arguments are deleted, and only
      the ones which take a core index as parameter are retained.
      
      Change-Id: I839d05ad995df34d2163a1cfed6baa768a5a595d
      85a181ce
  24. 24 Jun, 2015 1 commit
    • Bug fix: Build time condition to relocate RW data · c9915c0b
      Sandrine Bailleux authored
      This patch fixes the build time condition deciding whether the
      read-write data should be relocated from ROM to RAM. It was incorrectly
      using __DATA_ROM_START__, which is a linker symbol and not a compiler
      build flag. As a result, the relocation code was always compiled out.
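
      To illustrate the distinction (the flag and symbol names below are
      hypothetical, not the ones used in the tree):

        /* A linker symbol is resolved at link time; the preprocessor just sees an
         * undefined identifier, so this condition silently evaluates to 0. */
        #if __MY_DATA_ROM_START__          /* wrong: relocation always compiled out */
        #define COPY_DATA_TO_RAM 1
        #endif

        /* A build flag passed as -DMY_DATA_IN_ROM=1 is what the preprocessor can test. */
        #if MY_DATA_IN_ROM                 /* right: follows the build configuration */
        #define COPY_DATA_TO_RAM 1
        #endif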
      
      This bug was introduced by the following patch:
      "Rationalize reset handling code"
      
      Change-Id: I1c8d49de32f791551ab4ac832bd45101d6934045
      c9915c0b
  25. 04 Jun, 2015 1 commit
    • Rationalize reset handling code · 52010cc7
      Sandrine Bailleux authored
      The attempt to run the CPU reset code as soon as possible after reset
      results in highly complex conditional code relating to the
      RESET_TO_BL31 option.
      
      This patch relaxes this requirement a little. In the BL1, BL3-1 and
      PSCI entrypoints code, the sequence of operations is now as follows:
       1) Detect whether it is a cold or warm boot;
       2) For cold boot, detect whether it is the primary or a secondary
          CPU. This is needed to handle multiple CPUs entering cold reset
          simultaneously;
       3) Run the CPU init code.
      
      This patch also abstracts the EL3 registers initialisation done by
      the BL1, BL3-1 and PSCI entrypoints into common code.
      
      This improves code re-use and consolidates the code flows for
      different types of systems.
      
      NOTE: THE FUNCTION plat_secondary_cold_boot() IS NOW EXPECTED TO
      NEVER RETURN. THIS PATCH FORCES PLATFORM PORTS THAT RELIED ON THE
      FORMER RETRY LOOP AT THE CALL SITE TO MODIFY THEIR IMPLEMENTATION.
      OTHERWISE, SECONDARY CPUS WILL PANIC.
      
      Change-Id: If5ecd74d75bee700b1bd718d23d7556b8f863546
      52010cc7