1. 13 Sep, 2019 1 commit
    • Refactor ARMv8.3 Pointer Authentication support code · ed108b56
      Alexei Fedorov authored
      
      
      This patch provides the following features and makes modifications
      listed below:
      - Individual APIAKey key generation for each CPU.
      - New key generation on every BL31 warm boot and TSP CPU On event.
      - Per-CPU storage of APIAKey added in percpu_data[]
        of cpu_data structure.
      - `plat_init_apiakey()` function replaced with `plat_init_apkey()`,
        which returns a 128-bit value and uses the Generic Timer physical counter
        value to increase the randomness of the generated key.
        The new function can be used for generation of all ARMv8.3-PAuth keys
        (see the sketch after this list).
      - ARMv8.3-PAuth specific code placed in `lib/extensions/pauth`.
      - New `pauth_init_enable_el1()` and `pauth_init_enable_el3()` functions
        generate, program and enable APIAKey_EL1 for EL1 and EL3 respectively;
        `pauth_disable_el1()` and `pauth_disable_el3()` functions disable
        PAuth for EL1 and EL3 respectively;
        `pauth_load_bl31_apiakey()` loads the saved per-CPU APIAKey_EL1 from
        the cpu_data structure.
      - Combined `save_gp_pauth_registers()` function replaces calls to
        `save_gp_registers()` and `pauth_context_save()`;
        `restore_gp_pauth_registers()` replaces `pauth_context_restore()`
        and `restore_gp_registers()` calls.
      - `restore_gp_registers_eret()` function removed with corresponding
        code placed in `el3_exit()`.
      - Fixed the issue where the `pauth_t pauth_ctx` structure allocated space
        for 12 uint64_t PAuth registers instead of 10, by removing the macro
        CTX_PACGAKEY_END from `include/lib/el3_runtime/aarch64/context.h`
        and assigning its value to CTX_PAUTH_REGS_END.
      - Use of MODE_SP_ELX and MODE_SP_EL0 macro definitions
        in the `msr spsel` instruction instead of hard-coded values.
      - Changes in documentation related to ARMv8.3-PAuth and ARMv8.5-BTI.
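
      A hedged sketch of the kind of key derivation described above, assuming a
      hypothetical entropy hook and counter accessor (not the actual TF-A code):

          #include <stdint.h>

          /* Assumed helpers, for illustration only. */
          extern uint64_t plat_get_entropy64(void);   /* platform entropy source */
          extern uint64_t read_cntpct_el0(void);      /* Generic Timer physical counter */

          /* Return a 128-bit value suitable for seeding an ARMv8.3-PAuth key. */
          __uint128_t plat_init_apkey(void)
          {
                  uint64_t lo = plat_get_entropy64() ^ read_cntpct_el0();
                  uint64_t hi = plat_get_entropy64() ^ (read_cntpct_el0() << 32);

                  return ((__uint128_t)hi << 64) | lo;
          }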
      
      Change-Id: Id18b81cc46f52a783a7e6a09b9f149b6ce803211
      Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
  2. 24 May, 2019 1 commit
    • Add support for Branch Target Identification · 9fc59639
      Alexei Fedorov authored
      
      
      This patch adds the functionality needed for platforms to provide
      the Branch Target Identification (BTI) extension, introduced to AArch64
      in Armv8.5-A, which adds the BTI instruction used to mark valid targets
      for indirect branches. The patch sets the new GP bit [50] in stage 1
      Translation Table Block and Page entries to denote guarded EL3 code
      pages; the processor then traps indirect branches into such pages that
      land on any instruction other than BTI.
      The BTI feature is selected by the BRANCH_PROTECTION option, which supersedes
      the previous ENABLE_PAUTH option used for Armv8.3-A Pointer Authentication
      and is disabled by default. Enabling BTI requires compiler support
      and was tested with GCC versions 9.0.0, 9.0.1 and 10.0.0.
      The assembly macros and helpers are modified to accommodate the BTI
      instruction.
      This is an experimental feature.
      Note: the previous ENABLE_PAUTH build option to enable PAuth in EL3
      is now an internal flag; the BRANCH_PROTECTION option should be
      used instead to enable Pointer Authentication.
      Note: the USE_LIBROM=1 option is currently not supported.
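
      As an illustration of the GP bit usage described above (the bit position is
      from this commit message; the macro and function names are assumptions):

          #include <stdint.h>

          /* GP (Guarded Page) attribute, bit [50] of a stage 1 block/page descriptor. */
          #define GP_BIT          (1ULL << 50)

          /* Mark a translation table descriptor as a guarded (BTI-protected) page. */
          static inline uint64_t mark_guarded_page(uint64_t desc)
          {
                  return desc | GP_BIT;
          }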
      
      Change-Id: Ifaf4438609b16647dc79468b70cd1f47a623362e
      Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
  3. 12 Mar, 2019 1 commit
    • Apply stricter speculative load restriction · 02b57943
      John Tsichritzis authored
      
      
      The SCTLR.DSSBS bit is zero by default, thus disabling speculative loads.
      However, we also explicitly set it to zero for BL2 and TSP images when
      each image initialises its context. This is done to ensure that the
      image environment is initialised in a safe state, regardless of the
      reset value of the bit.
      
      Change-Id: If25a8396641edb640f7f298b8d3309d5cba3cd79
      Signed-off-by: John Tsichritzis <john.tsichritzis@arm.com>
  4. 27 Feb, 2019 2 commits
    • BL2_AT_EL3: Enable pointer authentication support · dcbfa11b
      Antonio Nino Diaz authored
      
      
      The size increase after enabling options related to ARMv8.3-PAuth is:
      
      +----------------------------+-------+-------+-------+--------+
      |                            |  text |  bss  |  data | rodata |
      +----------------------------+-------+-------+-------+--------+
      | CTX_INCLUDE_PAUTH_REGS = 1 |   +44 |   +0  |   +0  |   +0   |
      |                            |  0.2% |       |       |        |
      +----------------------------+-------+-------+-------+--------+
      | ENABLE_PAUTH = 1           |  +712 |   +0  |  +16  |   +0   |
      |                            |  3.1% |       |  0.9% |        |
      +----------------------------+-------+-------+-------+--------+
      
      The results are valid for the following build configuration:
      
          make PLAT=fvp SPD=tspd DEBUG=1 \
          BL2_AT_EL3=1                   \
          CTX_INCLUDE_PAUTH_REGS=1       \
          ENABLE_PAUTH=1
      
      Change-Id: I1c0616e7dea30962a92b4fd113428bc30a018320
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
    • BL2: Enable pointer authentication support · 9d93fc2f
      Antonio Nino Diaz authored
      
      
      The size increase after enabling options related to ARMv8.3-PAuth is:
      
      +----------------------------+-------+-------+-------+--------+
      |                            |  text |  bss  |  data | rodata |
      +----------------------------+-------+-------+-------+--------+
      | CTX_INCLUDE_PAUTH_REGS = 1 |   +40 |   +0  |   +0  |   +0   |
      |                            |  0.2% |       |       |        |
      +----------------------------+-------+-------+-------+--------+
      | ENABLE_PAUTH = 1           |  +664 |   +0  |  +16  |   +0   |
      |                            |  3.1% |       |  0.9% |        |
      +----------------------------+-------+-------+-------+--------+
      
      Results calculated with the following build configuration:
      
          make PLAT=fvp SPD=tspd DEBUG=1 \
          SDEI_SUPPORT=1                 \
          EL3_EXCEPTION_HANDLING=1       \
          TSP_NS_INTR_ASYNC_PREEMPT=1    \
          CTX_INCLUDE_PAUTH_REGS=1       \
          ENABLE_PAUTH=1
      
      The changes for BL2_AT_EL3 aren't done in this commit.
      
      Change-Id: I8c803b40c7160525a06173bc6cdca21c4505837d
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
  5. 04 Jan, 2019 1 commit
    • Sanitise includes across codebase · 09d40e0e
      Antonio Nino Diaz authored
      Enforce full include path for includes. Deprecate old paths.
      
      The following folders inside include/lib have been left unchanged:
      
      - include/lib/cpus/${ARCH}
      - include/lib/el3_runtime/${ARCH}
      
      The reason for this change is that having a global namespace for
      includes isn't a good idea. It defeats one of the advantages of having
      folders and it introduces problems that are sometimes subtle (because
      you may not know the header you are actually including if there are two
      of them).
      
      For example, this patch had to be created because two headers had
      the same name: e0ea0928 ("Fix gpio includes of mt8173 platform
      to avoid collision."). More recently, this patch has had similar
      problems: 46f9b2c3 ("drivers: add tzc380 support").

      This problem was introduced in commit 4ecca339 ("Move include and
      source files to logical locations"). At that time, there weren't too
      many headers, so it wasn't a real issue. However, time has shown that
      this creates problems.
      
      Platforms that want to preserve the way they include headers may add the
      removed paths to PLAT_INCLUDES, but this is discouraged.
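
      A small illustration of the include style being enforced (the header below is
      just an example from the tree):

          /* New style: full path below include/, no global namespace. */
          #include <common/debug.h>

          /* Old, deprecated style relying on a flat include search path. */
          /* #include <debug.h> */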
      
      Change-Id: I39dc53ed98f9e297a5966e723d1936d6ccf2fc8f
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
  6. 29 Oct, 2018 1 commit
    • PIE: Use PC relative adrp/adr for symbol reference · f1722b69
      Soby Mathew authored
      
      
      This patch fixes up the AArch64 assembly code to use
      adrp/adr instructions instead of the ldr instruction for
      references to symbols. This allows these assembly
      sequences to be Position Independent. Note that
      references to sizes have been replaced with
      calculation of the size at runtime. This is because a size
      is a constant value and does not depend on the execution
      address, whereas using PC-relative instructions to load
      it would make it relative to the execution address. Also,
      we cannot use the `ldr` instruction to load a size, as it
      generates a dynamic relocation entry which must *not*
      be fixed up, and it is difficult for a dynamic loader
      to differentiate which entries need to be skipped.
      
      Change-Id: I8bf4ed5c58a9703629e5498a27624500ef40a836
      Signed-off-by: Soby Mathew <soby.mathew@arm.com>
  7. 11 Jul, 2018 1 commit
    • Add end_vector_entry assembler macro · a9203eda
      Roberto Vargas authored
      
      
      The check_vector_size macro checks whether the vector's code fits
      in the space reserved for it. This check creates problems with
      the Clang assembler. A new macro, end_vector_entry, is added
      and check_vector_size is deprecated.

      The new macro pads the current exception vector entry up to the
      next one. If the current vector entry is larger than 32
      instructions, it raises an error.
      
      Change-Id: Ie8545cf1003a1e31656a1018dd6b4c28a4eaf671
      Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
  8. 28 Feb, 2018 1 commit
  9. 26 Feb, 2018 1 commit
    • Introduce the new BL handover interface · a6f340fe
      Soby Mathew authored
      
      
      This patch introduces a new BL handover interface. It essentially allows
      passing 4 arguments between the different BL stages. Effort has been made
      to stay compatible with the previous handover interface. The previous
      blx_early_platform_setup() platform API is now deprecated and the new
      blx_early_platform_setup2() variant is introduced. The weak compatibility
      implementation for the new API is done in the `plat_bl_common.c` file.
      Some of the arguments in the new API will be reserved for generic
      code use when dynamic configuration support is implemented; the
      remaining arguments are available for platform use.
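
      A hedged sketch of the shape of the new API for one BL stage (the exact TF-A
      prototype and types may differ):

          #include <stdint.h>

          typedef uint64_t u_register_t;   /* assumed AArch64 register-width type */

          /* Four register-width arguments are handed over from the previous stage. */
          void bl2_early_platform_setup2(u_register_t arg0, u_register_t arg1,
                                         u_register_t arg2, u_register_t arg3);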
      
      Change-Id: Ifddfe2ea8e32497fe1beb565cac155ad9d50d404
      Signed-off-by: Soby Mathew <soby.mathew@arm.com>
  10. 18 Jan, 2018 1 commit
    • bl2-el3: Add BL2_EL3 image · b1d27b48
      Roberto Vargas authored
      
      
      This patch enables BL2 to execute at the highest exception level
      without any dependency on TF BL1. This enables platforms which already
      have a non-TF Boot ROM to directly load and execute BL2 and subsequent BL
      stages without the need for BL1. This was not previously possible because
      BL2 executes at S-EL1 and cannot jump straight to EL3.
      
      Change-Id: Ief1efca4598560b1b8c8e61fbe26d1f44e929d69
      Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
  11. 03 May, 2017 1 commit
  12. 31 Mar, 2017 1 commit
    • Add support for GCC stack protection · 51faada7
      Douglas Raillard authored
      
      
      Introduce new build option ENABLE_STACK_PROTECTOR. It enables
      compilation of all BL images with one of the GCC -fstack-protector-*
      options.
      
      A new platform function plat_get_stack_protector_canary() is introduced.
      It returns a value that is used to initialize the canary for stack
      corruption detection. Returning a random value will prevent an attacker
      from predicting the value and greatly increase the effectiveness of the
      protection.
      
      A message is printed at the ERROR level when a stack corruption is
      detected.
      
      To be effective, the global data must be stored at an address
      lower than the base of the stacks. Failure to do so would allow an
      attacker to overwrite the canary as part of an attack which would void
      the protection.
      
      The FVP implementation of plat_get_stack_protector_canary is weak, as
      there is no real source of entropy on the FVP. It therefore relies on a
      timer value, which could be predictable.
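
      A minimal sketch of such a platform hook, assuming a timer-based fallback in
      the spirit of the FVP implementation (helper names are assumptions):

          #include <stdint.h>

          typedef uint64_t u_register_t;          /* assumed register-width type */
          extern uint64_t read_cntpct_el0(void);  /* assumed counter accessor */

          /* Return the value used to seed the stack-protector canary.  A real port
           * should use a hardware entropy source instead of a predictable timer. */
          u_register_t plat_get_stack_protector_canary(void)
          {
                  return (u_register_t)(read_cntpct_el0() ^ 0x5f3759df5f3759dfULL);
          }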
      
      Change-Id: Icaaee96392733b721fa7c86a81d03660d3c1bc06
      Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
  13. 06 Feb, 2017 1 commit
    • Introduce unified API to zero memory · 308d359b
      Douglas Raillard authored
      
      
      Introduce a zeromem_dczva function on AArch64 that can handle unaligned
      addresses and makes use of the DC ZVA instruction to zero a whole block at a
      time. This zeroing takes place directly in the cache to speed it up
      without doing external memory accesses.

      Remove the zeromem16 function on AArch64 and replace it with an alias to
      zeromem. The zeromem16 function is now deprecated.

      Remove the 16-byte alignment constraint on __BSS_START__ in
      firmware-design.md as it is no longer mandatory (it used to comply
      with zeromem16 requirements).

      Change the 16-byte alignment constraint in SP min's linker script to an
      8-byte alignment constraint, as the AArch32 zeromem implementation is now
      more efficient on 8-byte aligned addresses.
      
      Introduce zero_normalmem and zeromem helpers in platform agnostic header
      that are implemented this way:
      * AArch32:
      	* zero_normalmem: zero using usual data access
      	* zeromem: alias for zero_normalmem
      * AArch64:
      	* zero_normalmem: zero normal memory  using DC ZVA instruction
      	                  (needs MMU enabled)
      	* zeromem: zero using usual data access
      
      Usage guidelines: in most cases, zero_normalmem should be preferred.
      
      There are 2 scenarios where zeromem (or memset) must be used instead:
      * Code that must run with MMU disabled (which means all memory is
        considered device memory for data accesses).
      * Code that fills device memory with null bytes.
      
      Optionally, the following rule can be applied if performance is
      important:
      * Code zeroing small areas (few bytes) that are not secrets should use
        memset to take advantage of compiler optimizations.
      
        Note: Code zeroing security-related critical information should use
        zero_normalmem/zeromem instead of memset to avoid removal by
        compilers' optimizations in some cases or misbehaving versions of GCC.
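
      A short usage illustration of the guidelines above (prototypes assumed to
      mirror the helpers introduced by this patch):

          #include <stddef.h>

          void zero_normalmem(void *mem, size_t length);
          void zeromem(void *mem, size_t length);

          static char key_material[64];   /* secret data, normal memory, MMU enabled */
          static char early_buffer[32];   /* cleared before the MMU is enabled */

          void example_usage(void)
          {
                  /* Preferred: fast path, safe for secrets (not optimised away). */
                  zero_normalmem(key_material, sizeof(key_material));

                  /* Required with the MMU off (accesses treated as device memory). */
                  zeromem(early_buffer, sizeof(early_buffer));
          }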
      
      Fixes ARM-software/tf-issues#408
      
      Change-Id: Iafd9663fc1070413c3e1904e54091cf60effaa82
      Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
  14. 05 Dec, 2016 1 commit
    • Define and use no_ret macro where no return is expected · a806dad5
      Jeenu Viswambharan authored
      
      
      There are many instances in ARM Trusted Firmware where control is
      transferred to functions from which return isn't expected. Such jumps
      are made using the 'bl' instruction to provide the callee with the location
      from which it was jumped to. Additionally, debuggers infer the caller by
      examining where the 'lr' register points. If a 'bl' of the nature
      described above falls at the end of an assembly function, 'lr' will be
      left pointing to a location outside of the function range. This misleads
      the debugger back trace.
      
      This patch defines a 'no_ret' macro to be used when jumping to functions
      from which return isn't expected. The macro ensures that the 'bl'
      instruction is used for the jump and, for debug builds, places a 'nop'
      instruction immediately thereafter (unless instructed otherwise) so as
      to leave 'lr' pointing within the function range.
      
      Change-Id: Ib34c69fc09197cfd57bc06e147cc8252910e01b0
      Co-authored-by: Douglas Raillard <douglas.raillard@arm.com>
      Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
  15. 14 Mar, 2016 1 commit
    • Remove all non-configurable dead loops · 1c3ea103
      Antonio Nino Diaz authored
      Added a new platform porting function plat_panic_handler, to allow
      platforms to handle unexpected error situations. It must be
      implemented in assembly as it may be called before the C environment
      is initialized. A default implementation is provided, which simply
      spins.
      
      Corrected all dead loops in generic code to call this function
      instead. This includes the dead loop that occurs at the end of the
      call to panic().
      
      All unnecessary wfis from bl32/tsp/aarch64/tsp_exceptions.S have
      been removed.
      
      Change-Id: I67cb85f6112fa8e77bd62f5718efcef4173d8134
  16. 09 Dec, 2015 1 commit
    • Remove `RUN_IMAGE` usage as opcode passed to next EL. · 5698c5b3
      Yatharth Kochar authored
      The primary usage of the `RUN_IMAGE` SMC function ID by BL2 is to
      request BL1 to execute BL31. But BL2 also uses it as an
      opcode to check whether it is allowed to execute, which is not the
      intended usage of the `RUN_IMAGE` SMC.

      This patch removes the usage of `RUN_IMAGE` as an opcode passed to
      the next EL to check whether it is allowed to execute.
      
      Change-Id: I6aebe0415ade3f43401a4c8a323457f032673657
  17. 14 Sep, 2015 1 commit
    • Make generic code work in presence of system caches · 54dc71e7
      Achin Gupta authored
      On the ARMv8 architecture, cache maintenance operations by set/way on the last
      level of integrated cache do not affect the system cache. This means that such a
      flush or clean operation could result in the data being pushed out to the system
      cache rather than main memory. Another CPU could access this data before it
      enables its data cache or MMU. Such accesses could be serviced from the main
      memory instead of the system cache. If the data in the system cache has not yet
      been flushed or evicted to main memory, then there could be a loss of
      coherency. The only mechanism to guarantee that the main memory will be updated
      is to use cache maintenance operations to the PoC by MVA (see section D3.4.11,
      System level caches, of the ARMv8-A Architecture Reference Manual, issue A.g, ARM DDI 0487A.G).
      
      This patch removes the reliance of Trusted Firmware on the flush by set/way
      operation to ensure visibility of data in the main memory. Cache maintenance
      operations by MVA are now used instead. The following are the broad category of
      changes:
      
      1. The RW areas of BL2/BL31/BL32 are invalidated by MVA before the C runtime is
         initialised. This ensures that any stale cache lines at any level of cache
         are removed.
      
      2. Updates to global data in runtime firmware (BL31) by the primary CPU are made
         visible to secondary CPUs using a cache clean operation by MVA.
      
      3. Cache maintenance operations by set/way are only used prior to power down.
      
      NOTE: NON-UPSTREAM TRUSTED FIRMWARE CODE SHOULD MAKE EQUIVALENT CHANGES IN
      ORDER TO FUNCTION CORRECTLY ON PLATFORMS WITH SUPPORT FOR SYSTEM CACHES.
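
      A hedged sketch illustrating point 2 above, assuming a flush_dcache_range()
      style helper (the call site is invented, not part of this commit):

          #include <stddef.h>
          #include <stdint.h>

          /* Assumed prototype of a clean/invalidate by MVA helper to the PoC. */
          extern void flush_dcache_range(uintptr_t addr, size_t size);

          static uint32_t cpu_boot_flag;   /* illustrative BL31 global */

          /* Publish an update so CPUs with caches/MMU still disabled observe it. */
          void publish_boot_flag(uint32_t value)
          {
                  cpu_boot_flag = value;
                  flush_dcache_range((uintptr_t)&cpu_boot_flag, sizeof(cpu_boot_flag));
          }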
      
      Fixes ARM-software/tf-issues#205
      
      Change-Id: I64f1b398de0432813a0e0881d70f8337681f6e9a
  18. 13 Aug, 2015 1 commit
    • PSCI: Migrate TF to the new platform API and CM helpers · 85a181ce
      Soby Mathew authored
      This patch migrates the rest of Trusted Firmware, excluding the Secure Payload and
      the dispatchers, to the new platform and context management API. The per-cpu
      data framework APIs which took MPIDRs as their arguments are deleted, and only
      the ones which take the core index as a parameter are retained.
      
      Change-Id: I839d05ad995df34d2163a1cfed6baa768a5a595d
  19. 08 Apr, 2015 1 commit
    • Add support to indicate size and end of assembly functions · 8b779620
      Kévin Petit authored
      
      
      In order for the symbol table in the ELF file to contain the size of
      functions written in assembly, it is necessary to report it to the
      assembler using the .size directive.
      
      To fulfil the above requirement, this patch introduces an 'endfunc'
      macro which contains the .endfunc and .size directives. It also adds
      a .func directive to the 'func' assembler macro.

      The .func/.endfunc directives have been used so that the assembler can
      fail if endfunc is omitted.
      
      Fixes ARM-Software/tf-issues#295
      
      Change-Id: If8cb331b03d7f38fe7e3694d4de26f1075b278fc
      Signed-off-by: Kévin Petit <kevin.petit@arm.com>
  20. 22 Jan, 2015 1 commit
    • Remove coherent memory from the BL memory maps · ab8707e6
      Soby Mathew authored
      This patch extends the build option `USE_COHERENT_MEMORY` to
      conditionally remove coherent memory from the memory maps of
      all boot loader stages. The patch also adds necessary
      documentation for coherent memory removal in firmware-design,
      porting and user guides.
      
      Fixes ARM-Software/tf-issues#106
      
      Change-Id: I260e8768c6a5c2efc402f5804a80657d8ce38773
  21. 15 Aug, 2014 1 commit
    • Unmask SError interrupt and clear SCR_EL3.EA bit · 0c8d4fef
      Achin Gupta authored
      This patch disables routing of external aborts from lower exception levels to
      EL3 and ensures that an SError interrupt generated as a result of execution in
      EL3 is taken locally instead of at a lower exception level.

      The SError interrupt is enabled in the TSP code only when the operation has not
      been directly initiated by the normal world. This is to prevent the possibility
      of an asynchronous external abort which originated in the normal world from being
      taken while execution is in S-EL1.
      
      Fixes ARM-software/tf-issues#153
      
      Change-Id: I157b996c75996d12fd86d27e98bc73dd8bce6cd5
  22. 01 Aug, 2014 1 commit
    • Call platform_is_primary_cpu() only from reset handler · 53fdcebd
      Juan Castillo authored
      The purpose of platform_is_primary_cpu() is to determine after reset
      (BL1 or BL3-1 with reset handler) if the current CPU must follow the
      cold boot path (primary CPU), or wait in a safe state (secondary CPU)
      until the primary CPU has finished the system initialization.
      
      This patch removes redundant calls to platform_is_primary_cpu() in
      subsequent bootloader entrypoints since the reset handler already
      guarantees that code is executed exclusively on the primary CPU.
      
      Additionally, this patch removes the weak definition of
      platform_is_primary_cpu(), so the implementation of this function
      becomes mandatory. Removing the weak symbol avoids other
      bootloaders accidentally picking up an invalid definition in case the
      porting layer makes the real function available only to BL1.
      
      The define PRIMARY_CPU is no longer mandatory in the platform port
      because platform_is_primary_cpu() hides the implementation details
      (for instance, there may be platforms that report the primary CPU in
      a system register). The primary CPU definition in FVP has been moved
      to fvp_def.h.
      
      The porting guide has been updated accordingly.
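
      A hedged sketch of what a mandatory implementation might look like on an
      FVP-like platform (the signature, mask and primary CPU value are assumptions):

          #define MPIDR_AFFINITY_MASK     0xffffffUL   /* Aff2..Aff0, illustrative */
          #define FVP_PRIMARY_CPU         0x0UL        /* assumed fvp_def.h value */

          /* Return a non-zero value if the calling CPU is the primary CPU. */
          unsigned int platform_is_primary_cpu(unsigned long mpidr)
          {
                  return (mpidr & MPIDR_AFFINITY_MASK) == FVP_PRIMARY_CPU;
          }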
      
      Fixes ARM-software/tf-issues#219
      
      Change-Id: If675a1de8e8d25122b7fef147cb238d939f90b5e
  23. 28 Jul, 2014 1 commit
    • Simplify management of SCTLR_EL3 and SCTLR_EL1 · ec3c1003
      Achin Gupta authored
      This patch reworks the manner in which the M, A, C, SA, I, WXN & EE bits of
      SCTLR_EL3 & SCTLR_EL1 are managed. The EE bit is cleared immediately after reset
      in EL3. The I, A and SA bits are set next in EL3 and immediately upon entry in
      S-EL1. These bits are no longer managed in the blX_arch_setup() functions. They
      do not have to be saved and restored either. The M, WXN and optionally the C
      bit are set in the enable_mmu_elX() function. This is done during both the warm
      and cold boot paths.
      
      Fixes ARM-software/tf-issues#226
      
      Change-Id: Ie894d1a07b8697c116960d858cd138c50bc7a069
  24. 19 Jul, 2014 1 commit
    • Remove coherent stack usage from the cold boot path · 754a2b7a
      Achin Gupta authored
      This patch reworks the cold boot path across the BL1, BL2, BL3-1 and BL3-2 boot
      loader stages to not use stacks allocated in coherent memory for early platform
      setup and enabling the MMU. Stacks allocated in normal memory are used instead.
      
      Attributes for stack memory change from nGnRnE when the MMU is disabled to
      Normal WBWA Inner-shareable when the MMU and data cache are enabled. It is
      possible for the CPU to read stale stack memory, after the MMU is enabled, from
      another CPU's cache. Hence, it is unsafe to turn on the MMU and data cache while
      using normal stacks when multiple CPUs are part of the same coherency
      domain. It is safe to do so in the cold boot path as only the primary CPU
      executes it. The secondary CPUs are in a quiescent state.
      
      This patch does not remove the allocation of coherent stack memory. That is done
      in a subsequent patch.
      
      Change-Id: I12c80b7c7ab23506d425c5b3a8a7de693498f830
  25. 22 May, 2014 1 commit
    • Rework handover interface between BL stages · 29fb905d
      Vikram Kanigiri authored
      This patch reworks the handover interface from: BL1 to BL2 and
      BL2 to BL3-1. It removes the raise_el(), change_el(), drop_el()
      and run_image() functions as they catered for code paths that were
      never exercised.
      BL1 calls bl1_run_bl2() to jump into BL2 instead of doing the same
      by calling run_image(). Similarly, BL2 issues the SMC to transfer
      execution to BL3-1 through BL1 directly. Only x0 and x1 are used
      to pass arguments to BL31. These arguments and the parameters for
      running BL3-1 are passed through a reference to an
      'el_change_info_t' structure. They were previously passed by value in
      general-purpose registers.
      
      Change-Id: Id4fd019a19a9595de063766d4a66295a2c9307e1
  26. 07 May, 2014 2 commits
    • Access system registers directly in assembler · 7935d0a5
      Andrew Thoelke authored
      Instead of using the system register helper functions to read
      or write system registers, assembler coded functions should
      use MRS/MSR instructions. This results in faster and more
      compact code.
      
      This change replaces all usage of the helper functions with
      direct register accesses.
      
      Change-Id: I791d5f11f257010bb3e6a72c6c5ab8779f1982b3
    • Correct usage of data and instruction barriers · 8cec598b
      Andrew Thoelke authored
      The current code does not always use data and instruction
      barriers as required by the architecture and frequently uses
      barriers excessively due to their inclusion in all of the
      write_*() helper functions.
      
      Barriers should be used explicitly in assembler or C code
      when modifying processor state that requires them, so that the
      correctness of the code can be reviewed.
      
      This patch removes the barriers from the helper functions and
      introduces them as necessary elsewhere in the code.
      
      PORTING NOTE: check any port of Trusted Firmware for use of
      system register helper functions for reliance on the previous
      barrier behaviour and add explicit barriers as necessary.
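
      A hedged C illustration of the new convention, with the barrier written
      explicitly at the call site (prototypes mirror TF-A naming; the call site is
      invented):

          #include <stdint.h>

          /* Assumed prototypes in the spirit of TF-A's arch_helpers.h. */
          extern uint64_t read_sctlr_el3(void);
          extern void write_sctlr_el3(uint64_t value);
          extern void isb(void);

          #define SCTLR_I_BIT     (1ULL << 12)   /* instruction cache enable */

          void enable_icache_el3(void)
          {
                  write_sctlr_el3(read_sctlr_el3() | SCTLR_I_BIT);
                  isb();   /* barrier now explicit, no longer hidden in the helper */
          }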
      
      Fixes ARM-software/tf-issues#92
      
      Change-Id: Ie63e187404ff10e0bdcb39292dd9066cb84c53bf
  27. 06 May, 2014 1 commit
    • Reduce deep nesting of header files · 97043ac9
      Dan Handley authored
      Reduce the number of header files included from other header
      files as much as possible without splitting the files. Use forward
      declarations where possible. This allows removal of some unnecessary
      "#ifndef __ASSEMBLY__" statements.
      
      Also, review the .c and .S files for which header files really need
      including and reorder the #include statements alphabetically.
      
      Fixes ARM-software/tf-issues#31
      
      Change-Id: Iec92fb976334c77453e010b60bcf56f3be72bd3e
  28. 26 Mar, 2014 1 commit
    • Place assembler functions in separate sections · 0a30cf54
      Andrew Thoelke authored
      This extends the --gc-sections behaviour to the many assembler
      support functions in the firmware images by placing each function
      into its own code section. This is achieved by creating a 'func'
      macro used to declare each function label.
      
      Fixes ARM-software/tf-issues#80
      
      Change-Id: I301937b630add292d2dec6d2561a7fcfa6fec690
  29. 17 Jan, 2014 2 commits
    • Change comments in assembler files to help ctags · 3a4cae05
      Jeenu Viswambharan authored
      Ctags seems to have a problem generating tags for assembler symbols
      when a comment immediately follows an assembly label.
      
      This patch inserts a single space character between the label
      definition and the following comments to help ctags.
      
      The patch is generated by the command:
      
        git ls-files -- \*.S | xargs sed -i 's/^\([^:]\+\):;/\1: ;/1'
      
      Change-Id: If7a3c9d0f51207ea033cc8b8e1b34acaa0926475
    • Update year in copyright text to 2014 · e83b0cad
      Dan Handley authored
      Change-Id: Ic7fb61aabae1d515b9e6baf3dd003807ff42da60
  30. 12 Dec, 2013 1 commit
  31. 05 Dec, 2013 3 commits
    • Enable third party contributions · ab2d31ed
      Dan Handley authored
      - Add instructions for contributing to ARM Trusted Firmware.
      
      - Update copyright text in all files to acknowledge contributors.
      
      Change-Id: I9311aac81b00c6c167d2f8c889aea403b84450e5
    • Properly initialise the C runtime environment · 65f546a1
      Sandrine Bailleux authored
      This patch makes sure the C runtime environment is properly
      initialised before executing any C code.
      
        - Zero-initialise NOBITS sections (e.g. the bss section).
        - Relocate BL1 data from ROM to RAM.
      
      Change-Id: I0da81b417b2f0d1f7ef667cc5131b1e47e22571f
    • Various improvements/cleanups on the linker scripts · 8d69a03f
      Sandrine Bailleux authored
        - Check at link-time that bootloader images will fit in memory
          at run time and that they won't overlap each other.
        - Remove text and rodata orphan sections.
        - Define new linker symbols to remove the need for platform setup
          code to know the order of sections.
        - Reduce the size of the raw binary images by cutting some sections
          out of the disk image and allocating them at load time, whenever
          possible.
        - Rework alignment constraints on sections.
        - Remove unused linker symbols.
        - Homogenize linker symbol names across all BLs.
        - Add some comments in the linker scripts.
      
      Change-Id: I47a328af0ccc7c8ab47fcc0dc6e7dd26160610b9
  32. 27 Nov, 2013 1 commit
  33. 25 Oct, 2013 1 commit