1. 21 Apr, 2021 1 commit
    • Yann Gautier
      Avoid the use of linker *_SIZE__ macros · fb4f511f
      Yann Gautier authored
      The use of end addresses is preferred over the size of sections.
      This was done for some AARCH64 files for PIE with commit [1],
      and some extra explanations can be found in its commit message.
      Align the missing AARCH64 files.
      
      For AARCH32 files, this is required to prepare for the introduction of PIE support.
      
       [1] f1722b69 ("PIE: Use PC relative adrp/adr for symbol reference")
      
      Change-Id: I8f1c06580182b10c680310850f72904e58a54d7d
      Signed-off-by: Yann Gautier <yann.gautier@st.com>
  2. 13 Nov, 2020 1 commit
    • Alexei Fedorov
      TSP: Fix GCC 11.0.0 compilation error. · caff3c87
      Alexei Fedorov authored
      
      
      This patch fixes the following compilation error
      reported by aarch64-none-elf-gcc 11.0.0:
      
      bl32/tsp/tsp_main.c: In function 'tsp_smc_handler':
      bl32/tsp/tsp_main.c:393:9: error: 'tsp_get_magic'
       accessing 32 bytes in a region of size 16
       [-Werror=stringop-overflow=]
        393 |         tsp_get_magic(service_args);
            |         ^~~~~~~~~~~~~~~~~~~~~~~~~~~
      bl32/tsp/tsp_main.c:393:9: note: referencing argument 1
       of type 'uint64_t *' {aka 'long long unsigned int *'}
      In file included from bl32/tsp/tsp_main.c:19:
      bl32/tsp/tsp_private.h:64:6: note: in a call to function 'tsp_get_magic'
         64 | void tsp_get_magic(uint64_t args[4]);
            |      ^~~~~~~~~~~~~
      
      by changing the declaration of the tsp_get_magic function from
      void tsp_get_magic(uint64_t args[4]);
      to
      uint128_t tsp_get_magic(void);
      which returns the arguments directly in registers x0 and x1.
      
      In bl32/tsp/tsp_main.c the current tsp_smc_handler()
      implementation calls tsp_get_magic(service_args),
      where the service_args array is declared as
      uint64_t service_args[2];
      and tsp_get_magic() in bl32/tsp/aarch64/tsp_request.S
      copies only 2 registers into the output buffer:
      	/* Store returned arguments to the array */
      	stp	x0, x1, [x4, #0]
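
      As a rough sketch (not part of the patch), the fixed call site then looks
      like this; the uint128_t typedef is assumed to map to the compiler's
      __uint128_t, and the example() wrapper is illustrative:

          #include <stdint.h>

          typedef __uint128_t uint128_t;      /* as used by the new declaration */

          uint128_t tsp_get_magic(void);      /* magic returned in x0/x1 */

          static void example(void)
          {
              uint64_t service_args[2];
              uint128_t magic = tsp_get_magic();

              service_args[0] = (uint64_t)magic;          /* low half  (x0) */
              service_args[1] = (uint64_t)(magic >> 64);  /* high half (x1) */
          }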
      
      Change-Id: Ib34759fc5d7bb803e6c734540d91ea278270b330
      Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
  3. 12 Oct, 2020 1 commit
    • Jimmy Brisson
      Increase type widths to satisfy width requirements · d7b5f408
      Jimmy Brisson authored
      
      
      Usually, C has no problem up-converting types to larger bit sizes. MISRA
      rule 10.7 requires that you not do this, or be very explicit about this.
      This resolves the following required rule:
      
          bl1/aarch64/bl1_context_mgmt.c:81:[MISRA C-2012 Rule 10.7 (required)]<None>
          The width of the composite expression "0U | ((mode & 3U) << 2U) | 1U |
          0x3c0U" (32 bits) is less than the right hand operand
          "18446744073709547519ULL" (64 bits).
      
      This also resolves MISRA defects such as:
      
          bl2/aarch64/bl2arch_setup.c:18:[MISRA C-2012 Rule 12.2 (required)]
          In the expression "3U << 20", shifting more than 7 bits, the number
          of bits in the essential type of the left expression, "3U", is
          not allowed.
      
      Further, MISRA requires that all shifts don't overflow. The definition of
      PAGE_SIZE was (1U << 12), and the essential type of 1U is 8 bits. This
      caused about 50 issues. This patch fixes the violation by changing the
      definition to 1UL << 12. Since this uses 32 bits, it should not create
      any issues for AArch32.
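
      A condensed illustration of that change (the _OLD/_NEW names are for
      contrast only, not from the patch):

          /* Before: the essential type of 1U is 8 bits under MISRA, so a
           * shift by 12 overflows it. */
          #define PAGE_SIZE_OLD (1U << 12)

          /* After: unsigned long keeps the shift within the essential type
           * on both AArch64 and AArch32. */
          #define PAGE_SIZE_NEW (1UL << 12)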
      
      This patch also contains a fix for a build failure in the sun50i_a64
      platform. Specifically, these MISRA fixes removed a single `and`
      instruction,

          92407e73        and     x19, x19, #0xffffffff

      from the cm_setup_context function, which caused a relocation in
      psci_cpus_on_start to require a linker-generated stub. This increased the
      size of the .text section and caused an alignment later on to go over a
      page boundary and round up to the end of RAM before placing the .data
      section. This section is of non-zero size and therefore causes a link
      error.
      
      The fix included in this patch reorders the functions at link time
      without changing their ordering with respect to alignment.
      
      Change-Id: I76b4b662c3d262296728a8b9aab7a33b02087f16
      Signed-off-by: Jimmy Brisson <jimmy.brisson@arm.com>
  4. 24 Jan, 2020 1 commit
  5. 22 Jan, 2020 1 commit
  6. 13 Sep, 2019 1 commit
    • Alexei Fedorov
      Refactor ARMv8.3 Pointer Authentication support code · ed108b56
      Alexei Fedorov authored
      
      
      This patch provides the following features and makes modifications
      listed below:
      - Individual APIAKey key generation for each CPU.
      - New key generation on every BL31 warm boot and TSP CPU On event.
      - Per-CPU storage of APIAKey added in percpu_data[]
        of cpu_data structure.
      - `plat_init_apiakey()` function replaced with `plat_init_apkey()`
        which returns a 128-bit value and uses the Generic Timer physical
        counter value to increase the randomness of the generated key.
        The new function can be used for generation of all ARMv8.3-PAuth
        keys (see the sketch after this list).
      - ARMv8.3-PAuth specific code placed in `lib/extensions/pauth`.
      - New `pauth_init_enable_el1()` and `pauth_init_enable_el3()` functions
        generate, program and enable APIAKey_EL1 for EL1 and EL3 respectively;
        `pauth_disable_el1()` and `pauth_disable_el3()` functions disable
        PAuth for EL1 and EL3 respectively;
        `pauth_load_bl31_apiakey()` loads saved per-CPU APIAKey_EL1 from
        cpu-data structure.
      - Combined `save_gp_pauth_registers()` function replaces calls to
        `save_gp_registers()` and `pauth_context_save()`;
        `restore_gp_pauth_registers()` replaces `pauth_context_restore()`
        and `restore_gp_registers()` calls.
      - `restore_gp_registers_eret()` function removed with corresponding
        code placed in `el3_exit()`.
      - Fixed the issue where the `pauth_t pauth_ctx` structure allocated
        space for 12 uint64_t PAuth registers instead of 10, by removing the
        macro CTX_PACGAKEY_END from `include/lib/el3_runtime/aarch64/context.h`
        and assigning its value to CTX_PAUTH_REGS_END.
      - Use of MODE_SP_ELX and MODE_SP_EL0 macro definitions
        in the `msr spsel` instruction instead of hard-coded values.
      - Changes in documentation related to ARMv8.3-PAuth and ARMv8.5-BTI.
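
      A minimal sketch of the per-CPU key generation idea referenced in the
      list above; the counter read matches the description, while the mixing
      step is illustrative rather than the patch's actual derivation:

          #include <stdint.h>

          typedef __uint128_t uint128_t;

          static inline uint64_t read_cntpct_el0(void)
          {
              uint64_t v;
              __asm__ volatile("mrs %0, cntpct_el0" : "=r"(v));
              return v;               /* Generic Timer physical counter */
          }

          uint128_t plat_init_apkey(void)
          {
              uint64_t seed = read_cntpct_el0();  /* per-boot variation */

              /* Illustrative mixing only; a real platform should use a
               * proper entropy source. */
              return ((uint128_t)(seed * 0x9e3779b97f4a7c15ULL) << 64) | seed;
          }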
      
      Change-Id: Id18b81cc46f52a783a7e6a09b9f149b6ce803211
      Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
  7. 24 May, 2019 1 commit
    • Alexei Fedorov
      Add support for Branch Target Identification · 9fc59639
      Alexei Fedorov authored
      
      
      This patch adds the functionality needed for platforms to provide
      the Branch Target Identification (BTI) extension, introduced to AArch64
      in Armv8.5-A by the addition of the BTI instruction, which is used to
      mark valid targets for indirect branches. The patch sets the new GP bit
      [50] in stage 1 Translation Table Block and Page entries to denote
      guarded EL3 code pages, which will cause the processor to trap
      instructions in protected pages that try to perform an indirect branch
      to any instruction other than BTI.
      The BTI feature is selected by the BRANCH_PROTECTION option, which
      supersedes the previous ENABLE_PAUTH option used for Armv8.3-A Pointer
      Authentication and is disabled by default. Enabling BTI requires
      compiler support and was tested with GCC versions 9.0.0, 9.0.1 and 10.0.0.
      The assembly macros and helpers are modified to accommodate the BTI
      instruction.
      This is an experimental feature.
      Note. The previous ENABLE_PAUTH build option to enable PAuth in EL3
      is now an internal flag and the BRANCH_PROTECTION flag should be
      used instead to enable Pointer Authentication.
      Note. USE_LIBROM=1 option is currently not supported.
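
      As a rough illustration of the descriptor change described above (the
      macro and function names are assumed, not from the patch):

          #include <stdint.h>

          #define GP_BIT (1ULL << 50)   /* Guarded Page bit, stage 1 descriptors */

          static uint64_t mark_guarded(uint64_t desc)
          {
              /* Indirect branches into this page must land on a BTI
               * instruction, or the processor traps. */
              return desc | GP_BIT;
          }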
      
      Change-Id: Ifaf4438609b16647dc79468b70cd1f47a623362e
      Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
  8. 12 Mar, 2019 1 commit
    • John Tsichritzis
      Apply stricter speculative load restriction · 02b57943
      John Tsichritzis authored
      
      
      The SCTLR.DSSBS bit is zero by default, thus disabling speculative loads.
      However, we also explicitly set it to zero for BL2 and TSP images when
      each image initialises its context. This is done to ensure that the
      image environment is initialised in a safe state, regardless of the
      reset value of the bit.
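
      A minimal sketch of that explicit initialisation, assuming the
      architectural bit position (DSSBS is SCTLR_ELx bit [44]); the helper
      name is illustrative:

          #include <stdint.h>

          #define SCTLR_DSSBS_BIT (1ULL << 44)   /* SCTLR_ELx.DSSBS */

          static uint64_t sanitise_sctlr(uint64_t sctlr)
          {
              /* Keep speculative loads disabled regardless of the reset
               * value of the bit. */
              return sctlr & ~SCTLR_DSSBS_BIT;
          }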
      
      Change-Id: If25a8396641edb640f7f298b8d3309d5cba3cd79
      Signed-off-by: John Tsichritzis <john.tsichritzis@arm.com>
  9. 27 Feb, 2019 1 commit
    • Antonio Nino Diaz
      TSP: Enable pointer authentication support · 67b6ff9f
      Antonio Nino Diaz authored
      
      
      The size increase after enabling options related to ARMv8.3-PAuth is:
      
      +----------------------------+-------+-------+-------+--------+
      |                            |  text |  bss  |  data | rodata |
      +----------------------------+-------+-------+-------+--------+
      | CTX_INCLUDE_PAUTH_REGS = 1 |   +40 |   +0  |   +0  |   +0   |
      |                            |  0.4% |       |       |        |
      +----------------------------+-------+-------+-------+--------+
      | ENABLE_PAUTH = 1           |  +352 |    +0 |  +16  |   +0   |
      |                            |  3.1% |       | 15.8% |        |
      +----------------------------+-------+-------+-------+--------+
      
      Results calculated with the following build configuration:
      
          make PLAT=fvp SPD=tspd DEBUG=1 \
          SDEI_SUPPORT=1                 \
          EL3_EXCEPTION_HANDLING=1       \
          TSP_NS_INTR_ASYNC_PREEMPT=1    \
          CTX_INCLUDE_PAUTH_REGS=1       \
          ENABLE_PAUTH=1
      
      Change-Id: I6cc1fe0b2345c547dcef66f98758c4eb55fe5ee4
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
  10. 04 Jan, 2019 1 commit
    • Antonio Nino Diaz
      Sanitise includes across codebase · 09d40e0e
      Antonio Nino Diaz authored
      Enforce full include path for includes. Deprecate old paths.
      
      The following folders inside include/lib have been left unchanged:
      
      - include/lib/cpus/${ARCH}
      - include/lib/el3_runtime/${ARCH}
      
      The reason for this change is that having a global namespace for
      includes isn't a good idea. It defeats one of the advantages of having
      folders and it introduces problems that are sometimes subtle (because
      you may not know which header you are actually including if there are
      two of them).
      
      For example, this patch had to be created because two headers were
      called the same way: e0ea0928 ("Fix gpio includes of mt8173 platform
      to avoid collision."). More recently, this patch has had similar
      problems: 46f9b2c3 ("drivers: add tzc380 support").
      
      This problem was introduced in commit 4ecca339 ("Move include and
      source files to logical locations"). At that time, there weren't too
      many headers so it wasn't a real issue. However, time has shown that
      this creates problems.
      
      Platforms that want to preserve the way they include headers may add the
      removed paths to PLAT_INCLUDES, but this is discouraged.
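
      For illustration, the enforced style looks like this (the header names
      are examples):

          /* Full path relative to the top-level include/ directory: */
          #include <lib/utils.h>
          #include <drivers/console.h>

          /* Deprecated bare-name form that relied on a global namespace: */
          /* #include <utils.h> */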
      
      Change-Id: I39dc53ed98f9e297a5966e723d1936d6ccf2fc8f
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
  11. 11 Jul, 2018 1 commit
    • Roberto Vargas
      Add end_vector_entry assembler macro · a9203eda
      Roberto Vargas authored
      
      
      The check_vector_size macro checks whether the size of a vector fits
      in the space reserved for it. This check creates problems for
      the Clang assembler. A new macro, end_vector_entry, is added
      and check_vector_size is deprecated.

      This new macro fills the current exception vector up to the next
      exception vector. If the size of the current vector is bigger
      than 32 instructions then it gives an error.
      
      Change-Id: Ie8545cf1003a1e31656a1018dd6b4c28a4eaf671
      Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
  12. 27 Jun, 2018 1 commit
  13. 21 Aug, 2017 1 commit
    • Julius Werner
      Fix x30 reporting for unhandled exceptions · 4d91838b
      Julius Werner authored
      
      
      Some error paths that lead to a crash dump will overwrite the value in
      the x30 register by calling functions with the no_ret macro, which
      resolves to a BL instruction. This is not very useful and not what the
      reader would expect, since a crash dump should usually show all
      registers in the state they were in when the exception happened. This
      patch replaces the offending function calls with a B instruction to
      preserve the value in x30.
      
      Change-Id: I2a3636f2943f79bab0cd911f89d070012e697c2a
      Signed-off-by: Julius Werner <jwerner@chromium.org>
  14. 15 Aug, 2017 1 commit
    • Julius Werner
      Add new alignment parameter to func assembler macro · 64726e6d
      Julius Werner authored
      
      
      Assembler programmers are used to being able to define functions with a
      specific alignment with a pattern like this:
      
          .align X
        myfunction:
      
      However, this pattern is subtly broken when instead of a direct label
      like 'myfunction:', you use the 'func myfunction' macro that's standard
      in Trusted Firmware. Since the func macro declares a new section for the
      function, the .align directive written above it actually applies to the
      *previous* section in the assembly file, and the function it was
      supposed to apply to is linked with default alignment.
      
      An extreme case can be seen in Rockchip's plat_helpers.S which contains
      this code:
      
        [...]
        endfunc plat_crash_console_putc
      
        .align 16
        func platform_cpu_warmboot
        [...]
      
      This assembles into the following plat_helpers.o:
      
        Sections:
        Idx Name                             Size  [...]  Algn
         9 .text.plat_crash_console_putc 00010000  [...]  2**16
        10 .text.platform_cpu_warmboot   00000080  [...]  2**3
      
      As can be seen, the *previous* function actually got the alignment
      constraint, and it is also 64KB big even though it contains only two
      instructions, because the .align directive at the end of its section
      forces the assembler to insert a giant sled of NOPs. The function we
      actually wanted to align has the default constraint. This code only
      works at all because the linker just happens to put the two functions
      right behind each other when linking the final image, and since the end
      of plat_crash_console_putc is aligned the start of platform_cpu_warmboot
      will also be. But it still wastes almost 64KB of image space
      unnecessarily, and it will break under certain circumstances (e.g. if
      the plat_crash_console_putc function becomes unused and its section gets
      garbage-collected out).
      
      There's no real way to fix this with the existing func macro. Code like
      
       func myfunc
       .align X
      
      happens to do the right thing, but is still not really correct code
      (because the function label is inserted before the .align directive, so
      the assembler is technically allowed to insert padding at the beginning
      of the function which would then get executed as instructions if the
      function was called). Therefore, this patch adds a new parameter with a
      default value to the func macro that allows overriding its alignment.
      
      Also fix up all existing instances of this dangerous antipattern.
      
      Change-Id: I5696a07e2fde896f21e0e83644c95b7b6ac79a10
      Signed-off-by: Julius Werner <jwerner@chromium.org>
  15. 03 May, 2017 1 commit
  16. 26 Apr, 2017 1 commit
  17. 31 Mar, 2017 1 commit
    • Douglas Raillard
      Add support for GCC stack protection · 51faada7
      Douglas Raillard authored
      
      
      Introduce new build option ENABLE_STACK_PROTECTOR. It enables
      compilation of all BL images with one of the GCC -fstack-protector-*
      options.
      
      A new platform function plat_get_stack_protector_canary() is introduced.
      It returns a value that is used to initialize the canary for stack
      corruption detection. Returning a random value will prevent an attacker
      from predicting the value and greatly increase the effectiveness of the
      protection.
      
      A message is printed at the ERROR level when a stack corruption is
      detected.
      
      To be effective, the global data must be stored at an address
      lower than the base of the stacks. Failure to do so would allow an
      attacker to overwrite the canary as part of an attack which would void
      the protection.
      
      The FVP implementation of plat_get_stack_protector_canary() is weak as
      there is no real source of entropy on the FVP. It therefore relies on a
      timer's value, which could be predictable.
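
      A minimal sketch of the new porting hook, assuming a u_register_t-style
      return type; the timer fallback mirrors the FVP approach described
      above and is predictable by design:

          #include <stdint.h>

          typedef uintptr_t u_register_t;   /* stand-in for the TF type */

          static inline uint64_t timer_value(void)
          {
              uint64_t v;
              __asm__ volatile("mrs %0, cntpct_el0" : "=r"(v));
              return v;
          }

          u_register_t plat_get_stack_protector_canary(void)
          {
              /* Real platforms should return a value from a hardware
               * entropy source instead. */
              return (u_register_t)timer_value();
          }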
      
      Change-Id: Icaaee96392733b721fa7c86a81d03660d3c1bc06
      Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
  18. 08 Mar, 2017 1 commit
  19. 06 Feb, 2017 1 commit
    • Douglas Raillard
      Introduce unified API to zero memory · 308d359b
      Douglas Raillard authored
      
      
      Introduce zeromem_dczva function on AArch64 that can handle unaligned
      addresses and make use of DC ZVA instruction to zero a whole block at a
      time. This zeroing takes place directly in the cache to speed it up
      without doing external memory access.
      
      Remove the zeromem16 function on AArch64 and replace it with an alias to
      zeromem. This zeromem16 function is now deprecated.
      
      Remove the 16-byte alignment constraint on __BSS_START__ in
      firmware-design.md as it is no longer mandatory (it used to comply
      with zeromem16 requirements).

      Change the 16-byte alignment constraints in SP min's linker script to
      an 8-byte alignment constraint, as the AArch32 zeromem implementation
      is now more efficient on 8-byte aligned addresses.
      
      Introduce zero_normalmem and zeromem helpers in a platform-agnostic
      header that are implemented this way:
      * AArch32:
      	* zero_normalmem: zero using usual data access
      	* zeromem: alias for zero_normalmem
      * AArch64:
      	* zero_normalmem: zero normal memory using DC ZVA instruction
      	                  (needs MMU enabled)
      	* zeromem: zero using usual data access
      
      Usage guidelines: in most cases, zero_normalmem should be preferred.
      
      There are 2 scenarios where zeromem (or memset) must be used instead:
      * Code that must run with MMU disabled (which means all memory is
        considered device memory for data accesses).
      * Code that fills device memory with null bytes.
      
      Optionally, the following rule can be applied if performance is
      important:
      * Code zeroing small areas (few bytes) that are not secrets should use
        memset to take advantage of compiler optimizations.
      
        Note: Code zeroing security-related critical information should use
        zero_normalmem/zeromem instead of memset to avoid removal by
        compilers' optimizations in some cases or misbehaving versions of GCC.
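
      A usage sketch of these guidelines (buffer names are illustrative;
      the helper signatures are assumed to take a pointer and a length):

          #include <stddef.h>
          #include <string.h>

          extern void zero_normalmem(void *mem, size_t length);
          extern void zeromem(void *mem, size_t length);

          void wipe_examples(void *key, size_t key_len,
                             void *dev_buf, size_t dev_len,
                             char *scratch, size_t n)
          {
              zero_normalmem(key, key_len); /* secret, normal memory, MMU on */
              zeromem(dev_buf, dev_len);    /* device memory or MMU disabled */
              memset(scratch, 0, n);        /* small non-secret area: let the
                                               compiler optimize */
          }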
      
      Fixes ARM-software/tf-issues#408
      
      Change-Id: Iafd9663fc1070413c3e1904e54091cf60effaa82
      Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
  20. 23 Dec, 2016 1 commit
    • Douglas Raillard
      Abort preempted TSP STD SMC after PSCI CPU suspend · 3df6012a
      Douglas Raillard authored
      
      
      Standard SMC requests that are handled in the secure-world by the Secure
      Payload can be preempted by interrupts that must be handled in the
      normal world. When the TSP is preempted the secure context is stored and
      control is passed to the normal world to handle the non-secure
      interrupt. Once completed the preempted secure context is restored. When
      restoring the preempted context, the dispatcher assumes that the TSP
      preempted context is still stored as the SECURE context by the context
      management library.
      
      However, PSCI power management operations cause synchronous entry into
      the TSP. This overwrites the preempted SECURE context in the context
      management library. When restoring back the SECURE context, the Secure
      Payload crashes because this context is not the preempted context
      anymore.
      
      This patch avoids corruption of the preempted SECURE context by aborting
      any preempted SMC during PSCI power management calls. The
      abort_std_smc_entry hook of the TSP is called when aborting the SMC
      request.
      
      It also exposes this feature as a FAST SMC, callable from the normal
      world with FID TSP_FID_ABORT, to abort a preempted SMC.
      
      Change-Id: I7a70347e9293f47d87b5de20484b4ffefb56b770
      Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
  21. 05 Dec, 2016 1 commit
    • Jeenu Viswambharan
      Define and use no_ret macro where no return is expected · a806dad5
      Jeenu Viswambharan authored
      
      
      There are many instances in ARM Trusted Firmware where control is
      transferred to functions from which return isn't expected. Such jumps
      are made using 'bl' instruction to provide the callee with the location
      from which it was jumped to. Additionally, debuggers infer the caller by
      examining where 'lr' register points to. If a 'bl' of the nature
      described above falls at the end of an assembly function, 'lr' will be
      left pointing to a location outside of the function range. This misleads
      the debugger back trace.
      
      This patch defines a 'no_ret' macro to be used when jumping to functions
      from which return isn't expected. The macro ensures the use of the 'bl'
      instruction for the jump and, for debug builds, places a 'nop'
      instruction immediately thereafter (unless instructed otherwise) so as
      to leave 'lr' pointing within the function range.
      
      Change-Id: Ib34c69fc09197cfd57bc06e147cc8252910e01b0
      Co-authored-by: Douglas Raillard <douglas.raillard@arm.com>
      Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
  22. 26 May, 2016 1 commit
    • Sandrine Bailleux
      Introduce some helper macros for exception vectors · e0ae9fab
      Sandrine Bailleux authored
      This patch introduces some assembler macros to simplify the
      declaration of the exception vectors. It abstracts the section
      the exception code is put into as well as the alignment
      constraints mandated by the ARMv8 architecture. For all TF images,
      the exception code has been updated to make use of these macros.
      
      This patch also updates some invalid comments in the exception
      vector code.
      
      Change-Id: I35737b8f1c8c24b6da89b0a954c8152a4096fa95
  23. 14 Mar, 2016 1 commit
    • Antonio Nino Diaz
      Remove all non-configurable dead loops · 1c3ea103
      Antonio Nino Diaz authored
      Added a new platform porting function plat_panic_handler, to allow
      platforms to handle unexpected error situations. It must be
      implemented in assembly as it may be called before the C environment
      is initialized. A default implementation is provided, which simply
      spins.
      
      Corrected all dead loops in generic code to call this function
      instead. This includes the dead loop that occurs at the end of the
      call to panic().
      
      All unnecessary wfis from bl32/tsp/aarch64/tsp_exceptions.S have
      been removed.
      
      Change-Id: I67cb85f6112fa8e77bd62f5718efcef4173d8134
  24. 09 Dec, 2015 1 commit
    • Soby Mathew
      TSP: Allow preemption of synchronous S-EL1 interrupt handling · 63b8440f
      Soby Mathew authored
      Earlier the TSP only ever expected to be preempted during Standard SMC
      processing. If an S-EL1 interrupt triggered while in the normal world, it
      would be routed to S-EL1 `synchronously` for handling. The `synchronous`
      S-EL1 interrupt handler `tsp_sel1_intr_entry` used to panic if this S-EL1
      interrupt was preempted by another higher priority pending interrupt which
      should be handled in EL3, e.g. a Group0 interrupt in GICv3.
      
      With this patch, the `tsp_sel1_intr_entry` now expects `TSP_PREEMPTED` as the
      return code from the `tsp_common_int_handler` in addition to 0 (interrupt
      successfully handled) and in both cases it issues an SMC with id
      `TSP_HANDLED_S_EL1_INTR`. The TSPD switches the context and returns back
      to the normal world. In case a higher priority EL3 interrupt was pending,
      execution will be routed to EL3 where the interrupt will be handled. On
      return back to the normal world, the pending S-EL1 interrupt which was
      preempted will get routed to S-EL1 to be handled `synchronously` via
      `tsp_sel1_intr_entry`.
      
      Change-Id: I2087c7fedb37746fbd9200cdda9b6dba93e16201
  25. 04 Dec, 2015 2 commits
    • Soby Mathew
      Enable use of FIQs and IRQs as TSP interrupts · 02446137
      Soby Mathew authored
      On a GICv2 system, interrupts that should be handled in the secure world are
      typically signalled as FIQs. On a GICv3 system, these interrupts are signalled
      as IRQs instead. The mechanism for handling both types of interrupt is the
      same. This patch enables the TSP to run on a GICv3 system by:
      
      1. adding support for handling IRQs in the exception handling code.
      2. removing use of "fiq" in the names of data structures, macros and functions.
      
      The build option TSPD_ROUTE_IRQ_TO_EL3 is deprecated and is replaced with a
      new build flag TSP_NS_INTR_ASYNC_PREEMPT. For compatibility reasons, if the
      former build flag is defined, it will be used to define the value for the
      new build flag. The documentation is also updated accordingly.
      
      Change-Id: I1807d371f41c3656322dd259340a57649833065e
    • Soby Mathew
      Unify interrupt return paths from TSP into the TSPD · 404dba53
      Soby Mathew authored
      The TSP is expected to pass control back to EL3 if it gets preempted due to
      an interrupt while handling a Standard SMC in the following scenarios:
      
      1. An FIQ preempts Standard SMC execution and that FIQ is not a TSP Secure
         timer interrupt or is preempted by a higher priority interrupt by the time
         the TSP acknowledges it. In this case, the TSP issues an SMC with the ID
         as `TSP_EL3_FIQ`. Currently this case is never expected to happen as only
         the TSP Secure Timer is expected to generate an FIQ.
      
      2. An IRQ preempts Standard SMC execution and in this case the TSP issues
         an SMC with the ID as `TSP_PREEMPTED`.
      
      In both cases, the TSPD hands control back to the normal world and returns
      an error code to indicate that the Standard SMC it had issued has been
      preempted but not completed.
      
      This patch unifies the handling of these two cases in the TSPD and ensures that
      the TSP only uses TSP_PREEMPTED instead of separate SMC IDs. Also, instead
      of 2 separate error codes, SMC_PREEMPTED and TSP_EL3_FIQ, only
      SMC_PREEMPTED is returned as the error code back to the normal world.
      
      Background information: On a GICv3 system, when the secure world has affinity
      routing enabled, in 2. an FIQ will preempt TSP execution instead of an IRQ. The
      FIQ could be a result of a Group 0 or a Group 1 NS interrupt. In both cases, the
      TSPD passes control back to the normal world upon receipt of the TSP_PREEMPTED
      SMC. A Group 0 interrupt will immediately preempt execution to EL3 where it
      will be handled. This allows for unified interrupt handling in TSP for both
      GICv3 and GICv2 systems.
      
      Change-Id: I9895344db74b188021e3f6a694701ad272fb40d4
  26. 14 Sep, 2015 1 commit
    • Achin Gupta
      Make generic code work in presence of system caches · 54dc71e7
      Achin Gupta authored
      On the ARMv8 architecture, cache maintenance operations by set/way on the last
      level of integrated cache do not affect the system cache. This means that such a
      flush or clean operation could result in the data being pushed out to the system
      cache rather than main memory. Another CPU could access this data before it
      enables its data cache or MMU. Such accesses could be serviced from the main
      memory instead of the system cache. If the data in the system cache has not yet
      been flushed or evicted to main memory then there could be a loss of
      coherency. The only mechanism to guarantee that the main memory will be updated
      is to use cache maintenance operations to the PoC by MVA (see section D3.4.11,
      "System level caches", of the ARMv8-A Reference Manual, Issue A.g/ARM DDI 0487A.G).
      
      This patch removes the reliance of Trusted Firmware on the flush by set/way
      operation to ensure visibility of data in the main memory. Cache maintenance
      operations by MVA are now used instead. The following are the broad category of
      changes:
      
      1. The RW areas of BL2/BL31/BL32 are invalidated by MVA before the C runtime is
         initialised. This ensures that any stale cache lines at any level of cache
         are removed.
      
      2. Updates to global data in runtime firmware (BL31) by the primary CPU are made
         visible to secondary CPUs using a cache clean operation by MVA.
      
      3. Cache maintenance by set/way operations are only used prior to power down.
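
      A sketch of the by-MVA pattern in items 1 and 2, assuming range-based
      helpers of this shape (the addresses and sizes are illustrative):

          #include <stddef.h>
          #include <stdint.h>

          extern void inv_dcache_range(uintptr_t addr, size_t size);
          extern void flush_dcache_range(uintptr_t addr, size_t size);

          void example(uintptr_t rw_base, size_t rw_size,
                       uintptr_t data, size_t len)
          {
              /* 1. Invalidate the image's RW area before the C runtime
               *    is initialised, removing stale lines at any level. */
              inv_dcache_range(rw_base, rw_size);

              /* 2. Clean updated global data so secondary CPUs observe
               *    it in main memory. */
              flush_dcache_range(data, len);
          }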
      
      NOTE: NON-UPSTREAM TRUSTED FIRMWARE CODE SHOULD MAKE EQUIVALENT CHANGES IN
      ORDER TO FUNCTION CORRECTLY ON PLATFORMS WITH SUPPORT FOR SYSTEM CACHES.
      
      Fixes ARM-software/tf-issues#205
      
      Change-Id: I64f1b398de0432813a0e0881d70f8337681f6e9a
  27. 13 Aug, 2015 1 commit
    • Soby Mathew
      PSCI: Migrate SPDs and TSP to the new platform and framework API · fd650ff6
      Soby Mathew authored
      The new PSCI framework mandates that the platform APIs and the various
      frameworks in Trusted Firmware migrate away from MPIDR based core
      identification to one based on core index. Deprecated versions of the old
      APIs are still present to provide compatibility but their implementations
      are not optimal. This patch migrates the various SPDs existing within the
      Trusted Firmware tree and the TSP to the new APIs.
      
      Change-Id: Ifc37e7071c5769b5ded21d0b6a071c8c4cab7836
  28. 08 Apr, 2015 1 commit
    • Kévin Petit
      Add support to indicate size and end of assembly functions · 8b779620
      Kévin Petit authored
      
      
      In order for the symbol table in the ELF file to contain the size of
      functions written in assembly, it is necessary to report it to the
      assembler using the .size directive.
      
      To fulfil the above requirement, this patch introduces an 'endfunc'
      macro which contains the .endfunc and .size directives. It also adds
      a .func directive to the 'func' assembler macro.

      The .func/.endfunc directives have been used so that the assembler
      can fail if endfunc is omitted.
      
      Fixes ARM-Software/tf-issues#295
      
      Change-Id: If8cb331b03d7f38fe7e3694d4de26f1075b278fc
      Signed-off-by: Kévin Petit <kevin.petit@arm.com>
  29. 22 Jan, 2015 1 commit
    • Soby Mathew
      Remove coherent memory from the BL memory maps · ab8707e6
      Soby Mathew authored
      This patch extends the build option `USE_COHERENT_MEMORY` to
      conditionally remove coherent memory from the memory maps of
      all boot loader stages. The patch also adds necessary
      documentation for coherent memory removal in firmware-design,
      porting and user guides.
      
      Fixes ARM-Software/tf-issues#106
      
      Change-Id: I260e8768c6a5c2efc402f5804a80657d8ce38773
  30. 19 Aug, 2014 2 commits
    • Juan Castillo
      Add support for PSCI SYSTEM_OFF and SYSTEM_RESET APIs · d5f13093
      Juan Castillo authored
      This patch adds support for SYSTEM_OFF and SYSTEM_RESET PSCI
      operations. A platform should export handlers to complete the
      requested operation. The FVP port exports fvp_system_off() and
      fvp_system_reset() as an example.
      
      If the SPD provides a power management hook for system off and
      system reset, then the SPD is notified about the corresponding
      operation so it can do some bookkeeping. The TSPD exports
      tspd_system_off() and tspd_system_reset() for that purpose.
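
      As a rough illustration (the ops-structure wiring is assumed, not
      quoted from the patch), the platform handlers have this shape:

          typedef struct plat_pm_ops {
              void (*system_off)(void);
              void (*system_reset)(void);
          } plat_pm_ops_t;

          static void fvp_system_off(void)
          {
              /* Program the VE power controller here; no return expected. */
              for (;;)
                  ;
          }

          static void fvp_system_reset(void)
          {
              /* Trigger a board-level reset here; no return expected. */
              for (;;)
                  ;
          }

          static const plat_pm_ops_t fvp_pm_ops = {
              .system_off   = fvp_system_off,
              .system_reset = fvp_system_reset,
          };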
      
      Versatile Express shutdown and reset methods have been removed
      from the FDT as new PSCI sys_poweroff and sys_reset services
      have been added. For those kernels that do not yet support these
      PSCI services (i.e. the GICv3 kernel), the original dtsi files have
      been renamed to *-no_psci.dtsi.
      
      Fixes ARM-software/tf-issues#218
      
      Change-Id: Ic8a3bf801db979099ab7029162af041c4e8330c8
    • Dan Handley
      Clarify platform porting interface to TSP · 5a06bb7e
      Dan Handley authored
      * Move TSP platform porting functions to new file:
        include/bl32/tsp/platform_tsp.h.
      
      * Create new TSP_IRQ_SEC_PHY_TIMER definition for use by the generic
        TSP interrupt handling code, instead of depending on the FVP
        specific definition IRQ_SEC_PHY_TIMER.
      
      * Rename TSP platform porting functions from bl32_* to tsp_*, and
        definitions from BL32_* to TSP_*.
      
      * Update generic TSP code to use new platform porting function names
        and definitions.
      
      * Update FVP port accordingly and move all TSP source files to:
        plat/fvp/tsp/.
      
      * Update porting guide with above changes.
      
      Note: THIS CHANGE REQUIRES ALL PLATFORM PORTS OF THE TSP TO
            BE UPDATED
      
      Fixes ARM-software/tf-issues#167
      
      Change-Id: Ic0ff8caf72aebb378d378193d2f017599fc6b78f
  31. 15 Aug, 2014 1 commit
    • Achin Gupta
      Unmask SError interrupt and clear SCR_EL3.EA bit · 0c8d4fef
      Achin Gupta authored
      This patch disables routing of external aborts from lower exception levels to
      EL3 and ensures that an SError interrupt generated as a result of execution in
      EL3 is taken locally instead of at a lower exception level.
      
      The SError interrupt is enabled in the TSP code only when the operation has not
      been directly initiated by the normal world. This is to prevent the possibility
      of an asynchronous external abort which originated in the normal world from
      being taken when execution is in S-EL1.
      
      Fixes ARM-software/tf-issues#153
      
      Change-Id: I157b996c75996d12fd86d27e98bc73dd8bce6cd5
  32. 14 Aug, 2014 1 commit
  33. 01 Aug, 2014 1 commit
    • Juan Castillo
      Call platform_is_primary_cpu() only from reset handler · 53fdcebd
      Juan Castillo authored
      The purpose of platform_is_primary_cpu() is to determine after reset
      (BL1 or BL3-1 with reset handler) if the current CPU must follow the
      cold boot path (primary CPU), or wait in a safe state (secondary CPU)
      until the primary CPU has finished the system initialization.
      
      This patch removes redundant calls to platform_is_primary_cpu() in
      subsequent bootloader entrypoints since the reset handler already
      guarantees that code is executed exclusively on the primary CPU.
      
      Additionally, this patch removes the weak definition of
      platform_is_primary_cpu(), so the implementation of this function
      becomes mandatory. Removing the weak symbol avoids other
      bootloaders accidentally picking up an invalid definition in case the
      porting layer makes the real function available only to BL1.
      
      The define PRIMARY_CPU is no longer mandatory in the platform porting
      because platform_is_primary_cpu() hides the implementation details
      (for instance, there may be platforms that report the primary CPU in
      a system register). The primary CPU definition in FVP has been moved
      to fvp_def.h.
      
      The porting guide has been updated accordingly.
      
      Fixes ARM-software/tf-issues#219
      
      Change-Id: If675a1de8e8d25122b7fef147cb238d939f90b5e
  34. 28 Jul, 2014 1 commit
    • Achin Gupta
      Simplify management of SCTLR_EL3 and SCTLR_EL1 · ec3c1003
      Achin Gupta authored
      This patch reworks the manner in which the M, A, C, SA, I, WXN & EE bits of
      SCTLR_EL3 & SCTLR_EL1 are managed. The EE bit is cleared immediately after reset
      in EL3. The I, A and SA bits are set next in EL3 and immediately upon entry in
      S-EL1. These bits are no longer managed in the blX_arch_setup() functions. They
      do not have to be saved and restored either. The M, WXN and optionally the C
      bit are set in the enable_mmu_elX() function. This is done during both the warm
      and cold boot paths.
      
      Fixes ARM-software/tf-issues#226
      
      Change-Id: Ie894d1a07b8697c116960d858cd138c50bc7a069
  35. 19 Jul, 2014 2 commits
    • Achin Gupta
      Remove coherent stack usage from the warm boot path · b51da821
      Achin Gupta authored
      This patch uses stacks allocated in normal memory to enable the MMU early in the
      warm boot path thus removing the dependency on stacks allocated in coherent
      memory. Necessary cache and stack maintenance is performed when a cpu is being
      powered down and up. This avoids any coherency issues that can arise from
      reading speculatively fetched stale stack memory from another CPU's cache. These
      changes affect the warm boot path in both BL3-1 and BL3-2.
      
      The EL3 system registers responsible for preserving the MMU state are not saved
      and restored any longer. Static values are used to program these system
      registers when a cpu is powered on or resumed from suspend.
      
      Change-Id: I8357e2eb5eb6c5f448492c5094b82b8927603784
    • Achin Gupta
      Remove coherent stack usage from the cold boot path · 754a2b7a
      Achin Gupta authored
      This patch reworks the cold boot path across the BL1, BL2, BL3-1 and BL3-2 boot
      loader stages to not use stacks allocated in coherent memory for early platform
      setup and enabling the MMU. Stacks allocated in normal memory are used instead.
      
      Attributes for stack memory change from nGnRnE when the MMU is disabled to
      Normal WBWA Inner-shareable when the MMU and data cache are enabled. It is
      possible for the CPU to read stale stack memory after the MMU is enabled from
      another CPU's cache. Hence, it is unsafe to turn on the MMU and data cache while
      using normal stacks when multiple CPUs are a part of the same coherency
      domain. It is safe to do so in the cold boot path as only the primary cpu
      executes it. The secondary cpus are in a quiescent state.
      
      This patch does not remove the allocation of coherent stack memory. That is done
      in a subsequent patch.
      
      Change-Id: I12c80b7c7ab23506d425c5b3a8a7de693498f830
  36. 23 May, 2014 2 commits
    • Dan Handley
      Add enable mmu platform porting interfaces · dff8e47a
      Dan Handley authored
      Previously, the enable_mmu_elX() functions were implicitly part of
      the platform porting layer since they were included by generic
      code. These functions have been placed behind 2 new platform
      functions, bl31_plat_enable_mmu() and bl32_plat_enable_mmu().
      These are weakly defined so that they can be optionally overridden
      by platform ports.
      
      Also, the enable_mmu_elX() functions have been moved to
      lib/aarch64/xlat_tables.c for optional re-use by platform ports.
      These functions are tightly coupled with the translation table
      initialization code.
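
      A sketch of the weak-hook pattern described above; enable_mmu_el1/el3
      match the existing function names, while the default bodies shown are
      illustrative:

          #include <stdint.h>

          extern void enable_mmu_el1(uint32_t flags);
          extern void enable_mmu_el3(uint32_t flags);

          #pragma weak bl31_plat_enable_mmu
          #pragma weak bl32_plat_enable_mmu

          void bl31_plat_enable_mmu(uint32_t flags)
          {
              enable_mmu_el3(flags);   /* BL3-1 runs at EL3 */
          }

          void bl32_plat_enable_mmu(uint32_t flags)
          {
              enable_mmu_el1(flags);   /* BL3-2 runs at S-EL1 */
          }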
      
      Fixes ARM-software/tf-issues#152
      
      Change-Id: I0a2251ce76acfa3c27541f832a9efaa49135cc1c
    • Andrew Thoelke
      Use a vector table for TSP entrypoints · 399fb08f
      Andrew Thoelke authored
      The TSP has a number of entrypoints used by the TSPD on different
      occasions. These were provided to the TSPD as a table of function
      pointers, and required the TSPD to read the entry in the table,
      which is in TSP memory, in order to program the exception return
      address.
      
      Ideally, the TSPD has no access to the TSP memory.
      
      This patch changes the table of function pointers into a vector
      table of single instruction entrypoints. This allows the TSPD to
      calculate the entrypoint address instead of reading it.
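
      An illustrative model of that vector table (struct layout and entry
      names assumed): each field holds a single branch instruction, so the
      TSPD can compute an entrypoint as a fixed offset from the published
      base without reading TSP memory.

          #include <stddef.h>
          #include <stdint.h>

          typedef struct tsp_vectors {
              uint32_t std_smc_entry;   /* each entry: one "b handler" insn */
              uint32_t fast_smc_entry;
              uint32_t cpu_on_entry;
              uint32_t cpu_off_entry;
          } tsp_vectors_t;

          /* TSPD side: derive the exception return address arithmetically. */
          static uintptr_t entry_addr(uintptr_t vector_base, size_t offset)
          {
              return vector_base + offset;  /* e.g. offsetof(tsp_vectors_t,
                                               cpu_on_entry) */
          }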
      
      Fixes ARM-software/tf-issues#160
      
      Change-Id: Iec6e055d537ade78a45799fbc6f43765a4725ad3