1. 29 Nov, 2017 2 commits
  2. 08 Nov, 2017 1 commit
  3. 23 Oct, 2017 1 commit
  4. 13 Oct, 2017 1 commit
    • Init and save / restore of PMCR_EL0 / PMCR · 3e61b2b5
      David Cunado authored
      
      
      Currently TF does not initialise the PMCR_EL0 register in
      the secure context or save/restore the register.
      
      In particular, the DP field may not be set to one to prohibit
      cycle counting in the secure state, even though event counting
      generally is prohibited via the default setting of MDCR_EL3.SPME
      to 0.
      
      This patch initialises PMCR_EL0.DP to one in the secure state
      to prohibit cycle counting and also initialises other fields
      that have an architecturally UNKNOWN reset value.
      
      Additionally, PMCR_EL0 is added to the list of registers that are
      saved and restored during a world switch.
      
      Similar changes are made for PMCR for the AArch32 execution state.
      
      NOTE: secure world code at lower ELs that assumes other values in PMCR_EL0
      will be impacted.
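
      A rough illustration of the idea (not the code from the patch; the helper
      name below is hypothetical, and the DP bit position is bit 5 as defined by
      the Arm ARM):

          #include <stdint.h>

          #define PMCR_EL0_DP_BIT    (UINT64_C(1) << 5)  /* prohibit cycle counting */

          /* Build the PMCR_EL0 value used for the secure context. */
          static uint64_t secure_pmcr_el0(uint64_t known_reset_val)
          {
              /* Fields with UNKNOWN reset values get known values via
               * known_reset_val; DP=1 stops PMCCNTR_EL0 in Secure state. */
              return known_reset_val | PMCR_EL0_DP_BIT;
          }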
      
      Change-Id: Iae40e8c0a196d74053accf97063ebc257b4d2f3a
      Signed-off-by: David Cunado <david.cunado@arm.com>
      3e61b2b5
  5. 05 Sep, 2017 1 commit
    • Set NS version SCTLR during warmboot path · 88ad1461
      David Cunado authored
      
      
      When ARM TF executes in AArch32 state, the NS version of SCTLR
      is not set during the warmboot flow. This results in secondary
      CPUs entering the Non-secure world with the default reset value
      in SCTLR.
      
      This patch explicitly sets the value of the NS version of SCTLR
      during the warmboot flow rather than relying on the h/w.
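
      A hedged sketch of the idea, assuming TF's AArch32 system register
      accessors (read_scr/write_scr/write_sctlr/isb); sctlr_ns stands in for
      the value held in the saved non-secure context:

          static void warmboot_set_ns_sctlr(u_register_t sctlr_ns)
          {
              u_register_t scr = read_scr();

              write_scr(scr | SCR_NS_BIT);   /* select the Non-secure banked copy */
              isb();
              write_sctlr(sctlr_ns);         /* program NS SCTLR explicitly */
              isb();
              write_scr(scr);                /* restore the original SCR */
              isb();
          }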
      
      Change-Id: I86bf52b6294baae0a5bd8af0cd0358cc4f55c416
      Signed-off-by: David Cunado <david.cunado@arm.com>
      88ad1461
  6. 21 Aug, 2017 1 commit
    • Fix x30 reporting for unhandled exceptions · 4d91838b
      Julius Werner authored
      
      
      Some error paths that lead to a crash dump will overwrite the value in
      the x30 register by calling functions with the no_ret macro, which
      resolves to a BL instruction. This is not very useful and not what the
      reader would expect, since a crash dump should usually show all
      registers in the state they were in when the exception happened. This
      patch replaces the offending function calls with a B instruction to
      preserve the value in x30.
      
      Change-Id: I2a3636f2943f79bab0cd911f89d070012e697c2a
      Signed-off-by: Julius Werner <jwerner@chromium.org>
      4d91838b
  7. 15 Aug, 2017 1 commit
    • Add new alignment parameter to func assembler macro · 64726e6d
      Julius Werner authored
      
      
      Assembler programmers are used to being able to define functions with a
      specific alignment with a pattern like this:
      
          .align X
        myfunction:
      
      However, this pattern is subtly broken when instead of a direct label
      like 'myfunction:', you use the 'func myfunction' macro that's standard
      in Trusted Firmware. Since the func macro declares a new section for the
      function, the .align directive written above it actually applies to the
      *previous* section in the assembly file, and the function it was
      supposed to apply to is linked with default alignment.
      
      An extreme case can be seen in Rockchip's plat_helpers.S which contains
      this code:
      
        [...]
        endfunc plat_crash_console_putc
      
        .align 16
        func platform_cpu_warmboot
        [...]
      
      This assembles into the following plat_helpers.o:
      
        Sections:
        Idx Name                             Size  [...]  Algn
         9 .text.plat_crash_console_putc 00010000  [...]  2**16
        10 .text.platform_cpu_warmboot   00000080  [...]  2**3
      
      As can be seen, the *previous* function actually got the alignment
      constraint, and it is also 64KB big even though it contains only two
      instructions, because the .align directive at the end of its section
      forces the assembler to insert a giant sled of NOPs. The function we
      actually wanted to align has the default constraint. This code only
      works at all because the linker just happens to put the two functions
      right behind each other when linking the final image, and since the end
      of plat_crash_console_putc is aligned the start of platform_cpu_warmboot
      will also be. But it still wastes almost 64KB of image space
      unnecessarily, and it will break under certain circumstances (e.g. if
      the plat_crash_console_putc function becomes unused and its section gets
      garbage-collected out).
      
      There's no real way to fix this with the existing func macro. Code like
      
       func myfunc
       .align X
      
      happens to do the right thing, but is still not really correct code
      (because the function label is inserted before the .align directive, so
      the assembler is technically allowed to insert padding at the beginning
      of the function which would then get executed as instructions if the
      function was called). Therefore, this patch adds a new parameter with a
      default value to the func macro that allows overriding its alignment.
      
      Also fix up all existing instances of this dangerous antipattern.
      
      Change-Id: I5696a07e2fde896f21e0e83644c95b7b6ac79a10
      Signed-off-by: Julius Werner <jwerner@chromium.org>
      64726e6d
  8. 09 Aug, 2017 1 commit
    • bl32: add secure interrupt handling in AArch32 sp_min · 71816096
      Etienne Carriere authored
      
      
      Add support for a minimal secure interrupt service in sp_min for
      the AArch32 implementation. The handling is hard-coded to FIQs only.
      
      Introduce the boolean build directive SP_MIN_WITH_SECURE_FIQ to enable
      FIQ handling from SP_MIN.
      
      Configure SCR[FIQ] and SCR[FW] from generic code for both cold and
      warm boots so that FIQs are handled in the secure state by the monitor.

      With this SP_MIN architecture, FIQs are always trapped when the system
      executes in the non-secure state, so the FIQ handler does not need to
      record the security state from which the FIQ was taken.
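
      A sketch of the SCR configuration described above, assuming TF's
      SCR_FIQ_BIT/SCR_FW_BIT definitions (the exact code in the patch may
      differ):

          static uint32_t configure_scr_for_secure_fiq(uint32_t scr)
          {
          #if SP_MIN_WITH_SECURE_FIQ
              scr |= SCR_FIQ_BIT;    /* FIQs are taken to monitor mode */
              scr &= ~SCR_FW_BIT;    /* CPSR.F is not writable from non-secure */
          #endif
              return scr;
          }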
      
      Change-Id: I1f7d1dc7b21f6f90011b7f3fcd921e455592f5e7
      Signed-off-by: Etienne Carriere <etienne.carriere@st.com>
      71816096
  9. 23 Jun, 2017 1 commit
  10. 21 Jun, 2017 1 commit
    • Fully initialise essential control registers · 18f2efd6
      David Cunado authored
      
      
      This patch updates the el3_arch_init_common macro so that it fully
      initialises essential control registers rather than relying on hardware
      to set the reset values.
      
      The context management functions are also updated to fully initialise
      the appropriate control registers when initialising the non-secure and
      secure context structures and when preparing to leave EL3 for a lower
      EL.
      
      This gives better alignment with the ARM ARM, which states that software
      must initialise RES0 fields to 0 and RES1 fields to 1.
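
      A simplified sketch of what "fully initialise" means in practice, assuming
      TF's context management helpers (the real code covers many more registers):

          static void setup_el1_sysregs(cpu_context_t *ctx)
          {
              uint64_t sctlr = SCTLR_EL1_RES1;   /* RES1 fields written as 1 */
              /* ...every other field is then chosen deliberately (endianness,
               * alignment checks, etc.) rather than left UNKNOWN... */
              write_ctx_reg(get_sysregs_ctx(ctx), CTX_SCTLR_EL1, sctlr);
          }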
      
      This patch also corrects the following typos:
      
      "NASCR definitions" -> "NSACR definitions"
      
      Change-Id: Ia8940b8351dc27bc09e2138b011e249655041cfc
      Signed-off-by: David Cunado <david.cunado@arm.com>
      18f2efd6
  11. 20 Jun, 2017 2 commits
  12. 12 May, 2017 1 commit
    • AArch32: Rework SMC context save and restore mechanism · b6285d64
      Soby Mathew authored
      
      
      The current SMC context data structure `smc_ctx_t` and related helpers are
      optimized for the case where an SMC call does not result in a world switch.
      This was the case for the SP_MIN and BL1 cold boot flows. However, the
      firmware update use case requires a world switch as a result of an SMC, and
      the current SMC context helpers were not very helpful in this regard.
      Therefore this patch makes the following changes to improve this:
      
      1. Add the monitor stack pointer, `sp_mon`, to `smc_ctx_t`
      
      The C runtime stack pointer in monitor mode, `sp_mon`, is added to the
      SMC context, and the `smc_ctx_t` pointer is cached in `sp_mon` prior
      to exit from Monitor mode. This makes it easier to retrieve the
      context when the next SMC call happens. As a result of this change,
      the SMC context helpers no longer depend on the stack to save and
      restore the registers.
      
      This aligns it with the context save and restore mechanism in AArch64.
      
      2. Add SCR to `smc_ctx_t`

      Adding the SCR register to `smc_ctx_t` makes it easier to manage this
      register state when switching between the non-secure and secure worlds as a
      result of an SMC call.
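
      A simplified sketch of the resulting structure (field names and layout
      here are illustrative, not the authoritative definition):

          typedef struct smc_ctx {
              u_register_t r0;
              /* ... remaining general purpose and banked registers ... */
              u_register_t lr_mon;
              u_register_t scr;      /* (2) SCR saved/restored across world switches */
              u_register_t sp_mon;   /* (1) C runtime stack pointer in Monitor mode */
          } smc_ctx_t;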
      
      Change-Id: I5e12a7056107c1701b457b8f7363fdbf892230bf
      Signed-off-by: Soby Mathew <soby.mathew@arm.com>
      Signed-off-by: dp-arm <dimitris.papastamos@arm.com>
      b6285d64
  13. 03 May, 2017 1 commit
  14. 26 Apr, 2017 1 commit
  15. 19 Apr, 2017 1 commit
    • PSCI: Build option to enable D-Caches early in warmboot · bcc3c49c
      Soby Mathew authored
      
      
      This patch introduces a build option to enable the D-cache early on the CPU
      after warm boot. This is applicable for platforms which do not require
      interconnect programming to enable cache coherency (e.g. single-cluster
      platforms). If this option is enabled, then the warm boot path enables
      D-caches immediately after enabling the MMU.
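
      A sketch of the warm boot behaviour, assuming the build option is named
      WARMBOOT_ENABLE_DCACHE_EARLY and using TF's SCTLR accessors (the real
      placement of this code may differ):

          static void warmboot_enable_caches(void)
          {
              enable_mmu_el3(0U);                  /* MMU on first, as before */
          #if WARMBOOT_ENABLE_DCACHE_EARLY
              /* Coherency does not depend on interconnect programming here,
               * so the D-cache can be turned on immediately. */
              write_sctlr_el3(read_sctlr_el3() | SCTLR_C_BIT);
              isb();
          #endif
          }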
      
      Fixes ARM-Software/tf-issues#456
      
      Change-Id: I44c8787d116d7217837ced3bcf0b1d3441c8d80e
      Signed-off-by: Soby Mathew <soby.mathew@arm.com>
      bcc3c49c
  16. 31 Mar, 2017 1 commit
    • Add support for GCC stack protection · 51faada7
      Douglas Raillard authored
      
      
      Introduce new build option ENABLE_STACK_PROTECTOR. It enables
      compilation of all BL images with one of the GCC -fstack-protector-*
      options.
      
      A new platform function plat_get_stack_protector_canary() is introduced.
      It returns a value that is used to initialize the canary for stack
      corruption detection. Returning a random value will prevent an attacker
      from predicting the value and greatly increase the effectiveness of the
      protection.
      
      A message is printed at the ERROR level when a stack corruption is
      detected.
      
      To be effective, the global data must be stored at an address
      lower than the base of the stacks. Failure to do so would allow an
      attacker to overwrite the canary as part of an attack which would void
      the protection.
      
      FVP implementation of plat_get_stack_protector_canary is weak as
      there is no real source of entropy on the FVP. It therefore relies on a
      timer's value, which could be predictable.
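
      An example platform hook, in the spirit of the FVP implementation (the
      generic timer count here is only a stand-in for a real entropy source):

          u_register_t plat_get_stack_protector_canary(void)
          {
              /* A predictable source such as a counter weakens the protection;
               * a true RNG should be used where available. */
              return (u_register_t)read_cntpct_el0();
          }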
      
      Change-Id: Icaaee96392733b721fa7c86a81d03660d3c1bc06
      Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
      51faada7
  17. 08 Mar, 2017 1 commit
  18. 02 Mar, 2017 1 commit
  19. 06 Feb, 2017 2 commits
    • Replace some memset call by zeromem · 32f0d3c6
      Douglas Raillard authored
      
      
      Replace all uses of memset by zeromem when zeroing moderately-sized
      structures, by applying the following transformation:
      memset(x, 0, sizeof(x)) => zeromem(x, sizeof(x))
      
      As the Trusted Firmware is compiled with -ffreestanding, it forbids the
      compiler from using __builtin_memset and forces it to generate calls to
      the slow memset implementation. Zeromem is a near drop-in replacement
      for this use case, with a more efficient implementation on both AArch32
      and AArch64.
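
      For instance, with a typical TF structure:

          entry_point_info_t ep_info;

          /* before */  memset(&ep_info, 0, sizeof(ep_info));
          /* after  */  zeromem(&ep_info, sizeof(ep_info));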
      
      Change-Id: Ia7f3a90e888b96d056881be09f0b4d65b41aa79e
      Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
      32f0d3c6
    • Introduce unified API to zero memory · 308d359b
      Douglas Raillard authored
      
      
      Introduce zeromem_dczva function on AArch64 that can handle unaligned
      addresses and make use of DC ZVA instruction to zero a whole block at a
      time. This zeroing takes place directly in the cache to speed it up
      without doing external memory access.
      
      Remove the zeromem16 function on AArch64 and replace it with an alias to
      zeromem. This zeromem16 function is now deprecated.
      
      Remove the 16-byte alignment constraint on __BSS_START__ in
      firmware-design.md as it is no longer mandatory (it used to exist to
      comply with zeromem16's requirements).
      
      Change the 16-byte alignment constraints in SP_MIN's linker script to an
      8-byte alignment constraint, as the AArch32 zeromem implementation is now
      more efficient on 8-byte aligned addresses.
      
      Introduce zero_normalmem and zeromem helpers in platform agnostic header
      that are implemented this way:
      * AArch32:
      	* zero_normalmem: zero using usual data access
      	* zeromem: alias for zero_normalmem
      * AArch64:
      	* zero_normalmem: zero normal memory  using DC ZVA instruction
      	                  (needs MMU enabled)
      	* zeromem: zero using usual data access
      
      Usage guidelines: in most cases, zero_normalmem should be preferred (an
      example follows the note below).
      
      There are 2 scenarios where zeromem (or memset) must be used instead:
      * Code that must run with MMU disabled (which means all memory is
        considered device memory for data accesses).
      * Code that fills device memory with null bytes.
      
      Optionally, the following rule can be applied if performance is
      important:
      * Code zeroing small areas (few bytes) that are not secrets should use
        memset to take advantage of compiler optimizations.
      
        Note: Code zeroing security-related critical information should use
        zero_normalmem/zeromem instead of memset to avoid removal by
        compilers' optimizations in some cases or misbehaving versions of GCC.
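
      An illustrative use of these guidelines (the buffer name and the device
      address macro are made up):

          uint8_t session_key[32];                     /* secret, in normal memory */

          zero_normalmem(session_key, sizeof(session_key)); /* MMU on, not optimised away */
          zeromem((void *)SHARED_MAILBOX_BASE, 8U);          /* device memory: plain stores */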
      
      Fixes ARM-software/tf-issues#408
      
      Change-Id: Iafd9663fc1070413c3e1904e54091cf60effaa82
      Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
      308d359b
  20. 23 Dec, 2016 1 commit
    • Abort preempted TSP STD SMC after PSCI CPU suspend · 3df6012a
      Douglas Raillard authored
      
      
      Standard SMC requests that are handled in the secure-world by the Secure
      Payload can be preempted by interrupts that must be handled in the
      normal world. When the TSP is preempted the secure context is stored and
      control is passed to the normal world to handle the non-secure
      interrupt. Once completed the preempted secure context is restored. When
      restoring the preempted context, the dispatcher assumes that the TSP
      preempted context is still stored as the SECURE context by the context
      management library.
      
      However, PSCI power management operations cause a synchronous entry into
      the TSP. This overwrites the preempted SECURE context in the context
      management library. When restoring the SECURE context, the Secure
      Payload crashes because this context is not the preempted context
      anymore.
      
      This patch avoids corruption of the preempted SECURE context by aborting
      any preempted SMC during PSCI power management calls. The
      abort_std_smc_entry hook of the TSP is called when aborting the SMC
      request.
      
      It also exposes this feature as a FAST SMC, with FID TSP_FID_ABORT,
      callable from the normal world to abort a preempted SMC.
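
      A rough sketch of the dispatcher-side check on the PSCI power management
      path (helper names are illustrative, not necessarily those used in the
      patch):

          if (get_std_smc_active_flag(tsp_ctx->state)) {
              /* A Standard SMC was preempted earlier: run the TSP's
               * abort_std_smc_entry hook and discard that context before
               * the synchronous entry for the power management request. */
              tspd_abort_preempted_smc(tsp_ctx);
          }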
      
      Change-Id: I7a70347e9293f47d87b5de20484b4ffefb56b770
      Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
      3df6012a
  21. 12 Dec, 2016 1 commit
    • AArch32: Fix the stack alignment issue · 9f3ee61c
      Soby Mathew authored
      
      
      The AArch32 Procedure Call Standard mandates that the stack must be aligned
      to an 8-byte boundary at external interfaces. This patch makes the required
      changes.
      
      This problem was detected when a crash was encountered in
      `psci_print_power_domain_map()` while printing 64-bit values. Aligning
      the stack to an 8-byte boundary resolved the problem.
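
      As an illustration of the requirement (the macro name is made up), a stack
      declared in C needs its alignment requested explicitly on AArch32:

          /* uint32_t alone would only guarantee 4-byte alignment, so the
           * 8-byte alignment required by the AAPCS is stated explicitly. */
          static uint32_t cpu_stack[PLATFORM_STACK_SIZE / sizeof(uint32_t)]
              __attribute__((aligned(8)));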
      
      Fixes ARM-Software/tf-issues#437
      
      Change-Id: I517bd8203601bb88e9311bd36d477fb7b3efb292
      Signed-off-by: Soby Mathew <soby.mathew@arm.com>
      9f3ee61c
  22. 05 Dec, 2016 1 commit
    • Define and use no_ret macro where no return is expected · a806dad5
      Jeenu Viswambharan authored
      
      
      There are many instances in ARM Trusted Firmware where control is
      transferred to functions from which return isn't expected. Such jumps
      are made using the 'bl' instruction to provide the callee with the location
      from which it was jumped to. Additionally, debuggers infer the caller by
      examining where the 'lr' register points. If a 'bl' of the nature
      described above falls at the end of an assembly function, 'lr' will be
      left pointing to a location outside of the function range. This misleads
      the debugger's back trace.
      
      This patch defines a 'no_ret' macro to be used when jumping to functions
      from which return isn't expected. The macro ensures that the 'bl'
      instruction is used for the jump and, for debug builds, places a 'nop'
      instruction immediately thereafter (unless instructed otherwise) so as
      to leave 'lr' pointing within the function range.
      
      Change-Id: Ib34c69fc09197cfd57bc06e147cc8252910e01b0
      Co-authored-by: Douglas Raillard <douglas.raillard@arm.com>
      Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
      a806dad5
  23. 22 Sep, 2016 2 commits
    • PSCI: Do psci_setup() as part of std_svc_setup() · 58e946ae
      Soby Mathew authored
      This patch moves the invocation of `psci_setup()` from BL31 and SP_MIN
      into `std_svc_setup()` as part of ARM Standard Service initialization.
      This allows us to consolidate ARM Standard Service initializations which
      will be added to in the future. A new function `get_arm_std_svc_args()`
      is introduced to get arguments corresponding to each standard service.
      This function must be implemented by the EL3 Runtime Firmware and both
      SP_MIN and BL31 implement it.
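
      A simplified sketch of the consolidation (the internals and the exact
      prototype of get_arm_std_svc_args() shown here are illustrative):

          static int32_t std_svc_setup(void)
          {
              uintptr_t svc_arg = get_arm_std_svc_args(PSCI_FID_MASK);

              /* PSCI is now initialised as part of the ARM Standard Service. */
              psci_setup((const psci_lib_args_t *)svc_arg);
              return 0;
          }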
      
      Change-Id: I38e1b644f797fa4089b20574bd4a10f0419de184
      58e946ae
    • PSCI: Introduce PSCI Library argument structure · f426fc05
      Soby Mathew authored
      This patch introduces a `psci_lib_args_t` structure which must be
      passed into `psci_setup()` which is then used to initialize the PSCI
      library. The `psci_lib_args_t` is a versioned structure so as to enable
      compatibility checks during library initialization. Both BL31 and SP_MIN
      are modified to use the new structure.
      
      SP_MIN is also modified to add version string and build message as part
      of its cold boot log just like the other BLs in Trusted Firmware.
      
      NOTE: Please be aware that this patch modifies the prototype of
      `psci_setup()`, which breaks compatibility with EL3 Runtime Firmware
      (excluding BL31 and SP_MIN) integrated with the PSCI Library.
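
      An illustrative shape of such a versioned argument structure (the real
      layout, member types and any setup macros may differ):

          typedef struct psci_lib_args {
              param_header_t h;          /* type/version/size, checked by psci_setup() */
              uintptr_t mailbox_ep;      /* warm boot (mailbox) entry point */
          } psci_lib_args_t;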
      
      Change-Id: Ic3761db0b790760a7ad664d8a437c72ea5edbcd6
      f426fc05
  24. 21 Sep, 2016 2 commits
    • AArch32: Support in SP_MIN to receive arguments from BL2 · d9915518
      Yatharth Kochar authored
      This patch adds support in SP_MIN to receive generic and
      platform specific arguments from BL2.
      
      The new signature is as following:
          void sp_min_early_platform_setup(void *from_bl2,
               void *plat_params_from_bl2);
      
      ARM platforms have been modified to use this support.
      
      Note: Platforms may break if they still use the old signature.
            The default value of RESET_TO_SP_MIN is changed to 0.
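
      A minimal illustrative implementation of the new hook (the static
      variables are made up; platforms would typically parse from_bl2 here):

          static void *sp_min_bl2_params;
          static void *sp_min_plat_params;

          void sp_min_early_platform_setup(void *from_bl2,
                                           void *plat_params_from_bl2)
          {
              /* Stash the generic and platform specific arguments from BL2. */
              sp_min_bl2_params  = from_bl2;
              sp_min_plat_params = plat_params_from_bl2;
          }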
      
      Change-Id: I008d4b09fd3803c7b6231587ebf02a047bdba8d0
      d9915518
    • AArch32: Refactor SP_MIN to support RESET_TO_SP_MIN · 3bdf0e5d
      Yatharth Kochar authored
      This patch uses the `el3_entrypoint_common` macro to initialize
      CPU registers, in the SP_MIN entrypoint.S file, in both the cold and
      warm boot paths. It also adds conditional compilation, in the cold and
      warm boot entry paths, based on RESET_TO_SP_MIN.
      
      Change-Id: Id493ca840dc7b9e26948dc78ee928e9fdb76b9e4
      3bdf0e5d
  25. 10 Aug, 2016 1 commit
    • AArch32: add a minimal secure payload (SP_MIN) · c11ba852
      Soby Mathew authored
      This patch adds a minimal AArch32 secure payload, SP_MIN. It relies on the
      PSCI library to initialize the normal world context. It runs in Monitor mode
      and uses the runtime service framework to handle SMCs. It is added as
      a BL32 component in the Trusted Firmware source tree.
      
      Change-Id: Icc04fa6b242025a769c1f6c7022fde19459c43e9
      c11ba852
  26. 09 Aug, 2016 1 commit
    • Move spinlock library code to AArch64 folder · 12ab697e
      Soby Mathew authored
      This patch moves the assembly exclusive lock library code
      `spinlock.S` into the architecture-specific folder `aarch64`.
      A stub file which includes the file from the new location is
      retained at the original location for compatibility. The BL
      makefiles are also modified to include the file from the new
      location.
      
      Change-Id: Ide0b601b79c439e390c3a017d93220a66be73543
      12ab697e
  27. 08 Jul, 2016 2 commits
    • TSP: Print BL32_BASE rather than __RO_START__ · a604623c
      Sandrine Bailleux authored
      In debug builds, the TSP prints its image base address and size.
      The base address displayed corresponds to the start address of the
      read-only section, as defined in the linker script.
      
      This patch changes this to use the BL32_BASE address instead, which is
      the same address as __RO_START__ at the moment but has the advantage
      of being independent of the linker symbols defined in the linker script
      as well as of the layout and order of the sections.
      
      Change-Id: I032d8d50df712c014cbbcaa84a9615796ec902cc
      a604623c
    • Introduce SEPARATE_CODE_AND_RODATA build flag · 5d1c104f
      Sandrine Bailleux authored
      At the moment, all BL images share a similar memory layout: they start
      with their code section, followed by their read-only data section.
      The two sections are contiguous in memory. Therefore, the end of the
      code section and the beginning of the read-only data one might share
      a memory page. This forces both to be mapped with the same memory
      attributes. As the code needs to be executable, this means that the
      read-only data stored on the same memory page as the code are
      executable as well. This could potentially be exploited as part of
      a security attack.
      
      This patch introduces a new build flag called
      SEPARATE_CODE_AND_RODATA, which isolates the code and read-only data
      on separate memory pages. This in turn allows independent control of
      the access permissions for the code and read-only data.
      
      This has an impact on memory footprint, as padding bytes need to be
      introduced between the code and read-only data to ensure the
      segregation of the two. To limit the memory cost, the memory layout
      of the read-only section has been changed in this case.
      
       - When SEPARATE_CODE_AND_RODATA=0, the layout is unchanged, i.e.
         the read-only section still looks like this (padding omitted):
      
         |        ...        |
         +-------------------+
         | Exception vectors |
         +-------------------+
         |  Read-only data   |
         +-------------------+
         |       Code        |
         +-------------------+ BLx_BASE
      
         In this case, the linker script provides the limits of the whole
         read-only section.
      
       - When SEPARATE_CODE_AND_RODATA=1, the exception vectors and
         read-only data are swapped, such that the code and exception
         vectors are contiguous, followed by the read-only data. This
         gives the following new layout (padding omitted):
      
         |        ...        |
         +-------------------+
         |  Read-only data   |
         +-------------------+
         | Exception vectors |
         +-------------------+
         |       Code        |
         +-------------------+ BLx_BASE
      
         In this case, the linker script now exports 2 sets of addresses
         instead: the limits of the code and the limits of the read-only
         data. Refer to the Firmware Design guide for more details. This
         provides platform code with a finer-grained view of the image
         layout and allows it to map these 2 regions with the appropriate
         access permissions.
      
      Note that SEPARATE_CODE_AND_RODATA applies to all BL images.
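
      A sketch of how a platform might use the two sets of limits (symbol and
      attribute names follow TF's xlat tables conventions but are illustrative
      here):

          #if SEPARATE_CODE_AND_RODATA
              mmap_add_region(BL_CODE_BASE, BL_CODE_BASE,
                              BL_CODE_END - BL_CODE_BASE,
                              MT_CODE | MT_SECURE);        /* executable, read-only */
              mmap_add_region(BL_RO_DATA_BASE, BL_RO_DATA_BASE,
                              BL_RO_DATA_END - BL_RO_DATA_BASE,
                              MT_RO_DATA | MT_SECURE);     /* read-only, never executable */
          #else
              mmap_add_region(BL_CODE_BASE, BL_CODE_BASE,
                              BL_CODE_END - BL_CODE_BASE,
                              MT_MEMORY | MT_RO | MT_SECURE);
          #endif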
      
      Change-Id: I936cf80164f6b66b6ad52b8edacadc532c935a49
      5d1c104f
  28. 26 May, 2016 1 commit
    • Introduce some helper macros for exception vectors · e0ae9fab
      Sandrine Bailleux authored
      This patch introduces some assembler macros to simplify the
      declaration of the exception vectors. It abstracts the section
      the exception code is put into as well as the alignment
      constraints mandated by the ARMv8 architecture. For all TF images,
      the exception code has been updated to make use of these macros.
      
      This patch also updates some invalid comments in the exception
      vector code.
      
      Change-Id: I35737b8f1c8c24b6da89b0a954c8152a4096fa95
      e0ae9fab
  29. 01 Apr, 2016 1 commit
    • Make:Remove calls to shell from makefiles. · 231c1470
      Evan Lloyd authored
      As an initial stage of making the Trusted Firmware build environment more
      portable, we remove most uses of the $(shell ) function and replace them
      with more portable, make-function-based solutions.
      
      Note that the setting of BUILD_STRING still uses $(shell ) since it's
      not possible to reimplement this as a make function. Avoiding invocation
      of this on incompatible host platforms will be implemented separately.
      
      Change-Id: I768e2f9a265c78814a4adf2edee4cc46cda0f5b8
      231c1470
  30. 14 Mar, 2016 1 commit
    • Remove all non-configurable dead loops · 1c3ea103
      Antonio Nino Diaz authored
      Added a new platform porting function plat_panic_handler, to allow
      platforms to handle unexpected error situations. It must be
      implemented in assembly as it may be called before the C environment
      is initialized. A default implementation is provided, which simply
      spins.
      
      Corrected all dead loops in generic code to call this function
      instead. This includes the dead loop that occurs at the end of the
      call to panic().
      
      All unnecessary wfis from bl32/tsp/aarch64/tsp_exceptions.S have
      been removed.
      
      Change-Id: I67cb85f6112fa8e77bd62f5718efcef4173d8134
      1c3ea103
  31. 14 Dec, 2015 1 commit
  32. 09 Dec, 2015 1 commit
    • TSP: Allow preemption of synchronous S-EL1 interrupt handling · 63b8440f
      Soby Mathew authored
      Earlier, the TSP only ever expected to be preempted during Standard SMC
      processing. If an S-EL1 interrupt triggered while in the normal world, it
      would be routed to S-EL1 `synchronously` for handling. The `synchronous` S-EL1
      interrupt handler `tsp_sel1_intr_entry` used to panic if this S-EL1 interrupt
      was preempted by another higher-priority pending interrupt which should be
      handled in EL3, e.g. a Group 0 interrupt in GICv3.
      
      With this patch, `tsp_sel1_intr_entry` now expects `TSP_PREEMPTED` as the
      return code from `tsp_common_int_handler`, in addition to 0 (interrupt
      successfully handled), and in both cases it issues an SMC with ID
      `TSP_HANDLED_S_EL1_INTR`. The TSPD switches the context and returns
      to the normal world. In case a higher-priority EL3 interrupt was pending,
      execution will be routed to EL3 where the interrupt will be handled. On
      return to the normal world, the pending S-EL1 interrupt which was preempted
      will get routed to S-EL1 to be handled `synchronously` via `tsp_sel1_intr_entry`.
      
      Change-Id: I2087c7fedb37746fbd9200cdda9b6dba93e16201
      63b8440f
  33. 04 Dec, 2015 2 commits
    • Enable use of FIQs and IRQs as TSP interrupts · 02446137
      Soby Mathew authored
      On a GICv2 system, interrupts that should be handled in the secure world are
      typically signalled as FIQs. On a GICv3 system, these interrupts are signalled
      as IRQs instead. The mechanism for handling both types of interrupts is the same
      in both cases. This patch enables the TSP to run on a GICv3 system by:
      
      1. adding support for handling IRQs in the exception handling code.
      2. removing use of "fiq" in the names of data structures, macros and functions.
      
      The build option TSPD_ROUTE_IRQ_TO_EL3 is deprecated and is replaced with a
      new build flag TSP_NS_INTR_ASYNC_PREEMPT. For compatibility reasons, if the
      former build flag is defined, it will be used to define the value for the
      new build flag. The documentation is also updated accordingly.
      
      Change-Id: I1807d371f41c3656322dd259340a57649833065e
      02446137
    • Unify interrupt return paths from TSP into the TSPD · 404dba53
      Soby Mathew authored
      The TSP is expected to pass control back to EL3 if it gets preempted due to
      an interrupt while handling a Standard SMC in the following scenarios:
      
      1. An FIQ preempts Standard SMC execution and that FIQ is not a TSP Secure
         timer interrupt or is preempted by a higher priority interrupt by the time
         the TSP acknowledges it. In this case, the TSP issues an SMC with the ID
         as `TSP_EL3_FIQ`. Currently this case is never expected to happen as only
         the TSP Secure Timer is expected to generate FIQ.
      
      2. An IRQ preempts Standard SMC execution and in this case the TSP issues
         an SMC with the ID as `TSP_PREEMPTED`.
      
      In both cases, the TSPD hands control back to the normal world and returns
      an error code to indicate that the Standard SMC it had issued has been
      preempted but not completed.
      
      This patch unifies the handling of these two cases in the TSPD and ensures that
      the TSP only uses TSP_PREEMPTED instead of separate SMC IDs. Also, instead of 2
      separate error codes, SMC_PREEMPTED and TSP_EL3_FIQ, only SMC_PREEMPTED is
      returned as the error code back to the normal world.
      
      Background information: On a GICv3 system, when the secure world has affinity
      routing enabled, in scenario 2 above an FIQ will preempt TSP execution instead
      of an IRQ. The FIQ could be a result of a Group 0 or a Group 1 NS interrupt.
      In both cases, the
      TSPD passes control back to the normal world upon receipt of the TSP_PREEMPTED
      SMC. A Group 0 interrupt will immediately preempt execution to EL3 where it
      will be handled. This allows for unified interrupt handling in TSP for both
      GICv3 and GICv2 systems.
      
      Change-Id: I9895344db74b188021e3f6a694701ad272fb40d4
      404dba53