1. 22 Jan, 2020 1 commit
  2. 13 Sep, 2019 1 commit
    • Refactor ARMv8.3 Pointer Authentication support code · ed108b56
      Alexei Fedorov authored
      
      
      This patch provides the following features and makes modifications
      listed below:
      - Individual APIAKey key generation for each CPU.
      - New key generation on every BL31 warm boot and TSP CPU On event.
      - Per-CPU storage of APIAKey added in percpu_data[]
        of cpu_data structure.
      - `plat_init_apiakey()` function replaced with `plat_init_apkey()`
        which returns a 128-bit value and uses the Generic Timer physical
        counter value to increase the randomness of the generated key.
        The new function can be used for generation of all ARMv8.3-PAuth
        keys (see the sketch after this list).
      - ARMv8.3-PAuth specific code placed in `lib/extensions/pauth`.
      - New `pauth_init_enable_el1()` and `pauth_init_enable_el3()` functions
        generate, program and enable APIAKey_EL1 for EL1 and EL3 respectively;
        `pauth_disable_el1()` and `pauth_disable_el3()` functions disable
        PAuth for EL1 and EL3 respectively;
        `pauth_load_bl31_apiakey()` loads the saved per-CPU APIAKey_EL1 from
        the cpu_data structure.
      - Combined `save_gp_pauth_registers()` function replaces calls to
        `save_gp_registers()` and `pauth_context_save()`;
        `restore_gp_pauth_registers()` replaces `pauth_context_restore()`
        and `restore_gp_registers()` calls.
      - `restore_gp_registers_eret()` function removed with corresponding
        code placed in `el3_exit()`.
      - Fixed the issue where the `pauth_t pauth_ctx` structure allocated
        space for 12 uint64_t PAuth registers instead of 10, by removing the
        macro CTX_PACGAKEY_END from `include/lib/el3_runtime/aarch64/context.h`
        and assigning its value to CTX_PAUTH_REGS_END.
      - Use of the MODE_SP_ELX and MODE_SP_EL0 macro definitions in the
        `msr spsel` instruction instead of hard-coded values.
      - Changes in documentation related to ARMv8.3-PAuth and ARMv8.5-BTI.
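
      A minimal sketch of how a platform might fold the Generic Timer
      physical counter into a 128-bit key is shown below. The helper name
      and the mixing scheme are illustrative assumptions, not the code added
      by this patch:

          /* Illustrative sketch only -- not the actual patch code. */
          #include <stdint.h>

          /* Assumed helper that reads CNTPCT_EL0 (Generic Timer physical
           * counter); the real code uses the firmware's own accessor. */
          extern uint64_t read_cntpct(void);

          static uint64_t key[2];   /* one 128-bit key as two 64-bit halves */

          uint64_t *plat_init_apkey_sketch(void)
          {
                  uint64_t cnt = read_cntpct();

                  /* Fold the counter into both halves so every warm boot
                   * and CPU_ON event yields a different key value. */
                  key[0] ^= cnt;
                  key[1] ^= ~cnt;

                  return key;
          }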
      
      Change-Id: Id18b81cc46f52a783a7e6a09b9f149b6ce803211
      Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
  3. 09 Sep, 2019 1 commit
    • Enable MTE support in both secure and non-secure worlds · 9dd94382
      Justin Chadwell authored
      
      
      This patch adds support for the new Memory Tagging Extension arriving in
      ARMv8.5. MTE support is now enabled by default on systems that support
      it at EL0. To enable it at ELx, for both the non-secure and the secure
      world, the build flag CTX_INCLUDE_MTE_REGS enables saving and restoring
      of the MTE registers where necessary, in order to prevent register
      leakage between the worlds.
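
      As a rough illustration of what build-flag-guarded MTE register
      handling might look like (the register subset and accessor names here
      are assumptions for the sketch, not the patch's exact code):

          #include <stdint.h>

          typedef struct mte_regs {
                  uint64_t tfsr_el1;
                  uint64_t rgsr_el1;
                  uint64_t gcr_el1;
          } mte_regs_t;

          /* Assumed system-register accessor for the sketch. */
          extern uint64_t read_tfsr_el1(void);

          void mte_save_sketch(mte_regs_t *ctx)
          {
          #if CTX_INCLUDE_MTE_REGS
                  /* Save this world's tag-fault/tag-generation state before
                   * switching, so it cannot leak into the other world. */
                  ctx->tfsr_el1 = read_tfsr_el1();
                  /* ... remaining MTE registers saved the same way ... */
          #endif
                  (void)ctx;
          }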
      
      Change-Id: I2d4ea993d6b11654ea0d4757d00ca20d23acf36c
      Signed-off-by: Justin Chadwell <justin.chadwell@arm.com>
  4. 01 Aug, 2019 1 commit
    • Replace __ASSEMBLY__ with compiler-builtin __ASSEMBLER__ · d5dfdeb6
      Julius Werner authored
      
      
      NOTE: __ASSEMBLY__ macro is now deprecated in favor of __ASSEMBLER__.
      
      All common C compilers predefine a macro called __ASSEMBLER__ when
      preprocessing a .S file. There is no reason for TF-A to define its own
      __ASSEMBLY__ macro for this purpose instead. To unify code with the
      export headers (which use __ASSEMBLER__ to avoid one extra dependency),
      let's deprecate __ASSEMBLY__ and switch the code base over to the
      predefined standard.
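
      A typical guard in a shared header then follows this pattern (a
      generic illustration, not a specific TF-A header):

          /* example_defs.h -- shared between C and assembly sources. */
          #ifndef EXAMPLE_DEFS_H
          #define EXAMPLE_DEFS_H

          #define EXAMPLE_STACK_SIZE      0x1000

          #ifndef __ASSEMBLER__
          /* C-only declarations; skipped when this header is pulled into a
           * .S file, where the preprocessor predefines __ASSEMBLER__. */
          extern unsigned long example_stack[];
          #endif

          #endif /* EXAMPLE_DEFS_H */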
      
      Change-Id: Id7d0ec8cf330195da80499c68562b65cb5ab7417
      Signed-off-by: Julius Werner <jwerner@chromium.org>
  5. 24 May, 2019 1 commit
    • Add support for Branch Target Identification · 9fc59639
      Alexei Fedorov authored
      
      
      This patch adds the functionality needed for platforms to provide the
      Branch Target Identification (BTI) extension, introduced to AArch64 in
      Armv8.5-A. BTI adds a new instruction used to mark valid targets for
      indirect branches. The patch sets the new GP bit [50] in stage 1
      Translation Table Block and Page entries to denote guarded EL3 code
      pages; in guarded pages the processor traps any indirect branch that
      does not land on a BTI instruction.
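
      For illustration, marking an executable EL3 descriptor as guarded
      could look roughly like this (the macro and function names are
      assumptions of the sketch; only bit [50] being the GP bit comes from
      the patch description):

          #include <stdint.h>

          /* GP (Guarded Page) bit in stage 1 Block/Page descriptors. */
          #define GP_BIT          (1ULL << 50)

          /* Assumed helper that finalises a descriptor for an executable
           * EL3 code page; names are illustrative only. */
          uint64_t make_guarded_code_desc(uint64_t desc)
          {
                  /* Guarded pages trap indirect branches that do not land
                   * on a BTI instruction. */
                  return desc | GP_BIT;
          }
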
      The BTI feature is selected by the BRANCH_PROTECTION option, which
      supersedes the previous ENABLE_PAUTH option used for Armv8.3-A Pointer
      Authentication and is disabled by default. Enabling BTI requires
      compiler support and was tested with GCC versions 9.0.0, 9.0.1 and
      10.0.0.
      The assembly macros and helpers are modified to accommodate the BTI
      instruction.
      This is an experimental feature.
      Note: the previous ENABLE_PAUTH build option to enable PAuth in EL3 is
      now an internal flag; the BRANCH_PROTECTION flag should be used instead
      to enable Pointer Authentication.
      Note: the USE_LIBROM=1 option is currently not supported.
      
      Change-Id: Ifaf4438609b16647dc79468b70cd1f47a623362e
      Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
  6. 12 Mar, 2019 1 commit
    • Apply stricter speculative load restriction · 02b57943
      John Tsichritzis authored
      
      
      The SCTLR.DSSBS bit is zero by default thus disabling speculative loads.
      However, we also explicitly set it to zero for BL2 and TSP images when
      each image initialises its context. This is done to ensure that the
      image environment is initialised in a safe state, regardless of the
      reset value of the bit.
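
      A sketch of what explicitly clearing the bit during context
      initialisation might look like (the accessor names and the exact bit
      position are assumptions of this sketch):

          #include <stdint.h>

          /* DSSBS bit in SCTLR_ELx (assumed to be bit 44 for this sketch). */
          #define SCTLR_DSSBS_BIT         (1ULL << 44)

          /* Assumed system-register accessors for the sketch. */
          extern uint64_t read_sctlr_el1(void);
          extern void write_sctlr_el1(uint64_t v);

          /* Clear DSSBS while initialising the image context, so the
           * restriction holds regardless of the reset value of the bit. */
          void init_sctlr_dssbs_sketch(void)
          {
                  write_sctlr_el1(read_sctlr_el1() & ~SCTLR_DSSBS_BIT);
          }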
      
      Change-Id: If25a8396641edb640f7f298b8d3309d5cba3cd79
      Signed-off-by: John Tsichritzis <john.tsichritzis@arm.com>
  7. 27 Feb, 2019 1 commit
    • TSP: Enable pointer authentication support · 67b6ff9f
      Antonio Nino Diaz authored
      
      
      The size increase after enabling options related to ARMv8.3-PAuth is:
      
      +----------------------------+-------+-------+-------+--------+
      |                            |  text |  bss  |  data | rodata |
      +----------------------------+-------+-------+-------+--------+
      | CTX_INCLUDE_PAUTH_REGS = 1 |   +40 |   +0  |   +0  |   +0   |
      |                            |  0.4% |       |       |        |
      +----------------------------+-------+-------+-------+--------+
      | ENABLE_PAUTH = 1           |  +352 |    +0 |  +16  |   +0   |
      |                            |  3.1% |       | 15.8% |        |
      +----------------------------+-------+-------+-------+--------+
      
      Results calculated with the following build configuration:
      
          make PLAT=fvp SPD=tspd DEBUG=1 \
          SDEI_SUPPORT=1                 \
          EL3_EXCEPTION_HANDLING=1       \
          TSP_NS_INTR_ASYNC_PREEMPT=1    \
          CTX_INCLUDE_PAUTH_REGS=1       \
          ENABLE_PAUTH=1
      
      Change-Id: I6cc1fe0b2345c547dcef66f98758c4eb55fe5ee4
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
  8. 01 Feb, 2019 1 commit
  9. 15 Jan, 2019 1 commit
    • Correct typographical errors · 8aabea33
      Paul Beesley authored
      
      
      Corrects typos in core code, documentation files, drivers, Arm
      platforms and services.
      
      None of the corrections affect code; changes are limited to comments
      and other documentation.
      
      Change-Id: I5c1027b06ef149864f315ccc0ea473e2a16bfd1d
      Signed-off-by: Paul Beesley <paul.beesley@arm.com>
  10. 04 Jan, 2019 1 commit
    • Sanitise includes across codebase · 09d40e0e
      Antonio Nino Diaz authored
      Enforce full include path for includes. Deprecate old paths.
      
      The following folders inside include/lib have been left unchanged:
      
      - include/lib/cpus/${ARCH}
      - include/lib/el3_runtime/${ARCH}
      
      The reason for this change is that having a global namespace for
      includes isn't a good idea. It defeats one of the advantages of having
      folders and it introduces problems that are sometimes subtle (because
      you may not know the header you are actually including if there are two
      of them).
      
      For example, this patch had to be created because two headers had the
      same name: e0ea0928 ("Fix gpio includes of mt8173 platform
      to avoid collision."). More recently, this patch has had similar
      problems: 46f9b2c3 ("drivers: add tzc380 support").

      This problem was introduced in commit 4ecca339 ("Move include and
      source files to logical locations"). At that time, there weren't too
      many headers so it wasn't a real issue. However, time has shown that
      this creates problems.
      
      Platforms that want to preserve the way they include headers may add the
      removed paths to PLAT_INCLUDES, but this is discouraged.
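
      As an illustration of the convention (the header path below is just an
      example):

          /* Old style: relies on a flat, global include namespace. */
          #include <console.h>

          /* New style: the full path below include/ makes it unambiguous
           * which header is meant. */
          #include <drivers/console.h>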
      
      Change-Id: I39dc53ed98f9e297a5966e723d1936d6ccf2fc8f
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
  11. 08 Nov, 2018 1 commit
    • Standardise header guards across codebase · c3cf06f1
      Antonio Nino Diaz authored
      
      
      All identifiers, regardless of use, that start with two underscores are
      reserved. This means they can't be used in header guards.
      
      The style that this project now uses is the full name of the file in
      capital letters, followed by 'H'. For example, for a file called
      "uart_example.h", the header guard is UART_EXAMPLE_H.
      
      The exceptions are files that are imported from other projects:
      
      - CryptoCell driver
      - dt-bindings folders
      - zlib headers
      
      Change-Id: I50561bf6c88b491ec440d0c8385c74650f3c106e
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
  12. 11 Jul, 2018 2 commits
    • Add end_vector_entry assembler macro · a9203eda
      Roberto Vargas authored
      
      
      The check_vector_size macro checks whether the size of a vector fits
      in the space reserved for it. This check creates problems with the
      Clang assembler. A new macro, end_vector_entry, is added and
      check_vector_size is deprecated.

      The new macro fills the current exception vector up to the start of
      the next exception vector. If the current vector is bigger than
      32 instructions, it raises an error.
      
      Change-Id: Ie8545cf1003a1e31656a1018dd6b4c28a4eaf671
      Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
    • Use ALIGN instead of NEXT in linker scripts · 5629b2b1
      Roberto Vargas authored
      
      
      The Clang linker doesn't support NEXT. As we are not using the MEMORY command
      to define discontinuous memory for the output file in any of the linker
      scripts, ALIGN and NEXT are equivalent.
      
      Change-Id: I867ffb9c9a76d4e81c9ca7998280b2edf10efea0
      Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
  13. 27 Jun, 2018 1 commit
  14. 27 Apr, 2018 1 commit
    • types: use int-ll64 for both aarch32 and aarch64 · 0a2d5b43
      Masahiro Yamada authored
      Since commit 031dbb12 ("AArch32: Add essential Arch helpers"), it is
      difficult to use consistent format strings for the printf() family
      between aarch32 and aarch64.
      
      For example, uint64_t is defined as 'unsigned long long' for aarch32
      and as 'unsigned long' for aarch64.  Likewise, uintptr_t is defined
      as 'unsigned int' for aarch32, and as 'unsigned long' for aarch64.
      
      A problem typically arises when you use printf() in common code.
      
      One solution could be, to cast the arguments to a type long enough
      for both architectures.  For example, if 'val' is uint64_t type,
      like this:
      
        printf("val = %llx\n", (unsigned long long)val);
      
      Or, somebody may suggest using a macro provided by <inttypes.h>,
      like this:
      
        printf("val = %" PRIx64 "\n", val);
      
      But, both would make the code ugly.
      
      The solution adopted in Linux kernel is to use the same typedefs for
      all architectures.  The fixed integer types in the kernel-space have
      been unified into int-ll64, like follows:
      
          typedef signed char           int8_t;
          typedef unsigned char         uint8_t;
      
          typedef signed short          int16_t;
          typedef unsigned short        uint16_t;
      
          typedef signed int            int32_t;
          typedef unsigned int          uint32_t;
      
          typedef signed long long      int64_t;
          typedef unsigned long long    uint64_t;
      
      [ Linux commit: 0c79a8e29b5fcbcbfd611daf9d500cfad8370fcf ]
      
      This gets along with the codebase shared between 32 bit and 64 bit,
      with the data model called ILP32, LP64, respectively.
      
      The width for primitive types is defined as follows:
      
                         ILP32           LP64
          int            32              32
          long           32              64
          long long      64              64
          pointer        32              64
      
      'long long' is 64 bit for both, so it is used to define uint64_t.
      'long' has the same width as a pointer, so it is used for uintptr_t.
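
      With these definitions, common code can use a single format string on
      both architectures, for example:

          #include <stdint.h>
          #include <stdio.h>

          void print_val(uint64_t val)
          {
                  /* uint64_t is 'unsigned long long' on both aarch32 and
                   * aarch64, so no cast and no PRIx64 macro is needed. */
                  printf("val = %llx\n", val);
          }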
      
      We still need an ifdef conditional for (s)size_t.
      
      All 64 bit architectures use "unsigned long" size_t, and most 32 bit
      architectures use "unsigned int" size_t.  H8/300 and S/390 are known
      exceptions; they use "unsigned long" size_t even though they are
      32 bit architectures.
      
      One idea for simplification might be to define size_t as 'unsigned long'
      across architectures, then forbid the use of the "%z" format string.
      However, this would cause a mismatch between size_t and the sizeof()
      operator.  We do not know the native type of sizeof(), so we have to
      guess it anyway.  I want the following formula to always
      return 1:
      
        __builtin_types_compatible_p(size_t, typeof(sizeof(int)))
      
      Fortunately, ARM is probably a majority case.  As far as I know, all
      32 bit ARM compilers use "unsigned int" size_t.
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
  15. 13 Apr, 2018 2 commits
  16. 27 Feb, 2018 1 commit
    • Add comments about mismatched TCR_ELx and xlat tables · 883d1b5d
      Antonio Nino Diaz authored
      
      
      When the MMU is enabled and the translation tables are mapped, data
      read/writes to the translation tables are made using the attributes
      specified in the translation tables themselves. However, the MMU
      performs table walks with the attributes specified in TCR_ELx. They are
      completely independent, so special care has to be taken to make sure
      that they are the same.
      
      This has to be done manually because it is not practical to have a test
      in the code. Such a test would need to know the virtual memory region
      that contains the translation tables and check that for all of the
      tables the attributes match the ones in TCR_ELx. As the tables may not
      even be mapped at all, this isn't a test that can be made generic.
      
      The flags used by enable_mmu_xxx() have been moved to the same header
      where the functions are.
      
      Also, some comments in the linker scripts related to the translation
      tables have been fixed.
      
      Change-Id: I1754768bffdae75f53561b1c4a5baf043b45a304
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
  17. 29 Nov, 2017 1 commit
    • Replace magic numbers in linkerscripts by PAGE_SIZE · a2aedac2
      Antonio Nino Diaz authored
      
      
      When defining different sections in linker scripts, they need to be
      aligned to multiples of the page size. In most linker scripts this is
      done by aligning to the hardcoded value 4096 instead of PAGE_SIZE.

      This can be confusing when looking at the codebase as a whole, as 4096
      is also used in places that aren't meant to be a multiple of the page
      size.
      
      Change-Id: I36c6f461c7782437a58d13d37ec8b822a1663ec1
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
  18. 21 Aug, 2017 1 commit
    • Fix x30 reporting for unhandled exceptions · 4d91838b
      Julius Werner authored
      
      
      Some error paths that lead to a crash dump will overwrite the value in
      the x30 register by calling functions with the no_ret macro, which
      resolves to a BL instruction. This is not very useful and not what the
      reader would expect, since a crash dump should usually show all
      registers in the state they were in when the exception happened. This
      patch replaces the offending function calls with a B instruction to
      preserve the value in x30.
      
      Change-Id: I2a3636f2943f79bab0cd911f89d070012e697c2a
      Signed-off-by: Julius Werner <jwerner@chromium.org>
  19. 15 Aug, 2017 1 commit
    • Add new alignment parameter to func assembler macro · 64726e6d
      Julius Werner authored
      
      
      Assembler programmers are used to being able to define functions with a
      specific alignment with a pattern like this:
      
          .align X
        myfunction:
      
      However, this pattern is subtly broken when instead of a direct label
      like 'myfunction:', you use the 'func myfunction' macro that's standard
      in Trusted Firmware. Since the func macro declares a new section for the
      function, the .align directive written above it actually applies to the
      *previous* section in the assembly file, and the function it was
      supposed to apply to is linked with default alignment.
      
      An extreme case can be seen in Rockchip's plat_helpers.S which contains
      this code:
      
        [...]
        endfunc plat_crash_console_putc
      
        .align 16
        func platform_cpu_warmboot
        [...]
      
      This assembles into the following plat_helpers.o:
      
        Sections:
        Idx Name                             Size  [...]  Algn
         9 .text.plat_crash_console_putc 00010000  [...]  2**16
        10 .text.platform_cpu_warmboot   00000080  [...]  2**3
      
      As can be seen, the *previous* function actually got the alignment
      constraint, and it is also 64KB big even though it contains only two
      instructions, because the .align directive at the end of its section
      forces the assembler to insert a giant sled of NOPs. The function we
      actually wanted to align has the default constraint. This code only
      works at all because the linker just happens to put the two functions
      right behind each other when linking the final image, and since the end
      of plat_crash_console_putc is aligned the start of platform_cpu_warmboot
      will also be. But it still wastes almost 64KB of image space
      unnecessarily, and it will break under certain circumstances (e.g. if
      the plat_crash_console_putc function becomes unused and its section gets
      garbage-collected out).
      
      There's no real way to fix this with the existing func macro. Code like
      
       func myfunc
       .align X
      
      happens to do the right thing, but is still not really correct code
      (because the function label is inserted before the .align directive, so
      the assembler is technically allowed to insert padding at the beginning
      of the function which would then get executed as instructions if the
      function was called). Therefore, this patch adds a new parameter with a
      default value to the func macro that allows overriding its alignment.
      
      Also fix up all existing instances of this dangerous antipattern.
      
      Change-Id: I5696a07e2fde896f21e0e83644c95b7b6ac79a10
      Signed-off-by: Julius Werner <jwerner@chromium.org>
  20. 03 May, 2017 1 commit
  21. 26 Apr, 2017 1 commit
  22. 31 Mar, 2017 1 commit
    • Add support for GCC stack protection · 51faada7
      Douglas Raillard authored
      
      
      Introduce new build option ENABLE_STACK_PROTECTOR. It enables
      compilation of all BL images with one of the GCC -fstack-protector-*
      options.
      
      A new platform function plat_get_stack_protector_canary() is introduced.
      It returns a value that is used to initialize the canary for stack
      corruption detection. Returning a random value will prevent an attacker
      from predicting the value and greatly increase the effectiveness of the
      protection.
      
      A message is printed at the ERROR level when a stack corruption is
      detected.
      
      To be effective, the global data must be stored at an address
      lower than the base of the stacks. Failure to do so would allow an
      attacker to overwrite the canary as part of an attack which would void
      the protection.
      
      The FVP implementation of plat_get_stack_protector_canary() is weak, as
      there is no real source of entropy on the FVP. It therefore relies on a
      timer's value, which could be predictable.
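
      A minimal sketch of a platform implementation along those lines (the
      counter-reading helper and the return type are assumptions of this
      sketch):

          #include <stdint.h>

          /* Assumed helper: reads a free-running timer/counter value. */
          extern uint64_t read_counter(void);

          /* Weak default: platforms with a real entropy source should
           * return a random value instead of a predictable counter. */
          uint64_t plat_get_stack_protector_canary_sketch(void)
          {
                  return read_counter() ^ 0x5ca1ab1e5ca1ab1eULL;
          }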
      
      Change-Id: Icaaee96392733b721fa7c86a81d03660d3c1bc06
      Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
  23. 08 Mar, 2017 1 commit
  24. 06 Feb, 2017 1 commit
    • Introduce unified API to zero memory · 308d359b
      Douglas Raillard authored
      
      
      Introduce zeromem_dczva function on AArch64 that can handle unaligned
      addresses and make use of DC ZVA instruction to zero a whole block at a
      time. This zeroing takes place directly in the cache to speed it up
      without doing external memory access.
      
      Remove the zeromem16 function on AArch64 and replace it with an alias to
      zeromem. This zeromem16 function is now deprecated.
      
      Remove the 16-byte alignment constraint on __BSS_START__ in
      firmware-design.md as it is no longer mandatory (it used to comply
      with zeromem16's requirements).

      Change the 16-byte alignment constraints in SP min's linker script to
      an 8-byte alignment constraint, as the AArch32 zeromem implementation
      is now more efficient on 8-byte aligned addresses.
      
      Introduce zero_normalmem and zeromem helpers in platform agnostic header
      that are implemented this way:
      * AArch32:
      	* zero_normalmem: zero using usual data access
      	* zeromem: alias for zero_normalmem
      * AArch64:
      	* zero_normalmem: zero normal memory  using DC ZVA instruction
      	                  (needs MMU enabled)
      	* zeromem: zero using usual data access
      
      Usage guidelines: in most cases, zero_normalmem should be preferred
      (see the example after these guidelines).
      
      There are 2 scenarios where zeromem (or memset) must be used instead:
      * Code that must run with MMU disabled (which means all memory is
        considered device memory for data accesses).
      * Code that fills device memory with null bytes.
      
      Optionally, the following rule can be applied if performance is
      important:
      * Code zeroing small areas (few bytes) that are not secrets should use
        memset to take advantage of compiler optimizations.
      
        Note: Code zeroing security-related critical information should use
        zero_normalmem/zeromem instead of memset to avoid removal by
        compilers' optimizations in some cases or misbehaving versions of GCC.
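
      A short usage sketch of the two helpers (the prototypes shown are
      assumptions, following the semantics described above):

          #include <stddef.h>

          /* Assumed prototypes for the sketch. */
          void zero_normalmem(void *mem, size_t length);
          void zeromem(void *mem, size_t length);

          char normal_buf[256];   /* normal memory, used with the MMU on  */
          char early_buf[256];    /* touched before the MMU is enabled    */

          void clear_buffers_sketch(void)
          {
                  /* Preferred fast path (DC ZVA on AArch64). */
                  zero_normalmem(normal_buf, sizeof(normal_buf));

                  /* MMU disabled (all accesses treated as device memory):
                   * use plain data accesses. */
                  zeromem(early_buf, sizeof(early_buf));
          }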
      
      Fixes ARM-software/tf-issues#408
      
      Change-Id: Iafd9663fc1070413c3e1904e54091cf60effaa82
      Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
  25. 23 Dec, 2016 1 commit
    • Abort preempted TSP STD SMC after PSCI CPU suspend · 3df6012a
      Douglas Raillard authored
      
      
      Standard SMC requests that are handled in the secure-world by the Secure
      Payload can be preempted by interrupts that must be handled in the
      normal world. When the TSP is preempted the secure context is stored and
      control is passed to the normal world to handle the non-secure
      interrupt. Once completed the preempted secure context is restored. When
      restoring the preempted context, the dispatcher assumes that the TSP
      preempted context is still stored as the SECURE context by the context
      management library.
      
      However, PSCI power management operations cause synchronous entry into
      the TSP. This overwrites the preempted SECURE context in the context
      management library. When the SECURE context is restored, the Secure
      Payload crashes because this context is no longer the preempted
      context.
      
      This patch avoids corruption of the preempted SECURE context by aborting
      any preempted SMC during PSCI power management calls. The
      abort_std_smc_entry hook of the TSP is called when aborting the SMC
      request.
      
      It also exposes this feature as a FAST SMC callable from normal world to
      abort preempted SMC with FID TSP_FID_ABORT.
      
      Change-Id: I7a70347e9293f47d87b5de20484b4ffefb56b770
      Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
  26. 05 Dec, 2016 1 commit
    • Define and use no_ret macro where no return is expected · a806dad5
      Jeenu Viswambharan authored
      
      
      There are many instances in ARM Trusted Firmware where control is
      transferred to functions from which return isn't expected. Such jumps
      are made using 'bl' instruction to provide the callee with the location
      from which it was jumped to. Additionally, debuggers infer the caller by
      examining where 'lr' register points to. If a 'bl' of the nature
      described above falls at the end of an assembly function, 'lr' will be
      left pointing to a location outside of the function range. This misleads
      the debugger back trace.
      
      This patch defines a 'no_ret' macro to be used when jumping to functions
      from which return isn't expected. The macro ensures that the 'bl'
      instruction is used for the jump and also, for debug builds, places a
      'nop' instruction immediately thereafter (unless instructed otherwise)
      so as to leave 'lr' pointing within the function range.
      
      Change-Id: Ib34c69fc09197cfd57bc06e147cc8252910e01b0
      Co-authored-by: Douglas Raillard <douglas.raillard@arm.com>
      Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
  27. 09 Aug, 2016 1 commit
    • Move spinlock library code to AArch64 folder · 12ab697e
      Soby Mathew authored
      This patch moves the assembly exclusive lock library code
      `spinlock.S` into architecture specific folder `aarch64`.
      A stub file which includes the file from new location is
      retained at the original location for compatibility. The BL
      makefiles are also modified to include the file from the new
      location.
      
      Change-Id: Ide0b601b79c439e390c3a017d93220a66be73543
  28. 08 Jul, 2016 2 commits
    • TSP: Print BL32_BASE rather than __RO_START__ · a604623c
      Sandrine Bailleux authored
      In debug builds, the TSP prints its image base address and size.
      The base address displayed corresponds to the start address of the
      read-only section, as defined in the linker script.
      
      This patch changes this to use the BL32_BASE address instead, which is
      the same address as __RO_START__ at the moment but has the advantage
      to be independent of the linker symbols defined in the linker script
      as well as the layout and order of the sections.
      
      Change-Id: I032d8d50df712c014cbbcaa84a9615796ec902cc
    • Introduce SEPARATE_CODE_AND_RODATA build flag · 5d1c104f
      Sandrine Bailleux authored
      At the moment, all BL images share a similar memory layout: they start
      with their code section, followed by their read-only data section.
      The two sections are contiguous in memory. Therefore, the end of the
      code section and the beginning of the read-only data one might share
      a memory page. This forces both to be mapped with the same memory
      attributes. As the code needs to be executable, this means that the
      read-only data stored on the same memory page as the code are
      executable as well. This could potentially be exploited as part of
      a security attack.
      
      This patch introduces a new build flag called
      SEPARATE_CODE_AND_RODATA, which isolates the code and read-only data
      on separate memory pages. This in turn allows independent control of
      the access permissions for the code and read-only data.
      
      This has an impact on memory footprint, as padding bytes need to be
      introduced between the code and read-only data to ensure the
      segregation of the two. To limit the memory cost, the memory layout
      of the read-only section has been changed in this case.
      
       - When SEPARATE_CODE_AND_RODATA=0, the layout is unchanged, i.e.
         the read-only section still looks like this (padding omitted):
      
         |        ...        |
         +-------------------+
         | Exception vectors |
         +-------------------+
         |  Read-only data   |
         +-------------------+
         |       Code        |
         +-------------------+ BLx_BASE
      
         In this case, the linker script provides the limits of the whole
         read-only section.
      
       - When SEPARATE_CODE_AND_RODATA=1, the exception vectors and
         read-only data are swapped, such that the code and exception
         vectors are contiguous, followed by the read-only data. This
         gives the following new layout (padding omitted):
      
         |        ...        |
         +-------------------+
         |  Read-only data   |
         +-------------------+
         | Exception vectors |
         +-------------------+
         |       Code        |
         +-------------------+ BLx_BASE
      
         In this case, the linker script now exports 2 sets of addresses
         instead: the limits of the code and the limits of the read-only
         data. Refer to the Firmware Design guide for more details. This
         provides platform code with a finer-grained view of the image
         layout and allows it to map these 2 regions with the appropriate
         access permissions.
      
      Note that SEPARATE_CODE_AND_RODATA applies to all BL images.
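
      For illustration, a platform could then map the two regions with
      different attributes along these lines (the mapping helper and the
      attribute names are assumptions of this sketch):

          #include <stddef.h>
          #include <stdint.h>

          /* Assumed attribute encodings and mapping helper for the sketch. */
          #define MT_CODE_RO_EXEC         0x1U  /* read-only, executable    */
          #define MT_RODATA_NOEXEC        0x2U  /* read-only, never execute */

          extern void map_region(uintptr_t base, size_t size, unsigned int attr);

          /* Limits assumed to be exported by the linker script when
           * SEPARATE_CODE_AND_RODATA=1. */
          extern char __TEXT_START__[], __TEXT_END__[];
          extern char __RODATA_START__[], __RODATA_END__[];

          void map_bl_readonly_sketch(void)
          {
                  map_region((uintptr_t)__TEXT_START__,
                             (size_t)(__TEXT_END__ - __TEXT_START__),
                             MT_CODE_RO_EXEC);

                  map_region((uintptr_t)__RODATA_START__,
                             (size_t)(__RODATA_END__ - __RODATA_START__),
                             MT_RODATA_NOEXEC);
          }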
      
      Change-Id: I936cf80164f6b66b6ad52b8edacadc532c935a49
  29. 26 May, 2016 1 commit
    • Introduce some helper macros for exception vectors · e0ae9fab
      Sandrine Bailleux authored
      This patch introduces some assembler macros to simplify the
      declaration of the exception vectors. It abstracts the section
      the exception code is put into as well as the alignment
      constraints mandated by the ARMv8 architecture. For all TF images,
      the exception code has been updated to make use of these macros.
      
      This patch also updates some invalid comments in the exception
      vector code.
      
      Change-Id: I35737b8f1c8c24b6da89b0a954c8152a4096fa95
  30. 01 Apr, 2016 1 commit
    • Make:Remove calls to shell from makefiles. · 231c1470
      Evan Lloyd authored
      As an initial stage of making Trusted Firmware build environment more
      portable, we remove most uses of the $(shell ) function and replace them
      with more portable make function based solutions.
      
      Note that the setting of BUILD_STRING still uses $(shell ) since it's
      not possible to reimplement this as a make function. Avoiding invocation
      of this on incompatible host platforms will be implemented separately.
      
      Change-Id: I768e2f9a265c78814a4adf2edee4cc46cda0f5b8
  31. 14 Mar, 2016 1 commit
    • Remove all non-configurable dead loops · 1c3ea103
      Antonio Nino Diaz authored
      Added a new platform porting function plat_panic_handler, to allow
      platforms to handle unexpected error situations. It must be
      implemented in assembly as it may be called before the C environment
      is initialized. A default implementation is provided, which simply
      spins.
      
      Corrected all dead loops in generic code to call this function
      instead. This includes the dead loop that occurs at the end of the
      call to panic().
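
      For illustration, generic code can then end an unrecoverable path like
      this (the 'noreturn' annotation and exact declaration shown are
      assumptions of the sketch):

          /* Platform hook, implemented in assembly by the platform; the
           * default implementation simply spins. */
          void plat_panic_handler(void) __attribute__((noreturn));

          void generic_fatal_error_sketch(void)
          {
                  /* Previously this would have been a local dead loop; now
                   * the platform decides how to react. */
                  plat_panic_handler();
          }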
      
      All unnecessary wfis from bl32/tsp/aarch64/tsp_exceptions.S have
      been removed.
      
      Change-Id: I67cb85f6112fa8e77bd62f5718efcef4173d8134
  32. 14 Dec, 2015 1 commit
  33. 09 Dec, 2015 1 commit
    • TSP: Allow preemption of synchronous S-EL1 interrupt handling · 63b8440f
      Soby Mathew authored
      Earlier, the TSP only ever expected to be preempted during Standard SMC
      processing. If an S-EL1 interrupt triggered while in the normal world, it
      would be routed to S-EL1 `synchronously` for handling. The `synchronous`
      S-EL1 interrupt handler `tsp_sel1_intr_entry` used to panic if this S-EL1
      interrupt was preempted by another higher-priority pending interrupt which
      should be handled in EL3, e.g. a Group 0 interrupt in GICv3.
      
      With this patch, the `tsp_sel1_intr_entry` now expects `TSP_PREEMPTED` as the
      return code from the `tsp_common_int_handler` in addition to 0 (interrupt
      successfully handled) and in both cases it issues an SMC with id
      `TSP_HANDLED_S_EL1_INTR`. The TSPD switches the context and returns back
      to normal world. In case a higher priority EL3 interrupt was pending, the
      execution will be routed to EL3 where interrupt will be handled. On return
      back to normal world, the pending S-EL1 interrupt which was preempted will
      get routed to S-EL1 to be handled `synchronously` via `tsp_sel1_intr_entry`.
      
      Change-Id: I2087c7fedb37746fbd9200cdda9b6dba93e16201
  34. 04 Dec, 2015 2 commits
    • Enable use of FIQs and IRQs as TSP interrupts · 02446137
      Soby Mathew authored
      On a GICv2 system, interrupts that should be handled in the secure world are
      typically signalled as FIQs. On a GICv3 system, these interrupts are signalled
      as IRQs instead. The mechanism for handling both types of interrupts is the same
      in both cases. This patch enables the TSP to run on a GICv3 system by:
      
      1. adding support for handling IRQs in the exception handling code.
      2. removing use of "fiq" in the names of data structures, macros and functions.
      
      The build option TSPD_ROUTE_IRQ_TO_EL3 is deprecated and is replaced with a
      new build flag TSP_NS_INTR_ASYNC_PREEMPT. For compatibility reasons, if the
      former build flag is defined, it will be used to define the value for the
      new build flag. The documentation is also updated accordingly.
      
      Change-Id: I1807d371f41c3656322dd259340a57649833065e
    • Unify interrupt return paths from TSP into the TSPD · 404dba53
      Soby Mathew authored
      The TSP is expected to pass control back to EL3 if it gets preempted due to
      an interrupt while handling a Standard SMC in the following scenarios:
      
      1. An FIQ preempts Standard SMC execution and that FIQ is not a TSP Secure
         timer interrupt or is preempted by a higher priority interrupt by the time
         the TSP acknowledges it. In this case, the TSP issues an SMC with the ID
         as `TSP_EL3_FIQ`. Currently this case is never expected to happen as only
         the TSP Secure Timer is expected to generate FIQ.
      
      2. An IRQ preempts Standard SMC execution and in this case the TSP issues
         an SMC with the ID as `TSP_PREEMPTED`.
      
      In both cases, the TSPD hands control back to the normal world and
      returns an error code to indicate that the Standard SMC it had issued
      has been preempted but not completed.
      
      This patch unifies the handling of these two cases in the TSPD and ensures that
      the TSP only uses TSP_PREEMPTED instead of separate SMC IDs. Also instead of 2
      separate error codes, SMC_PREEMPTED and TSP_EL3_FIQ, only SMC_PREEMPTED is
      returned as error code back to the normal world.
      
      Background information: On a GICv3 system, when the secure world has affinity
      routing enabled, in 2. an FIQ will preempt TSP execution instead of an IRQ. The
      FIQ could be a result of a Group 0 or a Group 1 NS interrupt. In both cases, the
      TSPD passes control back to the normal world upon receipt of the TSP_PREEMPTED
      SMC. A Group 0 interrupt will immediately preempt execution to EL3 where it
      will be handled. This allows for unified interrupt handling in TSP for both
      GICv3 and GICv2 systems.
      
      Change-Id: I9895344db74b188021e3f6a694701ad272fb40d4
  35. 14 Sep, 2015 1 commit
    • Make generic code work in presence of system caches · 54dc71e7
      Achin Gupta authored
      On the ARMv8 architecture, cache maintenance operations by set/way on the last
      level of integrated cache do not affect the system cache. This means that such a
      flush or clean operation could result in the data being pushed out to the system
      cache rather than main memory. Another CPU could access this data before it
      enables its data cache or MMU. Such accesses could be serviced from the main
      memory instead of the system cache. If the data in the system cache has not yet
      been flushed or evicted to main memory then there could be a loss of
      coherency. The only mechanism to guarantee that the main memory will be updated
      is to use cache maintenance operations to the PoC by MVA (see section D3.4.11,
      System level caches, of the ARMv8-A Reference Manual, Issue A.g/ARM DDI0487A.G).
      
      This patch removes the reliance of Trusted Firmware on the flush by set/way
      operation to ensure visibility of data in the main memory. Cache maintenance
      operations by MVA are now used instead. The following are the broad category of
      changes:
      
      1. The RW areas of BL2/BL31/BL32 are invalidated by MVA before the C runtime is
         initialised. This ensures that any stale cache lines at any level of cache
         are removed.
      
      2. Updates to global data in runtime firmware (BL31) by the primary CPU are made
         visible to secondary CPUs using a cache clean operation by MVA (see the sketch
         after this list).
      
      3. Cache maintenance by set/way operations are only used prior to power down.
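
      A sketch of the by-MVA pattern for making a shared variable visible to a
      CPU that still has its caches and MMU off (the helper name is an
      assumption of this sketch):

          #include <stddef.h>
          #include <stdint.h>

          /* Assumed by-MVA cache maintenance helper for the sketch. */
          extern void clean_dcache_range(uintptr_t addr, size_t size);

          volatile uint64_t cpu_release_addr;

          /* Primary CPU: publish an update so a secondary CPU running with
           * its data cache/MMU still off reads it from main memory. */
          void publish_release_addr_sketch(uint64_t entrypoint)
          {
                  cpu_release_addr = entrypoint;
                  clean_dcache_range((uintptr_t)&cpu_release_addr,
                                     sizeof(cpu_release_addr));
          }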
      
      NOTE: NON-UPSTREAM TRUSTED FIRMWARE CODE SHOULD MAKE EQUIVALENT CHANGES IN
      ORDER TO FUNCTION CORRECTLY ON PLATFORMS WITH SUPPORT FOR SYSTEM CACHES.
      
      Fixes ARM-software/tf-issues#205
      
      Change-Id: I64f1b398de0432813a0e0881d70f8337681f6e9a
  36. 10 Sep, 2015 1 commit
    • Pass the target suspend level to SPD suspend hooks · f1054c93
      Achin Gupta authored
      In certain Trusted OS implementations it is a requirement to pass them the
      highest power level which will enter a power down state during a PSCI
      CPU_SUSPEND or SYSTEM_SUSPEND API invocation. This patch passes this power level
      to the SPD in the "max_off_pwrlvl" parameter of the svc_suspend() hook.
      
      Currently, the highest power level which was requested to be placed in a low
      power state (retention or power down) is passed to the SPD svc_suspend_finish()
      hook. This hook is called after emerging from the low power state. It is more
      useful to pass the highest power level which was powered down instead. This
      patch does this by changing the semantics of the parameter passed to an SPD's
      svc_suspend_finish() hook. The name of the parameter has been changed from
      "suspend_level" to "max_off_pwrlvl" as well. Same changes have been made to the
      parameter passed to the tsp_cpu_resume_main() function.
      
      NOTE: THIS PATCH CHANGES THE SEMANTICS OF THE EXISTING "svc_suspend_finish()"
            API BETWEEN THE PSCI AND SPD/SP IMPLEMENTATIONS. THE LATTER MIGHT NEED
            UPDATES TO ENSURE CORRECT BEHAVIOUR.
      
      Change-Id: If3a9d39b13119bbb6281f508a91f78a2f46a8b90