1. 11 Jul, 2018 2 commits
  2. 27 Jun, 2018 2 commits
    • TSP: Enable cache along with MMU · bb00ea5b
      Jeenu Viswambharan authored
      
      
      Previously, data caches were disabled while enabling the MMU only
      because of the active stack. Now that we can enable the MMU without
      using the stack, we can enable both the MMU and data caches at the
      same time.
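
      A hedged C-level sketch of the idea (the TSP's actual MMU setup is in
      assembly; the function shown follows the ARM platform port rather than
      quoting the patch):

        #include <xlat_tables_v2.h>

        void tsp_plat_arch_setup(void)
        {
                /* ... populate the translation tables ... */
                init_xlat_tables();

                /* Previously: enable_mmu_el1(DISABLE_DCACHE);
                 * now the MMU and the D-cache come up together. */
                enable_mmu_el1(0);
        }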
      
      Change-Id: I73f3b8bae5178610e17e9ad06f81f8f6f97734a6
      Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
    • DynamIQ: Enable MMU without using stack · 64ee263e
      Jeenu Viswambharan authored
      
      
      Having an active stack while enabling the MMU has shown coherency
      problems. This patch builds on top of translation library changes that
      introduce enabling the MMU without using the stack.

      Previously, with HW_ASSISTED_COHERENCY, data caches were disabled while
      enabling the MMU only because of the active stack. Now that we can
      enable the MMU without using the stack, we can enable both the MMU and
      data caches at the same time.
      
      NOTE: Since this feature depends on using translation table library v2,
      disallow using translation table library v1 with HW_ASSISTED_COHERENCY.
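
      A minimal sketch of the kind of guard implied by this NOTE (where the
      real check is enforced is not quoted here, and the macro spellings are
      assumptions following TF-A conventions):

        #if HW_ASSISTED_COHERENCY && !defined(XLAT_TABLES_LIB_V2)
        #error "HW_ASSISTED_COHERENCY requires translation table library v2"
        #endif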
      
      Fixes ARM-software/tf-issues#566
      
      Change-Id: Ie55aba0c23ee9c5109eb3454cb8fa45d74f8bbb2
      Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
  3. 23 May, 2018 1 commit
  4. 27 Apr, 2018 1 commit
    • types: use int-ll64 for both aarch32 and aarch64 · 0a2d5b43
      Masahiro Yamada authored
      Since commit 031dbb12 ("AArch32: Add essential Arch helpers"), it is
      difficult to use consistent format strings for the printf() family
      between aarch32 and aarch64.
      
      For example, uint64_t is defined as 'unsigned long long' for aarch32
      and as 'unsigned long' for aarch64.  Likewise, uintptr_t is defined
      as 'unsigned int' for aarch32, and as 'unsigned long' for aarch64.
      
      A problem typically arises when you use printf() in common code.
      
      One solution could be to cast the arguments to a type wide enough for
      both architectures.  For example, if 'val' is of type uint64_t,
      like this:
      
        printf("val = %llx\n", (unsigned long long)val);
      
      Or somebody may suggest using a macro provided by <inttypes.h>,
      like this:
      
        printf("val = %" PRIx64 "\n", val);
      
      But, both would make the code ugly.
      
      The solution adopted in Linux kernel is to use the same typedefs for
      all architectures.  The fixed integer types in the kernel-space have
      been unified into int-ll64, like follows:
      
          typedef signed char           int8_t;
          typedef unsigned char         uint8_t;
      
          typedef signed short          int16_t;
          typedef unsigned short        uint16_t;
      
          typedef signed int            int32_t;
          typedef unsigned int          uint32_t;
      
          typedef signed long long      int64_t;
          typedef unsigned long long    uint64_t;
      
      [ Linux commit: 0c79a8e29b5fcbcbfd611daf9d500cfad8370fcf ]
      
      This works well for a codebase shared between 32 bit and 64 bit, whose
      data models are called ILP32 and LP64, respectively.
      
      The width for primitive types is defined as follows:
      
                         ILP32           LP64
          int            32              32
          long           32              64
          long long      64              64
          pointer        32              64
      
      'long long' is 64 bit for both, so it is used to define uint64_t.
      'long' has the same width as a pointer, so it is used for uintptr_t.
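
      A hedged sketch of the resulting shared definitions (illustrative, not
      the literal header):

          typedef signed long long      int64_t;
          typedef unsigned long long    uint64_t;   /* 64 bit on both data models */

          typedef signed long           intptr_t;
          typedef unsigned long         uintptr_t;  /* 'long' matches pointer width */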
      
      We still need an ifdef conditional for (s)size_t.
      
      All 64 bit architectures use "unsigned long" size_t, and most 32 bit
      architectures use "unsigned int" size_t.  H8/300 and S/390 are known
      exceptions; they use "unsigned long" size_t even though they are
      32 bit architectures.
      
      One idea for simplification might be to define size_t as 'unsigned long'
      across architectures, then forbid the use of the "%z" format string.
      However, this would cause a mismatch between size_t and the sizeof()
      operator.  We do not know the native type of sizeof(), so we would have
      to guess it anyway.  I want the following expression to always
      return 1:
      
        __builtin_types_compatible_p(size_t, typeof(sizeof(int)))
      
      Fortunately, ARM is in the majority case.  As far as I know, all
      32 bit ARM compilers use "unsigned int" size_t.
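
      A hedged sketch of how that property could be checked at build time
      (CASSERT is TF-A's compile-time assertion macro; its use here is
      illustrative, not part of the patch):

        #include <cassert.h>

        CASSERT(__builtin_types_compatible_p(size_t, typeof(sizeof(int))),
                assert_size_t_matches_sizeof_type);
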
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
  5. 13 Apr, 2018 2 commits
  6. 21 Mar, 2018 1 commit
    • Rename 'smcc' to 'smccc' · 085e80ec
      Antonio Nino Diaz authored
      
      
      When the source code says 'SMCC' it is talking about the SMC Calling
      Convention. The correct acronym is SMCCC. This affects a few definitions
      and file names.
      
      Some files have been renamed (smcc.h, smcc_helpers.h and smcc_macros.S)
      but the old files have been kept for compatibility; they include the
      new ones under an ERROR_DEPRECATED guard.
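
      A hedged sketch of the kind of compatibility shim this describes (the
      exact contents of the kept headers are not quoted here):

        /* smcc.h -- deprecated, kept only for compatibility */
        #ifndef SMCC_H
        #define SMCC_H

        #if ERROR_DEPRECATED
        #error "smcc.h is deprecated. Include smccc.h instead."
        #endif

        #include <smccc.h>

        #endif /* SMCC_H */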
      
      Change-Id: I78f94052a502436fdd97ca32c0fe86bd58173f2f
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
  7. 02 Mar, 2018 1 commit
    • Remove sp_min functions from plat_common.c · 0ed8c001
      Soby Mathew authored
      
      
      This patch removes default platform implementations of sp_min
      platform APIs from plat/common/aarch32/plat_common.c. The APIs
      are now implemented in `plat_sp_min_common.c` file within the
      same folder.
      
      The ARM platform layer had a weak definition of sp_min_platform_setup2()
      which conflicted with the weak definition in the common file. Hence this
      patch fixes that by introducing a `plat_arm_` version of the API, thus
      allowing individual boards within ARM platforms to override it if they
      wish to.
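
      A hedged, generic sketch of the weak-default pattern described above
      (the function name is illustrative, not the exact API):

        #pragma weak plat_arm_sp_min_setup

        void plat_arm_sp_min_setup(void)
        {
                /* Common ARM platform default; a board port may provide its
                 * own, non-weak definition to override this. */
        }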
      
      Fixes ARM-software/tf-issues#559
      
      Change-Id: I11a74ecae8191878ccc7ea03f12bdd5ae88faba5
      Signed-off-by: Soby Mathew <soby.mathew@arm.com>
  8. 27 Feb, 2018 1 commit
    • Add comments about mismatched TCR_ELx and xlat tables · 883d1b5d
      Antonio Nino Diaz authored
      
      
      When the MMU is enabled and the translation tables are mapped, data
      read/writes to the translation tables are made using the attributes
      specified in the translation tables themselves. However, the MMU
      performs table walks with the attributes specified in TCR_ELx. They are
      completely independent, so special care has to be taken to make sure
      that they are the same.
      
      This has to be done manually because it is not practical to have a test
      in the code. Such a test would need to know the virtual memory region
      that contains the translation tables and check that for all of the
      tables the attributes match the ones in TCR_ELx. As the tables may not
      even be mapped at all, this isn't a test that can be made generic.
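
      For illustration only (the symbols for the tables' region are
      hypothetical): if TCR_ELx is programmed for cacheable, inner/outer
      write-back write-allocate table walks, the region holding the tables
      should itself be mapped as normal cacheable memory, e.g.:

        mmap_add_region(XLAT_TABLES_BASE, XLAT_TABLES_BASE,
                        XLAT_TABLES_SIZE, MT_MEMORY | MT_RW | MT_SECURE);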
      
      The flags used by enable_mmu_xxx() have been moved to the same header
      where the functions are.
      
      Also, some comments in the linker scripts related to the translation
      tables have been fixed.
      
      Change-Id: I1754768bffdae75f53561b1c4a5baf043b45a304
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
  9. 26 Feb, 2018 1 commit
    • Introduce the new BL handover interface · a6f340fe
      Soby Mathew authored
      
      
      This patch introduces a new BL handover interface. It essentially allows
      passing 4 arguments between the different BL stages. Effort has been made
      to stay compatible with the previous handover interface. The previous
      blx_early_platform_setup() platform API is now deprecated and the new
      blx_early_platform_setup2() variant is introduced. The weak compatibility
      implementation for the new API is done in the `plat_bl_common.c` file.
      Some of the new arguments will be reserved for generic code use when
      dynamic configuration support is implemented; the other registers are
      available for platform use.
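
      A sketch of the shape of the new API, shown here for BL31 (the other BL
      stages get matching variants):

        void bl31_early_platform_setup2(u_register_t arg0, u_register_t arg1,
                                        u_register_t arg2, u_register_t arg3);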
      
      Change-Id: Ifddfe2ea8e32497fe1beb565cac155ad9d50d404
      Signed-off-by: Soby Mathew <soby.mathew@arm.com>
  10. 05 Feb, 2018 1 commit
    • aarch32: optee: define the OP-TEE secure payload · 10c66958
      Etienne Carriere authored
      
      
      AArch32-only platforms can boot the OP-TEE secure firmware as
      a BL32 secure payload. Such a configuration can be selected through
      AARCH32_SP=optee.

      The source files can rely on AARCH32_SP_OPTEE to guard
      OP-TEE boot specific instruction sequences.

      OP-TEE does not expect an ARM Trusted Firmware formatted structure
      as its boot argument. The load sequence is expected to have already
      loaded the OP-TEE boot arguments into the bl32 entrypoint info
      structure.

      Last, AArch32 platforms can only boot AArch32 OP-TEE images.
      
      Change-Id: Ic28eec5004315fc9111051add6bb1a1d607fc815
      Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
  11. 31 Jan, 2018 1 commit
  12. 18 Jan, 2018 1 commit
    • sp_min: Implement workaround for CVE-2017-5715 · 7343505d
      Dimitris Papastamos authored
      
      
      This patch introduces two workarounds for ARMv7 systems.  The
      workarounds need to be applied prior to any `branch` instruction in
      secure world.  This is achieved using a custom vector table where each
      entry is an `add sp, sp, #1` instruction.
      
      On entry to monitor mode, once the sequence of `ADD` instructions is
      executed, the branch target buffer (BTB) is invalidated.  The bottom
      bits of `SP` are then used to decode the exception entry type.
      
      A side effect of this change is that the exception vectors are
      installed before the CPU specific reset function.  This is now
      consistent with how it is done on AArch64.
      
      Note, on AArch32 systems, the exception vectors are typically tightly
      integrated with the secure payload (e.g. the Trusted OS).  This
      workaround will need porting to each secure payload that requires it.
      
      The patch to modify the AArch32 per-cpu vbar to the corresponding
      workaround vector table according to the CPU type will be done in a
      later patch.
      
      Change-Id: I5786872497d359e496ebe0757e8017fa98f753fa
      Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
  13. 29 Nov, 2017 3 commits
    • ARM platforms: Fixup AArch32 builds · 5744e874
      Soby Mathew authored
      
      
      This patch fixes a couple of issues for AArch32 builds on ARM reference
      platforms :
      
      1. The arm_def.h previously defined the same BL32_BASE value for AArch64
         and AArch32 builds. Since BL31 is not present in AArch32 mode, this
         meant that the BL31 memory was empty when built for AArch32. Hence
         this patch allocates BL32 to the memory region occupied by BL31 for
         AArch32 builds.

         As a side-effect of this change, the ARM_TSP_RAM_LOCATION macro
         cannot be used to control the load address of BL32 in AArch32 mode,
         which was never the intention of the macro anyway.
      
      2. A static assert is added to sp_min linker script to check that the progbits
         are within the bounds expected when overlaid with other images.
      
      3. Fix specifying `SPD` when building Juno for AArch32 mode. Due to the
         quirks involved when building Juno for AArch32 mode, the build option
         SPD needed to be specified. This patch corrects this and also updates
         the documentation in the user-guide.

      4. Exclude BL31 from the build and FIP when building Juno for AArch32
         mode. As a result the previous assumption that BL31 must always be
         present is removed and the certificates for BL31 are only generated
         if `NEED_BL31` is defined.
      
      Change-Id: I1c39bbc0abd2be8fbe9f2dea2e9cb4e3e3e436a8
      Signed-off-by: Soby Mathew <soby.mathew@arm.com>
    • Replace magic numbers in linkerscripts by PAGE_SIZE · a2aedac2
      Antonio Nino Diaz authored
      
      
      When defining different sections in linker scripts, they need to be
      aligned to multiples of the page size. In most linker scripts this is
      done by aligning to the hardcoded value 4096 instead of PAGE_SIZE.

      This may be confusing when looking through the codebase, as 4096 is
      also used in some places that aren't meant to be a multiple of the
      page size.
      
      Change-Id: I36c6f461c7782437a58d13d37ec8b822a1663ec1
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
    • AMU: Implement support for aarch32 · ef69e1ea
      Dimitris Papastamos authored
      
      
      The `ENABLE_AMU` build option can be used to enable the
      architecturally defined AMU counters.  At present, there is no support
      for the auxiliary counter group.
      
      Change-Id: Ifc7532ef836f83e629f2a146739ab61e75c4abc8
      Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
  14. 08 Nov, 2017 1 commit
  15. 23 Oct, 2017 1 commit
  16. 13 Oct, 2017 1 commit
    • Init and save / restore of PMCR_EL0 / PMCR · 3e61b2b5
      David Cunado authored
      
      
      Currently TF does not initialise the PMCR_EL0 register in
      the secure context or save/restore the register.
      
      In particular, the DP field may not be set to one to prohibit
      cycle counting in the secure state, even though event counting
      generally is prohibited via the default setting of MDCR_EL3.SPME
      to 0.

      This patch initialises PMCR_EL0.DP to one in the secure state
      to prohibit cycle counting and also initialises other fields
      that have an architecturally UNKNOWN reset value.
      
      Additionally, PMCR_EL0 is added to the list of registers that are
      saved and restored during a world switch.
      
      Similar changes are made for PMCR for the AArch32 execution state.
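
      A hedged sketch of the DP-bit part of this change (PMCR_EL0.DP is bit 5
      per the Arm ARM; the accessor names and surrounding code are assumptions
      following TF-A conventions, not the literal patch):

        #define PMCR_EL0_DP_BIT         (1u << 5)

        static void secure_pmcr_init(void)
        {
                uint64_t pmcr = read_pmcr_el0();

                /* DP = 1: prohibit cycle counting in the Secure state */
                pmcr |= PMCR_EL0_DP_BIT;
                write_pmcr_el0(pmcr);
        }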
      
      NOTE: secure world code at lower ELs that assumes other values in
      PMCR_EL0 will be impacted.
      
      Change-Id: Iae40e8c0a196d74053accf97063ebc257b4d2f3a
      Signed-off-by: David Cunado <david.cunado@arm.com>
  17. 05 Sep, 2017 1 commit
    • Set NS version SCTLR during warmboot path · 88ad1461
      David Cunado authored
      
      
      When ARM TF executes in AArch32 state, the NS version of SCTLR
      is not being set during the warm boot flow. This results in secondary
      CPUs entering the Non-secure world with the default reset value
      in SCTLR.

      This patch explicitly sets the value of the NS version of SCTLR
      during the warm boot flow rather than relying on the hardware.
      
      Change-Id: I86bf52b6294baae0a5bd8af0cd0358cc4f55c416
      Signed-off-by: David Cunado <david.cunado@arm.com>
  18. 21 Aug, 2017 1 commit
    • Fix x30 reporting for unhandled exceptions · 4d91838b
      Julius Werner authored
      
      
      Some error paths that lead to a crash dump will overwrite the value in
      the x30 register by calling functions with the no_ret macro, which
      resolves to a BL instruction. This is not very useful and not what the
      reader would expect, since a crash dump should usually show all
      registers in the state they were in when the exception happened. This
      patch replaces the offending function calls with a B instruction to
      preserve the value in x30.
      
      Change-Id: I2a3636f2943f79bab0cd911f89d070012e697c2a
      Signed-off-by: Julius Werner <jwerner@chromium.org>
  19. 15 Aug, 2017 1 commit
    • Add new alignment parameter to func assembler macro · 64726e6d
      Julius Werner authored
      
      
      Assembler programmers are used to being able to define functions with a
      specific alignment using a pattern like this:
      
          .align X
        myfunction:
      
      However, this pattern is subtly broken when instead of a direct label
      like 'myfunction:', you use the 'func myfunction' macro that's standard
      in Trusted Firmware. Since the func macro declares a new section for the
      function, the .align directive written above it actually applies to the
      *previous* section in the assembly file, and the function it was
      supposed to apply to is linked with default alignment.
      
      An extreme case can be seen in Rockchip's plat_helpers.S which contains
      this code:
      
        [...]
        endfunc plat_crash_console_putc
      
        .align 16
        func platform_cpu_warmboot
        [...]
      
      This assembles into the following plat_helpers.o:
      
        Sections:
        Idx Name                             Size  [...]  Algn
         9 .text.plat_crash_console_putc 00010000  [...]  2**16
        10 .text.platform_cpu_warmboot   00000080  [...]  2**3
      
      As can be seen, the *previous* function actually got the alignment
      constraint, and it is also 64KB big even though it contains only two
      instructions, because the .align directive at the end of its section
      forces the assembler to insert a giant sled of NOPs. The function we
      actually wanted to align has the default constraint. This code only
      works at all because the linker just happens to put the two functions
      right behind each other when linking the final image, and since the end
      of plat_crash_console_putc is aligned the start of platform_cpu_warmboot
      will also be. But it still wastes almost 64KB of image space
      unnecessarily, and it will break under certain circumstances (e.g. if
      the plat_crash_console_putc function becomes unused and its section gets
      garbage-collected out).
      
      There's no real way to fix this with the existing func macro. Code like
      
       func myfunc
       .align X
      
      happens to do the right thing, but is still not really correct code
      (because the function label is inserted before the .align directive, so
      the assembler is technically allowed to insert padding at the beginning
      of the function which would then get executed as instructions if the
      function was called). Therefore, this patch adds a new parameter with a
      default value to the func macro that allows overriding its alignment.
      
      Also fix up all existing instances of this dangerous antipattern.
      
      Change-Id: I5696a07e2fde896f21e0e83644c95b7b6ac79a10
      Signed-off-by: Julius Werner <jwerner@chromium.org>
  20. 09 Aug, 2017 1 commit
    • bl32: add secure interrupt handling in AArch32 sp_min · 71816096
      Etienne Carriere authored
      
      
      Add support for a minimal secure interrupt service in sp_min for
      the AArch32 implementation. Hard code that only FIQs are handled.
      
      Introduce the boolean build directive SP_MIN_WITH_SECURE_FIQ to enable
      FIQ handling from SP_MIN.

      Configure SCR[FIQ] and SCR[FW] from generic code for both cold and
      warm boots to handle FIQs in the secure state from the monitor.

      Since, in the SP_MIN architecture, FIQs are always trapped when the
      system executes in the non-secure state, the FIQ handler does not need
      to relay the secure/non-secure state.
      
      Change-Id: I1f7d1dc7b21f6f90011b7f3fcd921e455592f5e7
      Signed-off-by: Etienne Carriere <etienne.carriere@st.com>
  21. 23 Jun, 2017 1 commit
  22. 21 Jun, 2017 1 commit
    • Fully initialise essential control registers · 18f2efd6
      David Cunado authored
      
      
      This patch updates the el3_arch_init_common macro so that it fully
      initialises essential control registers rather than relying on hardware
      to set the reset values.
      
      The context management functions are also updated to fully initialise
      the appropriate control registers when initialising the non-secure and
      secure context structures and when preparing to leave EL3 for a lower
      EL.
      
      This gives better alignment with the ARM ARM, which states that software
      must initialise RES0 and RES1 fields with 0 / 1.
      
      This patch also corrects the following typos:
      
      "NASCR definitions" -> "NSACR definitions"
      
      Change-Id: Ia8940b8351dc27bc09e2138b011e249655041cfc
      Signed-off-by: David Cunado <david.cunado@arm.com>
  23. 20 Jun, 2017 2 commits
  24. 12 May, 2017 1 commit
    • AArch32: Rework SMC context save and restore mechanism · b6285d64
      Soby Mathew authored
      
      
      The current SMC context data structure `smc_ctx_t` and related helpers are
      optimized for the case when an SMC call does not result in a world switch.
      This was the case for the SP_MIN and BL1 cold boot flows. But the firmware
      update use case requires a world switch as a result of an SMC, and the
      current SMC context helpers were not helping very much in this regard.
      Therefore this patch does the following changes to improve this:
      
      1. Add the monitor stack pointer, `sp_mon`, to `smc_ctx_t`
      
      The C Runtime stack pointer in monitor mode, `sp_mon` is added to the
      SMC context, and the `smc_ctx_t` pointer is cached in `sp_mon` prior
      to exit from Monitor mode. This makes it easier to retrieve the
      context when the next SMC call happens. As a result of this change,
      the SMC context helpers no longer depend on the stack to save and
      restore the register.
      
      This aligns it with the context save and restore mechanism in AArch64.
      
      2. Add SCR in `smc_ctx_t`
      
      Adding the SCR register to `smc_ctx_t` makes it easier to manage this
      register state when switching between the non-secure and secure worlds as a
      result of an SMC call.
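
      A hedged sketch of the resulting context layout (the field order and the
      full register list are illustrative, not the literal structure):

        typedef struct smc_ctx {
                u_register_t r0;
                /* ... remaining general purpose registers ... */
                u_register_t sp_mon;    /* monitor-mode C runtime stack pointer */
                u_register_t scr;       /* SCR value for this security state */
        } smc_ctx_t;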
      
      Change-Id: I5e12a7056107c1701b457b8f7363fdbf892230bf
      Signed-off-by: Soby Mathew <soby.mathew@arm.com>
      Signed-off-by: dp-arm <dimitris.papastamos@arm.com>
  25. 03 May, 2017 1 commit
  26. 26 Apr, 2017 1 commit
  27. 19 Apr, 2017 1 commit
    • PSCI: Build option to enable D-Caches early in warmboot · bcc3c49c
      Soby Mathew authored
      
      
      This patch introduces a build option to enable the D-cache early on the
      CPU after warm boot. This is applicable for platforms which do not
      require interconnect programming to enable cache coherency (eg: single
      cluster platforms). If this option is enabled, then the warm boot path
      enables D-caches immediately after enabling the MMU.
      
      Fixes ARM-Software/tf-issues#456
      
      Change-Id: I44c8787d116d7217837ced3bcf0b1d3441c8d80e
      Signed-off-by: Soby Mathew <soby.mathew@arm.com>
  28. 31 Mar, 2017 1 commit
    • Add support for GCC stack protection · 51faada7
      Douglas Raillard authored
      
      
      Introduce new build option ENABLE_STACK_PROTECTOR. It enables
      compilation of all BL images with one of the GCC -fstack-protector-*
      options.
      
      A new platform function plat_get_stack_protector_canary() is introduced.
      It returns a value that is used to initialize the canary for stack
      corruption detection. Returning a random value will prevent an attacker
      from predicting the value and greatly increase the effectiveness of the
      protection.
      
      A message is printed at the ERROR level when a stack corruption is
      detected.
      
      To be effective, the global data must be stored at an address
      lower than the base of the stacks. Failure to do so would allow an
      attacker to overwrite the canary as part of an attack which would void
      the protection.
      
      The FVP implementation of plat_get_stack_protector_canary() is weak, as
      there is no real source of entropy on the FVP. It therefore relies on a
      timer's value, which could be predictable.
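
      A hedged sketch of such a platform hook (the timer fallback mirrors what
      is described for the FVP above; the mixing constant is illustrative):

        u_register_t plat_get_stack_protector_canary(void)
        {
                /* Ideally a true random number; this fallback mixes a fixed
                 * value with the generic counter-timer, so it is predictable. */
                return 0x3d87a4f9U ^ read_cntpct_el0();
        }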
      
      Change-Id: Icaaee96392733b721fa7c86a81d03660d3c1bc06
      Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
  29. 08 Mar, 2017 1 commit
  30. 02 Mar, 2017 1 commit
  31. 06 Feb, 2017 2 commits
    • Replace some memset call by zeromem · 32f0d3c6
      Douglas Raillard authored
      
      
      Replace all uses of memset by zeromem when zeroing moderately-sized
      structures by applying the following transformation:
      memset(x, 0, sizeof(x)) => zeromem(x, sizeof(x))

      As the Trusted Firmware is compiled with -ffreestanding, it forbids the
      compiler from using __builtin_memset and forces it to generate calls to
      the slow memset implementation. Zeromem is a near drop-in replacement
      for this use case, with a more efficient implementation on both AArch32
      and AArch64.
      
      Change-Id: Ia7f3a90e888b96d056881be09f0b4d65b41aa79e
      Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
    • Introduce unified API to zero memory · 308d359b
      Douglas Raillard authored
      
      
      Introduce zeromem_dczva function on AArch64 that can handle unaligned
      addresses and make use of DC ZVA instruction to zero a whole block at a
      time. This zeroing takes place directly in the cache to speed it up
      without doing external memory access.
      
      Remove the zeromem16 function on AArch64 and replace it with an alias to
      zeromem. This zeromem16 function is now deprecated.
      
      Remove the 16-byte alignment constraint on __BSS_START__ in
      firmware-design.md as it is no longer mandatory (it used to comply
      with zeromem16 requirements).

      Change the 16-byte alignment constraints in SP min's linker script to an
      8-byte alignment constraint, as the AArch32 zeromem implementation is now
      more efficient on 8-byte aligned addresses.
      
      Introduce zero_normalmem and zeromem helpers in platform agnostic header
      that are implemented this way:
      * AArch32:
      	* zero_normalmem: zero using usual data access
      	* zeromem: alias for zero_normalmem
      * AArch64:
      	* zero_normalmem: zero normal memory  using DC ZVA instruction
      	                  (needs MMU enabled)
      	* zeromem: zero using usual data access
      
      Usage guidelines: in most cases, zero_normalmem should be preferred.
      
      There are 2 scenarios where zeromem (or memset) must be used instead:
      * Code that must run with MMU disabled (which means all memory is
        considered device memory for data accesses).
      * Code that fills device memory with null bytes.
      
      Optionally, the following rule can be applied if performance is
      important:
      * Code zeroing small areas (few bytes) that are not secrets should use
        memset to take advantage of compiler optimizations.
      
        Note: Code zeroing security-related critical information should use
        zero_normalmem/zeromem instead of memset to avoid removal by
        compilers' optimizations in some cases or misbehaving versions of GCC.
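
      A hedged usage sketch following these guidelines (the structure and
      function are just examples):

        static entry_point_info_t next_image_ep;

        static void clear_next_image_info(void)
        {
                /* MMU is on and the structure lives in normal memory, so
                 * zero_normalmem is the right choice here. */
                zero_normalmem(&next_image_ep, sizeof(next_image_ep));
        }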
      
      Fixes ARM-software/tf-issues#408
      
      Change-Id: Iafd9663fc1070413c3e1904e54091cf60effaa82
      Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
  32. 23 Dec, 2016 1 commit
    • Abort preempted TSP STD SMC after PSCI CPU suspend · 3df6012a
      Douglas Raillard authored
      
      
      Standard SMC requests that are handled in the secure-world by the Secure
      Payload can be preempted by interrupts that must be handled in the
      normal world. When the TSP is preempted the secure context is stored and
      control is passed to the normal world to handle the non-secure
      interrupt. Once completed the preempted secure context is restored. When
      restoring the preempted context, the dispatcher assumes that the TSP
      preempted context is still stored as the SECURE context by the context
      management library.
      
      However, PSCI power management operations cause synchronous entry into
      the TSP. This overwrites the preempted SECURE context in the context
      management library. When restoring the SECURE context, the Secure
      Payload crashes because this context is not the preempted context
      anymore.
      
      This patch avoids corruption of the preempted SECURE context by aborting
      any preempted SMC during PSCI power management calls. The
      abort_std_smc_entry hook of the TSP is called when aborting the SMC
      request.
      
      It also exposes this feature as a FAST SMC callable from the normal world
      to abort a preempted SMC, with FID TSP_FID_ABORT.
      
      Change-Id: I7a70347e9293f47d87b5de20484b4ffefb56b770
      Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
  33. 12 Dec, 2016 1 commit
    • AArch32: Fix the stack alignment issue · 9f3ee61c
      Soby Mathew authored
      
      
      The AArch32 Procedure Call Standard mandates that the stack must be
      aligned to an 8-byte boundary at external interfaces. This patch does
      the required changes.

      This problem was detected when a crash was encountered in
      `psci_print_power_domain_map()` while printing 64 bit values. Aligning
      the stack to an 8-byte boundary resolved the problem.
      
      Fixes ARM-Software/tf-issues#437
      
      Change-Id: I517bd8203601bb88e9311bd36d477fb7b3efb292
      Signed-off-by: Soby Mathew <soby.mathew@arm.com>