1. 11 Jul, 2018 2 commits
    • Add end_vector_entry assembler macro · a9203eda
      Roberto Vargas authored
      
      
      check_vector_size checks whether the size of a vector fits in the
      space reserved for it. This check creates problems for the Clang
      assembler. A new macro, end_vector_entry, is added and
      check_vector_size is deprecated.
      
      The new macro pads the current exception vector up to the start of the
      next exception vector. If the current vector is longer than 32
      instructions, it raises an error.
      
      Change-Id: Ie8545cf1003a1e31656a1018dd6b4c28a4eaf671
      Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
    • Remove .func and .endfunc assembler directives · b2805dab
      Roberto Vargas authored
      
      
      These directives are only used when stabs debugging information is
      used, but we use ELF, which uses DWARF debugging information. The
      Clang assembler doesn't support these directives, and removing them
      makes the code more compatible with Clang.
      
      Change-Id: I2803f22ebd24c0fe248e04ef1b17de9cec5f89c4
      Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
  2. 27 Jun, 2018 1 commit
    • xlat v2: Split MMU setup and enable · 0cc7aa89
      Jeenu Viswambharan authored
      
      
      At present, the function provided by the translation library to enable
      the MMU constructs the appropriate register values and programs them
      into the right registers. The construction of these initial values,
      however, is only required once, as both the primary and the secondaries
      program the same values.
      
      Additionally, the MMU-enabling function is written in C, which means
      there's an active stack at the time of enabling the MMU. On some
      systems, like Arm DynamIQ, having an active stack while enabling the
      MMU during warm boot might lead to coherency problems.
      
      This patch addresses both the above problems by:
      
        - Splitting the MMU-enabling function into two: one that sets up
          values to be programmed into the registers, and another one that
          takes the pre-computed values and writes to the appropriate
          registers. With this, the primary effectively calls both functions
          to have the MMU enabled, but secondaries only need to call the
          latter.
      
        - Rewriting the function that enables MMU in assembly so that it
          doesn't use stack.
      
      This patch fixes a number of MISRA issues along the way.
      
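      As a minimal C sketch of the resulting flow (the function and variable
      names below are illustrative assumptions, not the exact library API):
      
        #include <stdint.h>
        
        /* Pre-computed MMU register values, e.g. MAIR, TCR and TTBR. */
        extern uint64_t mmu_cfg_params[3];
        
        /* Computes the register values; written in C, called only once. */
        void setup_mmu_cfg(uint64_t *params);
        /* Programs the pre-computed values; written in assembly, no stack use. */
        void enable_mmu_direct(const uint64_t *params);
        
        static void primary_enable_mmu(void)
        {
            /* The primary CPU computes the values and then programs them. */
            setup_mmu_cfg(mmu_cfg_params);
            enable_mmu_direct(mmu_cfg_params);
        }
        
        static void secondary_enable_mmu(void)
        {
            /* Secondaries only program the values already computed above. */
            enable_mmu_direct(mmu_cfg_params);
        }
      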
      Change-Id: I0faca97263a970ffe765f0e731a1417e43fbfc45
      Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
  3. 22 May, 2018 1 commit
  4. 04 May, 2018 1 commit
    • AArch64: Introduce RAS handling · 14c6016a
      Jeenu Viswambharan authored
      
      
      RAS extensions are mandatory for ARMv8.2 CPUs, but are also an optional
      extension to the base ARMv8.0 architecture.
      
      This patch adds build system support to enable RAS features in ARM
      Trusted Firmware. A boolean build option, RAS_EXTENSION, is introduced
      for this.
      
      With RAS_EXTENSION, an Exception Synchronization Barrier (ESB) is
      inserted at all EL3 vector entries and exits. ESBs will synchronize
      pending external aborts before entering EL3, and therefore will contain
      and attribute errors to lower-EL execution. Any errors thus synchronized
      are detected via the DISR_EL1 register.
      
      When RAS_EXTENSION is set to 1, HANDLE_EL3_EA_FIRST must also be set to 1.
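      
      This dependency can be illustrated with a compile-time check (only a
      sketch; the project enforces the constraint through its build system):
      
        #if RAS_EXTENSION && !HANDLE_EL3_EA_FIRST
        #error "RAS_EXTENSION=1 requires HANDLE_EL3_EA_FIRST=1"
        #endif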
      
      Change-Id: I38a19d84014d4d8af688bd81d61ba582c039383a
      Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
  5. 07 Apr, 2018 2 commits
    • fix instruction address range limitation · b4ad9768
      Jiafei Pan authored
      
      
      The adr instruction requires the label's offset from the address of
      the instruction to be within the range +/-1MB. If the option
      "BL2_IN_XIP_MEM" is set to '1', in some cases BL2's RW memory will
      not be within +/-1MB of BL2's RO memory region, so we need to use the
      ldr instruction to cover this case.
      Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com>
    • Add support for BL2 in XIP memory · 7d173fc5
      Jiafei Pan authored
      
      
      In some use-cases BL2 will be stored in eXecute In Place (XIP) memory,
      like BL1. In these use-cases, it is necessary to initialize the RW sections
      in RAM, while leaving the RO sections in place. This patch enables this
      use-case with a new build option, BL2_IN_XIP_MEM. For now, this option
      is only supported when BL2_AT_EL3 is 1.
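      
      A rough sketch of what initialising the RW sections looks like in C
      (the linker symbol and function names here are assumptions for
      illustration, not the ones introduced by the patch):
      
        #include <string.h>
        
        /* Assumed linker-defined symbols marking the copy source in XIP
         * flash and the run-time destination in RAM. */
        extern char __DATA_ROM_START__[], __DATA_RAM_START__[], __DATA_RAM_END__[];
        extern char __BSS_START__[], __BSS_END__[];
        
        static void bl2_xip_init_rw_sections(void)
        {
            /* Copy initialised data from its load address in XIP memory
             * to its execution address in RAM. */
            memcpy(__DATA_RAM_START__, __DATA_ROM_START__,
                   (size_t)(__DATA_RAM_END__ - __DATA_RAM_START__));
        
            /* Zero the BSS in RAM; the RO sections stay in place. */
            memset(__BSS_START__, 0, (size_t)(__BSS_END__ - __BSS_START__));
        }
      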
      Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com>
  6. 18 Jan, 2018 1 commit
    • bl2-el3: Add BL2_EL3 image · b1d27b48
      Roberto Vargas authored
      
      
      This patch enables BL2 to execute at the highest exception level
      without any dependency on TF BL1. This enables platforms which already
      have a non-TF Boot ROM to directly load and execute BL2 and subsequent BL
      stages without the need for BL1. This is not currently possible because
      BL2 executes at S-EL1 and cannot jump straight to EL3.
      
      Change-Id: Ief1efca4598560b1b8c8e61fbe26d1f44e929d69
      Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
  7. 11 Jan, 2018 1 commit
    • Workaround for CVE-2017-5715 on Cortex A57 and A72 · f62ad322
      Dimitris Papastamos authored
      
      
      Invalidate the Branch Target Buffer (BTB) on entry to EL3 by disabling
      and enabling the MMU.  To achieve this without performing any branch
      instruction, a per-cpu vbar is installed which executes the workaround
      and then branches off to the corresponding vector entry in the main
      vector table.  A side effect of this change is that the main vbar is
      configured before any reset handling.  This is to allow the per-cpu
      reset function to override the vbar setting.
      
      This workaround is enabled by default on the affected CPUs.
      
      Change-Id: I97788d38463a5840a410e3cea85ed297a1678265
      Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
  8. 12 Dec, 2017 1 commit
    • Add new function-pointer-based console API · 9536bae6
      Julius Werner authored
      
      
      This patch overhauls the console API to allow for multiple console
      instances of different drivers that are active at the same time. Instead
      of binding to well-known function names (like console_core_init),
      consoles now provide a register function (e.g. console_16550_register())
      that will hook them into the list of active consoles. All console
      operations will be dispatched to all consoles currently in the list.
      
      The new API will be selected by the build-time option MULTI_CONSOLE_API,
      which defaults to ${ERROR_DEPRECATED} for now. The old console API code
      will be retained to stay backwards-compatible to older platforms, but
      should no longer be used for any newly added platforms and can hopefully
      be removed at some point in the future.
      
      The new console API is intended to be used for both normal (bootup) and
      crash use cases, freeing platforms of the need to set up the crash
      console separately. Consoles can be individually configured to be active
      at boot (until first handoff to EL2), at runtime (after first
      handoff to EL2), and/or after a crash. Console drivers should set a sane
      default upon registration that can be overridden with the
      console_set_scope() call. Code to hook up the crash reporting mechanism
      to this framework will be added with a later patch.
      
      This patch only affects AArch64, but the new API could easily be ported
      to AArch32 as well if desired.
      
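      A hedged example of how a platform might hook into the new API (the
      exact driver signature, structure field and flag names are assumptions
      based on the description above, and the PLAT_* macros are hypothetical
      platform constants):
      
        #include <console.h>
        #include <uart_16550.h>
        
        static console_16550_t boot_console;
        
        void plat_setup_console(void)
        {
            /* Register the UART so it joins the list of active consoles. */
            console_16550_register(PLAT_UART_BASE, PLAT_UART_CLK,
                                   PLAT_CONSOLE_BAUDRATE, &boot_console);
        
            /* Restrict this console to boot-time and crash output only. */
            console_set_scope(&boot_console.console,
                              CONSOLE_FLAG_BOOT | CONSOLE_FLAG_CRASH);
        }
      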
      Change-Id: I35c5aa2cb3f719cfddd15565eb13c7cde4162549
      Signed-off-by: Julius Werner <jwerner@chromium.org>
  9. 30 Nov, 2017 1 commit
    • Enable SVE for Non-secure world · 1a853370
      David Cunado authored
      
      
      This patch adds a new build option, ENABLE_SVE_FOR_NS. When it is set
      to 1, EL3 will check whether the Scalable Vector Extension (SVE) is
      implemented when entering and exiting the Non-secure world.
      
      If SVE is implemented, EL3 will do the following:
      
      - Entry to Non-secure world: SIMD, FP and SVE functionality is enabled.
      
      - Exit from Non-secure world: SIMD, FP and SVE functionality is
        disabled. Because the SIMD and FP registers are part of the SVE
        Z-registers, any use of SIMD / FP functionality would corrupt the
        SVE registers.
      
      The build option default is 1. The SVE functionality is only supported
      on AArch64, so the build option is set to zero when the target
      architecture is AArch32.
      
      This build option is not compatible with CTX_INCLUDE_FPREGS - an
      assert will be raised on platforms where SVE is implemented and both
      ENABLE_SVE_FOR_NS and CTX_INCLUDE_FPREGS are set to 1.
      
      Also note this change prevents secure world use of FP&SIMD registers on
      SVE-enabled platforms. Existing Secure-EL1 Payloads will not work on
      such platforms unless ENABLE_SVE_FOR_NS is set to 0.
      
      Additionally, on the first entry into the Non-secure world the SVE
      functionality is enabled and the SVE Z-register length is set to the
      maximum size allowed by the architecture. This includes the use case
      where EL2 is implemented but not used.
      
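      A minimal sketch of the enable path, assuming the usual system-register
      accessor helpers (read_/write_cptr_el3, read_/write_zcr_el3, isb) are
      available from the generic arch headers; bit positions are taken from
      the architecture:
      
        #include <arch_helpers.h>
        #include <stdint.h>
        
        #define CPTR_EL3_EZ_BIT   (1ULL << 8)  /* don't trap SVE at EL3 and below */
        #define ZCR_EL3_LEN_MASK  0xfULL       /* requested vector length, in 128-bit units minus 1 */
        
        static void sve_enable_ns(void)
        {
            /* Stop trapping SVE instructions and ZCR accesses to EL3. */
            write_cptr_el3(read_cptr_el3() | CPTR_EL3_EZ_BIT);
            isb();
        
            /* Ask for the maximum vector length allowed; the hardware
             * constrains it to what the implementation actually supports. */
            write_zcr_el3(read_zcr_el3() | ZCR_EL3_LEN_MASK);
        }
      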
      Change-Id: Ie2d733ddaba0b9bef1d7c9765503155188fe7dae
      Signed-off-by: David Cunado <david.cunado@arm.com>
  10. 20 Nov, 2017 1 commit
    • Refactor Statistical Profiling Extensions implementation · 281a08cc
      Dimitris Papastamos authored
      
      
      Factor out SPE operations into a separate file.  Use the publish-
      subscribe framework to drain the SPE buffers before entering the secure
      world.  Additionally, enable SPE before entering the normal world.
      
      A side effect of this change is that the profiling buffers are now
      only drained when a transition from normal world to secure world
      happens.  Previously they were also drained on return from the secure
      world, which is unnecessary as SPE is not supported in S-EL1.
      
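      A sketch of the publish-subscribe hookup (the event, macro and helper
      names are assumptions based on the description above):
      
        #include <arch_helpers.h>
        #include <pubsub_events.h>
        
        static void *spe_drain_buffers_hook(const void *arg)
        {
            /* Drain any buffered profiling data before the world switch so
             * nothing is pending when the secure world starts executing. */
            psb_csync();
            dsbnsh();
            return (void *)0;
        }
        
        SUBSCRIBE_TO_EVENT(cm_entering_secure_world, spe_drain_buffers_hook);
      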
      Change-Id: I17582c689b4b525770dbb6db098b3a0b5777b70a
      Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
  11. 08 Nov, 2017 1 commit
    • SPM: Introduce Secure Partition Manager · 2fccb228
      Antonio Nino Diaz authored
      
      
      A Secure Partition is a software execution environment instantiated in
      S-EL0 that can be used to implement simple management and security
      services. Since S-EL0 is an unprivileged exception level, a Secure
      Partition relies on privileged firmware e.g. ARM Trusted Firmware to be
      granted access to system and processor resources. Essentially, it is a
      software sandbox that runs under the control of privileged software in
      the Secure World and accesses the following system resources:
      
      - Memory and device regions in the system address map.
      - PE system registers.
      - A range of asynchronous exceptions e.g. interrupts.
      - A range of synchronous exceptions e.g. SMC function identifiers.
      
      A Secure Partition enables privileged firmware to implement only the
      absolutely essential secure services in EL3 and instantiate the rest in
      a partition. Since the partition executes in S-EL0, its implementation
      cannot be overly complex.
      
      The component in ARM Trusted Firmware responsible for managing a Secure
      Partition is called the Secure Partition Manager (SPM). The SPM is
      responsible for the following:
      
      - Validating and allocating resources requested by a Secure Partition.
      - Implementing a well defined interface that is used for initialising a
        Secure Partition.
      - Implementing a well defined interface that is used by the normal world
        and other secure services for accessing the services exported by a
        Secure Partition.
      - Implementing a well defined interface that is used by a Secure
        Partition to fulfil service requests.
      - Instantiating the software execution environment required by a Secure
        Partition to fulfil a service request.
      
      Change-Id: I6f7862d6bba8732db5b73f54e789d717a35e802f
      Co-authored-by: Douglas Raillard <douglas.raillard@arm.com>
      Co-authored-by: Sandrine Bailleux <sandrine.bailleux@arm.com>
      Co-authored-by: Achin Gupta <achin.gupta@arm.com>
      Co-authored-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
  12. 31 Aug, 2017 1 commit
    • Add CFI debug info to vector entries · 31823b69
      Douglas Raillard authored
      
      
      Add Call Frame Information assembler directives to vector entries so
      that debuggers display the backtrace of functions that triggered a
      synchronous exception. For example, a function triggering a data abort
      will be easier to debug if the backtrace can be displayed from a
      breakpoint at the beginning of the synchronous exception vector.
      
      DS-5 needs CFI, otherwise it will not attempt to display the backtrace.
      Other debuggers might have other needs. This debug information is
      stored in the ELF file but not in the final binary.
      
      Change-Id: I32dc4e4b7af02546c93c1a45c71a1f6d710d36b1
      Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
  13. 22 Jun, 2017 1 commit
    • aarch64: Enable Statistical Profiling Extensions for lower ELs · d832aee9
      dp-arm authored
      
      
      SPE is only supported in non-secure state.  Accesses to SPE-specific
      registers from S-EL1 will trap to EL3.  During a world switch, before
      `TTBR` is modified, the SPE profiling buffers are drained.  This is to
      avoid a potential invalid memory access in S-EL1.
      
      SPE is architecturally specified only for AArch64.
      
      Change-Id: I04a96427d9f9d586c331913d815fdc726855f6b0
      Signed-off-by: dp-arm <dimitris.papastamos@arm.com>
  14. 21 Jun, 2017 1 commit
    • Fully initialise essential control registers · 18f2efd6
      David Cunado authored
      
      
      This patch updates the el3_arch_init_common macro so that it fully
      initialises essential control registers rather than relying on hardware
      to set the reset values.
      
      The context management functions are also updated to fully initialise
      the appropriate control registers when initialising the non-secure and
      secure context structures and when preparing to leave EL3 for a lower
      EL.
      
      This gives better alignment with the ARM ARM, which states that software
      must initialise RES0 and RES1 fields to 0 / 1 respectively.
      
      This patch also corrects the following typos:
      
      "NASCR definitions" -> "NSACR definitions"
      
      Change-Id: Ia8940b8351dc27bc09e2138b011e249655041cfc
      Signed-off-by: David Cunado <david.cunado@arm.com>
  15. 03 May, 2017 1 commit
  16. 31 Mar, 2017 1 commit
    • Add support for GCC stack protection · 51faada7
      Douglas Raillard authored
      
      
      Introduce new build option ENABLE_STACK_PROTECTOR. It enables
      compilation of all BL images with one of the GCC -fstack-protector-*
      options.
      
      A new platform function plat_get_stack_protector_canary() is introduced.
      It returns a value that is used to initialize the canary for stack
      corruption detection. Returning a random value will prevent an attacker
      from predicting the value and greatly increase the effectiveness of the
      protection.
      
      A message is printed at the ERROR level when a stack corruption is
      detected.
      
      To be effective, the global data must be stored at an address
      lower than the base of the stacks. Failure to do so would allow an
      attacker to overwrite the canary as part of an attack which would void
      the protection.
      
      The FVP implementation of plat_get_stack_protector_canary is weak, as
      there is no real source of entropy on the FVP. It therefore relies on
      a timer's value, which could be predictable.
      
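      A sketch of such a platform hook (the return type and counter accessor
      are assumed to come from the generic TF headers; a platform with a real
      entropy source should return a truly random value instead):
      
        #include <arch_helpers.h>
        #include <stdint.h>
        
        /* Fixed seed XORed with a counter - a weak, FVP-style fallback only. */
        #define CANARY_SEED 0x8092b4d6aef3c1a5ULL
        
        uint64_t plat_get_stack_protector_canary(void)
        {
            /* Mixing in the generic timer count makes the canary vary
             * between boots, but it remains guessable by an attacker. */
            return CANARY_SEED ^ read_cntpct_el0();
        }
      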
      Change-Id: Icaaee96392733b721fa7c86a81d03660d3c1bc06
      Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
  17. 15 Feb, 2017 1 commit
    • Disable secure self-hosted debug via MDCR_EL3/SDCR · 85e93ba0
      dp-arm authored
      
      
      Trusted Firmware currently has no support for secure self-hosted
      debug.  To avoid unexpected exceptions, disable software debug
      exceptions, other than software breakpoint instruction exceptions,
      from all exception levels in secure state.  This applies to both
      AArch32 and AArch64 EL3 initialization.
      
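      The AArch64 effect can be sketched in C as follows (the register
      accessor helpers are assumed; the SDD bit position is from the Arm ARM):
      
        #include <arch_helpers.h>
        
        #define MDCR_SDD_BIT (1ULL << 16)  /* MDCR_EL3.SDD: Secure self-hosted debug disable */
        
        static void disable_secure_self_hosted_debug(void)
        {
            /* Disable software debug exceptions, other than breakpoint
             * instruction exceptions, from all ELs in Secure state. */
            write_mdcr_el3(read_mdcr_el3() | MDCR_SDD_BIT);
        }
      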
      Change-Id: Id097e54a6bbcd0ca6a2be930df5d860d8d09e777
      Signed-off-by: dp-arm <dimitris.papastamos@arm.com>
  18. 06 Feb, 2017 1 commit
    • Introduce unified API to zero memory · 308d359b
      Douglas Raillard authored
      
      
      Introduce a zeromem_dczva function on AArch64 that can handle unaligned
      addresses and makes use of the DC ZVA instruction to zero a whole block
      at a time. This zeroing takes place directly in the cache to speed it up
      without doing external memory accesses.
      
      Remove the zeromem16 function on AArch64 and replace it with an alias to
      zeromem. The zeromem16 function is now deprecated.
      
      Remove the 16-byte alignment constraint on __BSS_START__ in
      firmware-design.md as it is not mandatory anymore (it used to comply
      with zeromem16 requirements).
      
      Change the 16-byte alignment constraints in SP min's linker script to an
      8-byte alignment constraint, as the AArch32 zeromem implementation is now
      more efficient on 8-byte aligned addresses.
      
      Introduce zero_normalmem and zeromem helpers in platform agnostic header
      that are implemented this way:
      * AArch32:
      	* zero_normalmem: zero using usual data access
      	* zeromem: alias for zero_normalmem
      * AArch64:
      	* zero_normalmem: zero normal memory  using DC ZVA instruction
      	                  (needs MMU enabled)
      	* zeromem: zero using usual data access
      
      Usage guidelines: in most cases, zero_normalmem should be preferred.
      
      There are 2 scenarios where zeromem (or memset) must be used instead:
      * Code that must run with MMU disabled (which means all memory is
        considered device memory for data accesses).
      * Code that fills device memory with null bytes.
      
      Optionally, the following rule can be applied if performance is
      important:
      * Code zeroing small areas (few bytes) that are not secrets should use
        memset to take advantage of compiler optimizations.
      
        Note: Code zeroing security-related critical information should use
        zero_normalmem/zeromem instead of memset to avoid removal by
        compilers' optimizations in some cases or misbehaving versions of GCC.
      
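      A short usage sketch of these guidelines (the zero_normalmem prototype
      shown here is assumed for illustration):
      
        #include <string.h>
        
        void zero_normalmem(void *mem, size_t length);
        
        static unsigned char session_key[32];  /* security-sensitive       */
        static char scratch_line[16];          /* small and not a secret   */
        
        static void wipe_buffers(void)
        {
            /* Secret data: use the unified helper so the zeroing is not
             * removed by compiler optimisations and can use DC ZVA. */
            zero_normalmem(session_key, sizeof(session_key));
        
            /* Small, non-sensitive area: plain memset is fine and lets the
             * compiler optimise freely. */
            memset(scratch_line, 0, sizeof(scratch_line));
        }
      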
      Fixes ARM-software/tf-issues#408
      
      Change-Id: Iafd9663fc1070413c3e1904e54091cf60effaa82
      Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
  19. 30 Jan, 2017 1 commit
  20. 23 Jan, 2017 1 commit
    • Use #ifdef for IMAGE_BL* instead of #if · 3d8256b2
      Masahiro Yamada authored
      
      
      One nasty part of ATF is that some boolean macros are always defined
      as 1 or 0, while the rest of them are only defined under certain
      conditions.
      
      For the former group, "#if FOO" or "#if !FOO" must be used, because
      "#ifdef FOO" is always true.  (Options passed by $(call add_define,)
      fall into this group.)
      
      For the latter, "#ifdef FOO" or "#ifndef FOO" should be used, because
      checking the value of an undefined macro is strange.
      
      Here, IMAGE_BL* is handled by make_helpers/build_macro.mk as follows:
      
        $(eval IMAGE := IMAGE_BL$(call uppercase,$(3)))
      
        $(OBJ): $(2)
                @echo "  CC      $$<"
                $$(Q)$$(CC) $$(TF_CFLAGS) $$(CFLAGS) -D$(IMAGE) -c $$< -o $$@
      
      This means that IMAGE_BL* is defined when building the corresponding
      image, but *undefined* for the other images.
      
      So, IMAGE_BL* belongs to the latter group, for which we should use
      #ifdef or #ifndef.
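      
      For illustration (the declarations are placeholders):
      
        /* ERROR_DEPRECATED is always defined as 0 or 1 via add_define,
         * so its value is what matters: */
        #if !ERROR_DEPRECATED
        extern void register_deprecated_interfaces(void);
        #endif
        
        /* IMAGE_BL31 is defined only while compiling BL31 sources, so its
         * mere presence is what matters: */
        #ifdef IMAGE_BL31
        extern void bl31_only_setup(void);
        #endif
      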
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
  21. 05 Dec, 2016 1 commit
    • Define and use no_ret macro where no return is expected · a806dad5
      Jeenu Viswambharan authored
      
      
      There are many instances in ARM Trusted Firmware where control is
      transferred to functions from which return isn't expected. Such jumps
      are made using the 'bl' instruction to provide the callee with the
      location from which the jump was made. Additionally, debuggers infer
      the caller by examining where the 'lr' register points. If a 'bl' of the nature
      described above falls at the end of an assembly function, 'lr' will be
      left pointing to a location outside of the function range. This misleads
      the debugger back trace.
      
      This patch defines a 'no_ret' macro to be used when jumping to functions
      from which return isn't expected. The macro ensures that the 'bl'
      instruction is used for the jump and, for debug builds, places a 'nop'
      instruction immediately thereafter (unless instructed otherwise) so as
      to leave 'lr' pointing within the function range.
      
      Change-Id: Ib34c69fc09197cfd57bc06e147cc8252910e01b0
      Co-authored-by: Douglas Raillard <douglas.raillard@arm.com>
      Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
  22. 09 Nov, 2016 1 commit
    • Reset debug registers MDCR-EL3/SDCR and MDCR_EL2/HDCR · 495f3d3c
      David Cunado authored
      
      
      In order to avoid unexpected traps into EL3/MON mode, this patch
      resets the debug registers, MDCR_EL3 and MDCR_EL2 for AArch64,
      and SDCR and HDCR for AArch32.
      
      MDCR_EL3/SDCR is zeroed when EL3/MON mode is entered, at the
      start of BL1 and BL31/SP_MIN.
      
      For MDCR_EL2/HDCR, this patch zeroes the bits whose values are
      architecturally UNKNOWN on reset. This is done when exiting from
      EL3/MON mode, but only on platforms that support EL2/HYP mode yet
      choose to exit to EL1/SVC mode.
      
      Fixes ARM-software/tf-issues#430
      
      Change-Id: Idb992232163c072faa08892251b5626ae4c3a5b6
      Signed-off-by: David Cunado <david.cunado@arm.com>
  23. 19 Jul, 2016 1 commit
    • Rearrange assembly helper macros · 738b1fd7
      Soby Mathew authored
      This patch moves assembler macros which are not architecture specific
      into a new file, `asm_macros_common.S`, and moves `el3_common_macros.S`
      into the `aarch64`-specific folder.
      
      Change-Id: I444a1ee3346597bf26a8b827480cd9640b38c826
  24. 18 Jul, 2016 1 commit
    • Introduce `el3_runtime` and `PSCI` libraries · 532ed618
      Soby Mathew authored
      This patch moves the PSCI services and BL31 frameworks like context
      management and per-cpu data into new library components `PSCI` and
      `el3_runtime` respectively. This enables PSCI to be built independently from
      BL31. A new `psci_lib.mk` makefile is introduced which adds the relevant
      PSCI library sources and gets included by `bl31.mk`. Other changes which
      are done as part of this patch are:
      
      * The runtime services framework is now moved to the `common/` folder to
        enable reuse.
      * The `asm_macros.S` and `assert_macros.S` helpers are moved to the
        architecture-specific folder.
      * `plat_psci_common.c` is moved from the `plat/common/aarch64/` folder
        to the `plat/common` folder. The original file location now has a stub
        which just includes the file from the new location to maintain platform
        compatibility.
      
      Most of the changes wouldn't affect platform builds as they just involve
      changes to the generic bl1.mk and bl31.mk makefiles.
      
      NOTE: THE `plat_psci_common.c` FILE HAS MOVED LOCATION AND THE STUB FILE AT
      THE ORIGINAL LOCATION IS NOW DEPRECATED. PLATFORMS SHOULD MODIFY THEIR
      MAKEFILES TO INCLUDE THE FILE FROM THE NEW LOCATION.
      
      Change-Id: I6bd87d5b59424995c6a65ef8076d4fda91ad5e86