1. 11 Jan, 2018 2 commits
    • Workaround for CVE-2017-5715 on Cortex A73 and A75 · a1781a21
      Dimitris Papastamos authored
      
      
      Invalidate the Branch Target Buffer (BTB) on entry to EL3 by
      temporarily dropping into AArch32 Secure-EL1 and executing the
      `BPIALL` instruction.
      
      This is achieved by using 3 vector tables.  There is the runtime
      vector table which is used to handle exceptions and 2 additional
      tables which are required to implement this workaround.  The
      additional tables are `vbar0` and `vbar1`.
      
      The sequence of events for handling a single exception is
      as follows:
      
      1) Install vector table `vbar0` which saves the CPU context on entry
         to EL3 and sets up the Secure-EL1 context to execute in AArch32 mode
         with the MMU disabled and I$ enabled.  This is the default vector table.
      
      2) Before doing an ERET into Secure-EL1, switch vbar to point to
         another vector table `vbar1`.  This is required to restore EL3 state
         when returning from the workaround, before proceeding with normal EL3
         exception handling.
      
      3) While in Secure-EL1, the `BPIALL` instruction is executed and an
         SMC call back to EL3 is performed.
      
      4) On entry to EL3 from Secure-EL1, the saved context from step 1) is
         restored.  The vbar is switched to point to `vbar0` in preparation to
         handle further exceptions.  Finally a branch to the runtime vector
         table entry is taken to complete the handling of the original
         exception.
      
      This workaround is enabled by default on the affected CPUs.
      
      NOTE
      ====
      
      There are 4 different stubs in Secure-EL1.  Each stub corresponds to
      an exception type such as Sync/IRQ/FIQ/SError.  Each stub moves a
      different value into `R0` before doing an SMC call back into EL3.
      Without this piece of information it would not be possible to know
      what the original exception type was as we cannot use `ESR_EL3` to
      distinguish between IRQs and FIQs.
      
      Change-Id: I90b32d14a3735290b48685d43c70c99daaa4b434
      Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
      a1781a21
    • Workaround for CVE-2017-5715 on Cortex A57 and A72 · f62ad322
      Dimitris Papastamos authored
      
      
      Invalidate the Branch Target Buffer (BTB) on entry to EL3 by disabling
      and enabling the MMU.  To achieve this without performing any branch
      instruction, a per-cpu vbar is installed which executes the workaround
      and then branches off to the corresponding vector entry in the main
      vector table.  A side effect of this change is that the main vbar is
      configured before any reset handling.  This is to allow the per-cpu
      reset function to override the vbar setting.
      
      This workaround is enabled by default on the affected CPUs.
      
      Change-Id: I97788d38463a5840a410e3cea85ed297a1678265
      Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
      f62ad322
  2. 30 Nov, 2017 1 commit
    • Enable SVE for Non-secure world · 1a853370
      David Cunado authored
      
      
      This patch adds a new build option, ENABLE_SVE_FOR_NS. When it is set
      to 1, EL3 will check whether the Scalable Vector Extension (SVE) is
      implemented when entering and exiting the Non-secure world.
      
      If SVE is implemented, EL3 will do the following:
      
      - Entry to Non-secure world: SIMD, FP and SVE functionality is enabled.
      
      - Exit from Non-secure world: SIMD, FP and SVE functionality is
        disabled. As the SIMD and FP registers are part of the SVE
        Z-registers, any use of SIMD / FP functionality would corrupt the
        SVE registers.
      
      The build option default is 1. The SVE functionality is only supported
      on AArch64 and so the build option is set to zero when the target
      architecture is AArch32.
      
      This build option is not compatible with CTX_INCLUDE_FPREGS - an
      assert will be raised on platforms where SVE is implemented and both
      ENABLE_SVE_FOR_NS and CTX_INCLUDE_FPREGS are set to 1.
      
      Also note this change prevents secure world use of FP&SIMD registers on
      SVE-enabled platforms. Existing Secure-EL1 Payloads will not work on
      such platforms unless ENABLE_SVE_FOR_NS is set to 0.
      
      Additionally, on the first entry into the Non-secure world the SVE
      functionality is enabled and the SVE Z-register length is set to the
      maximum size allowed by the architecture. This includes the use case
      where EL2 is implemented but not used.
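
      As a rough illustration of the entry-time check, the C sketch below
      assumes TF-A-style system register accessors (read_id_aa64pfr0_el1(),
      read_cptr_el3(), write_cptr_el3(), write_zcr_el3()); the bit positions
      are architectural, but the helper names and the exact integration with
      the context management code are assumptions, not the patch itself.

        /* Sketch only: enable SVE for the Non-secure world at EL3. */
        #include <stdint.h>

        extern uint64_t read_id_aa64pfr0_el1(void);
        extern uint64_t read_cptr_el3(void);
        extern void write_cptr_el3(uint64_t v);
        extern void write_zcr_el3(uint64_t v);   /* assumed accessor */

        #define ID_AA64PFR0_SVE_SHIFT  32
        #define ID_AA64PFR0_SVE_MASK   0xfULL
        #define CPTR_EL3_EZ_BIT        (1ULL << 8)  /* 1: SVE does not trap to EL3 */
        #define ZCR_EL3_LEN_MAX        0xfULL       /* request maximum vector length */

        static void sve_enable_for_ns(void)
        {
            /* ID_AA64PFR0_EL1.SVE != 0 means SVE is implemented. */
            if (((read_id_aa64pfr0_el1() >> ID_AA64PFR0_SVE_SHIFT) &
                 ID_AA64PFR0_SVE_MASK) == 0ULL)
                return;

            /* Stop SVE and ZCR accesses trapping to EL3, then synchronise. */
            write_cptr_el3(read_cptr_el3() | CPTR_EL3_EZ_BIT);
            __asm__ volatile("isb");

            /* Allow the largest vector length the implementation supports. */
            write_zcr_el3(ZCR_EL3_LEN_MAX);
        }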
      
      Change-Id: Ie2d733ddaba0b9bef1d7c9765503155188fe7dae
      Signed-off-by: David Cunado <david.cunado@arm.com>
      1a853370
  3. 29 Nov, 2017 2 commits
  4. 20 Nov, 2017 1 commit
    • Refactor Statistical Profiling Extensions implementation · 281a08cc
      Dimitris Papastamos authored
      
      
      Factor out SPE operations in a separate file.  Use the publish
      subscribe framework to drain the SPE buffers before entering secure
      world.  Additionally, enable SPE before entering normal world.
      
      A side effect of this change is that the profiling buffers are now
      only drained when a transition from normal world to secure world
      happens.  Previously they were drained also on return from secure
      world, which is unnecessary as SPE is not supported in S-EL1.
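
      The drain hook might look roughly like the sketch below. The
      cm_exited_normal_world event name is borrowed from the pubsub events
      mentioned later in this log; the SUBSCRIBE_TO_EVENT macro, the handler
      shape and the spe_supported() helper are assumptions rather than a
      copy of the patch.

        #include <stddef.h>

        extern int spe_supported(void);   /* hypothetical: checks ID_AA64DFR0_EL1.PMSVer */

        static void *spe_drain_buffers_hook(const void *arg)
        {
            if (spe_supported() == 0)
                return NULL;

            /*
             * psb csync (HINT #17): flush any profiling data to the buffer.
             * dsb nsh: make the buffer writes visible before the world switch.
             */
            __asm__ volatile("hint #17; dsb nsh" ::: "memory");
            return NULL;
        }

        /* Drain on every Normal-to-Secure world transition. */
        SUBSCRIBE_TO_EVENT(cm_exited_normal_world, spe_drain_buffers_hook);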
      
      Change-Id: I17582c689b4b525770dbb6db098b3a0b5777b70a
      Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
      281a08cc
  5. 15 Nov, 2017 1 commit
    • Move FPEXC32_EL2 to FP Context · 91089f36
      David Cunado authored
      
      
      The FPEXC32_EL2 register controls SIMD and FP functionality when the
      lower ELs are executing in AArch32 mode. It is architecturally mapped
      to AArch32 system register FPEXC.
      
      This patch removes FPEXC32_EL2 register from the System Register context
      and adds it to the floating-point context. EL3 only saves / restores the
      floating-point context if the build option CTX_INCLUDE_FPREGS is set to 1.
      
      The rationale for this change is that if the Secure world is using FP
      functionality and EL3 is not managing the FP context, then the Secure
      world will save / restore the appropriate FP registers.
      
      NOTE - this is a break in behaviour in the unlikely case that
      CTX_INCLUDE_FPREGS is set to 0 and the platform contains an AArch32
      Secure Payload that modifies FPEXC, but does not save and restore
      this register.
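
      Conceptually, the grouping after this change can be pictured as in the
      sketch below; the structure and field names are purely illustrative and
      are not the actual TF-A context definitions.

        #include <stdint.h>

        /* Saved/restored by EL3 only when CTX_INCLUDE_FPREGS is set to 1. */
        typedef struct fp_context {
            uint64_t q_regs[32][2];   /* Q0-Q31, 128 bits each */
            uint64_t fpsr;
            uint64_t fpcr;
            uint64_t fpexc32_el2;     /* moved here from the system register context */
        } fp_context_t;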
      
      Change-Id: Iab80abcbfe302752d52b323b4abcc334b585c184
      Signed-off-by: David Cunado <david.cunado@arm.com>
      91089f36
  6. 13 Nov, 2017 3 commits
    • BL31: Add SDEI dispatcher · b7cb133e
      Jeenu Viswambharan authored
      The implementation currently supports only interrupt-based SDEI events,
      and supports all interfaces as defined by SDEI specification version
      1.0 [1].
      
      Introduce the build option SDEI_SUPPORT to include SDEI dispatcher in
      BL31.
      
      Update user guide and porting guide. SDEI documentation to follow.
      
      [1] http://infocenter.arm.com/help/topic/com.arm.doc.den0054a/ARM_DEN0054A_Software_Delegated_Exception_Interface.pdf
      
      
      
      Change-Id: I758b733084e4ea3b27ac77d0259705565842241a
      Co-authored-by: Yousuf A <yousuf.sait@arm.com>
      Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
      b7cb133e
    • BL31: Program Priority Mask for SMC handling · 3d732e23
      Jeenu Viswambharan authored
      
      
      On GICv3 systems, as a side effect of adding provision to handle EL3
      interrupts (unconditionally routing FIQs to EL3), pending Non-secure
      interrupts (signalled as FIQs) may preempt execution in lower Secure ELs
      [1]. This will inadvertently disrupt the semantics of Fast SMC
      (previously called Atomic SMC) calls.
      
      To retain semantics of Fast SMCs, the GIC PMR must be programmed to
      prevent Non-secure interrupts from preempting Secure execution. To that
      effect, two new functions in the Exception Handling Framework subscribe
      to events introduced in an earlier commit:
      
        - Upon 'cm_exited_normal_world', the Non-secure PMR is stashed, and
          the PMR is programmed to the highest Non-secure interrupt priority.
      
        - Upon 'cm_entering_normal_world', the previously stashed Non-secure
          PMR is restored.
      
      The above sequence however prevents Yielding SMCs from being preempted
      by Non-secure interrupts as intended. To facilitate this, a public API,
      exc_allow_ns_preemption(), is introduced; it programs the PMR to the
      original Non-secure PMR value. Another API,
      exc_is_ns_preemption_allowed(), is introduced to check whether
      exc_allow_ns_preemption() has been called previously.
      
      API documentation to follow.
      
      [1] On GICv2 systems, this isn't a problem as, unlike GICv3, pending NS
          IRQs during Secure execution are signalled as IRQs, which aren't
          routed to EL3.
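
      A rough shape of the two subscribers is sketched below. The event names
      are those quoted above; plat_ic_set_priority_mask(), its return of the
      previous mask value, and GIC_HIGHEST_NS_PRIORITY are assumptions made
      for illustration, and per-CPU storage of the stashed value is omitted.

        #include <stddef.h>

        static unsigned int stashed_ns_pmr;   /* per-CPU in a real implementation */

        static void *stash_ns_pmr(const void *arg)
        {
            /* Mask NS interrupt priorities so they cannot preempt Secure EL1. */
            stashed_ns_pmr = plat_ic_set_priority_mask(GIC_HIGHEST_NS_PRIORITY);
            return NULL;
        }

        static void *restore_ns_pmr(const void *arg)
        {
            /* Put back whatever the Normal world had programmed. */
            (void)plat_ic_set_priority_mask(stashed_ns_pmr);
            return NULL;
        }

        SUBSCRIBE_TO_EVENT(cm_exited_normal_world, stash_ns_pmr);
        SUBSCRIBE_TO_EVENT(cm_entering_normal_world, restore_ns_pmr);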
      
      Change-Id: Ief96b162b0067179b1012332cd991ee1b3051dd0
      Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
      3d732e23
    • BL31: Introduce Exception Handling Framework · 21b818c0
      Jeenu Viswambharan authored
      
      
      EHF is a framework that allows dispatching of EL3 interrupts to their
      respective handlers in EL3.
      
      This framework facilitates the firmware-first error handling policy in
      which asynchronous exceptions may be routed to EL3. Such exceptions may
      be handed over to respective exception handlers. Individual handlers
      might further delegate exception handling to lower ELs.
      
      The framework associates the delegated execution to lower ELs with a
      priority value. For interrupts, this corresponds to the priorities
      programmed in GIC; for other types of exceptions, viz. SErrors or
      Synchronous External Aborts, individual dispatchers shall explicitly
      associate delegation to a secure priority. In order to prevent lower
      priority interrupts from preempting higher priority execution, the
      framework provides helpers to control preemption by virtue of
      programming Priority Mask register in the interrupt controller.
      
      This commit allows for handling interrupts targeted at EL3. Exception
      handlers own interrupts by assigning them a range of secure priorities,
      and registering handlers for each priority range they own.
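
      A dispatcher might claim a priority level along the lines of the sketch
      below; the macro and function names mirror the framework described
      here, but their exact signatures are assumptions of this sketch.

        #include <stdint.h>

        /* Types and macros assumed to come from the framework's header. */
        #define ARRAY_SIZE(a)       (sizeof(a) / sizeof((a)[0]))
        #define PLAT_PRI_BITS       3      /* priority bits implemented by the GIC */
        #define MY_DISPATCHER_PRI   0x10   /* secure priority owned by this dispatcher */

        static ehf_pri_desc_t plat_exceptions[] = {
            EHF_PRI_DESC(PLAT_PRI_BITS, MY_DISPATCHER_PRI),
        };

        /* Hand the priority descriptors to the framework. */
        EHF_REGISTER_PRIORITIES(plat_exceptions, ARRAY_SIZE(plat_exceptions),
                                PLAT_PRI_BITS);

        /* Invoked when an interrupt of this priority is taken at EL3. */
        static int my_el3_interrupt_handler(uint32_t intr_raw, uint32_t flags,
                                            void *handle, void *cookie)
        {
            /* acknowledge, handle in place, or delegate to a lower EL */
            return 0;
        }

        void my_dispatcher_init(void)
        {
            ehf_register_priority_handler(MY_DISPATCHER_PRI,
                                          my_el3_interrupt_handler);
        }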
      
      Support for exception handling in BL31 image is enabled by setting the
      build option EL3_EXCEPTION_HANDLING=1.
      
      Documentation to follow.
      
      NOTE: The framework assumes the priority scheme supported by platform
      interrupt controller is compliant with that of ARM GIC architecture (v2
      or later).
      
      Change-Id: I7224337e4cea47c6ca7d7a4ca22a3716939f7e42
      Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
      21b818c0
  7. 08 Nov, 2017 1 commit
    • SPM: Introduce Secure Partition Manager · 2fccb228
      Antonio Nino Diaz authored
      
      
      A Secure Partition is a software execution environment instantiated in
      S-EL0 that can be used to implement simple management and security
      services. Since S-EL0 is an unprivileged exception level, a Secure
      Partition relies on privileged firmware e.g. ARM Trusted Firmware to be
      granted access to system and processor resources. Essentially, it is a
      software sandbox that runs under the control of privileged software in
      the Secure World and accesses the following system resources:
      
      - Memory and device regions in the system address map.
      - PE system registers.
      - A range of asynchronous exceptions e.g. interrupts.
      - A range of synchronous exceptions e.g. SMC function identifiers.
      
      A Secure Partition enables privileged firmware to implement only the
      absolutely essential secure services in EL3 and instantiate the rest in
      a partition. Since the partition executes in S-EL0, its implementation
      cannot be overly complex.
      
      The component in ARM Trusted Firmware responsible for managing a Secure
      Partition is called the Secure Partition Manager (SPM). The SPM is
      responsible for the following:
      
      - Validating and allocating resources requested by a Secure Partition.
      - Implementing a well defined interface that is used for initialising a
        Secure Partition.
      - Implementing a well defined interface that is used by the normal world
        and other secure services for accessing the services exported by a
        Secure Partition.
      - Implementing a well defined interface that is used by a Secure
        Partition to fulfil service requests.
      - Instantiating the software execution environment required by a Secure
        Partition to fulfil a service request.
      
      Change-Id: I6f7862d6bba8732db5b73f54e789d717a35e802f
      Co-authored-by: Douglas Raillard <douglas.raillard@arm.com>
      Co-authored-by: Sandrine Bailleux <sandrine.bailleux@arm.com>
      Co-authored-by: Achin Gupta <achin.gupta@arm.com>
      Co-authored-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
      2fccb228
  8. 23 Oct, 2017 1 commit
  9. 21 Aug, 2017 1 commit
    • Fix x30 reporting for unhandled exceptions · 4d91838b
      Julius Werner authored
      
      
      Some error paths that lead to a crash dump will overwrite the value in
      the x30 register by calling functions with the no_ret macro, which
      resolves to a BL instruction. This is not very useful and not what the
      reader would expect, since a crash dump should usually show all
      registers in the state they were in when the exception happened. This
      patch replaces the offending function calls with a B instruction to
      preserve the value in x30.
      
      Change-Id: I2a3636f2943f79bab0cd911f89d070012e697c2a
      Signed-off-by: Julius Werner <jwerner@chromium.org>
      4d91838b
  10. 12 Jul, 2017 1 commit
    • Fix order of #includes · 2a4b4b71
      Isla Mitchell authored
      
      
      This fix modifies the order of system includes to meet the ARM TF coding
      standard. There are some exceptions in order to retain header groupings,
      minimise changes to imported headers, and to accommodate headers within
      #if and #ifndef statements.
      
      Change-Id: I65085a142ba6a83792b26efb47df1329153f1624
      Signed-off-by: Isla Mitchell <isla.mitchell@arm.com>
      2a4b4b71
  11. 21 Jun, 2017 1 commit
    • Fully initialise essential control registers · 18f2efd6
      David Cunado authored
      
      
      This patch updates the el3_arch_init_common macro so that it fully
      initialises essential control registers rather than relying on hardware
      to set the reset values.
      
      The context management functions are also updated to fully initialise
      the appropriate control registers when initialising the non-secure and
      secure context structures and when preparing to leave EL3 for a lower
      EL.
      
      This gives better alignment with the ARM ARM which states that software
      must initialise RES0 and RES1 fields with 0 / 1.
      
      This patch also corrects the following typos:
      
      "NASCR definitions" -> "NSACR definitions"
      
      Change-Id: Ia8940b8351dc27bc09e2138b011e249655041cfc
      Signed-off-by: David Cunado <david.cunado@arm.com>
      18f2efd6
  12. 03 May, 2017 1 commit
  13. 02 May, 2017 1 commit
  14. 19 Apr, 2017 1 commit
    • PSCI: Build option to enable D-Caches early in warmboot · bcc3c49c
      Soby Mathew authored
      
      
      This patch introduces a build option to enable the D-cache early on the
      CPU after warm boot. This is applicable for platforms which do not
      require interconnect programming to enable cache coherency (e.g. single
      cluster platforms). If this option is enabled, then the warm boot path
      enables D-caches immediately after enabling the MMU.
      
      Fixes ARM-Software/tf-issues#456
      
      Change-Id: I44c8787d116d7217837ced3bcf0b1d3441c8d80e
      Signed-off-by: Soby Mathew <soby.mathew@arm.com>
      bcc3c49c
  15. 31 Mar, 2017 3 commits
    • Add support for GCC stack protection · 51faada7
      Douglas Raillard authored
      
      
      Introduce new build option ENABLE_STACK_PROTECTOR. It enables
      compilation of all BL images with one of the GCC -fstack-protector-*
      options.
      
      A new platform function plat_get_stack_protector_canary() is introduced.
      It returns a value that is used to initialize the canary for stack
      corruption detection. Returning a random value will prevent an attacker
      from predicting the value and greatly increase the effectiveness of the
      protection.
      
      A message is printed at the ERROR level when a stack corruption is
      detected.
      
      To be effective, the global data must be stored at an address
      lower than the base of the stacks. Failure to do so would allow an
      attacker to overwrite the canary as part of an attack which would void
      the protection.
      
      The FVP implementation of plat_get_stack_protector_canary() is weak, as
      there is no real source of entropy on the FVP. It therefore relies on a
      timer's value, which could be predictable.
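
      A platform port could implement the new hook roughly as below; the
      u_register_t return type and the generic counter accessor are
      assumptions modelled on the FVP behaviour described above, and a port
      with a hardware random number generator should use that instead.

        #include <stdint.h>

        /* u_register_t is TF-A's native register-width integer type. */
        extern uint64_t read_cntpct_el0(void);   /* generic counter (assumed accessor) */

        u_register_t plat_get_stack_protector_canary(void)
        {
            /*
             * WARNING: a timer value is predictable. This mirrors the weak
             * FVP fallback only; prefer true random bits where available.
             */
            return (u_register_t)(read_cntpct_el0() ^ 0x55aa55aaULL);
        }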
      
      Change-Id: Icaaee96392733b721fa7c86a81d03660d3c1bc06
      Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
      51faada7
    • Flush console where necessary · 0b32628e
      Antonio Nino Diaz authored
      
      
      Call console_flush() before execution either terminates or leaves an
      exception level.
      
      Fixes: ARM-software/tf-issues#123
      
      Change-Id: I64eeb92effb039f76937ce89f877b68e355588e3
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
      0b32628e
    • Add and use plat_crash_console_flush() API · 801cf93c
      Antonio Nino Diaz authored
      
      
      This API makes sure that all the characters sent to the crash console
      are output before returning from it.
      
      Porting guide updated.
      
      Change-Id: I1785f970a40f6aacfbe592b6a911b1f249bb2735
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
      801cf93c
  16. 20 Mar, 2017 1 commit
  17. 08 Mar, 2017 1 commit
  18. 02 Mar, 2017 1 commit
  19. 06 Feb, 2017 1 commit
    • Introduce unified API to zero memory · 308d359b
      Douglas Raillard authored
      
      
      Introduce zeromem_dczva function on AArch64 that can handle unaligned
      addresses and make use of DC ZVA instruction to zero a whole block at a
      time. This zeroing takes place directly in the cache to speed it up
      without doing external memory access.
      
      Remove the zeromem16 function on AArch64 and replace it with an alias to
      zeromem. This zeromem16 function is now deprecated.
      
      Remove the 16-byte alignment constraint on __BSS_START__ in
      firmware-design.md as it is no longer mandatory (it used to comply
      with zeromem16 requirements).
      
      Change the 16-byte alignment constraints in SP min's linker script to
      an 8-byte alignment constraint as the AArch32 zeromem implementation is
      now more efficient on 8-byte aligned addresses.
      
      Introduce zero_normalmem and zeromem helpers in a platform-agnostic
      header that are implemented this way:
      * AArch32:
      	* zero_normalmem: zero using usual data access
      	* zeromem: alias for zero_normalmem
      * AArch64:
      	* zero_normalmem: zero normal memory  using DC ZVA instruction
      	                  (needs MMU enabled)
      	* zeromem: zero using usual data access
      
      Usage guidelines: in most cases, zero_normalmem should be preferred.
      
      There are 2 scenarios where zeromem (or memset) must be used instead:
      * Code that must run with MMU disabled (which means all memory is
        considered device memory for data accesses).
      * Code that fills device memory with null bytes.
      
      Optionally, the following rule can be applied if performance is
      important:
      * Code zeroing small areas (few bytes) that are not secrets should use
        memset to take advantage of compiler optimizations.
      
        Note: Code zeroing security-related critical information should use
        zero_normalmem/zeromem instead of memset to avoid removal by
        compilers' optimizations in some cases or misbehaving versions of GCC.
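
      To make the guidelines concrete, a short usage sketch follows; the
      prototypes shown here (length as size_t) are an assumption of this
      sketch rather than the exact declarations.

        #include <stddef.h>
        #include <stdint.h>

        void zero_normalmem(void *mem, size_t length);
        void zeromem(void *mem, size_t length);

        void scrub_key(uint8_t *key, size_t len)
        {
            /*
             * Security-critical wipe of cached normal memory: zero_normalmem
             * is fast (DC ZVA on AArch64) and will not be removed by compiler
             * optimisations, unlike a memset the compiler deems dead.
             */
            zero_normalmem(key, len);
        }

        void clear_buffer_with_mmu_off(uint8_t *buf, size_t len)
        {
            /*
             * With the MMU disabled, data accesses are treated as device
             * memory, so DC ZVA cannot be used; fall back to zeromem.
             */
            zeromem(buf, len);
        }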
      
      Fixes ARM-software/tf-issues#408
      
      Change-Id: Iafd9663fc1070413c3e1904e54091cf60effaa82
      Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
      308d359b
  20. 05 Dec, 2016 1 commit
    • Define and use no_ret macro where no return is expected · a806dad5
      Jeenu Viswambharan authored
      
      
      There are many instances in ARM Trusted Firmware where control is
      transferred to functions from which return isn't expected. Such jumps
      are made using 'bl' instruction to provide the callee with the location
      from which it was jumped to. Additionally, debuggers infer the caller by
      examining where 'lr' register points to. If a 'bl' of the nature
      described above falls at the end of an assembly function, 'lr' will be
      left pointing to a location outside of the function range. This misleads
      the debugger back trace.
      
      This patch defines a 'no_ret' macro to be used when jumping to functions
      from which return isn't expected. The macro ensures that a 'bl'
      instruction is used for the jump and, for debug builds, places a 'nop'
      instruction immediately thereafter (unless instructed otherwise) so as
      to leave 'lr' pointing within the function range.
      
      Change-Id: Ib34c69fc09197cfd57bc06e147cc8252910e01b0
      Co-authored-by: Douglas Raillard <douglas.raillard@arm.com>
      Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
      a806dad5
  21. 14 Nov, 2016 1 commit
    • Cosmetic change to exception table · a6ef4393
      Douglas Raillard authored
      
      
      * Move comments on unhandled exceptions to the right place.
      * Reformat the existing comments to highlight the start of
        each block of 4 entries in the exception table to ease
        navigation (lines of dash reserved for head comments).
      * Reflow comments to 80 columns.
      
      Change-Id: I5ab88a93d0628af8e151852cb5b597eb34437677
      Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
      a6ef4393
  22. 24 Oct, 2016 1 commit
    • rockchip: optimize the link mechanism for SRAM code · ec693569
      Caesar Wang authored
      
      
      Add the common extra.ld.S and a customized rk3399.ld.S to extend
      different platforms with more features.
      For example, we can add an SRAM section and a specific address to load
      code there if we need it, without needing to modify the common
      bl31.ld.S.
      
      Therefore, we can remove the unused copying code from the function
      pmusram_prepare(). This is clearer.
      
      Change-Id: Ibffa2da5e8e3d1d2fca80085ebb296ceb967fce8
      Signed-off-by: Xing Zheng <zhengxing@rock-chips.com>
      Signed-off-by: Caesar Wang <wxt@rock-chips.com>
      ec693569
  23. 12 Oct, 2016 1 commit
    • Add PMF instrumentation points in TF · 872be88a
      dp-arm authored
      
      
      In order to quantify the overall time spent in the PSCI software
      implementation, an initial collection of PMF instrumentation points
      has been added.
      
      Instrumentation has been added to the following code paths:
      
      - Entry to PSCI SMC handler.  The timestamp is captured as early
        as possible during the runtime exception and stored in memory
        before entering the PSCI SMC handler.
      
      - Exit from PSCI SMC handler.  The timestamp is captured after
        normal return from the PSCI SMC handler or if a low power state
        was requested it is captured in the bl31 warm boot path before
        return to normal world.
      
      - Entry to low power state.  The timestamp is captured before entry
        to a low power state which implies either standby or power down.
        As these power states are mutually exclusive, only one timestamp
        is defined to describe both.  It is possible to differentiate between
        the two power states using the PSCI STAT interface.
      
      - Exit from low power state.  The timestamp is captured after a standby
        or power up operation has completed.
      
      To calculate the number of cycles spent running code in Trusted Firmware
      one can perform the following calculation:
      
      (exit_psci - enter_psci) - (exit_low_pwr - enter_low_pwr).
      
      The resulting number of cycles can be converted to time given the
      frequency of the counter.
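
      Expressed in code, and with illustrative variable names, the
      computation is simply:

        #include <stdint.h>

        /* Counter values captured at the four instrumentation points. */
        static uint64_t tf_cycles(uint64_t enter_psci, uint64_t exit_psci,
                                  uint64_t enter_low_pwr, uint64_t exit_low_pwr)
        {
            /* Time inside Trusted Firmware, excluding low power residency. */
            return (exit_psci - enter_psci) - (exit_low_pwr - enter_low_pwr);
        }

        static double tf_seconds(uint64_t cycles, uint64_t counter_freq_hz)
        {
            /* Convert cycles to seconds using the counter frequency (CNTFRQ). */
            return (double)cycles / (double)counter_freq_hz;
        }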
      
      Change-Id: Ie3b8f3d16409b6703747093b3a2d5c7429ad0166
      Signed-off-by: dp-arm <dimitris.papastamos@arm.com>
      872be88a
  24. 22 Sep, 2016 2 commits
    • PSCI: Do psci_setup() as part of std_svc_setup() · 58e946ae
      Soby Mathew authored
      This patch moves the invocation of `psci_setup()` from BL31 and SP_MIN
      into `std_svc_setup()` as part of ARM Standard Service initialization.
      This allows us to consolidate ARM Standard Service initializations which
      will be added to in the future. A new function `get_arm_std_svc_args()`
      is introduced to get arguments corresponding to each standard service.
      This function must be implemented by the EL3 Runtime Firmware and both
      SP_MIN and BL31 implement it.
      
      Change-Id: I38e1b644f797fa4089b20574bd4a10f0419de184
      58e946ae
    • PSCI: Introduce PSCI Library argument structure · f426fc05
      Soby Mathew authored
      This patch introduces a `psci_lib_args_t` structure which must be
      passed into `psci_setup()` which is then used to initialize the PSCI
      library. The `psci_lib_args_t` is a versioned structure so as to enable
      compatibility checks during library initialization. Both BL31 and SP_MIN
      are modified to use the new structure.
      
      SP_MIN is also modified to add version string and build message as part
      of its cold boot log just like the other BLs in Trusted Firmware.
      
      NOTE: Please be aware that this patch modifies the prototype of
      `psci_setup()`, which breaks compatibility with EL3 Runtime Firmware
      (excluding BL31 and SP_MIN) integrated with the PSCI Library.
      
      Change-Id: Ic3761db0b790760a7ad664d8a437c72ea5edbcd6
      f426fc05
  25. 19 Jul, 2016 1 commit
    • Introduce PSCI Library Interface · cf0b1492
      Soby Mathew authored
      This patch introduces the PSCI Library interface. The major changes
      introduced are as follows:
      
      * Earlier BL31 was responsible for Architectural initialization during cold
      boot via bl31_arch_setup() whereas PSCI was responsible for the same during
      warm boot. This functionality is now consolidated by the PSCI library
      and it does Architectural initialization via psci_arch_setup() during both
      cold and warm boots.
      
      * Earlier the warm boot entry point was always `psci_entrypoint()`. This was
      not flexible enough as a library interface. Now PSCI expects the runtime
      firmware to provide the entry point via `psci_setup()`. A new function
      `bl31_warm_entrypoint` is introduced in BL31 and the previous
      `psci_entrypoint()` is deprecated.
      
      * The `smc_helpers.h` is reorganized to separate the SMC Calling Convention
      defines from the Trusted Firmware SMC helpers. The former is now in a new
      header file `smcc.h` and the SMC helpers are moved to Architecture specific
      header.
      
      * The CPU context is used by PSCI for context initialization and
      restoration after power down (PSCI Context). It is also used by BL31 for SMC
      handling and context management during Normal-Secure world switch (SMC
      Context). The `psci_smc_handler()` interface is redefined to not use SMC
      helper macros, thus enabling the PSCI context to be decoupled from the EL3 runtime
      firmware SMC context. This enables PSCI to be integrated with other runtime
      firmware using a different SMC context.
      
      NOTE: With this patch the architectural setup done in `bl31_arch_setup()`
      is done as part of `psci_setup()` and hence `bl31_platform_setup()` will be
      invoked prior to architectural setup. It is highly unlikely that the platform
      setup will depend on architectural setup and cause any failure. Please
      be aware of this change in sequence.
      
      Change-Id: I7f497a08d33be234bbb822c28146250cb20dab73
      cf0b1492
  26. 18 Jul, 2016 2 commits
    • Introduce `el3_runtime` and `PSCI` libraries · 532ed618
      Soby Mathew authored
      This patch moves the PSCI services and BL31 frameworks like context
      management and per-cpu data into new library components `PSCI` and
      `el3_runtime` respectively. This enables PSCI to be built independently from
      BL31. A new `psci_lib.mk` makefile is introduced which adds the relevant
      PSCI library sources and gets included by `bl31.mk`. Other changes which
      are done as part of this patch are:
      
      * The runtime services framework is now moved to the `common/` folder to
        enable reuse.
      * The `asm_macros.S` and `assert_macros.S` helpers are moved to architecture
        specific folder.
      * The `plat_psci_common.c` is moved from the `plat/common/aarch64/` folder
        to `plat/common` folder. The original file location now has a stub which
        just includes the file from new location to maintain platform compatibility.
      
      Most of the changes wouldn't affect platform builds as they just involve
      changes to the generic bl1.mk and bl31.mk makefiles.
      
      NOTE: THE `plat_psci_common.c` FILE HAS MOVED LOCATION AND THE STUB FILE AT
      THE ORIGINAL LOCATION IS NOW DEPRECATED. PLATFORMS SHOULD MODIFY THEIR
      MAKEFILES TO INCLUDE THE FILE FROM THE NEW LOCATION.
      
      Change-Id: I6bd87d5b59424995c6a65ef8076d4fda91ad5e86
      532ed618
    • Rework type usage in Trusted Firmware · 4c0d0390
      Soby Mathew authored
      This patch reworks type usage in generic code, drivers and ARM platform files
      to make it more portable. The major changes done with respect to
      type usage are as listed below:
      
      * Use uintptr_t for storing address instead of uint64_t or unsigned long.
      * Review usage of unsigned long as it can no longer be assumed to be 64 bit.
      * Use u_register_t for register values whose width varies depending on
        whether the build is AArch64 or AArch32.
      * Use generic C types wherever possible.
      
      In addition to the above changes, this patch also modifies format specifiers
      in print invocations so that they are AArch64/AArch32 agnostic. Only files
      related to upcoming feature development have been reworked.
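
      An illustrative (not taken from the patch) example of the kind of
      change this implies:

        #include <stddef.h>
        #include <stdint.h>
        #include <stdio.h>

        /* Before: uint64_t addresses and "%llx" formats assume a 64-bit target. */

        /* After: width-agnostic types and format handling. */
        void report_region(uintptr_t base, size_t size)
        {
            /*
             * Addresses live in uintptr_t and are printed via %p, which is
             * correct on both AArch32 and AArch64 builds.
             */
            printf("region: %p, size: %zu\n", (void *)base, size);
        }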
      
      Change-Id: I9f8c78347c5a52ba7027ff389791f1dad63ee5f8
      4c0d0390
  27. 08 Jul, 2016 1 commit
    • Introduce SEPARATE_CODE_AND_RODATA build flag · 5d1c104f
      Sandrine Bailleux authored
      At the moment, all BL images share a similar memory layout: they start
      with their code section, followed by their read-only data section.
      The two sections are contiguous in memory. Therefore, the end of the
      code section and the beginning of the read-only data one might share
      a memory page. This forces both to be mapped with the same memory
      attributes. As the code needs to be executable, this means that the
      read-only data stored on the same memory page as the code are
      executable as well. This could potentially be exploited as part of
      a security attack.
      
      This patch introduces a new build flag called
      SEPARATE_CODE_AND_RODATA, which isolates the code and read-only data
      on separate memory pages. This in turn allows independent control of
      the access permissions for the code and read-only data.
      
      This has an impact on memory footprint, as padding bytes need to be
      introduced between the code and read-only data to ensure the
      segragation of the two. To limit the memory cost, the memory layout
      of the read-only section has been changed in this case.
      
       - When SEPARATE_CODE_AND_RODATA=0, the layout is unchanged, i.e.
         the read-only section still looks like this (padding omitted):
      
         |        ...        |
         +-------------------+
         | Exception vectors |
         +-------------------+
         |  Read-only data   |
         +-------------------+
         |       Code        |
         +-------------------+ BLx_BASE
      
         In this case, the linker script provides the limits of the whole
         read-only section.
      
       - When SEPARATE_CODE_AND_RODATA=1, the exception vectors and
         read-only data are swapped, such that the code and exception
         vectors are contiguous, followed by the read-only data. This
         gives the following new layout (padding omitted):
      
         |        ...        |
         +-------------------+
         |  Read-only data   |
         +-------------------+
         | Exception vectors |
         +-------------------+
         |       Code        |
         +-------------------+ BLx_BASE
      
         In this case, the linker script now exports 2 sets of addresses
         instead: the limits of the code and the limits of the read-only
         data. Refer to the Firmware Design guide for more details. This
         provides platform code with a finer-grained view of the image
         layout and allows it to map these 2 regions with the appropriate
         access permissions.
      
      Note that SEPARATE_CODE_AND_RODATA applies to all BL images.
      
      Change-Id: I936cf80164f6b66b6ad52b8edacadc532c935a49
      5d1c104f
  28. 16 Jun, 2016 2 commits
    • Add optional PSCI STAT residency & count functions · 170fb93d
      Yatharth Kochar authored
      This patch adds following optional PSCI STAT functions:
      
      - PSCI_STAT_RESIDENCY: This call returns the amount of time spent
        in power_state in microseconds, by the node represented by the
        `target_cpu` and the highest level of `power_state`.
      
      - PSCI_STAT_COUNT: This call returns the number of times a
        `power_state` has been used by the node represented by the
        `target_cpu` and the highest power level of `power_state`.
      
      These APIs provide residency statistics for power states that have
      been used by the platform. They are implemented according to v1.0
      of the PSCI specification.
      
      By default this optional feature is disabled in the PSCI
      implementation. To enable it, set the boolean flag
      `ENABLE_PSCI_STAT` to 1. This also sets `ENABLE_PMF` to 1.
      
      Change-Id: Ie62e9d37d6d416ccb1813acd7f616d1ddd3e8aff
      170fb93d
    • Add Performance Measurement Framework(PMF) · a31d8983
      Yatharth Kochar authored
      This patch adds the Performance Measurement Framework (PMF) to the
      ARM Trusted Firmware. PMF is implemented as a library and the
      SMC interface is provided through ARM SiP service.
      
      The PMF provides capturing, storing, dumping and retrieving of
      time-stamps, enabling the development of services by different
      providers that can be easily integrated into ARM Trusted Firmware.
      The PMF capture and retrieval APIs can also perform appropriate cache
      maintenance operations on the timestamp memory when the caller
      indicates so.
      
      `pmf_main.c` consists of core functions that implement service
      registration, initialization, storing, dumping and retrieving
      the time-stamp.
      `pmf_smc.c` consists of the SMC handling for registered PMF services.
      `pmf.h` consists of the macros that can be used by the PMF service
      providers to register a service and declare time-stamp functions.
      `pmf_helpers.h` consists of internal macros that are used by `pmf.h`.
      
      By default this feature is disabled in the ARM trusted firmware.
      To enable it set the boolean flag `ENABLE_PMF` to 1.
      
      NOTE: The caller is responsible for specifying the appropriate cache
      maintenance flags and for acquiring/releasing appropriate locks
      before/after capturing/retrieving the time-stamps.
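
      A service might hook into PMF roughly as sketched below; the macro
      names follow the files listed above, but the argument lists and flag
      names are assumptions of this sketch.

        /* Hypothetical PMF client; see pmf.h for the real macro definitions. */
        #define MY_SVC_ID       0x10   /* hypothetical service identifier */
        #define MY_TOTAL_IDS    2      /* timestamps captured by this service */
        #define TS_BEFORE_OP    0
        #define TS_AFTER_OP     1

        /* Register the service and reserve storage for its timestamps. */
        PMF_REGISTER_SERVICE(my_svc, MY_SVC_ID, MY_TOTAL_IDS, PMF_DUMP_ENABLE)

        void my_timed_operation(void)
        {
            /* The flag asks PMF to maintain caches on the timestamp memory. */
            PMF_CAPTURE_TIMESTAMP(my_svc, TS_BEFORE_OP, PMF_CACHE_MAINT);
            /* ... work being measured ... */
            PMF_CAPTURE_TIMESTAMP(my_svc, TS_AFTER_OP, PMF_CACHE_MAINT);
        }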
      
      Change-Id: Ib45219ac07c2a81b9726ef6bd9c190cc55e81854
      a31d8983
  29. 03 Jun, 2016 1 commit
    • Build option to include AArch32 registers in cpu context · 8cd16e6b
      Soby Mathew authored
      The system registers that are saved and restored in CPU context include
      AArch32 system registers like SPSR_ABT, SPSR_UND, SPSR_IRQ, SPSR_FIQ,
      DACR32_EL2, IFSR32_EL2 and FPEXC32_EL2. Accessing these registers on an
      AArch64-only (i.e. on hardware that does not implement AArch32, or at
      least not at EL1 and higher ELs) platform leads to an exception. This patch
      introduces the build option `CTX_INCLUDE_AARCH32_REGS` to specify whether to
      include these AArch32 system registers in the cpu context or not. By default
      this build option is set to 1 to ensure compatibility. AArch64-only platforms
      must set it to 0. A runtime check is added in BL1 and BL31 cold boot path to
      verify this.
      
      Fixes ARM-software/tf-issues#386
      
      Change-Id: I720cdbd7ed7f7d8516635a2ec80d025f478b95ee
      8cd16e6b
  30. 26 May, 2016 1 commit
    • Introduce some helper macros for exception vectors · e0ae9fab
      Sandrine Bailleux authored
      This patch introduces some assembler macros to simplify the
      declaration of the exception vectors. It abstracts the section
      the exception code is put into as well as the alignments
      constraints mandated by the ARMv8 architecture. For all TF images,
      the exception code has been updated to make use of these macros.
      
      This patch also updates some invalid comments in the exception
      vector code.
      
      Change-Id: I35737b8f1c8c24b6da89b0a954c8152a4096fa95
      e0ae9fab
  31. 20 May, 2016 1 commit
    • Add 32 bit version of plat_get_syscnt_freq · d4486391
      Antonio Nino Diaz authored
      Added plat_get_syscnt_freq2, which is a 32 bit variant of the 64 bit
      plat_get_syscnt_freq. The old one has been flagged as deprecated.
      Common code has been updated to use this new version. Porting guide
      has been updated.
      
      Change-Id: I9e913544926c418970972bfe7d81ee88b4da837e
      d4486391