1. 04 May, 2017 1 commit
    • Introduce ARM SiP service to switch execution state · b10d4499
      Jeenu Viswambharan authored
      
      
      In AArch64, privileged exception levels control the execution state
      (a.k.a. register width) of the immediately lower exception level, i.e.
      whether the lower exception level executes in the AArch64 or AArch32
      state. For an exception level to have its execution state changed at run
      time, it must request the change by raising a synchronous exception to
      the higher exception level.
      
      This patch adds such a provision to the ARM SiP service, by which the
      immediately lower exception level can request to switch its execution
      state. The execution state is switched if the request is:
      
        - raised from non-secure world;
      
        - raised on the primary CPU, before any secondaries are brought online
          with CPU_ON PSCI call;
      
        - raised from an exception level immediately below EL3: EL2, if
          implemented; otherwise NS EL1.
      
      If successful, the SMC doesn't return to the caller, but to the entry
      point supplied with the call. Otherwise, the caller will observe the SMC
      returning with the STATE_SW_E_DENIED code. If ARM Trusted Firmware is
      built for AArch32, the feature is not supported, and the call will
      always fail.
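
      A minimal caller sketch follows; the SMC wrapper, function ID value and
      argument layout are placeholders for illustration, not the identifiers
      defined by this patch:

        /* Hypothetical non-secure caller requesting an execution state
         * switch.  smc_call() and the function ID value are placeholders,
         * not the identifiers defined by this patch. */
        #define ARM_SIP_STATE_SWITCH_FID   0x82000020UL   /* placeholder */

        extern long smc_call(unsigned long fid, unsigned long a1,
                             unsigned long a2, unsigned long a3,
                             unsigned long a4);

        static long request_state_switch(unsigned long ep_hi, unsigned long ep_lo,
                                         unsigned long ctx_hi, unsigned long ctx_lo)
        {
                /* On success this SMC does not return here; it returns to the
                 * entry point supplied in the arguments.  Otherwise the
                 * caller observes STATE_SW_E_DENIED as the return value. */
                return smc_call(ARM_SIP_STATE_SWITCH_FID, ep_hi, ep_lo,
                                ctx_hi, ctx_lo);
        }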
      
      For the ARM SiP service:
      
        - Add SMC function IDs for both AArch32 and AArch64;
        - Increment the SiP service minor version to 2;
        - Adjust the number of supported SiP service calls.
      
      Add documentation for ARM SiP service.
      
      Fixes ARM-software/tf-issues#436
      
      Change-Id: I4347f2d6232e69fbfbe333b340fcd0caed0a4cea
      Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
  2. 03 May, 2017 1 commit
  3. 19 Apr, 2017 1 commit
    • PSCI: Build option to enable D-Caches early in warmboot · bcc3c49c
      Soby Mathew authored
      
      
      This patch introduces a build option to enable the D-cache early on the
      CPU after warm boot. This is applicable to platforms which do not
      require interconnect programming to enable cache coherency (e.g.
      single-cluster platforms). If this option is enabled, the warm boot path
      enables the D-cache immediately after enabling the MMU.
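
      A hedged build example is shown below; the option name is assumed to be
      WARMBOOT_ENABLE_DCACHE_EARLY (check the user guide for the exact name),
      and the platform chosen is purely illustrative:

        make CROSS_COMPILE=aarch64-linux-gnu- PLAT=fvp \
             WARMBOOT_ENABLE_DCACHE_EARLY=1 all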
      
      Fixes ARM-software/tf-issues#456
      
      Change-Id: I44c8787d116d7217837ced3bcf0b1d3441c8d80e
      Signed-off-by: Soby Mathew <soby.mathew@arm.com>
  4. 31 Mar, 2017 1 commit
  5. 02 Mar, 2017 2 commits
    • PSCI: Optimize call paths if all participants are cache-coherent · b0408e87
      Jeenu Viswambharan authored
      
      
      The current PSCI implementation can apply certain optimizations upon the
      assumption that all PSCI participants are cache-coherent.
      
        - Skip performing cache maintenance during power-up.
      
        - Skip performing cache maintenance during power-down:
      
          At present, on the power-down path, CPU driver disables caches and
          MMU, and performs cache maintenance in preparation for powering down
          the CPU. This means that PSCI must perform additional cache
          maintenance on the extant stack for correct functioning.
      
          If all participating CPUs are cache-coherent, the CPU driver would
          neither disable the MMU nor perform cache maintenance. The CPU being
          powered down therefore remains cache-coherent throughout all PSCI
          call paths. This in turn means that PSCI cache maintenance
          operations are not required during power down.
      
        - Choose spin locks instead of bakery locks:
      
          The current PSCI implementation must synchronize both cache-coherent
          and non-cache-coherent participants. Mutual exclusion primitives are
          not guaranteed to function on non-coherent memory. For this reason,
          the current PSCI implementation had to resort to bakery locks.
      
          If all participants are cache-coherent, the implementation can
          enable the MMU and data caches early, and replace bakery locks with
          spin locks. Spin locks make use of architectural mutual exclusion
          primitives, and are lighter and faster.
      
      The optimizations are applied when the HW_ASSISTED_COHERENCY build
      option is enabled, as all PSCI participants are expected to be
      cache-coherent on such systems.
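
      A sketch of how such a compile-time lock selection might look is shown
      below; the type alias and wrapper macro names are illustrative, not
      necessarily those used by the implementation:

        #if HW_ASSISTED_COHERENCY
        /* All participants are coherent: ordinary spin locks suffice. */
        typedef spinlock_t psci_lock_t;
        #define psci_lock_get(l)        spin_lock(l)
        #define psci_lock_release(l)    spin_unlock(l)
        #else
        /* Participants may run with caches/MMU off: use bakery locks, which
         * work on non-coherent memory. */
        typedef bakery_lock_t psci_lock_t;
        #define psci_lock_get(l)        bakery_lock_get(l)
        #define psci_lock_release(l)    bakery_lock_release(l)
        #endif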
      
      Change-Id: Iac51c3ed318ea7e2120f6b6a46fd2db2eae46ede
      Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
    • PSCI: Introduce cache and barrier wrappers · a10d3632
      Jeenu Viswambharan authored
      
      
      The PSCI implementation performs cache maintenance operations on its
      data structures to ensure their visibility to both cache-coherent and
      non-cache-coherent participants. These cache maintenance operations
      can be skipped if all PSCI participants are cache-coherent. When the
      HW_ASSISTED_COHERENCY build option is enabled, PSCI participants are
      assumed to be cache-coherent.
      
      For usage abstraction, this patch introduces wrappers for PSCI cache
      maintenance and barrier operations used for state coordination: they are
      effectively NOPs when HW_ASSISTED_COHERENCY is enabled, but are
      applied otherwise.
      
      Also refactor local state usage and the associated cache operations to
      make them clearer.
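
      A hedged sketch of what such wrappers might look like is shown below;
      the wrapper names are illustrative, while flush_dcache_range() and
      dsbish() are the usual cache maintenance and barrier helpers they could
      forward to:

        #if HW_ASSISTED_COHERENCY
        /* All participants are coherent: the wrappers collapse to nothing. */
        #define psci_flush_dcache_range(addr, size)
        #define psci_dsbish()
        #else
        /* Otherwise forward to the real maintenance and barrier operations. */
        #define psci_flush_dcache_range(addr, size)  flush_dcache_range(addr, size)
        #define psci_dsbish()                        dsbish()
        #endif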
      
      Change-Id: I77f17a90cba41085b7188c1345fe5731c99fad87
      Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
  6. 13 Feb, 2017 2 commits
    • PSCI: Do stat accounting for retention/standby states · e5bbd16a
      dp-arm authored
      
      
      Perform stat accounting for retention/standby states also when they are
      requested at multiple power levels.
      
      Change-Id: I2c495ea7cdff8619bde323fb641cd84408eb5762
      Signed-off-by: dp-arm <dimitris.papastamos@arm.com>
    • PSCI: Decouple PSCI stat residency calculation from PMF · 04c1db1e
      dp-arm authored
      
      
      This patch introduces the following three platform interfaces:
      
      * void plat_psci_stat_accounting_start(const psci_power_state_t *state_info)
      
        This is an optional hook that platforms can implement in order
        to perform accounting before entering a low power state.  This
        typically involves capturing a timestamp.
      
      * void plat_psci_stat_accounting_stop(const psci_power_state_t *state_info)
      
        This is an optional hook that platforms can implement in order
        to perform accounting after exiting from a low power state.  This
        typically involves capturing a timestamp.
      
      * u_register_t plat_psci_stat_get_residency(unsigned int lvl,
      	const psci_power_state_t *state_info,
      	unsigned int last_cpu_index)
      
        This is an optional hook that platforms can implement in order
        to calculate the PSCI stat residency.
      
      If any of these interfaces are overridden by the platform, it is
      recommended that all of them are.
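
      A hedged sketch of a platform overriding all three hooks with a simple
      timestamp-based backend is shown below; the storage layout and the use
      of the AArch64 generic counter as the time source are assumptions:

        #include <arch_helpers.h>
        #include <platform.h>
        #include <platform_def.h>
        #include <psci.h>

        /* One slot per CPU; illustrative only. */
        static u_register_t stat_residency[PLATFORM_CORE_COUNT];
        static u_register_t stat_enter_ts[PLATFORM_CORE_COUNT];

        void plat_psci_stat_accounting_start(const psci_power_state_t *state_info)
        {
                /* Capture the entry timestamp for the calling CPU. */
                stat_enter_ts[plat_my_core_pos()] = read_cntpct_el0();
        }

        void plat_psci_stat_accounting_stop(const psci_power_state_t *state_info)
        {
                unsigned int idx = plat_my_core_pos();

                /* Record the interval spent in the low power state. */
                stat_residency[idx] = read_cntpct_el0() - stat_enter_ts[idx];
        }

        u_register_t plat_psci_stat_get_residency(unsigned int lvl,
                        const psci_power_state_t *state_info,
                        unsigned int last_cpu_index)
        {
                /* Report the last measured interval for the given CPU. */
                return stat_residency[last_cpu_index];
        }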
      
      By default `ENABLE_PSCI_STAT` is disabled.  If `ENABLE_PSCI_STAT`
      is set but `ENABLE_PMF` is not set then an alternative PSCI stat
      collection backend must be provided.  If both are set, then default
      weak definitions of these functions are provided, using PMF to
      calculate the residency.
      
      NOTE: Previously, platforms did not have to explicitly set
      `ENABLE_PMF` since this was automatically done by the top-level
      Makefile.
      
      Change-Id: I17b47804dea68c77bc284df15ee1ccd66bc4b79b
      Signed-off-by: dp-arm <dimitris.papastamos@arm.com>
  7. 06 Feb, 2017 1 commit
    • Replace some memset call by zeromem · 32f0d3c6
      Douglas Raillard authored
      
      
      Replace all uses of memset by zeromem when zeroing moderately-sized
      structures, by applying the following transformation:
      memset(x, 0, sizeof(x)) => zeromem(x, sizeof(x))
      
      As the Trusted Firmware is compiled with -ffreestanding, the compiler is
      forbidden from using __builtin_memset and forced to generate calls to
      the slow memset implementation. zeromem is a near drop-in replacement
      for this use case, with a more efficient implementation on both AArch32
      and AArch64.
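
      For example, zeroing a local structure (using a hypothetical
      entry_point_info_t variable) changes as follows:

        entry_point_info_t ep;

        /* Before: */
        memset(&ep, 0, sizeof(ep));

        /* After: */
        zeromem(&ep, sizeof(ep));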
      
      Change-Id: Ia7f3a90e888b96d056881be09f0b4d65b41aa79e
      Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
  8. 30 Jan, 2017 1 commit
    • Report errata workaround status to console · 10bcd761
      Jeenu Viswambharan authored
      
      
      The errata reporting policy is as follows:
      
        - If an errata workaround is enabled:
      
          - If it applies (i.e. the CPU is affected by the erratum), an INFO
            message is printed, confirming that the errata workaround has been
            applied.
      
          - If it does not apply, a VERBOSE message is printed, confirming
            that the errata workaround has been skipped.
      
        - If an errata workaround is not enabled, but would have applied had
          it been, a WARN message is printed, alerting that the errata
          workaround is missing.
      
      The CPU errata messages are printed by both BL1 (primary CPU only) and
      runtime firmware on debug builds, once for each CPU/errata combination.
      
      Relevant output from Juno r1 console when ARM Trusted Firmware is built
      with PLAT=juno LOG_LEVEL=50 DEBUG=1:
      
        VERBOSE: BL1: cortex_a57: errata workaround for 806969 was not applied
        VERBOSE: BL1: cortex_a57: errata workaround for 813420 was not applied
        INFO:    BL1: cortex_a57: errata workaround for disable_ldnp_overread was applied
        WARNING: BL1: cortex_a57: errata workaround for 826974 was missing!
        WARNING: BL1: cortex_a57: errata workaround for 826977 was missing!
        WARNING: BL1: cortex_a57: errata workaround for 828024 was missing!
        WARNING: BL1: cortex_a57: errata workaround for 829520 was missing!
        WARNING: BL1: cortex_a57: errata workaround for 833471 was missing!
        ...
        VERBOSE: BL31: cortex_a57: errata workaround for 806969 was not applied
        VERBOSE: BL31: cortex_a57: errata workaround for 813420 was not applied
        INFO:    BL31: cortex_a57: errata workaround for disable_ldnp_overread was applied
        WARNING: BL31: cortex_a57: errata workaround for 826974 was missing!
        WARNING: BL31: cortex_a57: errata workaround for 826977 was missing!
        WARNING: BL31: cortex_a57: errata workaround for 828024 was missing!
        WARNING: BL31: cortex_a57: errata workaround for 829520 was missing!
        WARNING: BL31: cortex_a57: errata workaround for 833471 was missing!
        ...
        VERBOSE: BL31: cortex_a53: errata workaround for 826319 was not applied
        INFO:    BL31: cortex_a53: errata workaround for disable_non_temporal_hint was applied
      
      Also update documentation.
      
      Change-Id: Iccf059d3348adb876ca121cdf5207bdbbacf2aba
      Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
  9. 15 Dec, 2016 1 commit
    • Add provision to extend CPU operations at more levels · 5dd9dbb5
      Jeenu Viswambharan authored
      
      
      Various CPU drivers in ARM Trusted Firmware register functions to handle
      power-down operations. At present, separate functions are registered to
      power down individual cores and clusters.
      
      This scheme operates on the basis of core and cluster, and doesn't cater
      for extending the hierarchy for power-down operations. For example,
      future CPUs might support multiple threads which might need powering
      down individually.
      
      This patch therefore reworks the CPU operations framework to allow
      power down handlers to be registered on a per-level basis. Henceforth:
      
        - Generic code invokes CPU power down operations by the level
          required.
      
        - CPU drivers explicitly mention CPU_NO_RESET_FUNC when the CPU has no
          reset function.
      
        - CPU drivers register power down handlers as a list: a mandatory
          handler for level 0, and optional handlers for higher levels.
      
      All existing CPU drivers are adapted to the new CPU operations framework
      without any functional changes.
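
      Conceptually, the per-level handler list can be pictured as below; this
      is a simplified, illustrative C view, whereas the real descriptors are
      declared by the assembly-level CPU driver framework:

        /* Illustrative only: names and layout are not the actual framework's. */
        #define MAX_PWR_DWN_LEVELS      2  /* e.g. level 0 = core, level 1 = cluster */

        typedef struct cpu_ops {
                unsigned long midr;             /* CPU identification */
                void (*reset_func)(void);       /* omitted via CPU_NO_RESET_FUNC */
                /* Mandatory level-0 handler; higher-level handlers optional. */
                void (*pwr_dwn_ops[MAX_PWR_DWN_LEVELS])(void);
        } cpu_ops_t;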
      
      Also update firmware design guide.
      
      Change-Id: I1826842d37a9e60a9e85fdcee7b4b8f6bc1ad043
      Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
  10. 14 Dec, 2016 1 commit
  11. 12 Dec, 2016 1 commit
    • AArch32: Fix the stack alignment issue · 9f3ee61c
      Soby Mathew authored
      
      
      The AArch32 Procedure Call Standard mandates that the stack must be
      aligned to an 8-byte boundary at external interfaces. This patch makes
      the required changes.

      This problem was detected when a crash was encountered in
      `psci_print_power_domain_map()` while printing 64-bit values. Aligning
      the stack to an 8-byte boundary resolved the problem.
      
      Fixes ARM-software/tf-issues#437
      
      Change-Id: I517bd8203601bb88e9311bd36d477fb7b3efb292
      Signed-off-by: Soby Mathew <soby.mathew@arm.com>
  12. 05 Dec, 2016 1 commit
    • Define and use no_ret macro where no return is expected · a806dad5
      Jeenu Viswambharan authored
      
      
      There are many instances in ARM Trusted Firmware where control is
      transferred to functions from which return isn't expected. Such jumps
      are made using the 'bl' instruction to provide the callee with the
      location from which it was jumped to. Additionally, debuggers infer the
      caller by examining where the 'lr' register points. If a 'bl' of the
      nature described above falls at the end of an assembly function, 'lr'
      will be left pointing to a location outside of the function range. This
      misleads the debugger back trace.
      
      This patch defines a 'no_ret' macro to be used when jumping to functions
      from which return isn't expected. The macro ensures that the 'bl'
      instruction is used for the jump and, for debug builds, places a 'nop'
      instruction immediately thereafter (unless instructed otherwise) so as
      to leave 'lr' pointing within the function range.
      
      Change-Id: Ib34c69fc09197cfd57bc06e147cc8252910e01b0
      Co-authored-by: Douglas Raillard <douglas.raillard@arm.com>
      Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
  13. 12 Oct, 2016 1 commit
    • Add PMF instrumentation points in TF · 872be88a
      dp-arm authored
      
      
      In order to quantify the overall time spent in the PSCI software
      implementation, an initial collection of PMF instrumentation points
      has been added.
      
      Instrumentation has been added to the following code paths:
      
      - Entry to PSCI SMC handler.  The timestamp is captured as early
        as possible during the runtime exception and stored in memory
        before entering the PSCI SMC handler.
      
      - Exit from PSCI SMC handler.  The timestamp is captured after a
        normal return from the PSCI SMC handler or, if a low power state
        was requested, in the BL31 warm boot path before returning to the
        normal world.
      
      - Entry to low power state.  The timestamp is captured before entry
        to a low power state which implies either standby or power down.
        As these power states are mutually exclusive, only one timestamp
        is defined to describe both.  It is possible to differentiate between
        the two power states using the PSCI STAT interface.
      
      - Exit from low power state.  The timestamp is captured after a standby
        or power up operation has completed.
      
      To calculate the number of cycles spent running code in Trusted
      Firmware, one can perform the following calculation:
      
      (exit_psci - enter_psci) - (exit_low_pwr - enter_low_pwr).
      
      The resulting number of cycles can be converted to time given the
      frequency of the counter.
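
      As a worked example (with hypothetical timestamp variables; the counter
      frequency read uses the generic timer frequency register):

        /* Hypothetical captured timestamps, in counter ticks. */
        uint64_t enter_psci, exit_psci, enter_low_pwr, exit_low_pwr;

        /* Counter ticks spent executing Trusted Firmware code, excluding the
         * time the CPU spent in the low power state itself. */
        uint64_t tf_ticks = (exit_psci - enter_psci) -
                            (exit_low_pwr - enter_low_pwr);

        /* Convert ticks to microseconds using the counter frequency. */
        uint64_t tf_us = (tf_ticks * 1000000ULL) / read_cntfrq_el0();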
      
      Change-Id: Ie3b8f3d16409b6703747093b3a2d5c7429ad0166
      Signed-off-by: dp-arm <dimitris.papastamos@arm.com>
  14. 22 Sep, 2016 1 commit
    • PSCI: Introduce PSCI Library argument structure · f426fc05
      Soby Mathew authored
      This patch introduces a `psci_lib_args_t` structure which must be
      passed into `psci_setup()` and which is used to initialize the PSCI
      library. `psci_lib_args_t` is a versioned structure so as to enable
      compatibility checks during library initialization. Both BL31 and
      SP_MIN are modified to use the new structure.
      
      SP_MIN is also modified to add the version string and build message as
      part of its cold boot log, just like the other BLs in Trusted Firmware.
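
      A hedged sketch of how an EL3 runtime firmware might populate and pass
      the structure is shown below; the field names and version constant are
      illustrative, not the actual `psci_lib_args_t` layout:

        /* Illustrative only: field names and PSCI_LIB_ARGS_VERSION are
         * assumptions, not the real structure layout. */
        static const psci_lib_args_t psci_args = {
                .version     = PSCI_LIB_ARGS_VERSION,
                .mailbox_ep  = bl31_warm_entrypoint,  /* warm boot entry point */
        };

        void runtime_fw_psci_init(void)
        {
                /* Hand the versioned argument block to the PSCI library,
                 * which checks compatibility before initializing itself. */
                psci_setup(&psci_args);
        }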
      
      NOTE: Please be aware that this patch modifies the prototype of
      `psci_setup()`, which breaks compatibility with EL3 Runtime Firmware
      (excluding BL31 and SP_MIN) integrated with the PSCI Library.
      
      Change-Id: Ic3761db0b790760a7ad664d8a437c72ea5edbcd6
  15. 15 Sep, 2016 1 commit
    • PSCI: Add support for PSCI NODE_HW_STATE API · 28d3d614
      Jeenu Viswambharan authored
      This patch adds support for the NODE_HW_STATE PSCI API by introducing a
      new PSCI platform hook (get_node_hw_state). The implementation validates
      the supplied arguments, then invokes this platform-defined hook and
      returns its result to the caller. PSCI capabilities are updated
      accordingly.
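
      A hedged sketch of a platform providing the new hook follows; the
      power-controller query is invented for illustration, while the HW_ON and
      HW_OFF return values follow the PSCI NODE_HW_STATE semantics:

        /* Illustrative platform hook; myplat_power_controller_is_on() is a
         * made-up helper standing in for a real power controller query. */
        static int myplat_get_node_hw_state(u_register_t target_cpu,
                                            unsigned int power_level)
        {
                if (myplat_power_controller_is_on(target_cpu, power_level))
                        return HW_ON;

                return HW_OFF;
        }

        static const plat_psci_ops_t myplat_psci_ops = {
                /* ... other handlers ... */
                .get_node_hw_state = myplat_get_node_hw_state,
        };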
      
      Also update the porting and firmware design guides.
      
      Change-Id: I808e55bdf0c157002a7c104b875779fe50a68a30
  16. 09 Sep, 2016 1 commit
    • Flush `psci_plat_pm_ops` after initialization · 7a3d4bde
      Soby Mathew authored
      The `psci_plat_pm_ops` global pointer is initialized during cold boot by
      the primary CPU and will be accessed by the secondary CPUs before
      enabling the data cache during warm boot. This patch adds a missing data
      cache flush of `psci_plat_pm_ops` after initialization during
      psci_setup() so that secondaries can see the updated pointer.
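
      A minimal sketch of the kind of fix described (the surrounding
      initialization context and the `plat_ops` variable are simplified
      placeholders):

        /* After assigning the ops pointer on the primary CPU during cold
         * boot, flush it so that secondaries running with caches disabled
         * read the updated value from memory. */
        psci_plat_pm_ops = plat_ops;
        flush_dcache_range((uintptr_t)&psci_plat_pm_ops,
                           sizeof(psci_plat_pm_ops));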
      
      Fixes ARM-software/tf-issues#424
      
      Change-Id: Id4554800b5646302b944115a33be69507d53cedb
  17. 10 Aug, 2016 1 commit
    • AArch32: Add support to PSCI lib · 727e5238
      Soby Mathew authored
      This patch adds AArch32 support to the PSCI library, as follows:
      
      * The `psci_helpers.S` is implemented for AArch32.
      
      * An AArch32 version of the internal helper function
        `psci_get_ns_ep_info()` is defined.
      
      * The PSCI library is responsible for non-secure context initialization.
        Hence a library interface `psci_prepare_next_non_secure_ctx()` is
        introduced to enable EL3 runtime firmware to initialize the non-secure
        context without invoking context management library APIs.
      
      Change-Id: I25595b0cc2dbfdf39dbf7c589b875cba33317b9d
  18. 09 Aug, 2016 1 commit
    • Move spinlock library code to AArch64 folder · 12ab697e
      Soby Mathew authored
      This patch moves the assembly exclusive lock library code
      `spinlock.S` into the architecture-specific folder `aarch64`.
      A stub file which includes the file from the new location is
      retained at the original location for compatibility. The BL
      makefiles are also modified to include the file from the new
      location.
      
      Change-Id: Ide0b601b79c439e390c3a017d93220a66be73543
  19. 25 Jul, 2016 2 commits
    • Fix use of stale power states in PSCI standby finisher · 61eae524
      Achin Gupta authored
      A PSCI CPU_SUSPEND request to place a CPU in retention states at power levels
      higher than the CPU power level is subject to the same state coordination as a
      power down state. A CPU could implement multiple retention states at a
      particular power level. When exiting WFI, the non-CPU power levels may
      be in a different retention state from the one initially requested;
      therefore, each CPU should refresh its view of the states of all power
      levels.
      
      Previously, a CPU re-used the state of the power levels from when it
      entered the retention state. This patch fixes the issue by ensuring
      that a CPU, upon exit from retention, reads the state of each power
      level afresh.
      
      Change-Id: I93b5f5065c63400c6fd2598dbaafac385748f989
    • Validate psci_find_target_suspend_lvl() result · a1c3faa6
      Sandrine Bailleux authored
      This patch adds a runtime check on the value that
      psci_find_target_suspend_lvl() returns to psci_cpu_suspend() and
      psci_get_stat(). If it is invalid, BL31 will now panic.
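
      A hedged sketch of the kind of check added (the constant name and error
      message are illustrative, and state_info is assumed to hold the
      requested composite power state):

        int target_lvl = psci_find_target_suspend_lvl(&state_info);

        /* Panic in both debug and release builds if the requested composite
         * power state maps to no valid suspend level. */
        if (target_lvl == PSCI_INVALID_PWR_LVL) {
                ERROR("Invalid target power level for suspend operation\n");
                panic();
        }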
      
      Note that on the PSCI CPU suspend path there is already a debug
      assertion checking the validity of the target composite power state,
      which effectively also checks the validity of the target suspend level.
      Therefore, the error condition would already be caught in debug builds,
      but in a release build this assertion would be compiled out.
      
      On the PSCI stat path, there is currently no debug assertion checking
      the validity of the power state before using it as an index into
      the power domain state array.
      
      Although BL31 platform ports are responsible for validating the
      power state parameter, the security impact (i.e. an out-of-bounds
      array access) of a potential platform port bug in this code would
      be quite high, given that this parameter comes from an untrusted
      source. The cost of checking this in runtime generic code is low.
      
      Change-Id: Icea85b8020e39928ac03ec0cd49805b5857b3906
  20. 19 Jul, 2016 1 commit
    • Introduce PSCI Library Interface · cf0b1492
      Soby Mathew authored
      This patch introduces the PSCI Library interface. The major changes
      introduced are as follows:
      
      * Earlier, BL31 was responsible for architectural initialization during
      cold boot via bl31_arch_setup(), whereas PSCI was responsible for the
      same during warm boot. This functionality is now consolidated in the
      PSCI library, which performs architectural initialization via
      psci_arch_setup() during both cold and warm boot.
      
      * Earlier the warm boot entry point was always `psci_entrypoint()`. This was
      not flexible enough as a library interface. Now PSCI expects the runtime
      firmware to provide the entry point via `psci_setup()`. A new function
      `bl31_warm_entrypoint` is introduced in BL31 and the previous
      `psci_entrypoint()` is deprecated.
      
      * The `smc_helpers.h` header is reorganized to separate the SMC Calling
      Convention defines from the Trusted Firmware SMC helpers. The former are
      now in a new header file `smcc.h` and the SMC helpers are moved to an
      architecture-specific header.
      
      * The CPU context is used by PSCI for context initialization and
      restoration after power down (PSCI Context). It is also used by BL31 for SMC
      handling and context management during Normal-Secure world switch (SMC
      Context). The `psci_smc_handler()` interface is redefined to not use SMC
      helper macros, thus decoupling the PSCI context from the EL3 runtime
      firmware SMC context. This enables PSCI to be integrated with other
      runtime firmware using a different SMC context.
      
      NOTE: With this patch the architectural setup done in `bl31_arch_setup()`
      is done as part of `psci_setup()` and hence `bl31_platform_setup()` will be
      invoked prior to architectural setup. It is highly unlikely that the platform
      setup will depend on architectural setup and cause any failure. Please
      be aware of this change in sequence.
      
      Change-Id: I7f497a08d33be234bbb822c28146250cb20dab73
  21. 18 Jul, 2016 1 commit
    • Introduce `el3_runtime` and `PSCI` libraries · 532ed618
      Soby Mathew authored
      This patch moves the PSCI services and BL31 frameworks like context
      management and per-cpu data into new library components `PSCI` and
      `el3_runtime` respectively. This enables PSCI to be built independently from
      BL31. A new `psci_lib.mk` makefile is introduced which adds the relevant
      PSCI library sources and gets included by `bl31.mk`. Other changes which
      are done as part of this patch are:
      
      * The runtime services framework is now moved to the `common/` folder to
        enable reuse.
      * The `asm_macros.S` and `assert_macros.S` helpers are moved to an
        architecture-specific folder.
      * The `plat_psci_common.c` file is moved from the `plat/common/aarch64/`
        folder to the `plat/common` folder. The original file location now has
        a stub which just includes the file from the new location to maintain
        platform compatibility.
      
      Most of the changes wouldn't affect platform builds as they just involve
      changes to the generic bl1.mk and bl31.mk makefiles.
      
      NOTE: THE `plat_psci_common.c` FILE HAS MOVED LOCATION AND THE STUB FILE AT
      THE ORIGINAL LOCATION IS NOW DEPRECATED. PLATFORMS SHOULD MODIFY THEIR
      MAKEFILES TO INCLUDE THE FILE FROM THE NEW LOCATION.
      
      Change-Id: I6bd87d5b59424995c6a65ef8076d4fda91ad5e86