1. 05 Sep, 2018 3 commits
    • cpus: denver: Implement static workaround for CVE-2018-3639 · 6cf8d65f
      Varun Wadekar authored
      
      
      For Denver CPUs, this approach enables the mitigation during EL3
      initialization, following every PE reset. No mechanism is provided to
      disable the mitigation at runtime.
      
      This approach permanently mitigates the EL3 software stack only. Other
      software components are responsible for enabling the mitigation at their
      own exception levels.
      
      TF-A implements this approach for the Denver CPUs with DENVER_MIDR_PN3
      and earlier:
      
      *   By setting bit 11 (Disable speculative store buffering) of
          `ACTLR_EL3`
      
      *   By setting bit 9 (Disable speculative memory disambiguation) of
          `ACTLR_EL3`
      
      TF-A implements this approach for the Denver CPUs with DENVER_MIDR_PN4
      and later (see the sketch after this list):
      
      *   By setting bit 18 (Disable speculative store buffering) of
          `ACTLR_EL3`
      
      *   By setting bit 17 (Disable speculative memory disambiguation) of
          `ACTLR_EL3`
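
      A minimal sketch of the reset-time register writes listed above, using
      GCC-style inline assembly at EL3; the macro and helper names and the
      PN3/PN4 flag are illustrative, not taken from the TF-A sources:

      ```c
      #include <stdint.h>

      /* Bit positions quoted in the commit message above. */
      #define PN3_DIS_SPEC_STORE_BUFFER   (1ULL << 11)
      #define PN3_DIS_SPEC_MEM_DISAMBIG   (1ULL << 9)
      #define PN4_DIS_SPEC_STORE_BUFFER   (1ULL << 18)
      #define PN4_DIS_SPEC_MEM_DISAMBIG   (1ULL << 17)

      /* Read-modify-write of ACTLR_EL3; only meaningful when running at EL3. */
      static inline void denver_enable_ssbd_mitigation(int pn4_or_later)
      {
              uint64_t actlr;

              __asm__ volatile("mrs %0, actlr_el3" : "=r"(actlr));
              if (pn4_or_later)
                      actlr |= PN4_DIS_SPEC_STORE_BUFFER | PN4_DIS_SPEC_MEM_DISAMBIG;
              else
                      actlr |= PN3_DIS_SPEC_STORE_BUFFER | PN3_DIS_SPEC_MEM_DISAMBIG;
              __asm__ volatile("msr actlr_el3, %0" : : "r"(actlr));
              __asm__ volatile("isb");
      }
      ```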
      
      Change-Id: If1de96605ce3f7b0aff5fab2c828e5aecb687555
      Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
    • cpus: denver: reset power state to 'C1' on boot · cf3ed0dc
      Varun Wadekar authored
      
      
      Denver CPUs expect the power state field to be reset to 'C1' during
      boot. This patch updates the reset handler to set the ACTLR_.PMSTATE
      field to 'C1' accordingly.
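
      A hedged sketch of the reset-handler update described above. The
      register, mask and 'C1' value are assumptions for illustration only
      (the commit abbreviates the register name), not the TF-A definitions:

      ```c
      #include <stdint.h>

      /* Illustrative values only; the real field encoding is NVIDIA-specific. */
      #define PMSTATE_MASK    0xFULL
      #define PMSTATE_C1      0x1ULL

      static inline void denver_reset_pmstate_to_c1(void)
      {
              uint64_t actlr;

              /* ACTLR_EL1 is a stand-in for the register abbreviated "ACTLR_." */
              __asm__ volatile("mrs %0, actlr_el1" : "=r"(actlr));
              actlr = (actlr & ~PMSTATE_MASK) | PMSTATE_C1;
              __asm__ volatile("msr actlr_el1, %0" : : "r"(actlr));
              __asm__ volatile("isb");
      }
      ```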
      
      Change-Id: I7cb629627a4dd1a30ec5cbb3a5e90055244fe30c
      Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
    • denver: use plat_my_core_pos() to get core position · 1593cae4
      Varun Wadekar authored
      
      
      The current functions to disable and enable the Dynamic Code Optimizer
      (DCO) assume that all Denver cores are in the same cluster. They
      ignore the AFF1 field of the mpidr_el1 register, which leads to an
      incorrect logical core id calculation.
      
      This patch calls the platform handler, plat_my_core_pos(), to get
      the logical core id to disable/enable DCO for the core.
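
      A sketch of why ignoring AFF1 breaks the calculation, contrasted with
      the cluster-aware computation a platform's plat_my_core_pos() is
      expected to provide; the cores-per-cluster value is an assumption:

      ```c
      #include <stdint.h>

      #define MPIDR_AFF0(m)           ((m) & 0xFFULL)         /* core in cluster */
      #define MPIDR_AFF1(m)           (((m) >> 8) & 0xFFULL)  /* cluster id */
      #define CORES_PER_CLUSTER       2ULL                    /* assumed topology */

      static inline uint64_t read_mpidr_el1(void)
      {
              uint64_t mpidr;

              __asm__ volatile("mrs %0, mpidr_el1" : "=r"(mpidr));
              return mpidr;
      }

      /* Broken: assumes a single cluster, so cores in cluster 1 alias cluster 0. */
      static inline uint64_t core_pos_aff0_only(void)
      {
              return MPIDR_AFF0(read_mpidr_el1());
      }

      /* Cluster-aware: the kind of result plat_my_core_pos() returns. */
      static inline uint64_t core_pos_cluster_aware(void)
      {
              uint64_t mpidr = read_mpidr_el1();

              return (MPIDR_AFF1(mpidr) * CORES_PER_CLUSTER) + MPIDR_AFF0(mpidr);
      }
      ```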
      
      Original change by: Krishna Sitaraman <ksitaraman@nvidia.com>
      
      Change-Id: I45fbd1f1eb032cc1db677a4fdecc554548b4a830
      Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
  2. 23 Aug, 2018 1 commit
  3. 17 Aug, 2018 2 commits
  4. 11 Jul, 2018 3 commits
  5. 19 Jun, 2018 1 commit
  6. 12 Jun, 2018 1 commit
    • Fix MISRA Rule 5.7 Part 1 · 40692923
      Daniel Boulby authored
      
      
      Rule 5.7: A tag name shall be a unique identifier
      
      There were 2 amu_ctx struct type definitions:
          - In lib/extensions/amu/aarch64/amu.c
          - In lib/cpus/aarch64/cpuamu.c
      
      Renamed the latter to cpuamu_ctx to avoid this name clash
      
      To avoid violating Rule 8.3, the function amu_ctxs was also renamed
      to a unique name (cpuamu_ctxs), since it now returns a different
      type (cpuamu_ctx) than the other amu_ctxs function.
      
      Fixed for:
          make LOG_LEVEL=50 PLAT=fvp
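
      A minimal illustration of the rule and the fix; the struct members are
      placeholders, not the actual TF-A definitions:

      ```c
      /* Before: two translation units both used the tag 'amu_ctx', violating
       * MISRA C:2012 Rule 5.7 (a tag name shall be a unique identifier). */
      struct amu_ctx {                        /* lib/extensions/amu/aarch64/amu.c */
              unsigned long long cnts[4];     /* placeholder member */
      };

      /* After: the per-CPU copy gets a distinct tag, removing the clash. */
      struct cpuamu_ctx {                     /* lib/cpus/aarch64/cpuamu.c */
              unsigned long long cnts[4];     /* placeholder member */
      };
      ```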
      
      Change-Id: Ieeb7e390ec2900fd8b775bef312eda93804a43ed
      Signed-off-by: Daniel Boulby <daniel.boulby@arm.com>
  7. 08 Jun, 2018 4 commits
  8. 07 Jun, 2018 1 commit
    • Fast path SMCCC_ARCH_WORKAROUND_1 calls from AArch32 · 2b915366
      Dimitris Papastamos authored
      
      
      When SMCCC_ARCH_WORKAROUND_1 is invoked from a lower EL running in
      AArch32 state, ensure that the SMC call will take a shortcut in EL3.
      This minimizes the time it takes to apply the mitigation in EL3.
      
      When lower ELs run in AArch32, it is preferred that they execute the
      `BPIALL` instruction to invalidate the BTB.  However, on some cores
      the `BPIALL` instruction may be a no-op, so those cores benefit from
      having the SMCCC_ARCH_WORKAROUND_1 call go through the fast path.
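
      A sketch of an AArch32 lower-EL caller using the SMC instead of a
      potentially no-op `BPIALL`; it assumes the standard SMCCC function
      identifier and GCC inline assembly, compiled for AArch32:

      ```c
      #include <stdint.h>

      #define SMCCC_ARCH_WORKAROUND_1 0x80008000U     /* SMC32 fast call */

      static inline void call_smccc_arch_workaround_1(void)
      {
              register uint32_t r0 __asm__("r0") = SMCCC_ARCH_WORKAROUND_1;

              __asm__ volatile("smc #0"
                               : "+r"(r0)
                               :
                               : "r1", "r2", "r3", "memory");
      }
      ```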
      
      Change-Id: Ia38abd92efe2c4b4a8efa7b70f260e43c5bda8a5
      Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
  9. 23 May, 2018 4 commits
    • Add support for dynamic mitigation for CVE-2018-3639 · fe007b2e
      Dimitris Papastamos authored
      
      
      Some CPUs may benefit from using a dynamic mitigation approach for
      CVE-2018-3639.  A new SMC interface is defined to allow software
      executing in lower ELs to enable or disable the mitigation for their
      execution context.
      
      It should be noted that regardless of the state of the mitigation for
      lower ELs, code executing in EL3 is always mitigated against
      CVE-2018-3639.
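
      A sketch of a lower-EL caller toggling the mitigation for its own
      context via the new SMC interface; the function identifier is the
      standard SMCCC_ARCH_WORKAROUND_2 value and the argument convention
      (non-zero to enable) is assumed from the SMCCC specification:

      ```c
      #include <stdint.h>

      #define SMCCC_ARCH_WORKAROUND_2 0x80007FFFU     /* SMC32 fast call */

      static inline void set_cve_2018_3639_mitigation(uint64_t enable)
      {
              register uint64_t x0 __asm__("x0") = SMCCC_ARCH_WORKAROUND_2;
              register uint64_t x1 __asm__("x1") = enable;    /* non-zero: enable */

              __asm__ volatile("smc #0"
                               : "+r"(x0), "+r"(x1)
                               :
                               : "x2", "x3", "memory");
      }
      ```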
      
      NOTE: This change is a compatibility break for any platform using
      the declare_cpu_ops_workaround_cve_2017_5715 macro.  Migrate to
      the declare_cpu_ops_wa macro instead.
      
      Change-Id: I3509a9337ad217bbd96de9f380c4ff8bf7917013
      Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
    • aarch32: Implement static workaround for CVE-2018-3639 · e0865708
      Dimitris Papastamos authored
      
      
      Implement static mitigation for CVE-2018-3639 on
      Cortex A57 and A72.
      
      Change-Id: I83409a16238729b84142b19e258c23737cc1ddc3
      Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
    • Implement static workaround for CVE-2018-3639 · b8a25bbb
      Dimitris Papastamos authored
      For affected CPUs, this approach enables the mitigation during EL3
      initialization, following every PE reset. No mechanism is provided to
      disable the mitigation at runtime.
      
      This approach permanently mitigates the entire software stack and no
      additional mitigation code is required in other software components.
      
      TF-A implements this approach for the following affected CPUs:
      
      *   Cortex-A57 and Cortex-A72, by setting bit 55 (Disable load pass store) of
          `CPUACTLR_EL1` (`S3_1_C15_C2_0`).
      
      *   Cortex-A73, by setting bit 3 of `S3_0_C15_C0_0` (not documented in the
          Technical Reference Manual (TRM)).
      
      *   Cortex-A75, by setting bit 35 (reserved in TRM) of `CPUACTLR_EL1`
          (`S3_0_C15_C1_0`).
      
      Additionally, a new SMC interface is implemented to allow software
      executing in lower ELs to discover whether the system is mitigated
      against CVE-2018-3639.
      
      Refer to "Firmware interfaces for mitigating cache speculation
      vulnerabilities System Software on Arm Systems"[0] for more
      information.
      
      [0] https://developer.arm.com/cache-speculation-vulnerability-firmware-specification
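
      A sketch of the discovery call mentioned above, assuming it is
      performed by querying SMCCC_ARCH_FEATURES for SMCCC_ARCH_WORKAROUND_2;
      the precise meaning of each return value is defined by the referenced
      document:

      ```c
      #include <stdint.h>

      #define SMCCC_ARCH_FEATURES     0x80000001U
      #define SMCCC_ARCH_WORKAROUND_2 0x80007FFFU

      static inline int64_t query_cve_2018_3639_mitigation(void)
      {
              register uint64_t x0 __asm__("x0") = SMCCC_ARCH_FEATURES;
              register uint64_t x1 __asm__("x1") = SMCCC_ARCH_WORKAROUND_2;

              __asm__ volatile("smc #0"
                               : "+r"(x0), "+r"(x1)
                               :
                               : "x2", "x3", "memory");

              /* Negative values indicate "not supported"; see the spec above. */
              return (int64_t)x0;
      }
      ```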
      
      
      
      Change-Id: I084aa7c3bc7c26bf2df2248301270f77bed22ceb
      Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
    • Rename symbols and files relating to CVE-2017-5715 · 2c3a1078
      Dimitris Papastamos authored
      
      
      This patch renames symbols and files relating to CVE-2017-5715 to make
      it easier to introduce new symbols and files for new CVE mitigations.
      
      Change-Id: I24c23822862ca73648c772885f1690bed043dbc7
      Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
  10. 15 May, 2018 1 commit
    • Workaround for CVE-2017-5715 on NVIDIA Denver CPUs · b0301467
      Varun Wadekar authored
      
      
      Flush the indirect branch predictor and RSB on entry to EL3 by issuing
      a newly added instruction for Denver CPUs. Support for this operation
      can be determined by comparing bits 19:16 of ID_AFR0_EL1 with 0b0001.
      
      So that the workaround runs before any branch instruction is executed,
      a per-cpu vbar is installed which executes the workaround and then
      branches off to the corresponding vector entry in the main vector
      table. A side effect of this change is that the main vbar is configured
      before any reset handling. This is to allow the per-cpu reset function
      to override the vbar setting.
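
      A sketch of the capability check described above; the helper name is
      illustrative:

      ```c
      #include <stdbool.h>
      #include <stdint.h>

      /* The new branch predictor/RSB flush operation is advertised when
       * ID_AFR0_EL1[19:16] reads as 0b0001. */
      static inline bool denver_has_bp_flush_insn(void)
      {
              uint64_t afr0;

              __asm__ volatile("mrs %0, id_afr0_el1" : "=r"(afr0));
              return ((afr0 >> 16) & 0xFULL) == 0x1ULL;
      }
      ```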
      
      Change-Id: Ief493cd85935bab3cfee0397e856db5101bc8011
      Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
  11. 12 Apr, 2018 2 commits
  12. 14 Mar, 2018 2 commits
  13. 27 Feb, 2018 3 commits
  14. 22 Feb, 2018 1 commit
  15. 31 Jan, 2018 1 commit
  16. 29 Jan, 2018 2 commits
    • Optimize SMCCC_ARCH_WORKAROUND_1 on Cortex A57/A72/A73 and A75 · 1d6d47a8
      Dimitris Papastamos authored
      
      
      This patch implements a fast path for this SMC call on affected PEs:
      the call is detected early and returns immediately after executing the
      workaround.
      
      NOTE: The MMU disable/enable workaround now assumes that the MMU was
      enabled on entry to EL3.  This is a valid assumption as the code turns
      on the MMU after reset and leaves it on until the core powers off.
      
      Change-Id: I13c336d06a52297620a9760fb2461b4d606a30b3
      Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
    • Optimize/cleanup BPIALL workaround · d9bd656c
      Dimitris Papastamos authored
      
      
      In the initial implementation of this workaround we used a dedicated
      workaround context to save/restore state.  This patch reduces the
      footprint as no additional context is needed.
      
      Additionally, this patch reduces the memory loads and stores by 20%,
      reduces the instruction count and exploits static branch prediction to
      optimize the SMC path.
      
      Change-Id: Ia9f6bf06fbf8a9037cfe7f1f1fb32e8aec38ec7d
      Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
  17. 19 Jan, 2018 1 commit
  18. 18 Jan, 2018 4 commits
  19. 11 Jan, 2018 3 commits
    • Add hooks to save/restore AMU context for Cortex A75 · 53bfb94e
      Dimitris Papastamos authored
      
      
      Change-Id: I504d3f65ca5829bc1f4ebadb764931f8379ee81f
      Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
    • Use PFR0 to identify need for mitigation of CVE-2017-5715 · 780edd86
      Dimitris Papastamos authored
      
      
      If the CSV2 field reads as 1, branch targets trained in one context
      cannot affect speculative execution in a different context. In that
      case, the workaround is skipped on Cortex A75.
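
      A sketch of the check, assuming the field in question is
      ID_AA64PFR0_EL1.CSV2 (bits [59:56]):

      ```c
      #include <stdbool.h>
      #include <stdint.h>

      /* Skip the workaround when CSV2 reads as 1: branch targets trained in
       * one context then cannot affect speculation in another context. */
      static inline bool cve_2017_5715_workaround_needed(void)
      {
              uint64_t pfr0;

              __asm__ volatile("mrs %0, id_aa64pfr0_el1" : "=r"(pfr0));
              return ((pfr0 >> 56) & 0xFULL) != 0x1ULL;
      }
      ```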
      
      Change-Id: I4d5504cba516a67311fb5f0657b08f72909cbd38
      Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
    • Workaround for CVE-2017-5715 on Cortex A73 and A75 · a1781a21
      Dimitris Papastamos authored
      
      
      Invalidate the Branch Target Buffer (BTB) on entry to EL3 by
      temporarily dropping into AArch32 Secure-EL1 and executing the
      `BPIALL` instruction.
      
      This is achieved by using 3 vector tables: the runtime vector table,
      which is used to handle exceptions, and 2 additional tables, `vbar0`
      and `vbar1`, which are required to implement this workaround.
      
      The sequence of events for handling a single exception is
      as follows:
      
      1) Install vector table `vbar0` which saves the CPU context on entry
         to EL3 and sets up the Secure-EL1 context to execute in AArch32 mode
         with the MMU disabled and I$ enabled.  This is the default vector table.
      
      2) Before doing an ERET into Secure-EL1, switch vbar to point to
         another vector table `vbar1`.  This is required to restore EL3 state
         when returning from the workaround, before proceeding with normal EL3
         exception handling.
      
      3) While in Secure-EL1, the `BPIALL` instruction is executed and an
         SMC call back to EL3 is performed.
      
      4) On entry to EL3 from Secure-EL1, the saved context from step 1) is
         restored.  The vbar is switched to point to `vbar0` in preparation to
         handle further exceptions.  Finally a branch to the runtime vector
         table entry is taken to complete the handling of the original
         exception.
      
      This workaround is enabled by default on the affected CPUs.
      
      NOTE
      ====
      
      There are 4 different stubs in Secure-EL1.  Each stub corresponds to
      an exception type such as Sync/IRQ/FIQ/SError.  Each stub moves a
      different value into `R0` before making an SMC call back into EL3.
      Without this piece of information it would not be possible to know
      what the original exception type was as we cannot use `ESR_EL3` to
      distinguish between IRQs and FIQs.
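
      A sketch of one such Secure-EL1 stub, compiled for AArch32; the
      exception-type parameter is a placeholder (the real stubs each
      hard-code one value), and control does not normally return here since
      EL3 restores the saved context and branches to the runtime vectors:

      ```c
      #include <stdint.h>

      static inline void secure_el1_stub(uint32_t exception_type)
      {
              register uint32_t r0 __asm__("r0") = exception_type;

              __asm__ volatile(
                      "mcr p15, 0, %1, c7, c5, 6\n\t"  /* BPIALL */
                      "isb\n\t"
                      "smc #0"                         /* back to EL3, type in R0 */
                      : "+r"(r0)
                      : "r"(0)
                      : "memory");
      }
      ```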
      
      Change-Id: I90b32d14a3735290b48685d43c70c99daaa4b434
      Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>