1. 11 Jul, 2018 1 commit
    • Add end_vector_entry assembler macro · a9203eda
      Roberto Vargas authored
      
      
      The check_vector_size macro checks whether the size of a vector
      entry fits in the space reserved for it. This check creates
      problems with the Clang assembler, so a new macro, end_vector_entry,
      is added and check_vector_size is deprecated.
      
      The new macro pads the current exception vector entry up to the
      start of the next one. If the current entry is larger than
      32 instructions, it raises an assembly error.
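      
      As a rough illustration only (the real check lives in an assembler
      macro, and these names are hypothetical), the slot arithmetic the
      macro relies on can be sketched in C:
      
          #include <stddef.h>
          
          /* Each AArch64 exception vector entry occupies a fixed slot of
           * 32 instructions (32 * 4 = 128 bytes). Given the bytes emitted
           * for one entry, return the padding needed to reach the next
           * slot, or -1 where the macro would raise an assembly error. */
          #define VECTOR_ENTRY_SLOT_BYTES (32 * 4)
          
          static long vector_entry_padding(size_t entry_bytes)
          {
                  if (entry_bytes > VECTOR_ENTRY_SLOT_BYTES)
                          return -1;      /* entry exceeds 32 instructions */
                  return (long)(VECTOR_ENTRY_SLOT_BYTES - entry_bytes);
          }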
      
      Change-Id: Ie8545cf1003a1e31656a1018dd6b4c28a4eaf671
      Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
  2. 15 May, 2018 1 commit
    • Workaround for CVE-2017-5715 on NVIDIA Denver CPUs · b0301467
      Varun Wadekar authored
      
      
      Flush the indirect branch predictor and RSB on entry to EL3 by issuing
      a newly added instruction for Denver CPUs. Support for this operation
      can be determined by comparing bits 19:16 of ID_AFR0_EL1 with 0b0001.
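      
      A minimal sketch of that feature check, assuming a GCC/Clang AArch64
      build running at EL3 (the helper name is hypothetical):
      
          #include <stdbool.h>
          #include <stdint.h>
          
          /* Hypothetical helper: read ID_AFR0_EL1 and report whether bits
           * [19:16] are 0b0001, i.e. the Denver branch-predictor/RSB flush
           * instruction is implemented on this CPU. */
          static bool denver_bp_flush_supported(void)
          {
                  uint64_t id_afr0;
          
                  __asm__ volatile("mrs %0, id_afr0_el1" : "=r"(id_afr0));
                  return ((id_afr0 >> 16) & 0xfU) == 0x1U;
          }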
      
      To achieve this without performing any branch instruction, a per-cpu
      vbar is installed which executes the workaround and then branches off
      to the corresponding vector entry in the main vector table. A side
      effect of this change is that the main vbar is configured before any
      reset handling. This is to allow the per-cpu reset function to override
      the vbar setting.
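      
      A minimal sketch of the per-cpu vbar installation, with hypothetical
      names ('denver_workaround_vectors' stands in for the per-cpu vector
      table added here), assuming EL3 and a GCC/Clang AArch64 build:
      
          #include <stdint.h>
          
          /* Per-cpu vector table that runs the workaround and then branches
           * to the matching entry in the main vector table (hypothetical). */
          extern char denver_workaround_vectors[];
          
          static void install_percpu_vbar(void)
          {
                  uint64_t vbar = (uint64_t)denver_workaround_vectors;
          
                  /* Point VBAR_EL3 at the per-cpu vectors and synchronize. */
                  __asm__ volatile("msr vbar_el3, %0\n\t"
                                   "isb" : : "r"(vbar));
          }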
      
      Change-Id: Ief493cd85935bab3cfee0397e856db5101bc8011
      Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
  3. 03 May, 2017 1 commit
  4. 28 Feb, 2017 1 commit
  5. 23 Feb, 2017 1 commit
  6. 22 Feb, 2017 1 commit
  7. 15 Dec, 2016 1 commit
    • Add provision to extend CPU operations at more levels · 5dd9dbb5
      Jeenu Viswambharan authored
      
      
      Various CPU drivers in ARM Trusted Firmware register functions to handle
      power-down operations. At present, separate functions are registered to
      power down individual cores and clusters.
      
      This scheme operates on the basis of core and cluster, and doesn't cater
      for extending the hierarchy for power-down operations. For example,
      future CPUs might support multiple threads which might need powering
      down individually.
      
      This patch therefore reworks the CPU operations framework to allow
      power-down handlers to be registered on a per-level basis. Henceforth:
      
        - Generic code invokes CPU power down operations by the level
          required.
      
        - CPU drivers explicitly mention CPU_NO_RESET_FUNC when the CPU has no
          reset function.
      
        - CPU drivers register power down handlers as a list: a mandatory
          handler for level 0, and optional handlers for higher levels (see
          the sketch after this list).
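      
      A minimal C sketch of the idea behind the per-level handler list (the
      actual registration in Trusted Firmware is done through assembler
      macros; these names are hypothetical):
      
          #include <stddef.h>
          
          /* One power-down handler per power level: index 0 is the core
           * (mandatory), index 1 the cluster, and so on (optional, NULL
           * when absent). */
          #define MAX_PWR_DOWN_LEVELS 2
          
          typedef void (*pwr_down_handler_t)(void);
          
          struct cpu_pwr_down_ops {
                  pwr_down_handler_t handlers[MAX_PWR_DOWN_LEVELS];
          };
          
          /* Generic code invokes the handler registered for the level it
           * needs, if the driver provided one. */
          static void cpu_power_down(const struct cpu_pwr_down_ops *ops,
                                     unsigned int level)
          {
                  if (level < MAX_PWR_DOWN_LEVELS &&
                      ops->handlers[level] != NULL)
                          ops->handlers[level]();
          }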
      
      All existing CPU drivers are adapted to the new CPU operations
      framework without requiring any functional changes.
      
      Also update firmware design guide.
      
      Change-Id: I1826842d37a9e60a9e85fdcee7b4b8f6bc1ad043
      Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
  8. 24 Jul, 2015 1 commit
    • Add "Project Denver" CPU support · 3a8c55f6
      Varun Wadekar authored
      
      
      Denver is NVIDIA's own custom-designed, 64-bit, dual-core CPU which is
      fully ARMv8 architecture compatible.  Each of the two Denver cores
      implements a 7-way superscalar microarchitecture (up to 7 concurrent
      micro-ops can be executed per clock), and includes a 128KB 4-way L1
      instruction cache, a 64KB 4-way L1 data cache, and a 2MB 16-way L2
      cache, which services both cores.
      
      Denver implements an innovative process called Dynamic Code Optimization,
      which optimizes frequently used software routines at runtime into dense,
      highly tuned microcode-equivalent routines. These are stored in a
      dedicated, 128MB main-memory-based optimization cache. Once read
      into the instruction cache, the optimized micro-ops are executed and
      re-fetched from the instruction cache for as long as they are needed
      and capacity allows.
      
      Effectively, this reduces the need to re-optimize the software routines.
      Instead of using hardware to extract the instruction-level parallelism
      (ILP) inherent in the code, Denver extracts the ILP once via software
      techniques, and then executes those routines repeatedly, thus amortizing
      the cost of ILP extraction over the many execution instances.
      
      Denver also features new low latency power-state transitions, in addition
      to extensive power-gating and dynamic voltage and clock scaling based on
      workloads.
      
      Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>