1. 26 Sep, 2019 1 commit
  2. 13 Sep, 2019 2 commits
    • SCTLR and ACTLR are 32-bit for AArch32 and 64-bit for AArch64 · eeb5a7b5
      Deepika Bhavnani authored
      
      
      AArch64 System register SCTLR_EL1[31:0] is architecturally mapped
      to AArch32 System register SCTLR[31:0], and AArch64 System register
      ACTLR_EL1[31:0] is architecturally mapped to AArch32 System register
      ACTLR[31:0].
      
      `u_register_t` should be used when it is important to store the
      contents of a register in its native size.
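      A minimal illustration of the point above, assuming the usual TF-A
      accessor names (`read_sctlr_el1()` / `read_sctlr()`); this is a sketch,
      not the actual patch:
      ```c
      #include <stdint.h>

      #ifdef __aarch64__
      typedef uint64_t u_register_t;            /* native width on AArch64 */
      extern u_register_t read_sctlr_el1(void); /* assumed accessor name */
      #define read_sctlr_native() read_sctlr_el1()
      #else
      typedef uint32_t u_register_t;            /* native width on AArch32 */
      extern u_register_t read_sctlr(void);     /* assumed accessor name */
      #define read_sctlr_native() read_sctlr()
      #endif

      /* The same code keeps the full register contents on either architecture. */
      u_register_t save_sctlr(void)
      {
      	return read_sctlr_native();
      }
      ```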
      Signed-off-by: Deepika Bhavnani <deepika.bhavnani@arm.com>
      Change-Id: I0055422f8cc0454405e011f53c1c4ddcaceb5779
    • Refactor ARMv8.3 Pointer Authentication support code · ed108b56
      Alexei Fedorov authored
      
      
      This patch provides the following features and makes modifications
      listed below:
      - Individual APIAKey key generation for each CPU.
      - New key generation on every BL31 warm boot and TSP CPU On event.
      - Per-CPU storage of APIAKey added in percpu_data[]
        of cpu_data structure.
      - `plat_init_apiakey()` function replaced with `plat_init_apkey()`,
        which returns a 128-bit value and uses the Generic Timer physical
        counter value to increase the randomness of the generated key.
        The new function can be used for generation of all ARMv8.3-PAuth
        keys (see the sketch after this list).
      - ARMv8.3-PAuth specific code placed in `lib/extensions/pauth`.
      - New `pauth_init_enable_el1()` and `pauth_init_enable_el3()` functions
        generate, program and enable APIAKey_EL1 for EL1 and EL3 respectively;
        `pauth_disable_el1()` and `pauth_disable_el3()` functions disable
        PAuth for EL1 and EL3 respectively;
        `pauth_load_bl31_apiakey()` loads saved per-CPU APIAKey_EL1 from
        cpu-data structure.
      - Combined `save_gp_pauth_registers()` function replaces calls to
        `save_gp_registers()` and `pauth_context_save()`;
        `restore_gp_pauth_registers()` replaces `pauth_context_restore()`
        and `restore_gp_registers()` calls.
      - `restore_gp_registers_eret()` function removed with corresponding
        code placed in `el3_exit()`.
      - Fixed an issue where the `pauth_t pauth_ctx` structure allocated space
        for 12 uint64_t PAuth registers instead of 10, by removing the macro
        CTX_PACGAKEY_END from `include/lib/el3_runtime/aarch64/context.h`
        and assigning its value to CTX_PAUTH_REGS_END.
      - Use of MODE_SP_ELX and MODE_SP_EL0 macro definitions
        in `msr	spsel`  instruction instead of hard-coded values.
      - Changes in documentation related to ARMv8.3-PAuth and ARMv8.5-BTI.
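      A rough sketch of what a platform key-generation hook along these lines
      might look like, assuming the `uint128_t` return type and a
      `read_cntpct_el0()` counter accessor; this is illustrative only, not the
      actual TF-A implementation:
      ```c
      #include <stdint.h>

      typedef __uint128_t uint128_t;

      /* Assumed accessor for the Generic Timer physical counter. */
      extern uint64_t read_cntpct_el0(void);

      uint128_t plat_init_apkey(void)
      {
      	/* Mix in the counter value so the key differs between boots and CPUs. */
      	uint64_t counter = read_cntpct_el0();
      	uint64_t seed    = (uint64_t)(uintptr_t)&counter;

      	uint64_t key_lo = seed ^ counter;
      	uint64_t key_hi = key_lo ^ (counter << 32);

      	return ((uint128_t)key_hi << 64) | key_lo;
      }
      ```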
      
      Change-Id: Id18b81cc46f52a783a7e6a09b9f149b6ce803211
      Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
  3. 12 Sep, 2019 1 commit
  4. 11 Sep, 2019 2 commits
  5. 09 Sep, 2019 2 commits
  6. 21 Aug, 2019 1 commit
    • AArch64: Disable Secure Cycle Counter · e290a8fc
      Alexei Fedorov authored
      
      
      This patch fixes an issue where secure world timing information
      can be leaked because the Secure Cycle Counter is not disabled.
      For ARMv8.5 the counter gets disabled by setting the MDCR_EL3.SCCD
      bit on CPU cold/warm boot.
      For earlier architectures the PMCR_EL0 register is saved/restored
      on secure world entry/exit from/to Non-secure state, and cycle
      counting gets disabled by setting the PMCR_EL0.DP bit.
      The 'include/aarch64/arch.h' header file was tidied up and new
      ARMv8.5-PMU related definitions were added.
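      A simplified sketch of the two mechanisms described above; the bit
      positions follow the Arm ARM, but the helper and feature-check names are
      assumptions rather than the actual patch:
      ```c
      #include <stdbool.h>
      #include <stdint.h>

      #define MDCR_SCCD_BIT	(1ULL << 23)	/* MDCR_EL3.SCCD (ARMv8.5) */
      #define PMCR_EL0_DP_BIT	(1ULL << 5)	/* PMCR_EL0.DP */

      extern uint64_t read_mdcr_el3(void);
      extern void write_mdcr_el3(uint64_t val);
      extern uint64_t read_pmcr_el0(void);
      extern void write_pmcr_el0(uint64_t val);
      extern bool is_armv8_5_pmu_present(void);	/* hypothetical feature check */

      static void disable_secure_cycle_counting(void)
      {
      	if (is_armv8_5_pmu_present()) {
      		/* ARMv8.5: disable cycle counting in Secure state outright. */
      		write_mdcr_el3(read_mdcr_el3() | MDCR_SCCD_BIT);
      	} else {
      		/* Earlier cores: stop cycle counting in prohibited regions;
      		 * PMCR_EL0 itself is saved/restored across world switches. */
      		write_pmcr_el0(read_pmcr_el0() | PMCR_EL0_DP_BIT);
      	}
      }
      ```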
      
      Change-Id: I6f56db6bc77504634a352388990ad925a69ebbfa
      Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
  7. 19 Aug, 2019 1 commit
  8. 16 Aug, 2019 2 commits
    • Coverity fix: Remove GCC ignore -Warray-bounds · 41af0515
      Deepika Bhavnani authored
      
      
      GCC diagnostic pragmas had been added to suppress the array-bounds
      warning. Instead of ignoring the GCC warning, the code now checks the
      array boundaries and performs an array update only for valid elements.
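      A generic illustration of the pattern (not the actual TF-A code):
      validate the index rather than silencing the warning:
      ```c
      #include <stddef.h>

      #define ARRAY_SIZE(a)	(sizeof(a) / sizeof((a)[0]))

      static unsigned int table[8];

      static void table_update(size_t idx, unsigned int value)
      {
      	/* Only touch the array for indices that are provably in range,
      	 * instead of suppressing -Warray-bounds with a GCC pragma. */
      	if (idx < ARRAY_SIZE(table)) {
      		table[idx] = value;
      	}
      }
      ```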
      
      Resolves: `CID 246574` `CID 246710` `CID 246651`
      Signed-off-by: Deepika Bhavnani <deepika.bhavnani@arm.com>
      Change-Id: I7530ecf7a1707351c6ee87e90cc3d33574088f57
    • FVP_Base_AEMv8A platform: Fix cache maintenance operations · ef430ff4
      Alexei Fedorov authored
      
      
      This patch fixes an FVP_Base_AEMv8A model hang seen in ARMv8.4+
      configurations with cache modelling enabled.
      The incorrect L1 cache flush operation to the PoU, based on the
      CLIDR_EL1 LoUIS field (which the architecture requires to be zero
      for ARMv8.4-A with the ARMv8.4-S2FWB feature), is replaced with L1
      to L2 and L2 to L3 (if L3 is present) cache flushes.
      The FVP_Base_AEMv8A model can be configured with L3 enabled by
      setting `cluster0.l3cache-size` and `cluster1.l3cache-size`
      to non-zero values, and the presence of L3 is checked in the
      `aem_generic_core_pwr_dwn` function by reading the
      CLIDR_EL1.Ctype3 field value.
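      A C-style sketch of the flush sequence described above; the real code
      lives in the model's assembly power-down handler, and the accessor names
      and the DCCISW encoding here are assumptions:
      ```c
      #include <stdbool.h>
      #include <stdint.h>

      #define DCCISW			0x1U	/* clean & invalidate by set/way (assumed encoding) */
      #define CLIDR_CTYPE3_SHIFT	6	/* CLIDR_EL1.Ctype3 is bits [8:6] */
      #define CLIDR_CTYPE_MASK	0x7ULL

      extern uint64_t read_clidr_el1(void);
      extern void dcsw_op_level1(unsigned int op);
      extern void dcsw_op_level2(unsigned int op);
      extern void dcsw_op_level3(unsigned int op);

      static void aem_pwr_dwn_cache_flush(void)
      {
      	bool l3_present =
      		((read_clidr_el1() >> CLIDR_CTYPE3_SHIFT) & CLIDR_CTYPE_MASK) != 0ULL;

      	dcsw_op_level1(DCCISW);		/* flush L1 into L2 */
      	dcsw_op_level2(DCCISW);		/* flush L2 into L3 (or memory) */
      	if (l3_present)
      		dcsw_op_level3(DCCISW);	/* flush L3 only if the model has one */
      }
      ```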
      
      Change-Id: If3de3d4eb5ed409e5b4ccdbc2fe6d5a01894a9af
      Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
  9. 06 Aug, 2019 3 commits
    • Fix Coverity #261967, Infinite loop · 9624c0a9
      Justin Chadwell authored
      
      
      Coverity has identified that the __aeabi_imod function will loop forever
      if the denominator is not a power of 2, which is probably not the
      desired behaviour.
      
      The other functions in this file are compiler-support implementations
      of division, required because ARMv7 is permitted by the spec not to
      implement hardware division. However, while most of the functions in the file are documented
      and referenced in other places online, __aeabi_uimod and __aeabi_imod
      are not. For this reason, these functions have been removed from the
      code base, which also removes the Coverity error.
      
      Change-Id: I20066d72365329a8b03a5536d865c4acaa2139ae
      Signed-off-by: Justin Chadwell <justin.chadwell@arm.com>
    • Fix Coverity #343008, Side effect in assertion · 4249e8b9
      Justin Chadwell authored
      
      
      This patch simply splits off the increment of next_xlat into a separate
      statement, to ensure consistent behaviour if the assert were ever
      removed.
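      A generic illustration of the hazard and the fix; the function name and
      limit are placeholders, not the exact TF-A code:
      ```c
      #include <assert.h>

      extern unsigned int next_xlat;	/* stands in for the real counter */

      void grow_xlat_tables_bad(void)
      {
      	/* Bad: the increment disappears if NDEBUG compiles the assert out. */
      	assert(++next_xlat < 16U);
      }

      void grow_xlat_tables_good(void)
      {
      	/* Good: the increment is its own statement; only the check is
      	 * conditional on assertions being enabled. */
      	next_xlat++;
      	assert(next_xlat < 16U);
      }
      ```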
      
      Change-Id: I827f601ccea55f4da9442048419c9b8cc0c5d22e
      Signed-off-by: Justin Chadwell <justin.chadwell@arm.com>
    • Fix Coverity #342970, Uninitialized scalar variable · dbff5263
      Justin Chadwell authored
      
      
      This ensures that probe_data starts with a reasonable default, as
      opposed to whatever was left on the stack.
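      The general pattern, for illustration only (the real type and default of
      `probe_data` may differ):
      ```c
      #include <stdint.h>

      /* Initialise at the point of declaration so the value is defined even
       * on paths where the probe never writes it. */
      static uint32_t probe_data = 0U;
      ```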
      
      Change-Id: I5550efea5e2bec7717f9fa063cb11e6a7005cce5
      Signed-off-by: Justin Chadwell <justin.chadwell@arm.com>
  10. 01 Aug, 2019 2 commits
    • Switch AARCH32/AARCH64 to __aarch64__ · 402b3cf8
      Julius Werner authored
      
      
      NOTE: AARCH32/AARCH64 macros are now deprecated in favor of __aarch64__.
      
      All common C compilers pre-define the same macros to signal which
      architecture the code is being compiled for: __arm__ for AArch32 (or
      earlier versions) and __aarch64__ for AArch64. There's no need for TF-A
      to define its own custom macros for this. In order to unify code with
      the export headers (which use __aarch64__ to avoid another dependency),
      let's deprecate the AARCH32 and AARCH64 macros and switch the code base
      over to the pre-defined standard macro. (Since it is somewhat
      unintuitive that __arm__ only means AArch32, let's standardize on only
      using __aarch64__.)
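      Typical usage after the switch, for illustration:
      ```c
      #ifdef __aarch64__
      /* AArch64-only code path (previously guarded by the TF-A AARCH64 macro). */
      #else
      /* AArch32 code path (previously guarded by AARCH32). */
      #endif
      ```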
      
      Change-Id: Ic77de4b052297d77f38fc95f95f65a8ee70cf200
      Signed-off-by: Julius Werner <jwerner@chromium.org>
    • Replace __ASSEMBLY__ with compiler-builtin __ASSEMBLER__ · d5dfdeb6
      Julius Werner authored
      
      
      NOTE: __ASSEMBLY__ macro is now deprecated in favor of __ASSEMBLER__.
      
      All common C compilers predefine a macro called __ASSEMBLER__ when
      preprocessing a .S file. There is no reason for TF-A to define its own
      __ASSEMBLY__ macro for this purpose instead. To unify code with the
      export headers (which use __ASSEMBLER__ to avoid one extra dependency),
      let's deprecate __ASSEMBLY__ and switch the code base over to the
      predefined standard.
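      A typical shared-header pattern after the switch, for illustration (the
      constant and prototype are hypothetical examples):
      ```c
      /* Shared between C and assembly: the constant is visible to both,
       * while C-only declarations are hidden from the assembler pass. */
      #define MY_CONSTANT	0x10

      #ifndef __ASSEMBLER__
      #include <stdint.h>
      uint32_t my_c_only_function(void);
      #endif
      ```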
      
      Change-Id: Id7d0ec8cf330195da80499c68562b65cb5ab7417
      Signed-off-by: Julius Werner <jwerner@chromium.org>
  11. 31 Jul, 2019 1 commit
  12. 22 Jul, 2019 1 commit
    • Romlib makefile refactoring and script rewriting · d8210dc6
      Imre Kis authored
      
      
      The features of the previously existing gentbl, genvar and genwrappers
      scripts were reimplemented in the romlib_generator.py Python script.
      This resulted in more readable and maintainable code and the script
      introduces additional features that help dependency handling in
      makefiles. The assembly templates were separated from the script logic
      and were collected in the 'templates' directory.
      
      The targets and their dependencies were reorganized in the makefile,
      and dependency handling of included index files is now possible.
      Incremental builds work when the index files are modified.
      Signed-off-by: Imre Kis <imre.kis@arm.com>
      Change-Id: I79f65fab9dc5c70d1f6fc8f57b2a3009bf842dc5
  13. 18 Jul, 2019 1 commit
    • Introduce lightweight BL platform parameter library · b852d229
      Julius Werner authored
      
      
      This patch adds some common helper code to support a lightweight
      platform parameter passing framework between BLs that has already been
      used on Rockchip platforms but is more widely useful to others as well.
      It can be used as an implementation for the SoC firmware configuration
      file mentioned in the docs, and is primarily intended for platforms
      that only require a handful of values to be passed and want to get by
      without a libfdt dependency. Parameters are stored in a linked list and
      the parameter space is split into generic and vendor-specific parameter
      types. Generic types will be handled by this code whereas
      vendor-specific types have to be handled by a vendor-specific handler
      function that gets passed in.
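      A conceptual sketch of such a parameter-list walk; the structure layout,
      field names and function names here are assumptions, not the actual
      TF-A API:
      ```c
      #include <stdbool.h>
      #include <stddef.h>
      #include <stdint.h>

      struct param_header {
      	uint64_t type;	/* generic or vendor-specific parameter type */
      	uint64_t next;	/* address of the next parameter; 0 terminates */
      };

      /* Returns true if the vendor handler consumed the parameter. */
      typedef bool (*vendor_param_handler_t)(const struct param_header *param);

      static void parse_params(uintptr_t head, vendor_param_handler_t vendor_handler)
      {
      	const struct param_header *p = (const struct param_header *)head;

      	while (p != NULL) {
      		/* Vendor-specific types go to the platform's handler first;
      		 * anything it does not consume is treated as a generic type. */
      		if ((vendor_handler == NULL) || !vendor_handler(p)) {
      			/* ... handle generic parameter types here ... */
      		}
      		p = (const struct param_header *)(uintptr_t)p->next;
      	}
      }
      ```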
      
      Change-Id: If3413d44e86b99d417294ce8d33eb2fc77a6183f
      Signed-off-by: Julius Werner <jwerner@chromium.org>
  14. 16 Jul, 2019 1 commit
  15. 12 Jul, 2019 2 commits
  16. 10 Jul, 2019 1 commit
  17. 02 Jul, 2019 10 commits
  18. 20 Jun, 2019 1 commit
  19. 11 Jun, 2019 1 commit
  20. 06 Jun, 2019 3 commits
    • PSCI: Lookup list of parent nodes to lock only once · 74d27d00
      Andrew F. Davis authored
      
      
      When acquiring or releasing the power domain locks for a given CPU, the
      parent nodes are looked up by walking up the PD tree on both the
      acquire and the release path, but only one set of lookups is needed.
      Fetch the parent nodes once and pass the list into both the acquire and
      release functions to avoid the double lookup.
      
      This also means the lookup no longer has to happen after coherency has
      been exited during the core power-down sequence. The shared struct
      psci_cpu_pd_nodes is not placed in coherent memory, as is done
      for psci_non_cpu_pd_nodes, and doing so would negatively affect
      performance. With this patch we remove the need to have it in coherent
      memory by moving the access out of psci_release_pwr_domain_locks().
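      Schematically, the flow described above looks roughly like this; the
      signatures follow the PSCI code loosely and are not exact:
      ```c
      /* Placeholder depth; the real code sizes this by the platform's
       * maximum power level. */
      #define MAX_PWR_LVL	3U

      extern void psci_get_parent_pwr_domain_nodes(unsigned int cpu_idx,
      					     unsigned int end_pwrlvl,
      					     unsigned int *node_index);
      extern void psci_acquire_pwr_domain_locks(unsigned int end_pwrlvl,
      					  const unsigned int *parent_nodes);
      extern void psci_release_pwr_domain_locks(unsigned int end_pwrlvl,
      					  const unsigned int *parent_nodes);

      void cpu_power_down(unsigned int cpu_idx, unsigned int end_pwrlvl)
      {
      	unsigned int parent_nodes[MAX_PWR_LVL] = {0};

      	/* Walk the PD tree once, before taking any locks... */
      	psci_get_parent_pwr_domain_nodes(cpu_idx, end_pwrlvl, parent_nodes);

      	/* ...then reuse the same list on both paths. */
      	psci_acquire_pwr_domain_locks(end_pwrlvl, parent_nodes);
      	/* ... power state coordination and power down ... */
      	psci_release_pwr_domain_locks(end_pwrlvl, parent_nodes);
      }
      ```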
      Signed-off-by: Andrew F. Davis <afd@ti.com>
      Change-Id: I7b9cfa9d31148dea0f5e21091c8b45ef7fe4c4ab
    • Neoverse N1: Introduce workaround for Neoverse N1 erratum 1315703 · 5f5d0763
      Andre Przywara authored
      Neoverse N1 erratum 1315703 is a Cat A (rare) erratum [1], present in
      older revisions of the Neoverse N1 processor core.
      The workaround is to set a bit in the implementation defined CPUACTLR2_EL1
      system register, which will disable the load-bypass-store feature.
      
      [1] http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.pjdocpjdoc-466751330-1032/index.html
      
      
      
      Change-Id: I5c708dbe0efa4daa0bcb6bd9622c5efe19c03af9
      Signed-off-by: Andre Przywara <andre.przywara@arm.com>
    • ti: k3: common: Remove coherency workaround for AM65x · 48d6b264
      Andrew F. Davis authored
      
      
      We previously left our caches on during power-down to prevent any
      non-caching accesses to memory that is cached by other cores. Now
      that the last-accessed areas are all marked as non-cached by
      USE_COHERENT_MEM, we can rely on that to work around our
      interconnect issues. Remove the old workaround.
      
      Change-Id: Idadb7696d1449499d1edff4f6f62ab3b99d1efb7
      Signed-off-by: Andrew F. Davis <afd@ti.com>
  21. 04 Jun, 2019 1 commit
    • Apply compile-time check for AArch64-only cores · 629d04f5
      John Tsichritzis authored
      
      
      Some cores support only AArch64 mode. In those cores, only a limited
      subset of the AArch32 system registers are implemented. Hence, if TF-A
      is supposed to run on AArch64-only cores, it must be compiled with
      CTX_INCLUDE_AARCH32_REGS=0.
      
      Currently, TF-A is compiled by default with the AArch32 system
      registers included. So, if we compile TF-A the default way and
      attempt to run it on an AArch64-only core, the failure only shows
      up as a runtime panic.
      
      Now a compile-time check has been added to ensure that this flag has the
      appropriate value when AArch64-only cores are included in the build.
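      Schematically, the guard described above amounts to something like the
      following; the actual check may live in the build system or CPU library,
      and `AARCH64_ONLY_CPU` is a hypothetical flag used only for illustration:
      ```c
      /* Refuse to build AArch32 context-saving support into an image that
       * includes an AArch64-only core. */
      #if defined(AARCH64_ONLY_CPU) && (CTX_INCLUDE_AARCH32_REGS == 1)
      #error "CTX_INCLUDE_AARCH32_REGS=1 is not supported on AArch64-only cores"
      #endif
      ```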
      
      Change-Id: I298ec550037fafc9347baafb056926d149197d4c
      Signed-off-by: John Tsichritzis <john.tsichritzis@arm.com>