1. 12 Nov, 2019 1 commit
    • Coding guideline suggests not to use unsigned long · 9afe8cdc
      Deepika Bhavnani authored
      
      `unsigned long` should be replaced with either:
      1. `unsigned int` or `unsigned long long` - if the value has a fixed
      width, chosen according to the architecture (AArch32 or AArch64), or
      2. `u_register_t` - if it is supposed to be 32 bits wide on AArch32
      and 64 bits wide on AArch64.
      
      Translation descriptors are always 32 bits wide, so here `uint32_t`
      is used to describe the exact size of the translation descriptors
      instead of `unsigned int`, which only guarantees a minimum of 32 bits.
      Signed-off-by: Deepika Bhavnani <deepika.bhavnani@arm.com>
      Change-Id: I6a2af2e8b3c71170e2634044e0b887f07a41677e
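      A small sketch of the rule of thumb above; the declarations are illustrative only, not the code touched by this commit:

      ```c
      #include <stdint.h>

      /* Fixed width: a translation descriptor is exactly 32 bits, so use a
       * fixed-width type rather than unsigned int, which only guarantees a
       * minimum of 32 bits. */
      uint32_t xlat_desc;

      /* Fixed 64-bit quantity: unsigned long long (or uint64_t) rather than
       * unsigned long, whose width differs between the two ABIs. */
      unsigned long long ticks;

      /* Native register width (32-bit on AArch32, 64-bit on AArch64):
       * u_register_t, e.g.  u_register_t sctlr = read_sctlr();  */
      ```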
  2. 24 Oct, 2019 1 commit
  3. 20 Oct, 2019 1 commit
    • Disable stack protection explicitly · 7af195e2
      Simon South authored
      
      Explicitly disable stack protection via the "-fno-stack-protector"
      compiler option when the ENABLE_STACK_PROTECTOR build option is
      set to "none" (the default).
      
      This allows the build to complete without link errors on systems where
      stack protection is enabled by default in the compiler.
      
      Change-Id: I0a676aa672815235894fb2cd05fa2b196fabb972
      Signed-off-by: Simon South <simon@simonsouth.net>
  4. 18 Oct, 2019 1 commit
    • xlat_table_v2: Fix enable WARMBOOT_ENABLE_DCACHE_EARLY config · 0e7a0540
      Artsem Artsemenka authored
      
      The WARMBOOT_ENABLE_DCACHE_EARLY option allows caches to be turned on early
      during boot, but the xlat_change_mem_attributes_ctx() API did not perform
      the required cache maintenance after the translation tables were modified
      when WARMBOOT_ENABLE_DCACHE_EARLY was enabled. This meant that when the
      caches were turned off during power down, the tables in memory were accessed
      as part of the power-down cache maintenance, and the tables were not correct
      at that point, which resulted in a data abort.
      This patch removes the optimization within xlat_change_mem_attributes_ctx()
      when WARMBOOT_ENABLE_DCACHE_EARLY is enabled.
      Signed-off-by: Artsem Artsemenka <artsem.artsemenka@arm.com>
      Change-Id: I82de3decba87dd13e9856b5f3620a1c8571c8d87
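      A sketch of the idea behind the fix (not the actual xlat_change_mem_attributes_ctx() code; clean_dcache_range(), dsbish(), tlbivmalle1is() and isb() are assumed TF-A-style helpers, declared here only for the sketch):

      ```c
      #include <stddef.h>
      #include <stdint.h>

      /* Assumed TF-A-style helpers, declared here for the sketch. */
      void clean_dcache_range(uintptr_t addr, size_t size);
      void dsbish(void);
      void tlbivmalle1is(void);
      void isb(void);

      /* After changing a live descriptor, write the update back to memory
       * unconditionally so the in-memory tables stay correct even if the
       * caches are later turned off during power down. */
      static void update_desc(uint64_t *desc_ptr, uint64_t new_desc)
      {
              *desc_ptr = new_desc;

              /* Previously skipped when WARMBOOT_ENABLE_DCACHE_EARLY was set;
               * the fix performs the maintenance in all cases. */
              clean_dcache_range((uintptr_t)desc_ptr, sizeof(*desc_ptr));
              dsbish();
              tlbivmalle1is();        /* discard stale cached translations */
              dsbish();
              isb();
      }
      ```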
  5. 04 Oct, 2019 2 commits
    • Neoverse N1 Errata Workaround 1542419 · 80942622
      laurenw-arm authored
      
      The coherent I-cache causes a prefetch violation: when the core
      executes an instruction that has recently been modified, the core might
      fetch a stale instruction, which violates the ordering of instruction
      fetches.
      
      The workaround consists of an instruction sequence that writes
      implementation-defined registers to trap all EL0 IC IVAU instructions to
      EL3, and a trap handler that executes a TLB inner-shareable invalidation
      to an arbitrary address followed by a DSB.
      Signed-off-by: Lauren Wehrmeister <lauren.wehrmeister@arm.com>
      Change-Id: Ic3b7cbb11cf2eaf9005523ef5578a372593ae4d6
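      Illustrative only: the essence of such a trap handler is a TLB inner-shareable invalidate by VA to an arbitrary address followed by a DSB. The helper names below are assumed TF-A-style, and the implementation-defined register writes that install the trap are not shown.

      ```c
      #include <stdint.h>

      void tlbivaae1is(uint64_t va);  /* TLBI VAAE1IS - assumed TF-A-style helper */
      void dsbish(void);              /* DSB ISH - assumed TF-A-style helper */

      /* Hypothetical body of the handler run when an EL0 IC IVAU traps to EL3. */
      static void n1_1542419_ic_ivau_trap_sketch(void)
      {
              tlbivaae1is(0x0ULL);    /* invalidate by an arbitrary address */
              dsbish();
              /* The caller then returns from the trap. */
      }
      ```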
    • Fix the CAS spinlock implementation · c97cba4e
      Soby Mathew authored
      
      Make the spinlock implementation use the ARMv8.1-LSE CAS instruction based
      on a platform build option. The CAS-based implementation used to be
      selected unconditionally for all ARMv8.1+ platforms.
      
      The previous CAS spinlock implementation had a bug: the spin_unlock()
      implementation issued an `sev` after the `stlr`, which is not sufficient,
      as a `dsb` is needed to ensure that the `stlr` completes prior to the
      `sev`. A `dsb` is heavyweight, however, and a better solution is to use
      load-exclusive semantics to monitor the lock and wake up from `wfe` when
      a store to the lock happens. This patch implements exactly that.
      
      Change-Id: I5283ce4a889376e4cc01d1b9d09afa8229a2e522
      Signed-off-by: Soby Mathew <soby.mathew@arm.com>
      Signed-off-by: Olivier Deprez <olivier.deprez@arm.com>
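      A minimal AArch64-only sketch of the load-exclusive/WFE scheme described above (GCC inline assembly; this is not TF-A's actual spinlock.S and the lock-word layout is assumed):

      ```c
      #include <stdint.h>

      typedef struct spinlock {
              volatile uint32_t lock;
      } spinlock_t;

      static inline void spin_lock_sketch(spinlock_t *s)
      {
              uint32_t locked, status;

              do {
                      __asm__ volatile("sevl");       /* arm the first WFE */
                      do {
                              __asm__ volatile("wfe");
                              /* LDAXR sets the exclusive monitor on the lock word. */
                              __asm__ volatile("ldaxr %w0, [%1]"
                                               : "=r"(locked)
                                               : "r"(&s->lock) : "memory");
                      } while (locked != 0U);
                      __asm__ volatile("stxr %w0, %w2, [%1]"
                                       : "=&r"(status)
                                       : "r"(&s->lock), "r"(1U) : "memory");
              } while (status != 0U);
      }

      static inline void spin_unlock_sketch(spinlock_t *s)
      {
              /* STLR releases the lock; clearing the waiters' exclusive monitor
               * generates the wake-up event, so no DSB + SEV pair is needed. */
              __asm__ volatile("stlr wzr, [%0]" : : "r"(&s->lock) : "memory");
      }
      ```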
  6. 03 Oct, 2019 2 commits
  7. 02 Oct, 2019 1 commit
  8. 30 Sep, 2019 1 commit
  9. 26 Sep, 2019 2 commits
    • AArch32: Disable Secure Cycle Counter · c3e8b0be
      Alexei Fedorov authored
      
      This patch changes the implementation for disabling the Secure Cycle
      Counter. For ARMv8.5 the counter is disabled by setting the
      SDCR.SCCD bit on CPU cold/warm boot. For earlier
      architectures the PMCR register is saved/restored on Secure
      world entry/exit from/to the Non-secure state, and cycle counting
      is disabled by setting the PMCR.DP bit.
      New ARMv8.5-PMU related definitions were added to the
      'include/aarch32/arch.h' header file.
      
      Change-Id: Ia8845db2ebe8de940d66dff479225a5b879316f8
      Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
    • Adding new optional PSCI hook pwr_domain_on_finish_late · 10107707
      Madhukar Pappireddy authored
      
      This PSCI hook is similar to pwr_domain_on_finish but is
      guaranteed to be invoked while the respective core and cluster are
      participating in coherency. This is necessary to safely invoke
      the new GICv3 API, which modifies shared GIC data structures concurrently.
      
      Change-Id: I8e54f05c9d4ef5712184c9c18ba45ac97a29eb7a
      Signed-off-by: Madhukar Pappireddy <madhukar.pappireddy@arm.com>
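      A sketch of how a platform might wire up the new hook; the plat_foo_* names are hypothetical, while plat_psci_ops_t, psci_power_state_t and the hook names come from the PSCI framework:

      ```c
      #include <lib/psci/psci.h>

      /* Hypothetical platform callbacks. */
      static void plat_foo_pwr_domain_on_finish(const psci_power_state_t *target_state)
      {
              (void)target_state;
              /* Per-core setup that must not touch shared GIC structures. */
      }

      static void plat_foo_pwr_domain_on_finish_late(const psci_power_state_t *target_state)
      {
              (void)target_state;
              /* Runs after this core and its cluster have joined coherency,
               * so concurrent updates to shared GIC data are safe here. */
      }

      static const plat_psci_ops_t plat_foo_psci_ops = {
              .pwr_domain_on_finish           = plat_foo_pwr_domain_on_finish,
              .pwr_domain_on_finish_late      = plat_foo_pwr_domain_on_finish_late,
              /* ...remaining hooks omitted for brevity... */
      };
      ```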
  10. 20 Sep, 2019 1 commit
    • Fix MTE support from causing unused variable warnings · 019b03a3
      Justin Chadwell authored
      
      assert() calls are removed in release builds, and if an assert call is
      the only use of a variable, an unused-variable warning is triggered
      in a release build. This patch fixes the problem for
      CTX_INCLUDE_MTE_REGS by not using an intermediate variable to store the
      result of get_armv8_5_mte_support().
      
      Change-Id: I529e10ec0b2c8650d2c3ab52c4f0cecc0b3a670e
      Signed-off-by: Justin Chadwell <justin.chadwell@arm.com>
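      A sketch of the before/after shape of the fix (only get_armv8_5_mte_support() is named by the commit; the compared-against value is a stand-in):

      ```c
      #include <assert.h>

      unsigned int get_armv8_5_mte_support(void);     /* TF-A helper */

      void before_sketch(void)
      {
              /* Only consumed by assert(), so release (NDEBUG) builds warn
               * about an unused variable once assert() compiles away. */
              unsigned int mte = get_armv8_5_mte_support();

              assert(mte != 0U);      /* 0U stands in for "unimplemented" */
      }

      void after_sketch(void)
      {
              /* Call the helper inside the assertion instead. */
              assert(get_armv8_5_mte_support() != 0U);
      }
      ```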
  11. 13 Sep, 2019 2 commits
    • SCTLR and ACTLR are 32-bit for AArch32 and 64-bit for AArch64 · eeb5a7b5
      Deepika Bhavnani authored
      
      AArch64 System register SCTLR_EL1[31:0] is architecturally mapped
      to AArch32 System register SCTLR[31:0], and
      AArch64 System register ACTLR_EL1[31:0] is architecturally mapped
      to AArch32 System register ACTLR[31:0].
      
      `u_register_t` should be used when it is important to store the
      contents of a register in its native size.
      Signed-off-by: Deepika Bhavnani <deepika.bhavnani@arm.com>
      Change-Id: I0055422f8cc0454405e011f53c1c4ddcaceb5779
    • Refactor ARMv8.3 Pointer Authentication support code · ed108b56
      Alexei Fedorov authored
      
      This patch provides the following features and makes the modifications
      listed below:
      - Individual APIAKey key generation for each CPU.
      - New key generation on every BL31 warm boot and TSP CPU On event.
      - Per-CPU storage of APIAKey added in percpu_data[]
        of the cpu_data structure.
      - `plat_init_apiakey()` function replaced with `plat_init_apkey()`,
        which returns a 128-bit value and uses the Generic Timer physical
        counter value to increase the randomness of the generated key.
        The new function can be used for generation of all ARMv8.3-PAuth keys.
      - ARMv8.3-PAuth specific code placed in `lib/extensions/pauth`.
      - New `pauth_init_enable_el1()` and `pauth_init_enable_el3()` functions
        generate, program and enable APIAKey_EL1 for EL1 and EL3 respectively;
        `pauth_disable_el1()` and `pauth_disable_el3()` functions disable
        PAuth for EL1 and EL3 respectively;
        `pauth_load_bl31_apiakey()` loads the saved per-CPU APIAKey_EL1 from
        the cpu_data structure.
      - Combined `save_gp_pauth_registers()` function replaces calls to
        `save_gp_registers()` and `pauth_context_save()`;
        `restore_gp_pauth_registers()` replaces `pauth_context_restore()`
        and `restore_gp_registers()` calls.
      - `restore_gp_registers_eret()` function removed, with the corresponding
        code placed in `el3_exit()`.
      - Fixed the issue where the `pauth_t pauth_ctx` structure allocated space
        for 12 uint64_t PAuth registers instead of 10, by removing the macro
        CTX_PACGAKEY_END from `include/lib/el3_runtime/aarch64/context.h`
        and assigning its value to CTX_PAUTH_REGS_END.
      - Use of the MODE_SP_ELX and MODE_SP_EL0 macro definitions
        in the `msr spsel` instruction instead of hard-coded values.
      - Changes in documentation related to ARMv8.3-PAuth and ARMv8.5-BTI.
      
      Change-Id: Id18b81cc46f52a783a7e6a09b9f149b6ce803211
      Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
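      A sketch of the per-CPU key programming described above. plat_init_apkey() is the function named by this patch; the sysreg accessors, the SCTLR_EL3.EnIA bit position and the use of unsigned __int128 for the 128-bit value are assumptions:

      ```c
      #include <stdint.h>

      unsigned __int128 plat_init_apkey(void);        /* returns a 128-bit key */
      void write_apiakeylo_el1(uint64_t v);           /* assumed accessors */
      void write_apiakeyhi_el1(uint64_t v);
      uint64_t read_sctlr_el3(void);
      void write_sctlr_el3(uint64_t v);
      void isb(void);

      #define SCTLR_EnIA_BIT  (1ULL << 31)            /* assumed bit position */

      /* Generate and program a fresh APIAKey for this CPU, then enable
       * instruction-address authentication at EL3. */
      static void pauth_init_enable_el3_sketch(void)
      {
              unsigned __int128 key = plat_init_apkey();

              write_apiakeylo_el1((uint64_t)key);
              write_apiakeyhi_el1((uint64_t)(key >> 64));
              write_sctlr_el3(read_sctlr_el3() | SCTLR_EnIA_BIT);
              isb();
      }
      ```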
  12. 12 Sep, 2019 1 commit
  13. 11 Sep, 2019 2 commits
  14. 09 Sep, 2019 2 commits
  15. 21 Aug, 2019 1 commit
    • AArch64: Disable Secure Cycle Counter · e290a8fc
      Alexei Fedorov authored
      
      This patch fixes an issue where Secure world timing information
      could be leaked because the Secure Cycle Counter was not disabled.
      For ARMv8.5 the counter is disabled by setting the MDCR_EL3.SCCD
      bit on CPU cold/warm boot.
      For earlier architectures the PMCR_EL0 register is saved/restored
      on Secure world entry/exit from/to the Non-secure state, and cycle
      counting is disabled by setting the PMCR_EL0.DP bit.
      The 'include/aarch64/arch.h' header file was tidied up and new
      ARMv8.5-PMU related definitions were added.
      
      Change-Id: I6f56db6bc77504634a352388990ad925a69ebbfa
      Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
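      A sketch of the two cases described above (accessor names are assumed TF-A-style; the SCCD and DP bit positions follow the Arm architecture definitions):

      ```c
      #include <stdint.h>

      uint64_t read_mdcr_el3(void);           /* assumed TF-A-style accessors */
      void write_mdcr_el3(uint64_t v);
      uint64_t read_pmcr_el0(void);
      void write_pmcr_el0(uint64_t v);

      #define MDCR_SCCD_BIT   (1ULL << 23)    /* ARMv8.5: Secure Cycle Count Disable */
      #define PMCR_EL0_DP_BIT (1ULL << 5)     /* disable cycle counter when
                                               * event counting is prohibited */

      /* ARMv8.5 and later: disable Secure-state cycle counting outright,
       * on cold/warm boot. */
      static void disable_scc_v85_sketch(void)
      {
              write_mdcr_el3(read_mdcr_el3() | MDCR_SCCD_BIT);
      }

      /* Earlier architectures: on Secure world entry, save PMCR_EL0 and stop
       * Secure cycle counting; the saved value is restored on exit. */
      static uint64_t enter_secure_world_sketch(void)
      {
              uint64_t saved = read_pmcr_el0();

              write_pmcr_el0(saved | PMCR_EL0_DP_BIT);
              return saved;
      }
      ```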
  16. 19 Aug, 2019 1 commit
  17. 16 Aug, 2019 2 commits
    • Coverity fix: Remove GCC ignore -Warray-bounds · 41af0515
      Deepika Bhavnani authored
      
      GCC diagnostics had been added to ignore array-bounds warnings. Instead
      of ignoring the GCC warning, the code now checks the array bounds
      and performs the array update only for valid elements.
      
      Resolves: `CID 246574` `CID 246710` `CID 246651`
      Signed-off-by: Deepika Bhavnani <deepika.bhavnani@arm.com>
      Change-Id: I7530ecf7a1707351c6ee87e90cc3d33574088f57
    • FVP_Base_AEMv8A platform: Fix cache maintenance operations · ef430ff4
      Alexei Fedorov authored
      
      This patch fixes a hang of the FVP_Base_AEMv8A model in ARMv8.4+
      configurations with cache modelling enabled.
      The incorrect L1 cache flush operation to the PoU, based on the
      CLIDR_EL1 LoUIS field (which the architecture requires to be
      zero for ARMv8.4-A with the ARMv8.4-S2FWB feature), is replaced
      with L1-to-L2 and L2-to-L3 (if L3 is present) cache flushes.
      The FVP_Base_AEMv8A model can be configured with L3 enabled by
      setting `cluster0.l3cache-size` and `cluster1.l3cache-size`
      to non-zero values, and the presence of L3 is checked in the
      `aem_generic_core_pwr_dwn` function by reading the
      CLIDR_EL1.Ctype3 field value.
      
      Change-Id: If3de3d4eb5ed409e5b4ccdbc2fe6d5a01894a9af
      Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
  18. 06 Aug, 2019 3 commits
    • Fix Coverity #261967, Infinite loop · 9624c0a9
      Justin Chadwell authored
      
      Coverity has identified that the __aeabi_imod function will loop forever
      if the denominator is not a power of 2, which is probably not the
      desired behaviour.
      
      The other functions in the file are compiler-support implementations of
      division for ARMv7 cores that do not implement hardware division, which
      the spec permits. However, while most of the functions in the file are
      documented and referenced elsewhere online, __aeabi_uimod and __aeabi_imod
      are not. For this reason, these functions have been removed from the
      code base, which also removes the Coverity error.
      
      Change-Id: I20066d72365329a8b03a5536d865c4acaa2139ae
      Signed-off-by: Justin Chadwell <justin.chadwell@arm.com>
    • Fix Coverity #343008, Side effect in assertion · 4249e8b9
      Justin Chadwell authored
      
      This patch simply splits off the increment of next_xlat into a separate
      statement to ensure consistent behaviour if the assert were ever to be
      removed.
      
      Change-Id: I827f601ccea55f4da9442048419c9b8cc0c5d22e
      Signed-off-by: Justin Chadwell <justin.chadwell@arm.com>
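      A sketch of the change (the bound name and surrounding function are illustrative; next_xlat is the variable named by the commit):

      ```c
      #include <assert.h>

      #define MAX_XLAT_TABLES 8       /* illustrative bound */

      static unsigned int next_xlat;

      void allocate_table_sketch(void)
      {
              /* Before: assert(next_xlat++ < MAX_XLAT_TABLES);
               * The increment disappears along with the assert in NDEBUG builds. */
              assert(next_xlat < MAX_XLAT_TABLES);
              next_xlat++;
      }
      ```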
    • Fix Coverity #342970, Uninitialized scalar variable · dbff5263
      Justin Chadwell authored
      
      This ensures that probe_data starts with a reasonable default, as
      opposed to whatever was left on the stack.
      
      Change-Id: I5550efea5e2bec7717f9fa063cb11e6a7005cce5
      Signed-off-by: Justin Chadwell <justin.chadwell@arm.com>
  19. 01 Aug, 2019 2 commits
    • Switch AARCH32/AARCH64 to __aarch64__ · 402b3cf8
      Julius Werner authored
      
      NOTE: AARCH32/AARCH64 macros are now deprecated in favor of __aarch64__.
      
      All common C compilers pre-define the same macros to signal which
      architecture the code is being compiled for: __arm__ for AArch32 (or
      earlier versions) and __aarch64__ for AArch64. There's no need for TF-A
      to define its own custom macros for this. In order to unify code with
      the export headers (which use __aarch64__ to avoid another dependency),
      let's deprecate the AARCH32 and AARCH64 macros and switch the code base
      over to the pre-defined standard macro. (Since it is somewhat
      unintuitive that __arm__ only means AArch32, let's standardize on only
      using __aarch64__.)
      
      Change-Id: Ic77de4b052297d77f38fc95f95f65a8ee70cf200
      Signed-off-by: Julius Werner <jwerner@chromium.org>
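      A small sketch of the conditional-compilation change (macro names other than __aarch64__ and AARCH64 are illustrative):

      ```c
      /* Before (illustrative): a macro TF-A's build system had to define. */
      #ifdef AARCH64
      #define REGISTER_WIDTH_BITS     64
      #endif

      /* After: the compiler's predefined macro, with no extra -D required. */
      #ifdef __aarch64__
      #define NATIVE_POINTER_BITS     64
      #else
      #define NATIVE_POINTER_BITS     32
      #endif
      ```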
    • Replace __ASSEMBLY__ with compiler-builtin __ASSEMBLER__ · d5dfdeb6
      Julius Werner authored
      
      NOTE: __ASSEMBLY__ macro is now deprecated in favor of __ASSEMBLER__.
      
      All common C compilers predefine a macro called __ASSEMBLER__ when
      preprocessing a .S file. There is no reason for TF-A to define its own
      __ASSEMBLY__ macro for this purpose instead. To unify code with the
      export headers (which use __ASSEMBLER__ to avoid one extra dependency),
      let's deprecate __ASSEMBLY__ and switch the code base over to the
      predefined standard.
      
      Change-Id: Id7d0ec8cf330195da80499c68562b65cb5ab7417
      Signed-off-by: Julius Werner <jwerner@chromium.org>
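      The same idea applies to headers shared between C and assembly sources; a sketch with illustrative content:

      ```c
      /* shared_defs.h (illustrative header shared by C and .S sources) */
      #define FOO_MAGIC       0x600DU         /* visible to both languages */

      #ifndef __ASSEMBLER__
      /* C-only declarations, hidden from the assembler's preprocessing pass.
       * This guard was previously written as #ifndef __ASSEMBLY__. */
      #include <stdint.h>
      uint32_t foo_get_magic(void);
      #endif
      ```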
  20. 31 Jul, 2019 1 commit
  21. 22 Jul, 2019 1 commit
    • Romlib makefile refactoring and script rewriting · d8210dc6
      Imre Kis authored
      
      The features of the previously existing gentbl, genvar and genwrappers
      scripts were reimplemented in the romlib_generator.py Python script.
      This results in more readable and maintainable code, and the script
      introduces additional features that help with dependency handling in
      makefiles. The assembly templates were separated from the script logic
      and collected in the 'templates' directory.
      
      The targets and their dependencies were reorganized in the makefile, and
      dependency handling of included index files is now possible.
      Incremental builds are available when the index files are modified.
      Signed-off-by: Imre Kis <imre.kis@arm.com>
      Change-Id: I79f65fab9dc5c70d1f6fc8f57b2a3009bf842dc5
  22. 18 Jul, 2019 1 commit
    • Introduce lightweight BL platform parameter library · b852d229
      Julius Werner authored
      
      This patch adds some common helper code to support a lightweight
      platform parameter passing framework between BLs that has already been
      used on Rockchip platforms but is more widely useful to others as well.
      It can be used as an implementation for the SoC firmware configuration
      file mentioned in the docs, and is primarily intended for platforms
      that only require a handful of values to be passed and want to get by
      without a libfdt dependency. Parameters are stored in a linked list and
      the parameter space is split into generic and vendor-specific parameter
      types. Generic types will be handled by this code whereas
      vendor-specific types have to be handled by a vendor-specific handler
      function that gets passed in.
      
      Change-Id: If3413d44e86b99d417294ce8d33eb2fc77a6183f
      Signed-off-by: Julius Werner <jwerner@chromium.org>
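      A sketch of the kind of linked-list parameter structure and generic/vendor split described above (all names here are illustrative, not the actual API added by this patch):

      ```c
      #include <stddef.h>
      #include <stdint.h>

      /* Illustrative parameter node: a type tag plus a link to the next node.
       * Generic types live in one range, vendor-specific types in another. */
      #define PARAM_VENDOR_FIRST      0x10000ULL

      struct bl_param_header {
              uint64_t type;
              uint64_t next;  /* address of the next header; 0 ends the list */
      };

      typedef int (*vendor_handler_t)(const struct bl_param_header *p);

      /* Walk the list, handling generic types in common code and deferring
       * vendor-specific ones to the handler passed in by the platform. */
      static void parse_params_sketch(uint64_t head, vendor_handler_t vendor_handler)
      {
              const struct bl_param_header *p =
                      (const struct bl_param_header *)(uintptr_t)head;

              while (p != NULL) {
                      if (p->type >= PARAM_VENDOR_FIRST)
                              (void)vendor_handler(p);
                      /* else: generic types handled by the common code */

                      p = (const struct bl_param_header *)(uintptr_t)p->next;
              }
      }
      ```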
  23. 16 Jul, 2019 1 commit
  24. 12 Jul, 2019 2 commits
  25. 10 Jul, 2019 1 commit
  26. 02 Jul, 2019 4 commits