- 03 Mar, 2020 1 commit
-
-
Max Shvetsov authored
This patch adds EL2 registers that are supported up to ARMv8.6. ARM_ARCH_MINOR has to be specified to enable the save/restore routine.

Note: the following registers are still not covered in save/restore:
* AMEVCNTVOFF0<n>_EL2
* AMEVCNTVOFF1<n>_EL2
* ICH_AP0R<n>_EL2
* ICH_AP1R<n>_EL2
* ICH_LR<n>_EL2

Change-Id: I4813f3243e56e21cb297b31ef549a4b38d4876e1 Signed-off-by: Max Shvetsov <maksims.svecovs@arm.com>
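A minimal C sketch of the build-time guard idea, assuming TF-A's ARM_ARCH_MAJOR/ARM_ARCH_MINOR build defines; the struct, the chosen register and the helper are illustrative, not the actual EL2 context layout used by the patch.

```c
/*
 * Illustrative only: newer EL2 registers are guarded at build time so the
 * save routine only touches registers the target architecture implements.
 */
#include <stdint.h>

typedef struct el2_sysregs_sketch {
	uint64_t hcr_el2;
#if ARM_ARCH_MAJOR > 8 || (ARM_ARCH_MAJOR == 8 && ARM_ARCH_MINOR >= 6)
	uint64_t hdfgrtr_el2;	/* example: an ARMv8.6 fine-grained trap register */
#endif
} el2_sysregs_sketch_t;

static inline uint64_t read_hcr_el2_sketch(void)
{
	uint64_t v;
	__asm__ volatile("mrs %0, hcr_el2" : "=r"(v));
	return v;
}

static void save_el2_sysregs_sketch(el2_sysregs_sketch_t *ctx)
{
	ctx->hcr_el2 = read_hcr_el2_sketch();
	/* ... remaining registers are saved the same way ... */
}
```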
-
- 02 Mar, 2020 1 commit
-
-
Max Shvetsov authored
NOTE: Not all EL2 system registers are saved/restored. This subset includes registers recognized by ARMv8.0. Change-Id: I9993c7d78d8f5f8e72d1c6c8d6fd871283aa3ce0 Signed-off-by: Jose Marinho <jose.marinho@arm.com> Signed-off-by: Olivier Deprez <olivier.deprez@arm.com> Signed-off-by: Artsem Artsemenka <artsem.artsemenka@arm.com> Signed-off-by: Max Shvetsov <maksims.svecovs@arm.com>
-
- 04 Feb, 2020 1 commit
-
-
Zelalem authored
This patch removes unnecessary header file includes discovered by Coverity HFA option. Change-Id: I2827c37c1c24866c87db0e206e681900545925d4 Signed-off-by: Zelalem <zelalem.aweke@arm.com>
-
- 28 Jan, 2020 1 commit
-
-
Louis Mayencourt authored
The Secure Configuration Register is 64 bits wide in AArch64 and 32 bits wide in AArch32. Use u_register_t instead of unsigned int to reflect this. Change-Id: I51b69467baba36bf0cfaec2595dc8837b1566934 Signed-off-by: Louis Mayencourt <louis.mayencourt@arm.com>
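A small sketch of why the native-width type matters here; the u_register_t typedef below mirrors the one TF-A provides, and the AArch32 accessor uses the CP15 SCR encoding.

```c
/*
 * Sketch: u_register_t follows the native general-purpose register width,
 * so the same prototype works for the 64-bit SCR_EL3 (AArch64) and the
 * 32-bit SCR (AArch32).
 */
#include <stdint.h>

typedef uintptr_t u_register_t;	/* 64-bit on AArch64, 32-bit on AArch32 */

#ifdef __aarch64__
static inline u_register_t read_scr(void)
{
	u_register_t v;
	__asm__ volatile("mrs %0, scr_el3" : "=r"(v));
	return v;
}
#else
static inline u_register_t read_scr(void)
{
	u_register_t v;
	/* AArch32 SCR: MRC p15, 0, <Rt>, c1, c1, 0 */
	__asm__ volatile("mrc p15, 0, %0, c1, c1, 0" : "=r"(v));
	return v;
}
#endif
```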
-
- 27 Jan, 2020 1 commit
-
-
Madhukar Pappireddy authored
Since the BL31 PROGBITS and BL31 NOBITS sections are going to be in non-adjacent memory regions, potentially far from each other, some fixes are needed to support this completely.

1. The adr instruction only allows computing the effective address of a location within a 1MB range of the PC. The adrp instruction together with an add, however, permits position-independent addressing of any location within a 4GB range of the PC (see the sketch after this list).
2. Since BL31 __RW_END__ marks the end of the BL31 image, care must be taken that it is aligned to the page size, since this memory region is mapped in BL31 using the xlat_v2 library utilities, which mandate alignment of the image size to page granularity.

Change-Id: I3451cc030d03cb2032db3cc088f0c0e2c84bffda Signed-off-by: Madhukar Pappireddy <madhukar.pappireddy@arm.com>
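A small C-with-inline-assembly sketch of the addressing point in item 1; __RW_END__ is assumed here to be the linker-provided end-of-image symbol, and the wrapper function is invented for the illustration.

```c
/*
 * adr reaches only +/-1MB from the PC, while adrp + add with a :lo12:
 * relocation reaches +/-4GB, which is what a split PROGBITS/NOBITS layout
 * may need.
 */
#include <stdint.h>

extern char __RW_END__[];	/* assumed linker symbol */

static inline uintptr_t bl31_rw_end_pic(void)
{
	uintptr_t addr;

	__asm__ volatile(
		"adrp	%0, __RW_END__\n\t"		/* 4KB page of the symbol, +/-4GB range */
		"add	%0, %0, :lo12:__RW_END__"	/* low 12 bits of the symbol */
		: "=r"(addr));
	return addr;
}
```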
-
- 22 Jan, 2020 3 commits
-
-
Mark Dykes authored
This reverts commit 76d84cbc. Change-Id: I867af7af3d9f5e568101f79b9ebea578e5cb2a4b
-
Anthony Steinhauser authored
Even though ERET always causes a jump to another address, AArch64 CPUs speculatively execute the following instructions as if the ERET instruction were not a jump. This speculative execution does not cross privilege levels (to the jump target, as one would expect), but continues at the kernel privilege level as if the ERET instruction did not change the control flow - thus executing anything that happens to be linked after the ERET instruction. The results of this speculative execution are always architecturally discarded later, but they can leak data through microarchitectural side channels. The speculative execution is very reliable (it appears to be unconditional) and manages to complete even relatively performance-heavy operations (e.g. multiple dependent fetches from uncached memory).

This was fixed in Linux, FreeBSD, OpenBSD and OP-TEE OS:
https://github.com/torvalds/linux/commit/679db70801da9fda91d26caf13bf5b5ccc74e8e8
https://github.com/freebsd/freebsd/commit/29fb48ace4186a41c409fde52bcf4216e9e50b61
https://github.com/openbsd/src/commit/3a08873ece1cb28ace89fd65e8f3c1375cc98de2
https://github.com/OP-TEE/optee_os/commit/abfd092aa19f9c0251e3d5551e2d68a9ebcfec8a

It is demonstrated in a SafeSide example:
https://github.com/google/safeside/blob/master/demos/eret_hvc_smc_wrapper.cc
https://github.com/google/safeside/blob/master/kernel_modules/kmod_eret_hvc_smc/eret_hvc_smc_module.c

Signed-off-by: Anthony Steinhauser <asteinhauser@google.com> Change-Id: Iead39b0b9fb4b8d8b5609daaa8be81497ba63a0f
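A hedged sketch of the barrier sequence adopted by the fixes linked above: a speculation barrier placed immediately after ERET, so that straight-line speculation past the exception return hits the barrier instead of whatever bytes happen to follow. The macro name is invented for the illustration; real firmware does this in its exception-return assembly path.

```c
/* Illustrative only: ERET followed by a speculation barrier. */
#define EXCEPTION_RETURN_WITH_SB		\
	__asm__ volatile(			\
		"eret\n\t"			\
		"dsb	nsh\n\t"		\
		"isb"				\
		: : : "memory")
```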
-
Madhukar Pappireddy authored
Since BL31 PROGBITS and BL31 NOBITS sections are going to be in non-adjacent memory regions, potentially far from each other, some fixes are needed to support it completely. 1. adr instruction only allows computing the effective address of a location only within 1MB range of the PC. However, adrp instruction together with an add permits position independent address of any location with 4GB range of PC. 2. Since BL31 _RW_END_ marks the end of BL31 image, care must be taken that it is aligned to page size since we map this memory region in BL31 using xlat_v2 lib utils which mandate alignment of image size to page granularity. Change-Id: Ic745c5a130fe4239fa2742142d083b2bdc4e8b85 Signed-off-by: Madhukar Pappireddy <madhukar.pappireddy@arm.com>
-
- 18 Dec, 2019 1 commit
-
-
Jan Dabros authored
The EA handlers for exceptions taken from lower ELs invoke the el3_exit function at the end. However, there was a bug in SP maintenance which resulted in el3_exit setting the runtime stack pointer to the context pointer. This in turn caused memory corruption on subsequent EL3 entries. Signed-off-by: Jan Dabros <jsd@semihalf.com> Change-Id: I0424245c27c369c864506f4baa719968890ce659
-
- 06 Dec, 2019 2 commits
-
-
Artsem Artsemenka authored
Check, during context setup, that entry point information requesting S-EL2 has AArch64 as the execution state. Signed-off-by: Artsem Artsemenka <artsem.artsemenka@arm.com> Change-Id: I447263692fed6e55c1b076913e6eb73b1ea735b7
-
Achin Gupta authored
This patch adds support for enabling S-EL2 if this EL is specified in the entry point information being used to initialise a secure context. It is the caller's responsibility to check if S-EL2 is available on the system before requesting this EL through the entry point information. Signed-off-by: Achin Gupta <achin.gupta@arm.com> Change-Id: I2752964f078ab528b2e80de71c7d2f35e60569e1
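A minimal sketch of the mechanism, assuming the SCR_EL3.EEL2 bit position (18) from the Arm ARM; the helper is illustrative rather than TF-A's actual cm_setup_context() logic.

```c
/*
 * Sketch: enabling S-EL2 for a secure context amounts to setting
 * SCR_EL3.EEL2 in the context's saved SCR_EL3 value before first entry.
 */
#include <stdint.h>

#define SCR_EEL2_BIT	(1ULL << 18)

static uint64_t setup_secure_scr_el3(uint64_t scr_el3, int s_el2_requested)
{
	if (s_el2_requested) {
		/* Caller is expected to have checked that S-EL2 is implemented. */
		scr_el3 |= SCR_EEL2_BIT;
	}
	return scr_el3;
}
```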
-
- 26 Sep, 2019 1 commit
-
-
Alexei Fedorov authored
This patch changes the implementation for disabling the Secure Cycle Counter. For ARMv8.5 the counter gets disabled by setting the SDCR.SCCD bit on CPU cold/warm boot. For earlier architectures, the PMCR register is saved/restored on secure world entry/exit from/to Non-secure state, and cycle counting gets disabled by setting the PMCR.DP bit. New ARMv8.5-PMU related definitions were added to the 'include/aarch32/arch.h' header file. Change-Id: Ia8845db2ebe8de940d66dff479225a5b879316f8 Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
-
- 20 Sep, 2019 1 commit
-
-
Justin Chadwell authored
assert() calls are removed in release builds, and if an assert call is the only use of a variable, an unused variable warning is triggered in a release build. This patch fixes the problem when CTX_INCLUDE_MTE_REGS is enabled by not using an intermediate variable to store the result of get_armv8_5_mte_support(). Change-Id: I529e10ec0b2c8650d2c3ab52c4f0cecc0b3a670e Signed-off-by: Justin Chadwell <justin.chadwell@arm.com>
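A small sketch of the pattern the fix uses; the helper prototype and the MTE_UNIMPLEMENTED value are assumed here for illustration only.

```c
/*
 * In a release build assert() compiles away, so a variable used only inside
 * assert() triggers an unused-variable warning. Calling the helper directly
 * inside the assert avoids the intermediate variable.
 */
#include <assert.h>

extern unsigned int get_armv8_5_mte_support(void);	/* prototype assumed for the sketch */

#define MTE_UNIMPLEMENTED	0U			/* illustrative value */

static void check_mte(void)
{
	/* Before: unsigned int mte = get_armv8_5_mte_support(); assert(mte != ...); */
	assert(get_armv8_5_mte_support() != MTE_UNIMPLEMENTED);
}
```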
-
- 13 Sep, 2019 2 commits
-
-
Deepika Bhavnani authored
AArch64 System register SCTLR_EL1[31:0] is architecturally mapped to AArch32 System register SCTLR[31:0], and AArch64 System register ACTLR_EL1[31:0] is architecturally mapped to AArch32 System register ACTLR[31:0]. `u_register_t` should be used when it is important to store the contents of a register in its native size. Signed-off-by: Deepika Bhavnani <deepika.bhavnani@arm.com> Change-Id: I0055422f8cc0454405e011f53c1c4ddcaceb5779
-
Alexei Fedorov authored
This patch provides the following features and makes the modifications listed below:
- Individual APIAKey key generation for each CPU.
- New key generation on every BL31 warm boot and TSP CPU On event.
- Per-CPU storage of APIAKey added in percpu_data[] of the cpu_data structure.
- `plat_init_apiakey()` function replaced with `plat_init_apkey()`, which returns a 128-bit value and uses the Generic Timer physical counter value to increase the randomness of the generated key. The new function can be used for generation of all ARMv8.3-PAuth keys.
- ARMv8.3-PAuth specific code placed in `lib/extensions/pauth`.
- New `pauth_init_enable_el1()` and `pauth_init_enable_el3()` functions generate, program and enable APIAKey_EL1 for EL1 and EL3 respectively; `pauth_disable_el1()` and `pauth_disable_el3()` functions disable PAuth for EL1 and EL3 respectively; `pauth_load_bl31_apiakey()` loads the saved per-CPU APIAKey_EL1 from the cpu_data structure.
- Combined `save_gp_pauth_registers()` function replaces calls to `save_gp_registers()` and `pauth_context_save()`; `restore_gp_pauth_registers()` replaces the `pauth_context_restore()` and `restore_gp_registers()` calls.
- `restore_gp_registers_eret()` function removed, with the corresponding code placed in `el3_exit()`.
- Fixed the issue where the `pauth_t pauth_ctx` structure allocated space for 12 uint64_t PAuth registers instead of 10, by removing the macro CTX_PACGAKEY_END from `include/lib/el3_runtime/aarch64/context.h` and assigning its value to CTX_PAUTH_REGS_END.
- Use of the MODE_SP_ELX and MODE_SP_EL0 macro definitions in the `msr spsel` instruction instead of hard-coded values.
- Changes in documentation related to ARMv8.3-PAuth and ARMv8.5-BTI.

Change-Id: Id18b81cc46f52a783a7e6a09b9f149b6ce803211 Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
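A toy sketch of the `plat_init_apkey()` interface shape described above: a 128-bit key, shown here as two 64-bit halves, seeded from the Generic Timer physical counter. The mixing is purely illustrative and not cryptographically sound; a real platform hook should use a proper entropy source.

```c
#include <stdint.h>

typedef struct {
	uint64_t lo;
	uint64_t hi;
} apkey_sketch_t;

static inline uint64_t read_cntpct(void)
{
	uint64_t cnt;
	__asm__ volatile("mrs %0, cntpct_el0" : "=r"(cnt));
	return cnt;
}

/* Illustrative per-CPU key generation seeded by the physical counter. */
apkey_sketch_t plat_init_apkey_sketch(void)
{
	apkey_sketch_t key;
	uint64_t seed = read_cntpct();

	/* Weak mixing for illustration only. */
	key.lo = seed ^ 0x9E3779B97F4A7C15ULL;
	key.hi = (seed << 13) ^ (seed >> 7) ^ 0xC2B2AE3D27D4EB4FULL;
	return key;
}
```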
-
- 09 Sep, 2019 1 commit
-
-
Justin Chadwell authored
This patch adds support for the new Memory Tagging Extension arriving in ARMv8.5. MTE support is now enabled by default on systems that support it at EL0. To enable it at ELx for both the Non-secure and the Secure world, the compiler flag CTX_INCLUDE_MTE_REGS must be set; this adds the register saving and restoring needed to prevent register leakage between the worlds. Change-Id: I2d4ea993d6b11654ea0d4757d00ca20d23acf36c Signed-off-by: Justin Chadwell <justin.chadwell@arm.com>
-
- 21 Aug, 2019 1 commit
-
-
Alexei Fedorov authored
This patch fixes an issue where secure world timing information could be leaked because the Secure Cycle Counter was not disabled. For ARMv8.5 the counter gets disabled by setting the MDCR_EL3.SCCD bit on CPU cold/warm boot. For earlier architectures, the PMCR_EL0 register is saved/restored on secure world entry/exit from/to Non-secure state, and cycle counting gets disabled by setting the PMCR_EL0.DP bit. The 'include/aarch64/arch.h' header file was tidied up and new ARMv8.5-PMU related definitions were added. Change-Id: I6f56db6bc77504634a352388990ad925a69ebbfa Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
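A sketch of the two approaches described above, with bit positions taken from the Arm ARM (MDCR_EL3.SCCD is bit 23, PMCR_EL0.DP is bit 5); the helpers are illustrative, not TF-A's actual implementation.

```c
#include <stdint.h>

#define MDCR_SCCD_BIT	(1ULL << 23)	/* ARMv8.5-PMU: prohibit Secure cycle counting */
#define PMCR_DP_BIT	(1ULL << 5)	/* disable cycle counter when counting is prohibited */

static inline uint64_t read_mdcr_el3_sketch(void)
{
	uint64_t v;
	__asm__ volatile("mrs %0, mdcr_el3" : "=r"(v));
	return v;
}

static inline void write_mdcr_el3_sketch(uint64_t v)
{
	__asm__ volatile("msr mdcr_el3, %0" : : "r"(v));
}

/* ARMv8.5+: done once at cold/warm boot. */
static void disable_secure_cycle_counter_v85(void)
{
	write_mdcr_el3_sketch(read_mdcr_el3_sketch() | MDCR_SCCD_BIT);
}

/*
 * Earlier architectures: PMCR_EL0 is saved on Secure entry, PMCR_EL0.DP is
 * set while in the Secure world, and the saved value is restored on exit.
 */
```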
-
- 12 Jul, 2019 1 commit
-
-
Soby Mathew authored
This patch enables MTE for the Normal world if the CPU supports it. Enabling MTE for the secure world will be done later. Change-Id: I9ef64460beaba15e9a9c20ab02da4fb2208b6f7d Signed-off-by: Soby Mathew <soby.mathew@arm.com>
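A minimal sketch of what "enabling MTE for the Normal world" boils down to in context-management terms: setting SCR_EL3.ATA (bit 26, per the Arm ARM) only in the Non-secure context's SCR_EL3 value. The helper is illustrative.

```c
#include <stdbool.h>
#include <stdint.h>

#define SCR_ATA_BIT	(1ULL << 26)	/* allow allocation tag access at lower ELs */

static uint64_t scr_for_world(uint64_t scr_el3, bool non_secure, bool mte_supported)
{
	/* Leaving ATA clear for the Secure world keeps MTE disabled there. */
	if (non_secure && mte_supported)
		scr_el3 |= SCR_ATA_BIT;
	return scr_el3;
}
```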
-
- 14 Mar, 2019 1 commit
-
-
Sandrine Bailleux authored
Instruction key A was incorrectly restored in the instruction key B registers. Change-Id: I4cb81ac72180442c077898509cb696c9d992eda3 Signed-off-by: Sandrine Bailleux <sandrine.bailleux@arm.com>
-
- 28 Feb, 2019 1 commit
-
-
Antonio Nino Diaz authored
Fix some typos and clarify some sentences. Change-Id: Id276d1ced9a991b4eddc5c47ad9a825e6b29ef74 Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
- 27 Feb, 2019 3 commits
-
-
Antonio Nino Diaz authored
The previous commit added the infrastructure to load and save ARMv8.3-PAuth registers during Non-secure <-> Secure world switches, but didn't actually enable pointer authentication in the firmware. This patch adds the functionality needed for platforms to provide authentication keys for the firmware, and a new option (ENABLE_PAUTH) to enable pointer authentication in the firmware itself. This option is disabled by default, and it requires CTX_INCLUDE_PAUTH_REGS to be enabled. Change-Id: I35127ec271e1198d43209044de39fa712ef202a5 Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
Antonio Nino Diaz authored
ARMv8.3-PAuth adds functionality that supports address authentication of the contents of a register before that register is used as the target of an indirect branch, or as a load. The feature is supported only in the AArch64 state, is mandatory in ARMv8.3 implementations, and adds several registers to EL1. A new option called CTX_INCLUDE_PAUTH_REGS has been added to select whether the TF needs to save them during Non-secure <-> Secure world switches. This option must be enabled if the hardware has the registers, or their values will be leaked during world switches. To prevent leaks, this patch also disables pointer authentication in the Secure world if CTX_INCLUDE_PAUTH_REGS is 0. Any attempt to use it will be trapped to EL3. Change-Id: I27beba9907b9a86c6df1d0c5bf6180c972830855 Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
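A hedged sketch of the CTX_INCLUDE_PAUTH_REGS guard: when the option is 1, the PAuth key registers become part of the saved context. The struct and function are illustrative, not TF-A's actual context layout, and the mrs lines assume an ARMv8.3-aware assembler.

```c
#include <stdint.h>

#if CTX_INCLUDE_PAUTH_REGS
typedef struct pauth_regs_sketch {
	uint64_t apiakey_lo;	/* APIAKeyLo_EL1 */
	uint64_t apiakey_hi;	/* APIAKeyHi_EL1 */
	/* ... the APIB, APDA, APDB and APGA key pairs follow ... */
} pauth_regs_sketch_t;

static void pauth_context_save_sketch(pauth_regs_sketch_t *ctx)
{
	__asm__ volatile("mrs %0, apiakeylo_el1" : "=r"(ctx->apiakey_lo));
	__asm__ volatile("mrs %0, apiakeyhi_el1" : "=r"(ctx->apiakey_hi));
}
#endif /* CTX_INCLUDE_PAUTH_REGS */
```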
-
Antonio Nino Diaz authored
Minor style cleanup. Change-Id: Ief19dece41a989e2e8157859a265701549f6c585 Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
- 26 Feb, 2019 1 commit
-
-
Louis Mayencourt authored
The Implicit Error Synchronization Barrier (IESB) might not be correctly generated in Cortex-A75 r0p0. To prevent this, IESBs are enabled at all exception levels. Change-Id: I2a1a568668a31e4f3f38d0fba1d632ad9939e5ad Signed-off-by: Louis Mayencourt <louis.mayencourt@arm.com>
-
- 15 Jan, 2019 1 commit
-
-
Paul Beesley authored
Corrects typos in core code, documentation files, drivers, Arm platforms and services. None of the corrections affect code; changes are limited to comments and other documentation. Change-Id: I5c1027b06ef149864f315ccc0ea473e2a16bfd1d Signed-off-by: Paul Beesley <paul.beesley@arm.com>
-
- 04 Jan, 2019 1 commit
-
-
Antonio Nino Diaz authored
Enforce full include path for includes. Deprecate old paths.

The following folders inside include/lib have been left unchanged:
- include/lib/cpus/${ARCH}
- include/lib/el3_runtime/${ARCH}

The reason for this change is that having a global namespace for includes isn't a good idea. It defeats one of the advantages of having folders and it introduces problems that are sometimes subtle (because you may not know the header you are actually including if there are two of them). For example, this patch had to be created because two headers were called the same way: e0ea0928 ("Fix gpio includes of mt8173 platform to avoid collision."). More recently, this patch has had similar problems: 46f9b2c3 ("drivers: add tzc380 support").

This problem was introduced in commit 4ecca339 ("Move include and source files to logical locations"). At that time, there weren't too many headers so it wasn't a real issue. However, time has shown that this creates problems.

Platforms that want to preserve the way they include headers may add the removed paths to PLAT_INCLUDES, but this is discouraged.

Change-Id: I39dc53ed98f9e297a5966e723d1936d6ccf2fc8f Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
- 01 Nov, 2018 1 commit
-
-
Antonio Nino Diaz authored
The macro EL_IMPLEMENTED() has been deprecated in favour of the new function el_implemented(). Change-Id: Ic9b1b81480b5e019b50a050e8c1a199991bf0ca9 Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
- 29 Oct, 2018 1 commit
-
-
Antonio Nino Diaz authored
No functional changes. Change-Id: I2f28f20944f552447ac4e9e755493cd7c0ea1192 Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
- 16 Oct, 2018 1 commit
-
-
Jeenu Viswambharan authored
Pointer authentication is an Armv8.3 feature that introduces instructions that can be used to authenticate and verify pointers. Pointer authentication instructions are allowed to be accessed from all ELs but only when EL3 explicitly allows for it; otherwise, their usage will trap to EL3. Since EL3 doesn't have trap handling in place, this patch unconditionally disables all related traps to EL3 to avoid potential misconfiguration leading to an unhandled EL3 exception. Fixes ARM-software/tf-issues#629 Change-Id: I9bd2efe0dc714196f503713b721ffbf05672c14d Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
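A small sketch of the trap-disabling described above, using the SCR_EL3.APK (bit 16) and SCR_EL3.API (bit 17) controls from the Arm ARM; the helper name is made up.

```c
#include <stdint.h>

#define SCR_APK_BIT	(1ULL << 16)	/* don't trap PAuth key register accesses to EL3 */
#define SCR_API_BIT	(1ULL << 17)	/* don't trap PAuth instructions to EL3 */

static uint64_t scr_disable_pauth_traps(uint64_t scr_el3)
{
	return scr_el3 | SCR_APK_BIT | SCR_API_BIT;
}
```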
-
- 03 Oct, 2018 1 commit
-
-
Daniel Boulby authored
Mark the initialization functions in BL31, such as context management, EHF, RAS and PSCI, as __init so that they can be reclaimed by the platform when no longer needed. Change-Id: I7446aeee3dde8950b0f410cb766b7a2312c20130 Signed-off-by: Daniel Boulby <daniel.boulby@arm.com>
-
- 30 Aug, 2018 1 commit
-
-
Julius Werner authored
This patch fixes a bug in the context management code that caused it to ignore the HANDLE_EA_EL3_FIRST compile-time option and instead always configure SCR_EL3 to force all external aborts to trap into EL3. The code used #ifdef to read a compile-time option declared with add_define in the Makefile; however, those options are always defined, they're just defined to either 0 or 1, so #if is the correct syntax to check for them. Also update the documentation to match. This bug has existed since the Nov 2017 commit 76454abf (AArch64: Introduce External Abort handling), which changed the HANDLE_EA_EL3_FIRST option to use add_define. Change-Id: I7189f41d0daee78fa2fcf4066323e663e1e04d3d Signed-off-by: Julius Werner <jwerner@chromium.org>
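A minimal illustration of the #ifdef/#if pitfall with options declared via add_define, which are always defined to either 0 or 1:

```c
/* Wrong: the block is compiled in even when the build sets HANDLE_EA_EL3_FIRST=0. */
#ifdef HANDLE_EA_EL3_FIRST
/* ... set SCR_EL3.EA ... */
#endif

/* Right: the block is compiled in only when the build sets HANDLE_EA_EL3_FIRST=1. */
#if HANDLE_EA_EL3_FIRST
/* ... set SCR_EL3.EA ... */
#endif
```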
-
- 20 Aug, 2018 1 commit
-
-
Jeenu Viswambharan authored
Memory Partitioning And Monitoring is an Armv8.4 feature that enables various memory system components and resources to define partitions. Software running at various ELs can then assign itself to the desired partition to control its performance aspects. With this patch, when ENABLE_MPAM_FOR_LOWER_ELS is set to 1, EL3 allows lower ELs to access their own MPAM registers without trapping to EL3. This patch, however, doesn't make use of partitioning in EL3; platform initialisation code should configure and use partitions in EL3 if required. Change-Id: I5a55b6771ccaa0c1cffc05543d2116b60cbbcdcd Co-authored-by: James Morse <james.morse@arm.com> Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
-
- 23 May, 2018 2 commits
-
-
Antonio Nino Diaz authored
This function can be currently accessed through the wrappers cm_init_context_by_index() and cm_init_my_context(). However, they only work on contexts that are associated to a CPU. By making this function public, it is possible to set up a context that isn't associated to any CPU. For consistency, it has been renamed to cm_setup_context(). Change-Id: Ib2146105abc8137bab08745a8adb30ca2c4cedf4 Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
Dimitris Papastamos authored
Some CPUs may benefit from using a dynamic mitigation approach for CVE-2018-3639. A new SMC interface is defined to allow software executing in lower ELs to enable or disable the mitigation for their execution context. It should be noted that, regardless of the state of the mitigation for lower ELs, code executing in EL3 is always mitigated against CVE-2018-3639. NOTE: This change is a compatibility break for any platform using the declare_cpu_ops_workaround_cve_2017_5715 macro. Migrate to the declare_cpu_ops_wa macro instead. Change-Id: I3509a9337ad217bbd96de9f380c4ff8bf7917013 Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
-
- 09 May, 2018 1 commit
-
-
Varun Wadekar authored
The context management library initialises the CPU context for the secure/non-secure worlds to zero. This leads to zeros being stored to the actual registers when we restore the CPU context during a world switch. Denver CPUs don't expect zero to be written to the implementation-defined actlr_el1 register at any point of time; writing zero to some fields of this register results in an UNDEFINED exception. This patch bases the context actlr_el1 value on the actual hardware register, to maintain parity with the expected settings. Change-Id: I1c806d7ff12daa7fd1e5c72825494b81454948f2 Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
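A sketch of the idea behind the fix: seed the context's ACTLR_EL1 slot from the live register instead of zero. The helper names are illustrative, not the actual TF-A context accessors.

```c
#include <stdint.h>

static inline uint64_t read_actlr_el1_sketch(void)
{
	uint64_t v;
	__asm__ volatile("mrs %0, actlr_el1" : "=r"(v));
	return v;
}

static void init_ctx_actlr(uint64_t *ctx_actlr_el1)
{
	/* Was: *ctx_actlr_el1 = 0, which Denver CPUs reject on restore. */
	*ctx_actlr_el1 = read_actlr_el1_sketch();
}
```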
-
- 04 May, 2018 3 commits
-
-
Jeenu Viswambharan authored
The ARMv8.4 RAS extensions introduce architectural support for software to inject faults into the system in order to test fault-handling software. This patch introduces the build option FAULT_INJECTION_SUPPORT to allow lower ELs to use registers in the Standard Error Record to inject faults. The build option RAS_EXTENSION must also be enabled along with fault injection. This feature is intended for testing purposes only, and it is advisable to keep it disabled for production images. Change-Id: I6f7a4454b15aec098f9505a10eb188c2f928f7ea Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
-
Jeenu Viswambharan authored
RAS extensions are mandatory for ARMv8.2 CPUs, but are also optional extensions to the base ARMv8.0 architecture. This patch adds build system support to enable RAS features in Arm Trusted Firmware. A boolean build option, RAS_EXTENSION, is introduced for this. With RAS_EXTENSION, an Error Synchronization Barrier (ESB) is inserted at all EL3 vector entries and exits. ESBs will synchronize pending external aborts before entering EL3, and therefore will contain and attribute errors to lower EL execution. Any errors thus synchronized are detected via the DISR_EL1 register. When RAS_EXTENSION is set to 1, HANDLE_EA_EL3_FIRST must also be set to 1. Change-Id: I38a19d84014d4d8af688bd81d61ba582c039383a Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
-
Jeenu Viswambharan authored
At present, the function that restores general-purpose registers also does the ERET. Refactor the restore code to restore general-purpose registers without the ERET, to complement the save function. The macro save_x18_to_x29_sp_el0 was used only once, so it is removed and its contents expanded inline for readability. There are no functional changes, but with this patch:
- The SMC return path will incur a branch-return and an additional register load.
- The unknown SMC path restores registers x0 to x3.

Change-Id: I7a1a63e17f34f9cde810685d70a0ad13ca3b7c50 Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
-
- 21 Mar, 2018 1 commit
-
-
Antonio Nino Diaz authored
When the source code says 'SMCC' it is talking about the SMC Calling Convention. The correct acronym is SMCCC. This affects a few definitions and file names. Some files have been renamed (smcc.h, smcc_helpers.h and smcc_macros.S) but the old files have been kept for compatibility, they include the new ones with an ERROR_DEPRECATED guard. Change-Id: I78f94052a502436fdd97ca32c0fe86bd58173f2f Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
- 30 Nov, 2017 1 commit
-
-
David Cunado authored
This patch adds a new build option, ENABLE_SVE_FOR_NS, which when set to one causes EL3 to check whether the Scalable Vector Extension (SVE) is implemented when entering and exiting the Non-secure world.

If SVE is implemented, EL3 will do the following:
- Entry to the Non-secure world: SIMD, FP and SVE functionality is enabled.
- Exit from the Non-secure world: SIMD, FP and SVE functionality is disabled. As the SIMD and FP registers are part of the SVE Z-registers, any use of SIMD/FP functionality would corrupt the SVE registers.

The build option defaults to 1. The SVE functionality is only supported on AArch64, so the build option is set to zero when the target architecture is AArch32.

This build option is not compatible with CTX_INCLUDE_FPREGS: an assert will be raised on platforms where SVE is implemented and both ENABLE_SVE_FOR_NS and CTX_INCLUDE_FPREGS are set to 1. Also note this change prevents secure world use of FP&SIMD registers on SVE-enabled platforms. Existing Secure-EL1 Payloads will not work on such platforms unless ENABLE_SVE_FOR_NS is set to 0.

Additionally, on the first entry into the Non-secure world the SVE functionality is enabled and the SVE Z-register length is set to the maximum size allowed by the architecture. This includes the use case where EL2 is implemented but not used.

Change-Id: Ie2d733ddaba0b9bef1d7c9765503155188fe7dae Signed-off-by: David Cunado <david.cunado@arm.com>
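A rough sketch, in C with inline assembly, of the Non-secure entry path described above. Bit positions are taken from the Arm ARM (CPTR_EL3.EZ is bit 8, ZCR_EL3.LEN occupies bits [3:0]); the helper names are invented for the illustration and the ZCR_EL3 write assumes an SVE-aware assembler.

```c
#include <stdint.h>

#define CPTR_EZ_BIT		(1ULL << 8)	/* 1 = do not trap SVE at EL3 */
#define ZCR_EL3_LEN_MAX		0xfULL		/* request the largest implemented vector length */

static inline uint64_t read_cptr_el3_sketch(void)
{
	uint64_t v;
	__asm__ volatile("mrs %0, cptr_el3" : "=r"(v));
	return v;
}

static inline void write_cptr_el3_sketch(uint64_t v)
{
	__asm__ volatile("msr cptr_el3, %0" : : "r"(v));
	__asm__ volatile("isb");
}

static void sve_enable_ns_sketch(void)
{
	/* Stop trapping SVE instructions and ZCR_ELx accesses first. */
	write_cptr_el3_sketch(read_cptr_el3_sketch() | CPTR_EZ_BIT);

	/* Then program the maximum vector length (needs -march=armv8.2-a+sve). */
	__asm__ volatile("msr zcr_el3, %0" : : "r"(ZCR_EL3_LEN_MAX));
}
```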
-