- 22 Jun, 2018 1 commit
-
-
Antonio Nino Diaz authored
The values defined in this type are used in logical operations, which goes against MISRA Rule 10.1: "Operands shall not be of an inappropriate essential type". Now, `unsigned int` is used instead.

This also allows the dynamic mapping bit to move from 30 to 31. Using bit 31 was previously undefined behaviour: an enum is represented as a signed int by default, bit 31 is the sign bit, and modifying the sign bit is undefined. With an unsigned type, bit 31 is free to use as it was originally meant to be.

mmap_attr_t is now defined as an `unsigned int` for backwards compatibility.

Change-Id: I6b31218c14b9c7fdabebe432de7fae6e90a97f34
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
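A minimal sketch of the idea, with made-up attribute names rather than the library's own: defining the bits as unsigned constants makes bit 31 usable without ever touching a sign bit.

```c
/* Illustrative only: attribute bits defined as unsigned values, so that
 * bit 31 can be used without relying on the sign bit of a signed enum. */
#define MT_ATTR_DEVICE   (1U << 0)   /* hypothetical attribute bit */
#define MT_ATTR_DYNAMIC  (1U << 31)  /* well-defined: 1U << 31 never touches a sign bit */

typedef unsigned int mmap_attr_t;    /* kept for backwards compatibility */

static int is_dynamic(mmap_attr_t attr)
{
	/* Bitwise operations on an essentially unsigned type keep
	 * MISRA Rule 10.1 happy. */
	return (attr & MT_ATTR_DYNAMIC) != 0U;
}
```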
-
- 21 Jun, 2018 2 commits
-
-
Jeenu Viswambharan authored
This patch introduces setjmp() and longjmp() primitives to enable standard setjmp/longjmp-style execution. Both APIs take a pointer to a struct jmpbuf, which hosts the CPU registers saved and restored during the jump.

As per the standard usage:

 - setjmp() returns 0 when a jump is set up, and a non-zero value when returning from a jump;
 - the caller of setjmp() must not return, or otherwise update the stack pointer, in the meantime.

Change-Id: I4af1d32e490cfa547979631b762b4cba188d0551
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
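The semantics described above mirror the standard C pattern; the sketch below uses the C library's setjmp/longjmp purely to illustrate the intended usage (TF-A's primitives take a pointer to struct jmpbuf rather than a jmp_buf, but are used the same way).

```c
#include <setjmp.h>
#include <stdio.h>

static jmp_buf recovery_point;

static void may_fail(int fail)
{
	if (fail)
		longjmp(recovery_point, 1);   /* unwind back to setjmp() */
	printf("completed normally\n");
}

int main(void)
{
	if (setjmp(recovery_point) == 0) {
		/* Jump target is now set up; run the code that may bail out. */
		may_fail(1);
	} else {
		/* Reached via longjmp(): perform error recovery here. */
		printf("recovered after longjmp\n");
	}
	return 0;
}
```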
-
Antonio Nino Diaz authored
The XN, PXN and UXN bits are part of the upper attributes, not the lower attributes. Change-Id: Ia5e83f06f2a8de88b551f55f1d36d694918ccbc0 Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
- 19 Jun, 2018 1 commit
-
-
Dimitris Papastamos authored
Change-Id: I18a41bb9fedda635c3c002a7f112578808410ef6 Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
-
- 14 Jun, 2018 1 commit
-
-
Yann Gautier authored
The issue can occur if end_va is equal to the maximum architectural address and mm_cursor points to the last entry of the mmap_region_t table, the {0} terminator. The first condition of the while loop is then true; e.g. on AArch32:

  mm_cursor->base_va (=0) + mm_cursor->size (=0) - 1 == end_va (=0xFFFFFFFF)

and mm_cursor->size (=0) is also less than mm->size. A check on mm_cursor->size != 0 should be done, as in the previous while loop, to avoid this kind of infinite loop.

Fixes arm-software/tf-issues#594

Signed-off-by: Yann Gautier <yann.gautier@st.com>
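A paraphrased sketch of the guarded loop, using a minimal stand-in for the region type rather than the actual library definitions:

```c
#include <stddef.h>
#include <stdint.h>

/* Minimal stand-in for the library's region descriptor. */
typedef struct {
	uintptr_t base_va;
	size_t size;
} mmap_region_t;

/* Skip existing regions ending at the same VA; the size != 0 check stops
 * the scan at the {0} terminator, where base_va (0) + size (0) - 1 wraps
 * around to the address-space maximum and would otherwise match end_va. */
static mmap_region_t *find_slot(mmap_region_t *mm_cursor,
				const mmap_region_t *mm, uintptr_t end_va)
{
	while ((mm_cursor->base_va + mm_cursor->size - 1U == end_va) &&
	       (mm_cursor->size != 0U) &&
	       (mm_cursor->size < mm->size)) {
		++mm_cursor;
	}
	return mm_cursor;
}
```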
-
- 13 Jun, 2018 1 commit
-
-
Antonio Nino Diaz authored
The function xlat_arch_is_granule_size_supported() can be used to check if a specific granule size is supported. In Armv8, AArch32 only supports 4 KiB pages. AArch64 supports 4 KiB, 16 KiB or 64 KiB depending on the implementation, which is detected at runtime. The function xlat_arch_get_max_supported_granule_size() returns the max granule size supported by the implementation. Even though right now they are only used by SPM, they may be useful in other places in the future. This patch moves the code currently in SPM to the xlat tables lib so that it can be reused. Change-Id: If54624a5ecf20b9b9b7f38861b56383a03bbc8a4 Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
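A hypothetical caller, assuming prototypes along the lines suggested by the description above (a boolean query plus a maximum-size query); check the actual library headers before relying on them.

```c
#include <stdbool.h>
#include <stddef.h>

/* Assumed prototypes, inferred from the description above. */
bool xlat_arch_is_granule_size_supported(size_t size);
size_t xlat_arch_get_max_supported_granule_size(void);

static size_t pick_granule_size(size_t preferred)
{
	if (xlat_arch_is_granule_size_supported(preferred))
		return preferred;

	/* Fall back to the largest granule the implementation supports
	 * (always at least 4 KiB). */
	return xlat_arch_get_max_supported_granule_size();
}
```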
-
- 12 Jun, 2018 4 commits
-
-
Daniel Boulby authored
Rule 5.7: A tag name shall be a unique identifier.

Follow the convention of shorter names for smaller scopes to fix violations of MISRA rule 5.7.

Fixed for:
make ARM_TSP_RAM_LOCATION=tdram LOG_LEVEL=50 PLAT=fvp SPD=opteed

Change-Id: I5fbb5d6ebddf169550eddb07ed880f5c8076bb76
Signed-off-by: Daniel Boulby <daniel.boulby@arm.com>
-
Daniel Boulby authored
Rule 5.7: A tag name shall be a unique identifier.

There were two amu_ctx struct type definitions:
 - in lib/extensions/amu/aarch64/amu.c
 - in lib/cpus/aarch64/cpuamu.c

Renamed the latter to cpuamu_ctx to avoid the name clash. To avoid a violation of Rule 8.3, the function amu_ctxs was also renamed (to cpuamu_ctxs), since it now returns a different type (cpuamu_ctx) than the other amu_ctxs function.

Fixed for:
make LOG_LEVEL=50 PLAT=fvp

Change-Id: Ieeb7e390ec2900fd8b775bef312eda93804a43ed
Signed-off-by: Daniel Boulby <daniel.boulby@arm.com>
-
Daniel Boulby authored
Use a _ prefix for macro arguments to prevent an argument from hiding variables of the same name in the outer scope.

Rule 5.3: An identifier declared in an inner scope shall not hide an identifier declared in an outer scope.

Fixed for:
make PLAT=fvp USE_COHERENT_MEM=0

Change-Id: If50c583d3b63799ee6852626b15be00c0f6b10a0
Signed-off-by: Daniel Boulby <daniel.boulby@arm.com>
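An illustration of the convention, with names invented for the example: the '_'-prefixed macro parameter cannot hide (in MISRA 5.3 terms) an identifier of the same name in the enclosing scope.

```c
static unsigned int count;

/* Parameter is '_count', so it cannot hide the file-scope 'count' above. */
#define ADD_TO_COUNT(_count)	(count += (_count))

static void demo(unsigned int n)
{
	ADD_TO_COUNT(n);
}
```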
-
Daniel Boulby authored
Use a _ prefix for macro arguments to prevent an argument from hiding variables of the same name in the outer scope.

Rule 5.3: An identifier declared in an inner scope shall not hide an identifier declared in an outer scope.

Fixed for:
make LOG_LEVEL=50 PLAT=fvp

Change-Id: I67b6b05cbad4aeca65ce52981b4679b340604708
Signed-off-by: Daniel Boulby <daniel.boulby@arm.com>
-
- 11 Jun, 2018 1 commit
-
-
John Tsichritzis authored
Rule 21.15: The pointer arguments to the Standard Library functions memcpy, memmove and memcmp shall be pointers to qualified or unqualified versions of compatible types.

Basically, that means both pointer arguments must be of the same type. However, even if the pointers passed to these functions are of the same type, Coverity still flags a violation if pointer arithmetic is done directly in the function call. The pointer arithmetic operations were therefore moved out of the function arguments.

First detected with the following configuration:
make PLAT=fvp LOG_LEVEL=50

Change-Id: I8b912ec1bfa6f2d60857cb1bd453981fd7001b94
Signed-off-by: John Tsichritzis <john.tsichritzis@arm.com>
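A before/after sketch with made-up names, showing the pattern Coverity accepts:

```c
#include <string.h>
#include <stdint.h>

void copy_payload(uint8_t *dst, const uint8_t *src, size_t hdr, size_t len)
{
	/* Flagged form: arithmetic done inside the call arguments. */
	/* memcpy(dst + hdr, src + hdr, len); */

	/* Accepted form: do the pointer arithmetic first, then call memcpy()
	 * with plain pointer arguments of the same type. */
	uint8_t *d = dst + hdr;
	const uint8_t *s = src + hdr;

	memcpy(d, s, len);
}
```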
-
- 08 Jun, 2018 4 commits
-
-
Dimitris Papastamos authored
The Cortex-A76 implements SMCCC_ARCH_WORKAROUND_2 as defined in "Firmware interfaces for mitigating cache speculation vulnerabilities System Software on Arm Systems"[0].

Dynamic mitigation for CVE-2018-3639 is enabled/disabled by setting/clearing bit 16 (Disable load pass store) of `CPUACTLR2_EL1`.

NOTE: The generic code that implements dynamic mitigation does not currently implement the expected semantics when dispatching an SDEI event to a lower EL. This will be fixed in a separate patch.

[0] https://developer.arm.com/cache-speculation-vulnerability-firmware-specification

Change-Id: I8fb2862b9ab24d55a0e9693e48e8be4df32afb5a
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
-
Dimitris Papastamos authored
The workaround uses the instruction patching feature of the Ares CPU. Change-Id: I868fce0dc0e8e41853dcce311f01ee3867aabb59 Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
-
Dimitris Papastamos authored
Change-Id: Ia170c12d3929a616ba80eb7645c301066641f5cc Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
-
Isla Mitchell authored
Both Cortex-Ares and Cortex-A76 CPUs use the ARM DynamIQ Shared Unit (DSU). The power-down and power-up sequences are therefore mostly managed in hardware, and required software operations are simple. Change-Id: I3a9447b5bdbdbc5ed845b20f6564d086516fa161 Signed-off-by: Isla Mitchell <isla.mitchell@arm.com>
-
- 07 Jun, 2018 1 commit
-
-
Dimitris Papastamos authored
When SMCCC_ARCH_WORKAROUND_1 is invoked from a lower EL running in AArch32 state, ensure that the SMC call will take a shortcut in EL3. This minimizes the time it takes to apply the mitigation in EL3. When lower ELs run in AArch32, it is preferred that they execute the `BPIALL` instruction to invalidate the BTB. However, on some cores the `BPIALL` instruction may be a no-op and thus would benefit from making the SMCCC_ARCH_WORKAROUND_1 call go through the fast path. Change-Id: Ia38abd92efe2c4b4a8efa7b70f260e43c5bda8a5 Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
-
- 23 May, 2018 5 commits
-
-
Antonio Nino Diaz authored
This function can currently be accessed through the wrappers cm_init_context_by_index() and cm_init_my_context(). However, they only work on contexts that are associated with a CPU. By making this function public, it is possible to set up a context that isn't associated with any CPU. For consistency, it has been renamed to cm_setup_context().

Change-Id: Ib2146105abc8137bab08745a8adb30ca2c4cedf4
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
Dimitris Papastamos authored
Some CPUs may benefit from using a dynamic mitigation approach for CVE-2018-3639. A new SMC interface is defined to allow software executing in lower ELs to enable or disable the mitigation for their execution context.

It should be noted that, regardless of the state of the mitigation for lower ELs, code executing in EL3 is always mitigated against CVE-2018-3639.

NOTE: This change is a compatibility break for any platform using the declare_cpu_ops_workaround_cve_2017_5715 macro. Migrate to the declare_cpu_ops_wa macro instead.

Change-Id: I3509a9337ad217bbd96de9f380c4ff8bf7917013
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
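A conceptual sketch of how a lower EL might toggle the mitigation through the new SMC interface. The function ID comes from the Arm cache-speculation firmware specification; the inline assembly and the argument convention are illustrative, not taken from any particular OS or from TF-A itself.

```c
#include <stdint.h>

#define SMCCC_ARCH_WORKAROUND_2		0x80007fffU

/* enable: non-zero requests the mitigation be enabled for this PE,
 * 0 requests it be disabled (per my reading of the referenced spec). */
static void set_cve_2018_3639_mitigation(uint32_t enable)
{
	register uint64_t x0 __asm__("x0") = SMCCC_ARCH_WORKAROUND_2;
	register uint64_t x1 __asm__("x1") = enable;

	/* SMCCC 1.1 allows only x0-x3 to be clobbered by the callee. */
	__asm__ volatile("smc #0"
			 : "+r"(x0), "+r"(x1)
			 :
			 : "x2", "x3", "memory");
}
```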
-
Dimitris Papastamos authored
Implement static mitigation for CVE-2018-3639 on Cortex-A57 and Cortex-A72. Change-Id: I83409a16238729b84142b19e258c23737cc1ddc3 Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
-
Dimitris Papastamos authored
For affected CPUs, this approach enables the mitigation during EL3 initialization, following every PE reset. No mechanism is provided to disable the mitigation at runtime.

This approach permanently mitigates the entire software stack and no additional mitigation code is required in other software components.

TF-A implements this approach for the following affected CPUs:

* Cortex-A57 and Cortex-A72, by setting bit 55 (Disable load pass store) of `CPUACTLR_EL1` (`S3_1_C15_C2_0`).
* Cortex-A73, by setting bit 3 of `S3_0_C15_C0_0` (not documented in the Technical Reference Manual (TRM)).
* Cortex-A75, by setting bit 35 (reserved in TRM) of `CPUACTLR_EL1` (`S3_0_C15_C1_0`).

Additionally, a new SMC interface is implemented to allow software executing in lower ELs to discover whether the system is mitigated against CVE-2018-3639.

Refer to "Firmware interfaces for mitigating cache speculation vulnerabilities System Software on Arm Systems"[0] for more information.

[0] https://developer.arm.com/cache-speculation-vulnerability-firmware-specification

Change-Id: I084aa7c3bc7c26bf2df2248301270f77bed22ceb
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
-
Dimitris Papastamos authored
This patch renames symbols and files relating to CVE-2017-5715 to make it easier to introduce new symbols and files for new CVE mitigations. Change-Id: I24c23822862ca73648c772885f1690bed043dbc7 Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
-
- 15 May, 2018 1 commit
-
-
Varun Wadekar authored
Flush the indirect branch predictor and RSB on entry to EL3 by issuing a newly added instruction for Denver CPUs. Support for this operation can be determined by comparing bits 19:16 of ID_AFR0_EL1 with 0b0001. To achieve this without performing any branch instruction, a per-cpu vbar is installed which executes the workaround and then branches off to the corresponding vector entry in the main vector table. A side effect of this change is that the main vbar is configured before any reset handling. This is to allow the per-cpu reset function to override the vbar setting. Change-Id: Ief493cd85935bab3cfee0397e856db5101bc8011 Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
-
- 09 May, 2018 2 commits
-
-
Varun Wadekar authored
The context management library initialises the CPU context for the secure/non-secure worlds to zero. This leads to zeros being stored to the actual registers when we restore the CPU context during a world switch.

Denver CPUs don't expect zero to be written to the implementation-defined actlr_el1 register at any point in time. Writing zero to some fields of this register results in an UNDEFINED exception.

This patch bases the context actlr_el1 value on the actual hardware register, to maintain parity with the expected settings.

Change-Id: I1c806d7ff12daa7fd1e5c72825494b81454948f2
Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
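A sketch of the idea, with assumed accessor and context-helper names (read_actlr_el1() and ctx_write_actlr_el1() stand in for whatever the platform actually provides):

```c
#include <stdint.h>

uint64_t read_actlr_el1(void);				/* assumed accessor */
void ctx_write_actlr_el1(void *ctx, uint64_t val);	/* assumed context helper */

static void init_actlr_in_context(void *ctx)
{
	/* Seed the saved context with the hardware's reset value instead of
	 * zero: writing 0 to some implementation-defined fields would raise
	 * an UNDEFINED exception on Denver. */
	ctx_write_actlr_el1(ctx, read_actlr_el1());
}
```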
-
Roberto Vargas authored
Using variables as format strings can create security problems when the user can control those strings. Some compilers generate warnings in those cases, even when the variables are constant and not controlled by the user.

Change-Id: I65dee1d1b66feab38cbf298290a86fa56e6cca40
Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
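The warning-free pattern, for reference:

```c
#include <stdio.h>

static void report(const char *msg)
{
	/* printf(msg);     -- flagged: a variable used as the format string */
	printf("%s", msg);	/* fine: constant format, msg is plain data */
}
```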
-
- 04 May, 2018 6 commits
-
-
Jeenu Viswambharan authored
The ARMv8.4 RAS extensions introduce architectural support for software to inject faults into the system in order to test fault-handling software. This patch introduces the build option FAULT_INJECTION_SUPPORT to allow lower ELs to use the registers in the Standard Error Record to inject faults. The build option RAS_EXTENSION must also be enabled along with fault injection.

This feature is intended for testing purposes only, and it is advisable to keep it disabled for production images.

Change-Id: I6f7a4454b15aec098f9505a10eb188c2f928f7ea
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
-
Jeenu Viswambharan authored
EHF currently allows registering interrupt handlers for defined priority ranges. This is primarily targeted at the various EL3 dispatchers, which own ranges of secure interrupt priorities in order to delegate execution to lower ELs.

The RAS support added by earlier patches necessitates registering handlers based on interrupt number, so that error handling agents can receive and handle specific Error Recovery or Fault Handling interrupts at EL3.

This patch introduces a macro, RAS_INTERRUPTS(), to declare an array of interrupt numbers and handlers. Error handling agents can use this macro to register handlers for individual RAS interrupts. The array is expected to be sorted in increasing order of interrupt number.

As part of RAS initialisation, the list of all RAS interrupts is sorted by ID so that, given an interrupt, its handler can be looked up with a simple binary search.

For an error handling agent that wants to handle a RAS interrupt, the platform must:

 - Define PLAT_RAS_PRI to be the priority of all RAS exceptions.
 - Enumerate interrupts to have the GIC driver program individual EL3 interrupts to the required priority range. This is required by EHF even before this patch.

Documentation to follow.

Change-Id: I9471e4887ff541f8a7a63309e9cd8f771f76aeda
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
-
Jeenu Viswambharan authored
Previous patches added frameworks for handling RAS errors. This patch introduces features that the platform can use to enumerate and iterate over RAS nodes:

 - The REGISTER_RAS_NODES() macro can be used to expose an array of ras_node_info_t structures. Each ras_node_info_t describes a RAS node, along with a handler for probing the node for errors and, if it did record an error, another handler to handle it.
 - The macro for_each_ras_node() can be used to iterate over the registered RAS nodes, probe for, and handle any errors.

The common platform EA handler has been amended to use the error handling primitives introduced by both this and previous patches.

Change-Id: I2e13f65a88357bc48cd97d608db6c541fad73853
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
-
Jeenu Viswambharan authored
The ARMv8 RAS Extensions introduced Standard Error Records, a set of standard registers through which:

 - the platform can configure RAS node policy, e.g. the notification mechanism;
 - RAS nodes can record and expose error information for error handling agents.

Standard Error Records can be accessed either via memory-mapped or System registers. This patch adds helper functions to access registers and fields within an error record.

Change-Id: I6594ba799f4a1789d7b1e45b3e17fd40e7e0ba5c
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
-
Jeenu Viswambharan authored
RAS extensions are mandatory for ARMv8.2 CPUs, but are also an optional extension to the base ARMv8.0 architecture.

This patch adds build system support to enable RAS features in ARM Trusted Firmware. A boolean build option, RAS_EXTENSION, is introduced for this.

With RAS_EXTENSION, an Error Synchronization Barrier (ESB) is inserted at all EL3 vector entries and exits. ESBs will synchronize pending external aborts before entering EL3, and will therefore contain and attribute errors to lower EL execution. Any errors thus synchronized are detected via the DISR_EL1 register.

When RAS_EXTENSION is set to 1, HANDLE_EA_EL3_FIRST must also be set to 1.

Change-Id: I38a19d84014d4d8af688bd81d61ba582c039383a
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
-
Jeenu Viswambharan authored
At present, the function that restores general purpose registers also does the ERET. Refactor the restore code so that general purpose registers are restored without the ERET, to complement the save function.

The macro save_x18_to_x29_sp_el0 was used only once, so it is removed and its contents expanded inline for readability.

No functional changes, but with this patch:

 - the SMC return path incurs a branch-return and an additional register load;
 - the unknown SMC path restores registers x0 to x3.

Change-Id: I7a1a63e17f34f9cde810685d70a0ad13ca3b7c50
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
-
- 02 May, 2018 1 commit
-
-
Antonio Nino Diaz authored
In AArch64, the field ID_AA64MMFR0_EL1.PARange has a different set of allowed values depending on the architecture version. Previously, we only compiled the Trusted Firmware with the values that were allowed by the architecture. However, given that this field is read-only, it is easier to compile the code with all values regardless of the target architecture. Change-Id: I57597ed103dd0189b1fb738a9ec5497391c10dd1 Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
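An illustrative lookup covering all architecturally defined PARange encodings (this is not the library's table, but the encodings themselves are architectural):

```c
#include <stdint.h>

static const unsigned int pa_range_bits_arr[] = {
	32,	/* 0b0000 */
	36,	/* 0b0001 */
	40,	/* 0b0010 */
	42,	/* 0b0011 */
	44,	/* 0b0100 */
	48,	/* 0b0101 */
	52,	/* 0b0110, defined from Armv8.2 onwards */
};

static unsigned int pa_size_bits(uint64_t id_aa64mmfr0_el1)
{
	/* PARange occupies bits [3:0] of ID_AA64MMFR0_EL1. */
	unsigned int parange = (unsigned int)(id_aa64mmfr0_el1 & 0xfU);
	unsigned int max = (unsigned int)(sizeof(pa_range_bits_arr) /
					  sizeof(pa_range_bits_arr[0])) - 1U;

	/* The field is read-only; an unknown value only means the firmware
	 * is older than the CPU, so clamp instead of failing. */
	if (parange > max)
		parange = max;

	return pa_range_bits_arr[parange];
}
```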
-
- 01 May, 2018 1 commit
-
-
Roberto Vargas authored
Previously, mem_protect was only supported from BL2. This is not helpful when ARM TF-A BL2 is not used. This patch enables mem_protect from the el3_runtime firmware on ARM platforms, specifically when RESET_TO_BL31 or RESET_TO_SP_MIN is set, as BL2 may be absent in these cases.

The non-secure DRAM is temporarily mapped into the EL3 mmap tables dynamically, and the protected regions are then cleared. This avoids the need to map the non-secure DRAM permanently to BL31/sp_min.

The stack size is also increased, because DYNAMIC_XLAT_TABLES requires a bigger stack.

Change-Id: Ia44c594192ed5c5adc596c0cff2c7cc18c001fde
Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
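A rough sketch of the temporary-mapping approach; the prototypes below are quoted from memory and should be checked against the xlat tables v2 headers, and the attribute flags are placeholders.

```c
#include <stddef.h>
#include <stdint.h>

/* Assumed prototypes; verify against the xlat tables v2 and utils headers. */
int mmap_add_dynamic_region(unsigned long long base_pa, uintptr_t base_va,
			    size_t size, unsigned int attr);
int mmap_remove_dynamic_region(uintptr_t base_va, size_t size);
void zeromem(void *mem, size_t length);

#define MT_MEMORY	0U	/* placeholder attribute flags */
#define MT_RW		0U
#define MT_NS		0U

static int clear_protected_region(unsigned long long pa, uintptr_t va, size_t size)
{
	/* Map the non-secure DRAM region only for as long as it is needed. */
	int ret = mmap_add_dynamic_region(pa, va, size, MT_MEMORY | MT_RW | MT_NS);

	if (ret != 0)
		return ret;

	zeromem((void *)va, size);			/* wipe the protected region */

	return mmap_remove_dynamic_region(va, size);	/* unmap again */
}
```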
-
- 26 Apr, 2018 1 commit
-
-
Antonio Nino Diaz authored
According to the ARMv8 ARM, issue C.a, AP[1] is valid only for stage 1 of a translation regime that can support two VA ranges; it is RES1 when stage 1 translations can support only one VA range. This means that, even though the bit is ignored there, it should be set to 1 in the EL3 and EL2 translation regimes.

For translation regimes consisting of EL0 and a higher Exception level, this bit selects between control at EL0 or at the higher Exception level. The regimes that support two VA ranges are EL1&0 and EL2&0 (the latter is only available from ARMv8.1).

This fix has to be applied to both versions of the translation tables library.

Change-Id: If19aaf588551bac7aeb6e9a686cf0c2068e7c181
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
- 17 Apr, 2018 1 commit
-
-
Antonio Nino Diaz authored
Change-Id: I989c1f4aef8e3cb20d5d19e6347575e6449bb60b Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
- 12 Apr, 2018 2 commits
-
-
Jonathan Wright authored
A fix for errata 835769 may be available in revisions r0p2, r0p3 or r0p4 of the Cortex-A53 processor. The presence of the fix is determined by checking bit 7 in the REVIDR register. If the fix is present we report ERRATA_NOT_APPLIES which silences the erroneous 'missing workaround' warning. Change-Id: Ib75b008e755e9ac648554ca9398024fdbea4a91a Signed-off-by: Jonathan Wright <jonathan.wright@arm.com>
-
Jonathan Wright authored
A fix for errata 843419 may be available in revision r0p4 of the Cortex-A53 processor. The presence of the fix is determined by checking bit 8 in the REVIDR register. If the fix is present we report ERRATA_NOT_APPLIES which silences the erroneous 'missing workaround' warning. Change-Id: Ibd2a478df3e2a6325442a6a48a0bb0259dcfc1d7 Signed-off-by: Jonathan Wright <jonathan.wright@arm.com>
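Both errata checks above follow the same REVIDR pattern; below is a sketch in C, assuming a hypothetical read_revidr_el1() helper and placeholder ERRATA_* values (the real checks live in the assembly errata framework).

```c
#include <stdint.h>

uint64_t read_revidr_el1(void);	/* assumed accessor */

#define ERRATA_APPLIES		1	/* placeholder values */
#define ERRATA_NOT_APPLIES	0

static int check_errata_835769(void)
{
	/* REVIDR bit 7 set means the fix for erratum 835769 is present,
	 * so the software workaround is not needed (bit 8 plays the same
	 * role for erratum 843419). */
	return ((read_revidr_el1() & (1ULL << 7)) != 0ULL) ?
		ERRATA_NOT_APPLIES : ERRATA_APPLIES;
}
```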
-
- 09 Apr, 2018 1 commit
-
-
Varun Wadekar authored
The last entry in the mapping table is not necessarily the same as the end of the table. This patch loops through the table to find the last-entry marker on every new mmap addition. The memmove operation then only has to move the memory between the current entry and that last entry. For platforms that arrange their MMIO map properly, this operation turns out to be a NOP.

The previous implementation added significant overhead per mmap addition, as the memmove operation always moved the difference between the current mmap entry and the end of the table.

Tested on Tegra platforms; this new approach improves the memory mapping time by ~75%, significantly reducing boot time on some platforms.

Change-Id: Ie3478fa5942379282ef58bee2085da799137e2ca
Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
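A sketch of the end-marker scan, again with a minimal stand-in for the region type:

```c
#include <stddef.h>
#include <stdint.h>

typedef struct {
	uintptr_t base_va;
	size_t size;
} mmap_region_t;

/* Walk only as far as the terminating zero-sized entry instead of assuming
 * the table is full; memmove() then only needs to shift the entries between
 * the insertion point and this marker. If the map is already in order,
 * nothing moves at all. */
static mmap_region_t *find_end_marker(mmap_region_t *table)
{
	mmap_region_t *mm_last = table;

	while (mm_last->size != 0U)
		++mm_last;

	return mm_last;
}
```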
-
- 27 Mar, 2018 2 commits
-
-
Jonathan Wright authored
Initializes each element of the last_cpu_in_non_cpu_pd array in the PSCI stat implementation to -1, the reset value. This satisfies MISRA rule 9.3. Previously, only the first element of the array was initialized to -1.

Change-Id: I666c71e6c073710c67c6d24c07a219b1feb5b773
Signed-off-by: Jonathan Wright <jonathan.wright@arm.com>
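The shape of the fix, with a placeholder array bound (the real bound comes from the platform's power-domain topology):

```c
#define NUM_NON_CPU_PWR_DOMAINS	4	/* placeholder bound */

/* Every element starts at the reset value, not just the first one. */
static int last_cpu_in_non_cpu_pd[NUM_NON_CPU_PWR_DOMAINS] = {
	[0 ... NUM_NON_CPU_PWR_DOMAINS - 1] = -1	/* GCC range initialiser */
};
```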
-
Joel Hutton authored
Void pointers have been used to access linker symbols by declaring an extern pointer and then taking its address. This limits symbol values to aligned pointer values. To remove this restriction, an IMPORT_SYM macro has been introduced; it declares the symbol as a char pointer and casts it to the required type.

Change-Id: I89877fc3b13ed311817bb8ba79d4872b89bfd3b0
Signed-off-by: Joel Hutton <Joel.Hutton@Arm.com>
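A minimal sketch of the macro's idea; the exact definition in the TF-A headers may differ slightly, and the symbol below is defined locally only so the example links on its own.

```c
#include <stdint.h>

/* Declare 'sym' as a char array (so its value is not forced to be an
 * aligned pointer) and expose it under 'name' as the requested type. */
#define IMPORT_SYM(type, sym, name)					\
	extern char sym[];						\
	static const type name __attribute__((unused)) = (type) sym;

/* Normally provided by the linker script; defined here only so the
 * sketch is self-contained. */
char __EXAMPLE_END__[1];

IMPORT_SYM(uintptr_t, __EXAMPLE_END__, EXAMPLE_END);
```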
-
- 26 Mar, 2018 1 commit
-
-
Jonathan Wright authored
Ensure (where possible) that switch statements in lib comply with MISRA rules 16.1 - 16.7. Change-Id: I52bc896fb7094d2b7569285686ee89f39f1ddd84 Signed-off-by: Jonathan Wright <jonathan.wright@arm.com>
-