- 11 Jul, 2018 3 commits
-
-
Joel Hutton authored
Change-Id: Ic0486131c493632eadf329f80b0b5904aed5e4ef
Signed-off-by: Joel Hutton <joel.hutton@arm.com>
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
-
Joel Hutton authored
Change-Id: I2c4b06423fcd96af9351b88a5e2818059f981f1b
Signed-off-by: Joel Hutton <Joel.Hutton@Arm.com>
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
-
Roberto Vargas authored
The check_vector_size macro checks whether the size of a vector fits in the space reserved for it. This check creates problems for the Clang assembler. A new macro, end_vector_entry, is added and check_vector_size is deprecated. The new macro fills the current exception vector up to the start of the next exception vector; if the current vector is bigger than 32 instructions, it raises an error.

Change-Id: Ie8545cf1003a1e31656a1018dd6b4c28a4eaf671
Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
-
- 10 Jul, 2018 1 commit
-
-
Roberto Vargas authored
Rule 8.3: All declarations of an object or function shall use the same names and type qualifiers.

Fixed for:
make DEBUG=1 PLAT=juno ARCH=aarch32 AARCH32_SP=sp_min RESET_TO_SP_MIN=1 JUNO_AARCH32_EL3_RUNTIME=1 bl32

Change-Id: Ia34f5155e1cdb67161191f69e8d1248cbaa39e1a
Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
-
- 03 Jul, 2018 3 commits
-
-
Antonio Nino Diaz authored
It is useful to have LOG_LEVEL_VERBOSE because it prints the memory map of each image, but it also means that the change_mem_attributes and get_mem_attributes functions emit verbose prints, generating output so long that it hides other useful information. As these prints were mostly there for debug purposes, this patch removes them.

Change-Id: I2986537377d1f78be2b79cc8a6cf230c380bdb55
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
Antonio Nino Diaz authored
The previous debug output for EL1&0 translation regimes was too verbose, which made it hard to read and hid the intent behind the parameters assigned to each region. This patch simplifies the output and makes the formats for EL3 and EL1&0 mostly the same. The difference is that for EL1&0 it is specified whether the region is exclusively accessible from EL1 (PRIV) or from both EL0 and EL1 (USER).

For example, output that used to look like this:

MEM-RW(PRIV)-NOACCESS(USER)-XN(PRIV)-XN(USER)-S
MEM-RO(PRIV)-NOACCESS(USER)-EXEC(PRIV)-EXEC(USER)-S

now becomes:

MEM-RW-XN-PRIV-S
MEM-RO-EXEC-PRIV-S

Change-Id: I15f4b99058429d42107fbf89e15f4838a9b559a5
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
Antonio Nino Diaz authored
Instead of having one big file with all the code, it's better to have a few smaller files that are more manageable:

- xlat_tables_core.c: Code related to the core functionality of the library (map and unmap regions, initialize the xlat context).
- xlat_tables_context.c: Instantiation of the active image context as well as APIs to manipulate it.
- xlat_tables_utils.c: Helper code that isn't part of the core functionality (change attributes, debug print messages).

Change-Id: I3ea956fc1afd7473c0bb5e7c6aab3b2e5d88c711
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
- 27 Jun, 2018 3 commits
-
-
Jeenu Viswambharan authored
Having an active stack while enabling the MMU has shown coherency problems. This patch builds on top of the translation library changes that introduce MMU enablement without using the stack.

Previously, with HW_ASSISTED_COHERENCY, data caches were disabled while enabling the MMU only because of the active stack. Now that we can enable the MMU without using the stack, we can enable both the MMU and the data caches at the same time.

NOTE: Since this feature depends on using translation table library v2, using translation table library v1 with HW_ASSISTED_COHERENCY is disallowed.

Fixes ARM-software/tf-issues#566

Change-Id: Ie55aba0c23ee9c5109eb3454cb8fa45d74f8bbb2
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
-
Jeenu Viswambharan authored
An earlier patch split the MMU-enabling function for translation library v2. Although we don't intend to introduce the exact same functionality for xlat v1, this patch introduces stubs for directly enabling the MMU in order to maintain API compatibility.

Change-Id: Id7d56e124c80af71de999fcda10f1734b50bca97
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
-
Jeenu Viswambharan authored
At present, the function provided by the translation library to enable the MMU constructs the appropriate values and programs them into the right registers. The construction of the initial values, however, is only required once, as both the primary and the secondaries program the same values. Additionally, the MMU-enabling function is written in C, which means there's an active stack at the time of enabling the MMU. On some systems, like Arm DynamIQ, having an active stack while enabling the MMU during warm boot might lead to coherency problems.

This patch addresses both of the above problems by:

- Splitting the MMU-enabling function into two: one that sets up the values to be programmed into the registers, and another that takes the pre-computed values and writes them to the appropriate registers. With this, the primary effectively calls both functions to have the MMU enabled, but secondaries only need to call the latter (see the sketch below).
- Rewriting the function that enables the MMU in assembly so that it doesn't use the stack.

This patch fixes a number of MISRA issues along the way.

Change-Id: I0faca97263a970ffe765f0e731a1417e43fbfc45
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
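A conceptual sketch of that split. All names, the mmu_cfg layout and the register values are illustrative only, not the library's actual interface; in the real firmware the setup half derives its values from the translation tables and the enable half is written in assembly so it needs no stack.

```c
#include <stdint.h>

/* Illustrative container for the pre-computed translation register values. */
struct mmu_cfg {
    uint64_t mair;
    uint64_t tcr;
    uint64_t ttbr;
};

static struct mmu_cfg saved_cfg;   /* computed once, reused by secondaries */

/* Part 1 (C): compute the values to be programmed. */
static void setup_mmu_config(struct mmu_cfg *cfg)
{
    /* Dummy constants keep this sketch buildable; real firmware derives
     * these from the translation tables and memory attributes. */
    cfg->mair = 0x00000000004404FFULL;
    cfg->tcr  = 0x0000000080803520ULL;
    cfg->ttbr = 0x0000000080001000ULL;
}

/* Part 2 (assembly in real firmware, so no stack is required):
 * program the pre-computed values and turn the MMU on. */
static void enable_mmu_direct(const struct mmu_cfg *cfg)
{
    (void)cfg;   /* would write MAIR/TCR/TTBR/SCTLR here */
}

void primary_cpu_enable_mmu(void)
{
    setup_mmu_config(&saved_cfg);   /* only the primary computes the values */
    enable_mmu_direct(&saved_cfg);
}

void secondary_cpu_enable_mmu(void)
{
    enable_mmu_direct(&saved_cfg);  /* secondaries reuse the saved values */
}
```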
-
- 22 Jun, 2018 1 commit
-
-
Antonio Nino Diaz authored
The values defined in this type are used in logical operations, which goes against MISRA Rule 10.1: "Operands shall not be of an inappropriate essential type". Now, `unsigned int` is used instead.

This also allows us to move the dynamic mapping bit from 30 to 31. Using bit 31 was undefined behaviour in the past because an enum is signed by default and bit 31 corresponds to the sign bit; it is undefined behaviour to modify the sign bit. Now, bit 31 is free to use as it was originally meant to be.

mmap_attr_t is now defined as an `unsigned int` for backwards compatibility.

Change-Id: I6b31218c14b9c7fdabebe432de7fae6e90a97f34
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
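A minimal illustration of why switching from an enum to plain `unsigned int` frees up bit 31; the flag names and values below are made up for the example, not the library's.

```c
#include <stdio.h>

/* Enumeration constants must be representable as int, so a flag in bit 31
 * of a signed 32-bit int is out of range; bit 30 was the practical ceiling. */
typedef enum {
    OLD_MT_DYNAMIC = (1 << 30),     /* (1 << 31) would not fit in an int */
} old_mmap_attr_t;

/* With plain unsigned flags, bit 31 is usable again. */
#define MT_SOME_ATTR   (1U << 30)
#define MT_DYNAMIC     (1U << 31)   /* fine: unsigned arithmetic */

typedef unsigned int mmap_attr_t;   /* kept as a typedef for compatibility */

int main(void)
{
    mmap_attr_t attr = MT_DYNAMIC | MT_SOME_ATTR;

    printf("attr = 0x%x\n", attr);
    return 0;
}
```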
-
- 21 Jun, 2018 2 commits
-
-
Jeenu Viswambharan authored
This patch introduces setjmp() and longjmp() primitives to enable standard setjmp/longjmp-style execution. Both APIs take a pointer to a struct jmpbuf type, which holds the CPU registers saved/restored across the jump.

As per the standard usage:

- setjmp() returns 0 when a jump is set up, and a non-zero value when returning from a jump.
- The caller of setjmp() must not return (or otherwise update the stack pointer) before the corresponding longjmp() is taken.

Change-Id: I4af1d32e490cfa547979631b762b4cba188d0551
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
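For readers unfamiliar with the pattern, here is a minimal illustration using the standard C library's setjmp/longjmp; the primitives described above follow the same control flow but take an explicit struct jmpbuf pointer rather than the standard jmp_buf.

```c
#include <setjmp.h>
#include <stdio.h>

static jmp_buf env;

static void fail(int err)
{
    /* Transfer control back to the point where setjmp() was called;
     * that setjmp() now appears to return 'err' instead of 0. */
    longjmp(env, err);
}

int main(void)
{
    int rc = setjmp(env);            /* 0 on setup, non-zero after longjmp() */

    if (rc == 0) {
        printf("jump set up\n");
        fail(42);                    /* never returns here */
    } else {
        printf("returned from jump with %d\n", rc);
    }
    return 0;
}
```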
-
Antonio Nino Diaz authored
The XN, PXN and UXN bits are part of the upper attributes, not the lower attributes.

Change-Id: Ia5e83f06f2a8de88b551f55f1d36d694918ccbc0
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
- 19 Jun, 2018 1 commit
-
-
Dimitris Papastamos authored
Change-Id: I18a41bb9fedda635c3c002a7f112578808410ef6
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
-
- 14 Jun, 2018 1 commit
-
-
Yann Gautier authored
The issue can occur if end_va is equal to the maximum architectural address and mm_cursor points to the last entry of the mmap_region_t table: {0}. The first condition of the while loop will then be true; e.g. on AArch32 we have:

mm_cursor->base_va (=0) + mm_cursor->size (=0) - 1 == end_va (=0xFFFFFFFF)

and mm_cursor->size (=0) will be less than mm->size. A check that mm_cursor->size != 0 should be added, as in the previous while loop, to avoid this kind of infinite loop.

Fixes arm-software/tf-issues#594

Signed-off-by: Yann Gautier <yann.gautier@st.com>
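A paraphrased sketch of the insertion-point search described above, simplified from the behaviour the commit message describes rather than copied from the library, showing where the added size check stops the loop at the terminating {0} entry.

```c
/* Regions are kept sorted by end address, then by size; the table ends
 * with a {0} sentinel whose size is 0. Field names are illustrative. */
typedef struct mmap_region {
    unsigned long base_va;
    unsigned long size;
} mmap_region_t;

static mmap_region_t *find_insert_point(mmap_region_t *mm_cursor,
                                         const mmap_region_t *mm,
                                         unsigned long end_va)
{
    /* Skip regions that end before the new one. */
    while (((mm_cursor->base_va + mm_cursor->size - 1U) < end_va) &&
           (mm_cursor->size != 0U)) {
        ++mm_cursor;
    }

    /* Among regions ending at the same address, order by size. The
     * "size != 0" check is the fix: when end_va is the maximum address,
     * the sentinel also satisfies the equality, and without the check
     * the loop would never terminate. */
    while (((mm_cursor->base_va + mm_cursor->size - 1U) == end_va) &&
           (mm_cursor->size != 0U) &&
           (mm_cursor->size < mm->size)) {
        ++mm_cursor;
    }

    return mm_cursor;
}
```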
-
- 13 Jun, 2018 1 commit
-
-
Antonio Nino Diaz authored
The function xlat_arch_is_granule_size_supported() can be used to check if a specific granule size is supported. In Armv8, AArch32 only supports 4 KiB pages. AArch64 supports 4 KiB, 16 KiB or 64 KiB depending on the implementation, which is detected at runtime.

The function xlat_arch_get_max_supported_granule_size() returns the max granule size supported by the implementation.

Even though right now they are only used by SPM, they may be useful in other places in the future. This patch moves the code currently in SPM to the xlat tables lib so that it can be reused.

Change-Id: If54624a5ecf20b9b9b7f38861b56383a03bbc8a4
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
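A small usage sketch for the two functions named above. The prototypes shown are an assumption (bool/size_t), not taken from the library's header, and the caller is purely illustrative.

```c
#include <stdbool.h>
#include <stddef.h>

/* Assumed prototypes for the functions described in the commit message. */
bool xlat_arch_is_granule_size_supported(size_t size);
size_t xlat_arch_get_max_supported_granule_size(void);

/* Example caller: prefer 64 KiB pages when the implementation supports
 * them, otherwise use whatever maximum the implementation reports. */
size_t pick_mapping_granularity(void)
{
    if (xlat_arch_is_granule_size_supported(64U * 1024U)) {
        return 64U * 1024U;
    }
    return xlat_arch_get_max_supported_granule_size();
}
```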
-
- 12 Jun, 2018 4 commits
-
-
Daniel Boulby authored
Rule 5.7: A tag name shall be a unique identifier.

Follow the convention of shorter names for smaller scopes to fix violations of MISRA rule 5.7.

Fixed for:
make ARM_TSP_RAM_LOCATION=tdram LOG_LEVEL=50 PLAT=fvp SPD=opteed

Change-Id: I5fbb5d6ebddf169550eddb07ed880f5c8076bb76
Signed-off-by: Daniel Boulby <daniel.boulby@arm.com>
-
Daniel Boulby authored
Rule 5.7: A tag name shall be a unique identifier.

There were two struct type definitions named amu_ctx:
- in lib/extensions/amu/aarch64/amu.c
- in lib/cpus/aarch64/cpuamu.c

The latter is renamed to cpuamu_ctx to avoid the name clash. To avoid a violation of Rule 8.3, the function amu_ctxs is also renamed (to cpuamu_ctxs), since it now returns a different type (cpuamu_ctx) than the other amu_ctxs function.

Fixed for:
make LOG_LEVEL=50 PLAT=fvp

Change-Id: Ieeb7e390ec2900fd8b775bef312eda93804a43ed
Signed-off-by: Daniel Boulby <daniel.boulby@arm.com>
-
Daniel Boulby authored
Use a _ prefix for macro arguments to prevent an argument from hiding variables of the same name in the outer scope.

Rule 5.3: An identifier declared in an inner scope shall not hide an identifier declared in an outer scope.

Fixed for:
make PLAT=fvp USE_COHERENT_MEM=0

Change-Id: If50c583d3b63799ee6852626b15be00c0f6b10a0
Signed-off-by: Daniel Boulby <daniel.boulby@arm.com>
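A minimal illustration of the class of shadowing that Rule 5.3 guards against and how the underscore-prefix convention avoids it; the macros below are made up for the example and are not from the patches themselves.

```c
#include <stdio.h>

/* Problematic pattern: the name "max" used inside the macro can collide
 * with (and hide) an identifier of the same name at the call site. */
#define CLAMP_BAD(x, hi)               \
    do {                               \
        int max = (hi);                \
        if ((x) > max) (x) = max;      \
    } while (0)

/* Convention described above: underscore-prefixed names inside the macro
 * cannot clash with ordinary identifiers in the surrounding scope. */
#define CLAMP(_x, _hi)                 \
    do {                               \
        int _max = (_hi);              \
        if ((_x) > _max) (_x) = _max;  \
    } while (0)

int main(void)
{
    int max = 100;
    int v = 250;

    CLAMP(v, max);          /* works as intended: clamps v to 100 */
    printf("%d\n", v);

    /* CLAMP_BAD(v, max); would expand to "int max = (max);", where the
     * inner "max" hides the caller's and is initialised from itself. */
    return 0;
}
```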
-
Daniel Boulby authored
Use a _ prefix for macro arguments to prevent an argument from hiding variables of the same name in the outer scope.

Rule 5.3: An identifier declared in an inner scope shall not hide an identifier declared in an outer scope.

Fixed for:
make LOG_LEVEL=50 PLAT=fvp

Change-Id: I67b6b05cbad4aeca65ce52981b4679b340604708
Signed-off-by: Daniel Boulby <daniel.boulby@arm.com>
-
- 11 Jun, 2018 1 commit
-
-
John Tsichritzis authored
Rule 21.15: The pointer arguments to the Standard Library functions memcpy, memmove and memcmp shall be pointers to qualified or unqualified versions of compatible types.

Basically, that means that both pointer arguments must be of the same type. However, even if the pointers passed as arguments to the above functions are of the same type, Coverity still reports a violation if we do pointer arithmetic directly in the function call. Thus the pointer arithmetic operations were moved out of the function arguments.

First detected on the following configuration:
make PLAT=fvp LOG_LEVEL=50

Change-Id: I8b912ec1bfa6f2d60857cb1bd453981fd7001b94
Signed-off-by: John Tsichritzis <john.tsichritzis@arm.com>
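An illustrative before/after for the kind of change described above (not one of the actual call sites): the copy is identical, only where the pointer arithmetic happens differs.

```c
#include <stdint.h>
#include <string.h>

void copy_tail(uint8_t *dst, const uint8_t *src, size_t off, size_t len)
{
    /* Before: arithmetic inside the argument list was flagged.
     * memcpy(dst + off, src + off, len); */

    /* After: compute the pointers first, then call memcpy. */
    uint8_t *d = dst + off;
    const uint8_t *s = src + off;

    memcpy(d, s, len);
}
```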
-
- 08 Jun, 2018 4 commits
-
-
Dimitris Papastamos authored
The Cortex-A76 implements SMCCC_ARCH_WORKAROUND_2 as defined in "Firmware interfaces for mitigating cache speculation vulnerabilities System Software on Arm Systems" [0].

Dynamic mitigation for CVE-2018-3639 is enabled/disabled by setting/clearing bit 16 (Disable load pass store) of `CPUACTLR2_EL1`.

NOTE: The generic code that implements dynamic mitigation does not currently implement the expected semantics when dispatching an SDEI event to a lower EL. This will be fixed in a separate patch.

[0] https://developer.arm.com/cache-speculation-vulnerability-firmware-specification

Change-Id: I8fb2862b9ab24d55a0e9693e48e8be4df32afb5a
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
-
Dimitris Papastamos authored
The workaround uses the instruction patching feature of the Ares CPU.

Change-Id: I868fce0dc0e8e41853dcce311f01ee3867aabb59
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
-
Dimitris Papastamos authored
Change-Id: Ia170c12d3929a616ba80eb7645c301066641f5cc
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
-
Isla Mitchell authored
Both the Cortex-Ares and Cortex-A76 CPUs use the ARM DynamIQ Shared Unit (DSU). The power-down and power-up sequences are therefore mostly managed in hardware, and the required software operations are simple.

Change-Id: I3a9447b5bdbdbc5ed845b20f6564d086516fa161
Signed-off-by: Isla Mitchell <isla.mitchell@arm.com>
-
- 07 Jun, 2018 1 commit
-
-
Dimitris Papastamos authored
When SMCCC_ARCH_WORKAROUND_1 is invoked from a lower EL running in AArch32 state, ensure that the SMC call will take a shortcut in EL3. This minimizes the time it takes to apply the mitigation in EL3.

When lower ELs run in AArch32, it is preferred that they execute the `BPIALL` instruction to invalidate the BTB. However, on some cores the `BPIALL` instruction may be a no-op, and thus they would benefit from making the SMCCC_ARCH_WORKAROUND_1 call go through the fast path.

Change-Id: Ia38abd92efe2c4b4a8efa7b70f260e43c5bda8a5
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
-
- 23 May, 2018 5 commits
-
-
Antonio Nino Diaz authored
This function can currently be accessed through the wrappers cm_init_context_by_index() and cm_init_my_context(). However, they only work on contexts that are associated with a CPU. By making this function public, it is possible to set up a context that isn't associated with any CPU. For consistency, it has been renamed to cm_setup_context().

Change-Id: Ib2146105abc8137bab08745a8adb30ca2c4cedf4
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
Dimitris Papastamos authored
Some CPUs may benefit from using a dynamic mitigation approach for CVE-2018-3639. A new SMC interface is defined to allow software executing in lower ELs to enable or disable the mitigation for their execution context.

It should be noted that, regardless of the state of the mitigation for lower ELs, code executing in EL3 is always mitigated against CVE-2018-3639.

NOTE: This change is a compatibility break for any platform using the declare_cpu_ops_workaround_cve_2017_5715 macro. Migrate to the declare_cpu_ops_wa macro instead.

Change-Id: I3509a9337ad217bbd96de9f380c4ff8bf7917013
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
-
Dimitris Papastamos authored
Implement the static mitigation for CVE-2018-3639 on Cortex-A57 and Cortex-A72.

Change-Id: I83409a16238729b84142b19e258c23737cc1ddc3
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
-
Dimitris Papastamos authored
For affected CPUs, this approach enables the mitigation during EL3 initialization, following every PE reset. No mechanism is provided to disable the mitigation at runtime.

This approach permanently mitigates the entire software stack and no additional mitigation code is required in other software components.

TF-A implements this approach for the following affected CPUs:

* Cortex-A57 and Cortex-A72, by setting bit 55 (Disable load pass store) of `CPUACTLR_EL1` (`S3_1_C15_C2_0`).
* Cortex-A73, by setting bit 3 of `S3_0_C15_C0_0` (not documented in the Technical Reference Manual (TRM)).
* Cortex-A75, by setting bit 35 (reserved in TRM) of `CPUACTLR_EL1` (`S3_0_C15_C1_0`).

Additionally, a new SMC interface is implemented to allow software executing in lower ELs to discover whether the system is mitigated against CVE-2018-3639.

Refer to "Firmware interfaces for mitigating cache speculation vulnerabilities System Software on Arm Systems" [0] for more information.

[0] https://developer.arm.com/cache-speculation-vulnerability-firmware-specification

Change-Id: I084aa7c3bc7c26bf2df2248301270f77bed22ceb
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
-
Dimitris Papastamos authored
This patch renames symbols and files relating to CVE-2017-5715 to make it easier to introduce new symbols and files for new CVE mitigations.

Change-Id: I24c23822862ca73648c772885f1690bed043dbc7
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
-
- 15 May, 2018 1 commit
-
-
Varun Wadekar authored
Flush the indirect branch predictor and RSB on entry to EL3 by issuing a newly added instruction for Denver CPUs. Support for this operation can be determined by comparing bits 19:16 of ID_AFR0_EL1 with 0b0001.

To achieve this without performing any branch instruction, a per-CPU vbar is installed which executes the workaround and then branches off to the corresponding vector entry in the main vector table. A side effect of this change is that the main vbar is configured before any reset handling. This is to allow the per-CPU reset function to override the vbar setting.

Change-Id: Ief493cd85935bab3cfee0397e856db5101bc8011
Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
-
- 09 May, 2018 2 commits
-
-
Varun Wadekar authored
The context management library initialises the CPU context for the secure/non-secure worlds to zero. This leads to zeros being stored to the actual registers when we restore the CPU context during a world switch.

Denver CPUs don't expect zero to be written to the implementation-defined actlr_el1 register at any point of time; writing a zero to some fields of this register results in an UNDEFINED exception.

This patch bases the context actlr_el1 value on the actual hardware register, to maintain parity with the expected settings.

Change-Id: I1c806d7ff12daa7fd1e5c72825494b81454948f2
Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
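A simplified sketch of the idea, seeding the saved context value from the live register instead of zero. The structure and helper below are hypothetical stand-ins, not the context-management library's actual accessors; the real register read would be an MRS in inline assembly.

```c
#include <stdint.h>

/* Hypothetical container for the EL1 system registers saved in a context. */
struct cpu_context {
    uint64_t actlr_el1;
    /* ... other saved EL1 system registers ... */
};

/* Stand-in for reading ACTLR_EL1; a fixed value keeps the sketch buildable. */
static uint64_t read_actlr_el1(void)
{
    return 0x1ULL;   /* pretend reset value of the register */
}

void init_el1_sysregs(struct cpu_context *ctx)
{
    /* Seed the saved value from the hardware instead of zero, so a later
     * context restore writes back what the CPU reset to rather than
     * clearing implementation-defined fields (which Denver rejects). */
    ctx->actlr_el1 = read_actlr_el1();
}
```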
-
Roberto Vargas authored
Using variables as format strings can create security problems when the user can control those strings. Some compilers generate warnings in such cases, even when the variables are constants and are not controlled by the user.

Change-Id: I65dee1d1b66feab38cbf298290a86fa56e6cca40
Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
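A minimal illustration of the warning class in question; the function is made up for the example.

```c
#include <stdio.h>

void log_message(const char *msg)
{
    /* Risky pattern: if msg ever contains conversion specifiers (e.g. a
     * user-controlled "%s%n"), printf interprets them. Compilers with
     * -Wformat-security warn here even when msg is a constant.
     *
     * printf(msg);
     */

    /* Safe pattern: the format string is a literal, the variable is data. */
    printf("%s", msg);
}
```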
-
- 04 May, 2018 6 commits
-
-
Jeenu Viswambharan authored
The ARMv8.4 RAS extensions introduce architectural support for software to inject faults into the system in order to test fault-handling software. This patch introduces the build option FAULT_INJECTION_SUPPORT to allow lower ELs to use the registers in the Standard Error Record to inject faults. The build option RAS_EXTENSION must also be enabled along with fault injection.

This feature is intended for testing purposes only, and it is advisable to keep it disabled for production images.

Change-Id: I6f7a4454b15aec098f9505a10eb188c2f928f7ea
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
-
Jeenu Viswambharan authored
EHF currently allows for registering interrupt handlers for defined priority ranges. This is primarily targeted at the various EL3 dispatchers, which own ranges of secure interrupt priorities in order to delegate execution to lower ELs.

The RAS support added by earlier patches necessitates registering handlers based on interrupt number, so that error handling agents can receive and handle specific Error Recovery or Fault Handling interrupts at EL3.

This patch introduces a macro, RAS_INTERRUPTS(), to declare an array of interrupt numbers and handlers. Error handling agents can use this macro to register handlers for individual RAS interrupts. The array is expected to be sorted in increasing order of interrupt number. As part of RAS initialisation, the list of all RAS interrupts is sorted by ID so that, given an interrupt, its handler can be looked up with a simple binary search (see the sketch below).

For an error handling agent that wants to handle a RAS interrupt, the platform must:

- Define PLAT_RAS_PRI to be the priority of all RAS exceptions.
- Enumerate interrupts to have the GIC driver program individual EL3 interrupts to the required priority range. This is required by EHF even before this patch.

Documentation to follow.

Change-Id: I9471e4887ff541f8a7a63309e9cd8f771f76aeda
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
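A paraphrased sketch of the sorted-table lookup described above; the type and field names are illustrative, not the framework's actual structures.

```c
#include <stddef.h>

struct ras_intr_handler {
    unsigned int intr_number;                       /* sorted ascending */
    int (*handler)(unsigned int intr, void *cookie);
};

/* Given a table sorted by interrupt number, find the registered handler
 * for an interrupt in O(log n) with a plain binary search. */
static const struct ras_intr_handler *
find_ras_handler(const struct ras_intr_handler *tbl, size_t count,
                 unsigned int intr)
{
    size_t lo = 0, hi = count;

    while (lo < hi) {
        size_t mid = lo + ((hi - lo) / 2U);

        if (tbl[mid].intr_number == intr) {
            return &tbl[mid];
        }
        if (tbl[mid].intr_number < intr) {
            lo = mid + 1U;
        } else {
            hi = mid;
        }
    }
    return NULL;   /* no agent registered for this interrupt */
}
```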
-
Jeenu Viswambharan authored
Previous patches added frameworks for handling RAS errors. This patch introduces features that the platform can use to enumerate and iterate over RAS nodes:

- The REGISTER_RAS_NODES() macro can be used to expose an array of ras_node_info_t structures. Each ras_node_info_t describes a RAS node, along with a handler for probing the node for an error and, if it did record an error, another handler to handle it.
- The macro for_each_ras_node() can be used to iterate over the registered RAS nodes, probe for, and handle any errors.

The common platform EA handler has been amended using the error handling primitives introduced by this and previous patches.

Change-Id: I2e13f65a88357bc48cd97d608db6c541fad73853
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
-
Jeenu Viswambharan authored
The ARMv8 RAS Extensions introduced Standard Error Records, a set of standard registers through which:

- the platform can configure RAS node policy, e.g. the notification mechanism;
- RAS nodes can record and expose error information for error handling agents.

Standard Error Records can be accessed either via memory-mapped registers or via System registers. This patch adds helper functions to access registers and fields within an error record.

Change-Id: I6594ba799f4a1789d7b1e45b3e17fd40e7e0ba5c
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
-
Jeenu Viswambharan authored
RAS extensions are mandatory for ARMv8.2 CPUs, but they are also an optional extension to the base ARMv8.0 architecture. This patch adds build system support to enable RAS features in ARM Trusted Firmware; a boolean build option, RAS_EXTENSION, is introduced for this.

With RAS_EXTENSION, an Error Synchronization Barrier (ESB) is inserted at every EL3 vector entry and exit. The ESBs synchronize pending external aborts before entering EL3, and therefore contain and attribute errors to lower EL execution. Any errors thus synchronized are detected via the DISR_EL1 register.

When RAS_EXTENSION is set to 1, HANDLE_EA_EL3_FIRST must also be set to 1.

Change-Id: I38a19d84014d4d8af688bd81d61ba582c039383a
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
-
Jeenu Viswambharan authored
At present, the function that restores general purpose registers also does the ERET. Refactor the restore code so that it restores the general purpose registers without the ERET, to complement the save function.

The macro save_x18_to_x29_sp_el0 was used only once, so it is removed and its contents expanded inline for readability.

No functional changes, but with this patch:

- The SMC return path incurs a branch-return and an additional register load.
- The unknown-SMC path restores registers x0 to x3.

Change-Id: I7a1a63e17f34f9cde810685d70a0ad13ca3b7c50
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
-