- 08 Nov, 2018 1 commit
Antonio Nino Diaz authored
All identifiers, regardless of use, that start with two underscores are reserved. This means they can't be used in header guards.

The style that this project now uses is the name of the file in capital letters, followed by '_H'. For example, for a file called "uart_example.h", the header guard is UART_EXAMPLE_H.

The exceptions are files that are imported from other projects:
- CryptoCell driver
- dt-bindings folders
- zlib headers

Change-Id: I50561bf6c88b491ec440d0c8385c74650f3c106e
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
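To make the convention concrete, here is a minimal sketch of a header following the rule above; the file name is the commit's own example and the declaration inside is hypothetical:

```c
/* uart_example.h */
#ifndef UART_EXAMPLE_H		/* guard derived from the file name, no leading underscores */
#define UART_EXAMPLE_H

/* Hypothetical declaration, only here to give the guard something to protect. */
int uart_example_init(void);

#endif /* UART_EXAMPLE_H */
```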

- 29 Oct, 2018 1 commit

Soby Mathew authored
This patch fixes up the AArch64 assembly code to use adrp/adr instructions instead of the ldr instruction for references to symbols. This allows these assembly sequences to be Position Independent.

Note that references to sizes have been replaced with calculation of the size at runtime. This is because a size is a constant value that does not depend on the execution address, and using PC-relative instructions to load it would make it relative to the execution address. Also, we cannot use the `ldr` instruction to load a size, as it generates a dynamic relocation entry which must *not* be fixed up, and it is difficult for a dynamic loader to differentiate which entries need to be skipped.

Change-Id: I8bf4ed5c58a9703629e5498a27624500ef40a836
Signed-off-by: Soby Mathew <soby.mathew@arm.com>

- 16 Oct, 2018 1 commit

Jeenu Viswambharan authored
Pointer authentication is an Armv8.3 feature that introduces instructions that can be used to authenticate and verify pointers. Pointer authentication instructions may be used at all ELs, but only when EL3 explicitly allows it; otherwise, their usage traps to EL3. Since EL3 doesn't have trap handling in place, this patch unconditionally disables all related traps to EL3 to avoid potential misconfiguration leading to an unhandled EL3 exception.

Fixes ARM-software/tf-issues#629

Change-Id: I9bd2efe0dc714196f503713b721ffbf05672c14d
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>

- 22 May, 2018 1 commit

Qixiang Xu authored
File: include/common/aarch64/el3_common_macros.S

Change-Id: I619401e961a3f627ad8864781b5f90bc747c3ddb
Signed-off-by: Qixiang Xu <qixiang.xu@arm.com>

- 07 Apr, 2018 2 commits

Jiafei Pan authored
The adr instruction requires the label's offset from the address of the instruction to be in the range +/-1MB. If the option "BL2_IN_XIP_MEM" is set to '1', in some cases BL2's RW memory will not be within +/-1MB of BL2's RO memory region, so we need to use the ldr instruction to cover this case.

Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com>
-
Jiafei Pan authored
In some use-cases BL2 will be stored in eXecute In Place (XIP) memory, like BL1. In these use-cases, it is necessary to initialize the RW sections in RAM, while leaving the RO sections in place. This patch enables this use-case with a new build option, BL2_IN_XIP_MEM. For now, this option is only supported when BL2_AT_EL3 is 1.

Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com>

- 18 Jan, 2018 1 commit

Roberto Vargas authored
This patch enables BL2 to execute at the highest exception level without any dependency on TF BL1. This enables platforms which already have a non-TF Boot ROM to directly load and execute BL2 and subsequent BL stages without the need for BL1. This is not currently possible because BL2 executes at S-EL1 and cannot jump straight to EL3.

Change-Id: Ief1efca4598560b1b8c8e61fbe26d1f44e929d69
Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>

- 11 Jan, 2018 1 commit

Dimitris Papastamos authored
Invalidate the Branch Target Buffer (BTB) on entry to EL3 by disabling and enabling the MMU. To achieve this without performing any branch instruction, a per-cpu vbar is installed which executes the workaround and then branches off to the corresponding vector entry in the main vector table. A side effect of this change is that the main vbar is configured before any reset handling. This is to allow the per-cpu reset function to override the vbar setting.

This workaround is enabled by default on the affected CPUs.

Change-Id: I97788d38463a5840a410e3cea85ed297a1678265
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>

- 30 Nov, 2017 1 commit

David Cunado authored
This patch adds a new build option, ENABLE_SVE_FOR_NS, which when set to 1 causes EL3 to check whether the Scalable Vector Extension (SVE) is implemented when entering and exiting the Non-secure world.

If SVE is implemented, EL3 will do the following:

- Entry to the Non-secure world: SIMD, FP and SVE functionality is enabled.
- Exit from the Non-secure world: SIMD, FP and SVE functionality is disabled.

As the SIMD and FP registers are part of the SVE Z-registers, any use of SIMD/FP functionality would corrupt the SVE registers.

The build option default is 1. The SVE functionality is only supported on AArch64, so the build option is set to zero when the target architecture is AArch32.

This build option is not compatible with CTX_INCLUDE_FPREGS - an assert will be raised on platforms where SVE is implemented and both ENABLE_SVE_FOR_NS and CTX_INCLUDE_FPREGS are set to 1. Also note this change prevents secure world use of FP&SIMD registers on SVE-enabled platforms. Existing Secure-EL1 Payloads will not work on such platforms unless ENABLE_SVE_FOR_NS is set to 0.

Additionally, on the first entry into the Non-secure world, the SVE functionality is enabled and the SVE Z-register length is set to the maximum size allowed by the architecture. This includes the use case where EL2 is implemented but not used.

Change-Id: Ie2d733ddaba0b9bef1d7c9765503155188fe7dae
Signed-off-by: David Cunado <david.cunado@arm.com>

- 20 Nov, 2017 1 commit

Dimitris Papastamos authored
Factor out SPE operations into a separate file. Use the publish-subscribe framework to drain the SPE buffers before entering the secure world. Additionally, enable SPE before entering the normal world.

A side effect of this change is that the profiling buffers are now only drained when a transition from the normal world to the secure world happens. Previously they were also drained on return from the secure world, which is unnecessary as SPE is not supported in S-EL1.

Change-Id: I17582c689b4b525770dbb6db098b3a0b5777b70a
Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>

- 22 Jun, 2017 1 commit

dp-arm authored
SPE is only supported in non-secure state. Accesses to SPE-specific registers from S-EL1 will trap to EL3. During a world switch, before `TTBR` is modified, the SPE profiling buffers are drained. This is to avoid a potential invalid memory access in S-EL1.

SPE is architecturally specified only for AArch64.

Change-Id: I04a96427d9f9d586c331913d815fdc726855f6b0
Signed-off-by: dp-arm <dimitris.papastamos@arm.com>

- 21 Jun, 2017 1 commit

David Cunado authored
This patch updates the el3_arch_init_common macro so that it fully initialises essential control registers rather than relying on hardware to set the reset values.

The context management functions are also updated to fully initialise the appropriate control registers when initialising the non-secure and secure context structures and when preparing to leave EL3 for a lower EL. This gives better alignment with the ARM ARM, which states that software must initialise RES0 and RES1 fields with 0 / 1.

This patch also corrects the following typo: "NASCR definitions" -> "NSACR definitions"

Change-Id: Ia8940b8351dc27bc09e2138b011e249655041cfc
Signed-off-by: David Cunado <david.cunado@arm.com>

- 03 May, 2017 1 commit

dp-arm authored
To make software license auditing simpler, use SPDX[0] license identifiers instead of duplicating the license text in every file.

NOTE: Files that have been imported from FreeBSD have not been modified.

[0]: https://spdx.org/

Change-Id: I80a00e1f641b8cc075ca5a95b10607ed9ed8761a
Signed-off-by: dp-arm <dimitris.papastamos@arm.com>
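For illustration, a typical source file header after this change would carry only an SPDX tag in place of the full license text; the BSD-3-Clause identifier and the copyright line below are shown as assumptions, not a quote from the tree:

```c
/*
 * Copyright (c) 2017, ARM Limited and Contributors. All rights reserved.
 *
 * SPDX-License-Identifier: BSD-3-Clause
 */
```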

- 31 Mar, 2017 1 commit

Douglas Raillard authored
Introduce new build option ENABLE_STACK_PROTECTOR. It enables compilation of all BL images with one of the GCC -fstack-protector-* options.

A new platform function plat_get_stack_protector_canary() is introduced. It returns a value that is used to initialize the canary for stack corruption detection. Returning a random value will prevent an attacker from predicting the value and greatly increase the effectiveness of the protection.

A message is printed at the ERROR level when a stack corruption is detected.

To be effective, the global data must be stored at an address lower than the base of the stacks. Failure to do so would allow an attacker to overwrite the canary as part of an attack which would void the protection.

FVP implementation of plat_get_stack_protector_canary is weak as there is no real source of entropy on the FVP. It therefore relies on a timer's value, which could be predictable.

Change-Id: Icaaee96392733b721fa7c86a81d03660d3c1bc06
Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
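A minimal sketch of how a platform might implement the canary hook described above; the u_register_t return type is written here as an assumption, and trng_read() is a hypothetical helper standing in for whatever entropy source the platform actually has:

```c
#include <stdint.h>

typedef uintptr_t u_register_t;		/* assumed register-width type */

/* Hypothetical platform entropy source. */
extern uint64_t trng_read(void);

/*
 * Called by the stack-protector support code to seed the canary.
 * A predictable value (e.g. a timer) weakens the protection, as the
 * commit message notes for the FVP implementation.
 */
u_register_t plat_get_stack_protector_canary(void)
{
	return (u_register_t)trng_read();
}
```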

- 15 Feb, 2017 1 commit

dp-arm authored
Trusted Firmware currently has no support for secure self-hosted debug. To avoid unexpected exceptions, disable software debug exceptions, other than software breakpoint instruction exceptions, from all exception levels in secure state. This applies to both AArch32 and AArch64 EL3 initialization.

Change-Id: Id097e54a6bbcd0ca6a2be930df5d860d8d09e777
Signed-off-by: dp-arm <dimitris.papastamos@arm.com>

- 06 Feb, 2017 1 commit

Douglas Raillard authored
Introduce a zeromem_dczva function on AArch64 that can handle unaligned addresses and make use of the DC ZVA instruction to zero a whole block at a time. This zeroing takes place directly in the cache to speed it up without doing external memory accesses.

Remove the zeromem16 function on AArch64 and replace it with an alias to zeromem. The zeromem16 function is now deprecated.

Remove the 16-byte alignment constraint on __BSS_START__ in firmware-design.md as it is no longer mandatory (it used to comply with zeromem16 requirements). Change the 16-byte alignment constraints in SP min's linker script to an 8-byte alignment constraint, as the AArch32 zeromem implementation is now more efficient on 8-byte aligned addresses.

Introduce zero_normalmem and zeromem helpers in a platform-agnostic header that are implemented this way:
* AArch32:
  * zero_normalmem: zero using usual data access
  * zeromem: alias for zero_normalmem
* AArch64:
  * zero_normalmem: zero normal memory using the DC ZVA instruction (needs MMU enabled)
  * zeromem: zero using usual data access

Usage guidelines: in most cases, zero_normalmem should be preferred. There are 2 scenarios where zeromem (or memset) must be used instead:
* Code that must run with MMU disabled (which means all memory is considered device memory for data accesses).
* Code that fills device memory with null bytes.

Optionally, the following rule can be applied if performance is important:
* Code zeroing small areas (a few bytes) that are not secrets should use memset to take advantage of compiler optimizations.

Note: code zeroing security-related critical information should use zero_normalmem/zeromem instead of memset, to avoid removal by compilers' optimizations in some cases or misbehaving versions of GCC.

Fixes ARM-software/tf-issues#408

Change-Id: Iafd9663fc1070413c3e1904e54091cf60effaa82
Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
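A short sketch of the usage guidelines above; the prototypes are written as assumed from the commit text (the real header may use different parameter types), and the buffers are hypothetical:

```c
#include <stddef.h>
#include <string.h>

/* Prototypes as assumed from the commit message. */
void zeromem(void *mem, size_t length);		/* safe with MMU off / device memory */
void zero_normalmem(void *mem, size_t length);	/* normal memory, MMU on (DC ZVA on AArch64) */

static char key_material[64];	/* secret: must not be optimized away */
static char scratch[16];	/* small, non-secret scratch area */

void example_cleanup(void)
{
	/* Preferred path for normal memory with the MMU enabled. */
	zero_normalmem(key_material, sizeof(key_material));

	/* Small, non-secret area: plain memset lets the compiler optimize. */
	memset(scratch, 0, sizeof(scratch));
}
```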

- 23 Jan, 2017 1 commit

Masahiro Yamada authored
One nasty part of ATF is that some boolean macros are always defined as 1 or 0, while the rest of them are only defined under certain conditions.

For the former group, "#if FOO" or "#if !FOO" must be used, because "#ifdef FOO" is always true. (Options passed by $(call add_define,) are such cases.) For the latter, "#ifdef FOO" or "#ifndef FOO" should be used, because checking the value of an undefined macro is strange.

Here, IMAGE_BL* is handled by make_helpers/build_macro.mk as follows:

    $(eval IMAGE := IMAGE_BL$(call uppercase,$(3)))

    $(OBJ): $(2)
    	@echo "  CC      $$<"
    	$$(Q)$$(CC) $$(TF_CFLAGS) $$(CFLAGS) -D$(IMAGE) -c $$< -o $$@

This means IMAGE_BL* is defined when building the corresponding image, but *undefined* for the other images. So, IMAGE_BL* belongs to the latter group, where we should use #ifdef or #ifndef.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
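A hedged illustration of the two guard styles described above; ENABLE_FOO stands in for any always-defined 1/0 option passed via $(call add_define,), and the declarations are placeholders:

```c
/* Always-defined 1/0 option: test its value. */
#if ENABLE_FOO
void foo_init(void);			/* placeholder declaration */
#endif

/* IMAGE_BL31 is only defined when building BL31: test its existence. */
#ifdef IMAGE_BL31
void bl31_only_helper(void);		/* placeholder declaration */
#endif
```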

- 09 Nov, 2016 1 commit

David Cunado authored
In order to avoid unexpected traps into EL3/MON mode, this patch resets the debug registers MDCR_EL3 and MDCR_EL2 for AArch64, and SDCR and HDCR for AArch32.

MDCR_EL3/SDCR is zeroed when EL3/MON mode is entered, at the start of BL1 and BL31/SP_MIN. For MDCR_EL2/HDCR, this patch zeroes the bits that are architecturally UNKNOWN on reset. This is done when exiting from EL3/MON mode, but only on platforms that support EL2/HYP mode yet choose to exit to EL1/SVC mode.

Fixes ARM-software/tf-issues#430

Change-Id: Idb992232163c072faa08892251b5626ae4c3a5b6
Signed-off-by: David Cunado <david.cunado@arm.com>

- 19 Jul, 2016 1 commit

Soby Mathew authored
This patch moves assembler macros which are not architecture specific to a new file `asm_macros_common.S`, and moves `el3_common_macros.S` into the `aarch64`-specific folder.

Change-Id: I444a1ee3346597bf26a8b827480cd9640b38c826

- 07 Apr, 2016 1 commit

Soby Mathew authored
This patch enables the SCR_EL3.SIF (Secure Instruction Fetch) bit in BL1 and BL31 common architectural setup code. When in secure state, this disables instruction fetches from Non-secure memory.

NOTE: THIS COULD BREAK PLATFORMS THAT HAVE SECURE WORLD CODE EXECUTING FROM NON-SECURE MEMORY, BUT THIS IS CONSIDERED UNLIKELY AND IS A SERIOUS SECURITY RISK.

Fixes ARM-Software/tf-issues#372

Change-Id: I684e84b8d523c3b246e9a5fabfa085b6405df319

- 30 Mar, 2016 1 commit

Gerald Lejeune authored
Asynchronous abort exceptions generated by the platform during cold boot are not taken in EL3 unless SCR_EL3.EA is set. Therefore, the EA bit is set along with the RES1 bits in early BL1 and BL31 architecture initialisation. Further write accesses to SCR_EL3 preserve these bits during cold boot.

A build flag controls the SCR_EL3.EA value, i.e. whether asynchronous abort exceptions keep being trapped by EL3 after cold boot or not.

For further reference, SError Interrupts are also known as asynchronous external aborts.

On Cortex-A53 revisions below r0p2, asynchronous abort exceptions are taken in EL3 whatever the SCR_EL3.EA value is.

Fixes arm-software/tf-issues#368

Signed-off-by: Gerald Lejeune <gerald.lejeune@st.com>

- 14 Mar, 2016 1 commit

Antonio Nino Diaz authored
Added a new platform porting function, plat_panic_handler, to allow platforms to handle unexpected error situations. It must be implemented in assembly as it may be called before the C environment is initialized. A default implementation is provided, which simply spins.

Corrected all dead loops in generic code to call this function instead. This includes the dead loop that occurs at the end of the call to panic().

All unnecessary wfis have been removed from bl32/tsp/aarch64/tsp_exceptions.S.

Change-Id: I67cb85f6112fa8e77bd62f5718efcef4173d8134
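A hedged sketch of the pattern this commit describes for generic code: instead of spinning in place, dead-end paths delegate to the platform hook. The noreturn attribute and the simplified panic() body are assumptions, not the tree's actual implementation:

```c
/* Assumed prototype of the platform porting function (implemented in assembly). */
void plat_panic_handler(void) __attribute__((noreturn));

void panic(void)
{
	/*
	 * Previously generic code would loop forever here; now it hands
	 * control to the platform, whose default implementation spins.
	 */
	plat_panic_handler();
}
```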

- 07 Mar, 2016 1 commit

Antonio Nino Diaz authored
The previous reset code in BL1 performed the following steps in order:

1. Warm/cold boot detection. If it's a warm boot, jump to the warm boot entrypoint.
2. Primary/secondary CPU detection. If it's a secondary CPU, jump to plat_secondary_cold_boot_setup(), which doesn't return.
3. CPU initialisations (cache, TLB...).
4. Memory and C runtime initialization.

For a secondary CPU, steps 3 and 4 are never reached. This shouldn't be a problem in most cases, since current implementations of plat_secondary_cold_boot_setup() either panic or power down the secondary CPUs. The main concern is the lack of secondary CPU initialization when bare metal EL3 payloads are used, in case they don't take care of this initialisation themselves.

This patch moves the detection of primary/secondary CPU after step 3, so that the CPU initialisations are performed per-CPU, while the memory and C runtime initialisation are only performed on the primary CPU. The diagrams used in the ARM Trusted Firmware Reset Design documentation file have been updated to reflect the new boot flow.

Platform ports might be affected by this patch depending on the behaviour of plat_secondary_cold_boot_setup(), as the state of the platform when entering this function will be different.

Fixes ARM-software/tf-issues#342

Change-Id: Icbf4a0ee2a3e5b856030064472f9fa6696f2eb9e

- 14 Dec, 2015 1 commit

Juan Castillo authored
This patch removes the dash character from the image name, to follow the image terminology in the Trusted Firmware Wiki page: https://github.com/ARM-software/arm-trusted-firmware/wiki

Changes apply to output messages, comments and documentation. Non-ARM platform files have been left unmodified.

Change-Id: Ic2a99be4ed929d52afbeb27ac765ceffce46ed76

- 09 Dec, 2015 1 commit

Yatharth Kochar authored
As of now, BL1 loads and executes BL2 based on hard-coded information provided in BL1. But due to the addition of support for the upcoming Firmware Update feature, BL1 now requires a more flexible approach to load and run different images using information provided by the platform.

This patch adds a new mechanism to load and execute images based on platform-provided image IDs. BL1 now queries the platform to fetch the image ID of the next image to be loaded and executed. In order to achieve this, a new struct image_desc_t was added which holds the information about images, such as ep_info and image_info.

This patch introduces the following platform porting functions:

unsigned int bl1_plat_get_next_image_id(void);
  This is used to identify the next image to be loaded and executed by BL1.

struct image_desc *bl1_plat_get_image_desc(unsigned int image_id);
  This is used to retrieve the image_desc for a given image_id.

void bl1_plat_set_ep_info(unsigned int image_id, struct entry_point_info *ep_info);
  This function allows platforms to update ep_info for a given image_id.

The plat_bl1_common.c file provides default weak implementations of all the above functions: `bl1_plat_get_image_desc()` always returns the BL2 image descriptor, `bl1_plat_get_next_image_id()` always returns the BL2 image ID, and `bl1_plat_set_ep_info()` is empty and just returns. These functions get compiled into all BL1 platforms by default.

Platform setup in BL1, using `bl1_platform_setup()`, is now done _after_ the initialization of the authentication module. This change provides the opportunity to use authentication while doing the platform setup in BL1.

In order to store secure/non-secure context, BL31 uses percpu_data[] to store a context pointer for each core. In the case of BL1, only the primary CPU will be active, hence percpu_data[] is not required to store the context pointer. This patch introduces bl1_cpu_context[] and bl1_cpu_context_ptr[] to store the context and context pointers respectively. It also re-defines cm_get_context() and cm_set_context() for BL1 in bl1/bl1_context_mgmt.c.

BL1 now follows the BL31 pattern of using SP_EL0 for the C runtime environment, to support resuming execution from a previously saved context.

NOTE: THE `bl1_plat_set_bl2_ep_info()` PLATFORM PORTING FUNCTION IS NO LONGER CALLED BY BL1 COMMON CODE. PLATFORMS THAT OVERRIDE THIS FUNCTION MAY NEED TO IMPLEMENT `bl1_plat_set_ep_info()` INSTEAD TO MAINTAIN EXISTING BEHAVIOUR.

Change-Id: Ieee4c124b951c2e9bc1c1013fa2073221195d881
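A minimal sketch of how a platform might override two of the hooks listed above, using the prototypes quoted in the commit message; the BL2_IMAGE_ID constant and the bl2_image_desc object are assumptions standing in for the platform's real definitions:

```c
#include <stddef.h>

/* Assumed image ID and descriptor object; the real ones come from the
 * TF headers and the platform port. */
#define BL2_IMAGE_ID	1U

struct image_desc;
extern struct image_desc bl2_image_desc;

/* Return the ID of the next image BL1 should load and execute. */
unsigned int bl1_plat_get_next_image_id(void)
{
	return BL2_IMAGE_ID;
}

/* Return the descriptor for the requested image, or NULL if unknown. */
struct image_desc *bl1_plat_get_image_desc(unsigned int image_id)
{
	return (image_id == BL2_IMAGE_ID) ? &bl2_image_desc : NULL;
}
```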

- 14 Sep, 2015 1 commit

Achin Gupta authored
On the ARMv8 architecture, cache maintenance operations by set/way on the last level of integrated cache do not affect the system cache. This means that such a flush or clean operation could result in the data being pushed out to the system cache rather than main memory. Another CPU could access this data before it enables its data cache or MMU. Such accesses could be serviced from main memory instead of the system cache. If the data in the system cache has not yet been flushed or evicted to main memory, then there could be a loss of coherency. The only mechanism to guarantee that main memory will be updated is to use cache maintenance operations to the PoC by MVA (see section D3.4.11, "System level caches", of the ARMv8-A Reference Manual, issue A.g / ARM DDI 0487A.G).

This patch removes the reliance of Trusted Firmware on the flush by set/way operation to ensure visibility of data in main memory. Cache maintenance operations by MVA are now used instead. The following are the broad categories of changes:

1. The RW areas of BL2/BL31/BL32 are invalidated by MVA before the C runtime is initialised. This ensures that any stale cache lines at any level of cache are removed.
2. Updates to global data in runtime firmware (BL31) by the primary CPU are made visible to secondary CPUs using a cache clean operation by MVA.
3. Cache maintenance by set/way operations are only used prior to power down.

NOTE: NON-UPSTREAM TRUSTED FIRMWARE CODE SHOULD MAKE EQUIVALENT CHANGES IN ORDER TO FUNCTION CORRECTLY ON PLATFORMS WITH SUPPORT FOR SYSTEM CACHES.

Fixes ARM-software/tf-issues#205

Change-Id: I64f1b398de0432813a0e0881d70f8337681f6e9a
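As a hedged sketch of point 2 above: a writer CPU cleans the updated global by virtual address before a reader CPU, whose caches may still be off, consumes it. The flush_dcache_range() prototype is written here as an assumption, and the shared global is a placeholder:

```c
#include <stdint.h>
#include <stddef.h>

/* Assumed helper: clean+invalidate data cache by MVA over [addr, addr + size). */
void flush_dcache_range(uintptr_t addr, size_t size);

static volatile uint32_t boot_flag;	/* placeholder shared global */

void publish_boot_flag(uint32_t value)
{
	boot_flag = value;

	/*
	 * Clean by MVA to the PoC so a secondary CPU that has not yet
	 * enabled its MMU/caches reads the updated value from memory,
	 * rather than relying on a set/way flush.
	 */
	flush_dcache_range((uintptr_t)&boot_flag, sizeof(boot_flag));
}
```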

- 13 Aug, 2015 2 commits

Soby Mathew authored
This patch adds the necessary documentation updates to porting_guide.md for the changes in the platform interface mandated as a result of the new PSCI Topology and power state management frameworks. It also adds a new document, `platform-migration-guide.md`, to aid the migration of existing platform ports to the new API.

The patch fixes the implementation and callers of plat_is_my_cpu_primary() to use w0 as the return parameter, as implied by the function signature, rather than x0, which was used previously.

Change-Id: Ic11e73019188c8ba2bd64c47e1729ff5acdcdd5b
-
Soby Mathew authored
This patch migrates the rest of the Trusted Firmware, excluding the Secure Payload and the dispatchers, to the new platform and context management API. The per-cpu data framework APIs which took MPIDRs as their arguments are deleted, and only the ones which take a core index as parameter are retained.

Change-Id: I839d05ad995df34d2163a1cfed6baa768a5a595d

- 24 Jun, 2015 1 commit

Sandrine Bailleux authored
This patch fixes the build-time condition deciding whether the read-write data should be relocated from ROM to RAM. It was incorrectly using __DATA_ROM_START__, which is a linker symbol and not a compiler build flag. As a result, the relocation code was always compiled out.

This bug was introduced by the following patch: "Rationalize reset handling code"

Change-Id: I1c8d49de32f791551ab4ac832bd45101d6934045
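To illustrate the distinction this fix relies on (not the actual fix in the tree): a linker symbol is only visible to compiled code through an extern declaration, while a preprocessor condition can only see macros defined by the build system. The DATA_IN_ROM flag and the helper below are hypothetical:

```c
/* Hypothetical linker-provided symbol: an address, not a preprocessor macro. */
extern char __DATA_ROM_START__[];

void copy_data_from_rom(void);	/* hypothetical relocation routine */

void init_rw_data(void)
{
#if __DATA_ROM_START__
	/*
	 * Wrong: the preprocessor treats the unknown identifier as 0,
	 * so this branch is silently compiled out.
	 */
	copy_data_from_rom();
#endif

#if DATA_IN_ROM
	/* Right: test a flag the build system defines, e.g. -DDATA_IN_ROM=1. */
	copy_data_from_rom();
#endif
}
```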

- 04 Jun, 2015 1 commit

Sandrine Bailleux authored
The attempt to run the CPU reset code as soon as possible after reset results in highly complex conditional code relating to the RESET_TO_BL31 option. This patch relaxes this requirement a little. In the BL1, BL3-1 and PSCI entrypoints code, the sequence of operations is now as follows:

1) Detect whether it is a cold or warm boot;
2) For cold boot, detect whether it is the primary or a secondary CPU. This is needed to handle multiple CPUs entering cold reset simultaneously;
3) Run the CPU init code.

This patch also abstracts the EL3 registers initialisation done by the BL1, BL3-1 and PSCI entrypoints into common code. This improves code re-use and consolidates the code flows for different types of systems.

NOTE: THE FUNCTION plat_secondary_cold_boot() IS NOW EXPECTED TO NEVER RETURN. THIS PATCH FORCES PLATFORM PORTS THAT RELIED ON THE FORMER RETRY LOOP AT THE CALL SITE TO MODIFY THEIR IMPLEMENTATION. OTHERWISE, SECONDARY CPUS WILL PANIC.

Change-Id: If5ecd74d75bee700b1bd718d23d7556b8f863546