- 29 Nov, 2017 3 commits
-
-
Soby Mathew authored
This patch fixes a couple of issues for AArch32 builds on ARM reference platforms:

1. arm_def.h previously defined the same BL32_BASE value for AArch64 and AArch32 builds. Since BL31 is not present in AArch32 mode, this meant that the BL31 memory was left empty when built for AArch32. Hence this patch allocates BL32 to the memory region occupied by BL31 for AArch32 builds. As a side-effect of this change, the ARM_TSP_RAM_LOCATION macro cannot be used to control the load address of BL32 in AArch32 mode, which was never the intention of the macro anyway.

2. A static assert is added to the sp_min linker script to check that the progbits are within the bounds expected when overlaid with other images.

3. Fix specifying `SPD` when building Juno for AArch32 mode. Due to the quirks involved when building Juno for AArch32 mode, the build option SPD needed to be specified. This patch corrects this and also updates the documentation in the user guide.

4. Exclude BL31 from the build and FIP when building Juno for AArch32 mode. As a result, the previous assumption that BL31 must always be present is removed and the certificates for BL31 are only generated if `NEED_BL31` is defined.

Change-Id: I1c39bbc0abd2be8fbe9f2dea2e9cb4e3e3e436a8
Signed-off-by: Soby Mathew <soby.mathew@arm.com>
-
Antonio Nino Diaz authored
When defining different sections in linker scripts, they need to be aligned to multiples of the page size. In most linker scripts this is done by aligning to the hardcoded value 4096 instead of PAGE_SIZE. This may be confusing when looking at the codebase, as 4096 is also used in some places that aren't meant to be a multiple of the page size. Change-Id: I36c6f461c7782437a58d13d37ec8b822a1663ec1 Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
Dimitris Papastamos authored
The `ENABLE_AMU` build option can be used to enable the architecturally defined AMU counters. At present, there is no support for the auxiliary counter group. Change-Id: Ifc7532ef836f83e629f2a146739ab61e75c4abc8 Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
-
- 08 Nov, 2017 1 commit
-
-
Etienne Carriere authored
Clear exclusive monitor on SMC and FIQ entry for ARMv7 cores. Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
-
- 23 Oct, 2017 1 commit
-
-
Jeenu Viswambharan authored
This light-weight framework enables some EL3 components to publish events which other EL3 components can subscribe to. Publisher can optionally pass opaque data for subscribers. The order in which subscribers are called is not defined. Firmware design updated. Change-Id: I24a3a70b2b1dedcb1f73cf48313818aebf75ebb6 Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
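For illustration only, a minimal sketch of how a subscriber and publisher might pair up in such a framework; the header names, the SUBSCRIBE_TO_EVENT/PUBLISH_EVENT macros and the event name are assumptions, not part of the original patch:

    #include <stddef.h>
    #include <debug.h>		/* INFO() -- assumed TF header */
    #include <pubsub.h>		/* SUBSCRIBE_TO_EVENT/PUBLISH_EVENT -- assumed */

    /* Subscriber: handlers receive the publisher's opaque data, if any. */
    static void *log_core_on_finish(const void *arg)
    {
        (void)arg;
        INFO("CPU power-on completed\n");
        return NULL;
    }
    SUBSCRIBE_TO_EVENT(psci_cpu_on_finish, log_core_on_finish);

    /* Publisher side, e.g. somewhere in the PSCI power-on path. */
    PUBLISH_EVENT(psci_cpu_on_finish);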
-
- 13 Oct, 2017 1 commit
-
-
David Cunado authored
Currently TF does not initialise the PMCR_EL0 register in the secure context or save/restore the register. In particular, the DP field may not be set to one to prohibit cycle counting in the secure state, even though event counting generally is prohibited via the default setting of MDCR_EL3.SPME to 0.

This patch initialises PMCR_EL0.DP to one in the secure state to prohibit cycle counting and also initialises other fields that have an architecturally UNKNOWN reset value. Additionally, PMCR_EL0 is added to the list of registers that are saved and restored during a world switch.

Similar changes are made for PMCR for the AArch32 execution state.

NOTE: secure world code at lower ELs that assumes other values in PMCR_EL0 will be impacted.

Change-Id: Iae40e8c0a196d74053accf97063ebc257b4d2f3a
Signed-off-by: David Cunado <david.cunado@arm.com>
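A sketch of the secure-context initialisation this describes; the accessor and bit-mask names are assumptions (DP is bit 5 of PMCR_EL0 per the ARM ARM):

    #include <stdint.h>
    #include <arch_helpers.h>	/* read_pmcr_el0()/write_pmcr_el0() -- assumed accessors */

    #define PMCR_EL0_DP_BIT		(1ULL << 5)	/* disable cycle counting where prohibited */

    static void init_secure_pmcr(void)
    {
        uint64_t pmcr = read_pmcr_el0();

        /* Prohibit cycle counting while executing in the Secure state. */
        pmcr |= PMCR_EL0_DP_BIT;
        write_pmcr_el0(pmcr);
    }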
-
- 05 Sep, 2017 1 commit
-
-
David Cunado authored
When ARM TF executes in AArch32 state, the NS version of SCTLR is not being set during warmboot flow. This results in secondary CPUs entering the Non-secure world with the default reset value in SCTLR. This patch explicitly sets the value of the NS version of SCTLR during the warmboot flow rather than relying on the h/w. Change-Id: I86bf52b6294baae0a5bd8af0cd0358cc4f55c416 Signed-off-by: David Cunado <david.cunado@arm.com>
-
- 21 Aug, 2017 1 commit
-
-
Julius Werner authored
Some error paths that lead to a crash dump will overwrite the value in the x30 register by calling functions with the no_ret macro, which resolves to a BL instruction. This is not very useful and not what the reader would expect, since a crash dump should usually show all registers in the state they were in when the exception happened. This patch replaces the offending function calls with a B instruction to preserve the value in x30. Change-Id: I2a3636f2943f79bab0cd911f89d070012e697c2a Signed-off-by: Julius Werner <jwerner@chromium.org>
-
- 15 Aug, 2017 1 commit
-
-
Julius Werner authored
Assembler programmers are used to being able to define functions with a specific alignment with a pattern like this:

  .align X
  myfunction:

However, this pattern is subtly broken when instead of a direct label like 'myfunction:', you use the 'func myfunction' macro that's standard in Trusted Firmware. Since the func macro declares a new section for the function, the .align directive written above it actually applies to the *previous* section in the assembly file, and the function it was supposed to apply to is linked with default alignment.

An extreme case can be seen in Rockchip's plat_helpers.S which contains this code:

  [...]
  endfunc plat_crash_console_putc

  .align 16
  func platform_cpu_warmboot
  [...]

This assembles into the following plat_helpers.o:

  Sections:
  Idx Name                            Size      [...]  Algn
    9 .text.plat_crash_console_putc   00010000  [...]  2**16
   10 .text.platform_cpu_warmboot     00000080  [...]  2**3

As can be seen, the *previous* function actually got the alignment constraint, and it is also 64KB big even though it contains only two instructions, because the .align directive at the end of its section forces the assembler to insert a giant sled of NOPs. The function we actually wanted to align has the default constraint. This code only works at all because the linker just happens to put the two functions right behind each other when linking the final image, and since the end of plat_crash_console_putc is aligned the start of platform_cpu_warmboot will also be. But it still wastes almost 64KB of image space unnecessarily, and it will break under certain circumstances (e.g. if the plat_crash_console_putc function becomes unused and its section gets garbage-collected out).

There's no real way to fix this with the existing func macro. Code like

  func myfunc
  .align X

happens to do the right thing, but is still not really correct code (because the function label is inserted before the .align directive, so the assembler is technically allowed to insert padding at the beginning of the function which would then get executed as instructions if the function was called). Therefore, this patch adds a new parameter with a default value to the func macro that allows overriding its alignment. Also fix up all existing instances of this dangerous antipattern.

Change-Id: I5696a07e2fde896f21e0e83644c95b7b6ac79a10
Signed-off-by: Julius Werner <jwerner@chromium.org>
-
- 09 Aug, 2017 1 commit
-
-
Etienne Carriere authored
Add support for a minimal secure interrupt service in sp_min for the AArch32 implementation. Hard-code that only FIQs are handled. Introduce the boolean build directive SP_MIN_WITH_SECURE_FIQ to enable FIQ handling from SP_MIN. Configure SCR[FIQ] and SCR[FW] from generic code for both cold and warm boots to handle FIQs in the secure state from the monitor. Since, with the SP_MIN architecture, FIQs are always trapped when the system executes in the non-secure state, discard the relay of the secure/non-secure state in the FIQ handler. Change-Id: I1f7d1dc7b21f6f90011b7f3fcd921e455592f5e7 Signed-off-by: Etienne Carriere <etienne.carriere@st.com>
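A rough sketch of the SCR programming described above for the AArch32 monitor; the bit positions follow the AArch32 ARM ARM, while the helper names are assumptions:

    #include <stdint.h>
    #include <arch_helpers.h>	/* read_scr()/write_scr()/isb() -- assumed helpers */

    #define SCR_FIQ_BIT	(1U << 2)	/* FIQs are taken to Monitor mode */
    #define SCR_FW_BIT	(1U << 4)	/* controls writability of CPSR.F in Non-secure state */

    static void sp_min_configure_secure_fiq(void)
    {
        uint32_t scr = read_scr();

        /* Trap FIQs to the monitor while the Normal world executes. */
        scr |= SCR_FIQ_BIT;
        write_scr(scr);
        isb();
    }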
-
- 23 Jun, 2017 1 commit
-
-
Etienne Carriere authored
security_state is either 0 or 1. Prevent a potential sign conversion error (setting -Werror=sign-conversion results in a build error). Signed-off-by: Yann Gautier <yann.gautier@st.com> Signed-off-by: Etienne Carriere <etienne.carriere@st.com>
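For context, this is the kind of pattern -Werror=sign-conversion rejects, and an explicit conversion that silences it; the function here is made up:

    #include <stdint.h>

    /* Hypothetical example: security_state arrives as a plain int (0 or 1). */
    static uint32_t pack_security_state(int security_state)
    {
        /* return security_state;  -- implicit int -> uint32_t sign conversion */
        return (security_state != 0) ? 1U : 0U;	/* explicit, warning-free */
    }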
-
- 21 Jun, 2017 1 commit
-
-
David Cunado authored
This patch updates the el3_arch_init_common macro so that it fully initialises essential control registers rather than relying on hardware to set the reset values.

The context management functions are also updated to fully initialise the appropriate control registers when initialising the non-secure and secure context structures and when preparing to leave EL3 for a lower EL. This gives better alignment with the ARM ARM, which states that software must initialise RES0 and RES1 fields with 0 / 1.

This patch also corrects the following typos: "NASCR definitions" -> "NSACR definitions"

Change-Id: Ia8940b8351dc27bc09e2138b011e249655041cfc
Signed-off-by: David Cunado <david.cunado@arm.com>
-
- 20 Jun, 2017 2 commits
-
-
Dimitris Papastamos authored
Flush the console so the errata report is printed correctly before exit to normal world. Change-Id: Idd6b5199b5fb8bda9d16a7b5c6426cdda7c73167 Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
-
Dimitris Papastamos authored
On ARM platforms before exiting from SP_MIN ensure that the default console is switched to the runtime serial port. Change-Id: I0ca0d42cc47e345d56179eac16aa3d6712767c9b Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
-
- 12 May, 2017 1 commit
-
-
Soby Mathew authored
The current SMC context data structure `smc_ctx_t` and related helpers are optimized for the case when an SMC call does not result in a world switch. This was the case for the SP_MIN and BL1 cold boot flow. But the firmware update use case requires a world switch as a result of an SMC, and the current SMC context helpers were not helping very much in this regard. Therefore this patch makes the following changes to improve this:

1. Add monitor stack pointer, `sp_mon`, to `smc_ctx_t`. The C runtime stack pointer in monitor mode, `sp_mon`, is added to the SMC context, and the `smc_ctx_t` pointer is cached in `sp_mon` prior to exit from Monitor mode. This makes it easier to retrieve the context when the next SMC call happens. As a result of this change, the SMC context helpers no longer depend on the stack to save and restore the registers. This aligns it with the context save and restore mechanism in AArch64.

2. Add SCR to `smc_ctx_t`. Adding the SCR register to `smc_ctx_t` makes it easier to manage this register state when switching between the non-secure and secure world as a result of an SMC call.

Change-Id: I5e12a7056107c1701b457b8f7363fdbf892230bf
Signed-off-by: Soby Mathew <soby.mathew@arm.com>
Signed-off-by: dp-arm <dimitris.papastamos@arm.com>
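A simplified sketch of the structure change being described; the field list is only indicative and is not the exact `smc_ctx_t` layout:

    #include <stdint.h>

    typedef struct smc_ctx {
        uint32_t r0;		/* GP registers saved across the SMC... */
        /* ...r1-r12, banked SP/LR, SPSR, PC, etc. omitted here... */
        uint32_t scr;		/* new: per-world SCR, eases world switches */
        uint32_t sp_mon;	/* new: monitor-mode C runtime stack pointer */
    } smc_ctx_t;

The idea is that, before leaving Monitor mode, the current `smc_ctx_t` pointer is stashed where the next SMC entry can find it without touching the stack, mirroring the AArch64 context save/restore approach.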
-
- 03 May, 2017 1 commit
-
-
dp-arm authored
To make software license auditing simpler, use SPDX[0] license identifiers instead of duplicating the license text in every file. NOTE: Files that have been imported from FreeBSD have not been modified. [0]: https://spdx.org/ Change-Id: I80a00e1f641b8cc075ca5a95b10607ed9ed8761a Signed-off-by: dp-arm <dimitris.papastamos@arm.com>
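With this change, the multi-line licence block at the top of each source file reduces to a single tag, for example:

    /*
     * Copyright (c) 2017, ARM Limited and Contributors. All rights reserved.
     *
     * SPDX-License-Identifier: BSD-3-Clause
     */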
-
- 26 Apr, 2017 1 commit
-
-
David Cunado authored
Since Issue B (November 2016) of the SMC Calling Convention document, standard SMC calls have been renamed to yielding SMC calls to help avoid confusion with the standard service SMC range, which remains unchanged. http://infocenter.arm.com/help/topic/com.arm.doc.den0028b/ARM_DEN0028B_SMC_Calling_Convention.pdf This patch adds a new define for the yielding SMC call type and deprecates the current standard SMC call type. The TSP is migrated to use this new terminology and, additionally, the documentation and code comments are updated to use it. Change-Id: I0d7cc0224667ee6c050af976745f18c55906a793 Signed-off-by: David Cunado <david.cunado@arm.com>
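In code terms, the rename amounts to something like the following; the exact values and the presence of a compatibility alias are assumptions:

    /* 'Yielding' replaces the old 'standard' call-type terminology. */
    #define SMC_TYPE_FAST	1U
    #define SMC_TYPE_YIELD	0U

    /* Deprecated alias so existing SPDs and services keep building. */
    #define SMC_TYPE_STD	SMC_TYPE_YIELD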
-
- 19 Apr, 2017 1 commit
-
-
Soby Mathew authored
This patch introduces a build option to enable the D-cache early on the CPU after warm boot. This is applicable to platforms which do not require interconnect programming to enable cache coherency (e.g. single cluster platforms). If this option is enabled, then the warm boot path enables the D-cache immediately after enabling the MMU. Fixes ARM-Software/tf-issues#456 Change-Id: I44c8787d116d7217837ced3bcf0b1d3441c8d80e Signed-off-by: Soby Mathew <soby.mathew@arm.com>
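A sketch of how such an option is typically consumed in the warm boot path; the option, flag and helper names are assumed for illustration:

    static void warmboot_enable_mmu(void)
    {
        unsigned int flags = 0U;

    #if !WARMBOOT_ENABLE_DCACHE_EARLY
        /* Keep the D-cache off until the CPU has entered coherency. */
        flags |= DISABLE_DCACHE;
    #endif
        /* With the option enabled, MMU and D-cache come up together. */
        enable_mmu_el3(flags);
    }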
-
- 31 Mar, 2017 1 commit
-
-
Douglas Raillard authored
Introduce new build option ENABLE_STACK_PROTECTOR. It enables compilation of all BL images with one of the GCC -fstack-protector-* options.

A new platform function plat_get_stack_protector_canary() is introduced. It returns a value that is used to initialize the canary for stack corruption detection. Returning a random value will prevent an attacker from predicting the value and greatly increase the effectiveness of the protection.

A message is printed at the ERROR level when a stack corruption is detected.

To be effective, the global data must be stored at an address lower than the base of the stacks. Failure to do so would allow an attacker to overwrite the canary as part of an attack which would void the protection.

The FVP implementation of plat_get_stack_protector_canary is weak as there is no real source of entropy on the FVP. It therefore relies on a timer's value, which could be predictable.

Change-Id: Icaaee96392733b721fa7c86a81d03660d3c1bc06
Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
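A sketch of the kind of platform hook described, seeded from a timer in the absence of a true entropy source (as on the FVP); the constant, return type and counter accessor are assumptions:

    #include <arch_helpers.h>	/* read_cntpct_el0(), u_register_t -- assumed TF definitions */

    #define CANARY_SEED	((u_register_t)0xC0DEC0DEC0DEC0DEULL)	/* arbitrary compile-time constant */

    /* Low-entropy example only: a real platform should use a hardware RNG. */
    u_register_t plat_get_stack_protector_canary(void)
    {
        /* Mix a compile-time constant with a timer value read at runtime. */
        return CANARY_SEED ^ read_cntpct_el0();
    }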
-
- 08 Mar, 2017 1 commit
-
-
Antonio Nino Diaz authored
The files affected by this patch don't really depend on `xlat_tables.h`. By changing the included file it becomes easier to switch between the two versions of the translation tables library. Change-Id: Idae9171c490e0865cb55883b19eaf942457c4ccc Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
- 02 Mar, 2017 1 commit
-
-
Jeenu Viswambharan authored
At present, warm-booted CPUs keep their caches disabled when enabling the MMU, and they remain so until they enter coherency later. On systems with hardware-assisted coherency, for which the HW_ASSISTED_COHERENCY build flag would be enabled, warm-booted CPUs can have both caches and MMU enabled at once. Change-Id: Icb0adb026e01aecf34beadf49c88faa9dd368327 Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
-
- 06 Feb, 2017 2 commits
-
-
Douglas Raillard authored
Replace all uses of memset by zeromem when zeroing moderately-sized structures by applying the following transformation:

  memset(x, 0, sizeof(x)) => zeromem(x, sizeof(x))

As the Trusted Firmware is compiled with -ffreestanding, it forbids the compiler from using __builtin_memset and forces it to generate calls to the slow memset implementation. Zeromem is a near drop-in replacement for this use case, with a more efficient implementation on both AArch32 and AArch64.

Change-Id: Ia7f3a90e888b96d056881be09f0b4d65b41aa79e
Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
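Concretely, the transformation applied across the tree looks like this; the structure and header are made up for illustration:

    #include <string.h>		/* memset */
    #include <utils.h>		/* zeromem -- assumed TF header */

    struct foo_cfg { int mode; int table[16]; };	/* hypothetical structure */

    static void clear_cfg(struct foo_cfg *cfg)
    {
        /* Before: -ffreestanding forces a call into the slow memset. */
        memset(cfg, 0, sizeof(*cfg));

        /* After: same effect through the optimised zeroing routine. */
        zeromem(cfg, sizeof(*cfg));
    }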
-
Douglas Raillard authored
Introduce zeromem_dczva function on AArch64 that can handle unaligned addresses and make use of DC ZVA instruction to zero a whole block at a time. This zeroing takes place directly in the cache to speed it up without doing external memory access.

Remove the zeromem16 function on AArch64 and replace it with an alias to zeromem. This zeromem16 function is now deprecated.

Remove the 16-bytes alignment constraint on __BSS_START__ in firmware-design.md as it is now not mandatory anymore (it used to comply with zeromem16 requirements).

Change the 16-bytes alignment constraints in SP min's linker script to a 8-bytes alignment constraint as the AArch32 zeromem implementation is now more efficient on 8-bytes aligned addresses.

Introduce zero_normalmem and zeromem helpers in platform agnostic header that are implemented this way:
  * AArch32:
    * zero_normalmem: zero using usual data access
    * zeromem: alias for zero_normalmem
  * AArch64:
    * zero_normalmem: zero normal memory using DC ZVA instruction (needs MMU enabled)
    * zeromem: zero using usual data access

Usage guidelines: in most cases, zero_normalmem should be preferred. There are 2 scenarios where zeromem (or memset) must be used instead:
  * Code that must run with MMU disabled (which means all memory is considered device memory for data accesses).
  * Code that fills device memory with null bytes.

Optionally, the following rule can be applied if performance is important:
  * Code zeroing small areas (few bytes) that are not secrets should use memset to take advantage of compiler optimizations.

Note: Code zeroing security-related critical information should use zero_normalmem/zeromem instead of memset to avoid removal by compilers' optimizations in some cases or misbehaving versions of GCC.

Fixes ARM-software/tf-issues#408

Change-Id: Iafd9663fc1070413c3e1904e54091cf60effaa82
Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
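The usage guidelines above boil down to choices like these; the buffers, sizes and headers are illustrative:

    #include <stddef.h>
    #include <string.h>
    #include <utils.h>		/* zeromem/zero_normalmem -- assumed TF header */

    static char secure_scratch[256];

    static void zeroing_examples(void *device_buf, size_t device_len)
    {
        /* Normal (cacheable) memory with the MMU enabled: preferred helper. */
        zero_normalmem(secure_scratch, sizeof(secure_scratch));

        /* MMU disabled, or the target is device memory: use zeromem. */
        zeromem(device_buf, device_len);

        /* Small, non-sensitive data may simply use memset. */
        memset(secure_scratch, 0, 1);
    }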
-
- 23 Dec, 2016 1 commit
-
-
Douglas Raillard authored
Standard SMC requests that are handled in the secure-world by the Secure Payload can be preempted by interrupts that must be handled in the normal world. When the TSP is preempted the secure context is stored and control is passed to the normal world to handle the non-secure interrupt. Once completed the preempted secure context is restored. When restoring the preempted context, the dispatcher assumes that the TSP preempted context is still stored as the SECURE context by the context management library. However, PSCI power management operations causes synchronous entry into TSP. This overwrites the preempted SECURE context in the context management library. When restoring back the SECURE context, the Secure Payload crashes because this context is not the preempted context anymore. This patch avoids corruption of the preempted SECURE context by aborting any preempted SMC during PSCI power management calls. The abort_std_smc_entry hook of the TSP is called when aborting the SMC request. It also exposes this feature as a FAST SMC callable from normal world to abort preempted SMC with FID TSP_FID_ABORT. Change-Id: I7a70347e9293f47d87b5de20484b4ffefb56b770 Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
-
- 12 Dec, 2016 1 commit
-
-
Soby Mathew authored
The AArch32 Procedure Call Standard mandates that the stack must be aligned to an 8-byte boundary at external interfaces. This patch makes the required changes. This problem was detected when a crash was encountered in `psci_print_power_domain_map()` while printing 64-bit values. Aligning the stack to an 8-byte boundary resolved the problem. Fixes ARM-Software/tf-issues#437 Change-Id: I517bd8203601bb88e9311bd36d477fb7b3efb292 Signed-off-by: Soby Mathew <soby.mathew@arm.com>
-
- 05 Dec, 2016 1 commit
-
-
Jeenu Viswambharan authored
There are many instances in ARM Trusted Firmware where control is transferred to functions from which return isn't expected. Such jumps are made using 'bl' instruction to provide the callee with the location from which it was jumped to. Additionally, debuggers infer the caller by examining where 'lr' register points to. If a 'bl' of the nature described above falls at the end of an assembly function, 'lr' will be left pointing to a location outside of the function range. This misleads the debugger back trace. This patch defines a 'no_ret' macro to be used when jumping to functions from which return isn't expected. The macro ensures to use 'bl' instruction for the jump, and also, for debug builds, places a 'nop' instruction immediately thereafter (unless instructed otherwise) so as to leave 'lr' pointing within the function range. Change-Id: Ib34c69fc09197cfd57bc06e147cc8252910e01b0 Co-authored-by: Douglas Raillard <douglas.raillard@arm.com> Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
-
- 22 Sep, 2016 2 commits
-
-
Soby Mathew authored
This patch moves the invocation of `psci_setup()` from BL31 and SP_MIN into `std_svc_setup()` as part of ARM Standard Service initialization. This allows us to consolidate ARM Standard Service initializations which will be added to in the future. A new function `get_arm_std_svc_args()` is introduced to get arguments corresponding to each standard service. This function must be implemented by the EL3 Runtime Firmware and both SP_MIN and BL31 implement it. Change-Id: I38e1b644f797fa4089b20574bd4a10f0419de184
-
Soby Mathew authored
This patch introduces a `psci_lib_args_t` structure which must be passed into `psci_setup()` which is then used to initialize the PSCI library. The `psci_lib_args_t` is a versioned structure so as to enable compatibility checks during library initialization. Both BL31 and SP_MIN are modified to use the new structure. SP_MIN is also modified to add version string and build message as part of its cold boot log just like the other BLs in Trusted Firmware. NOTE: Please be aware that this patch modifies the prototype of `psci_setup()`, which breaks compatibility with EL3 Runtime Firmware (excluding BL31 and SP_MIN) integrated with the PSCI Library. Change-Id: Ic3761db0b790760a7ad664d8a437c72ea5edbcd6
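A hypothetical illustration of the versioned-arguments pattern being introduced; the real structure's fields and the version-check details differ:

    #include <stdint.h>

    /* Illustrative header carrying type/version/size for compatibility checks. */
    typedef struct param_header {
        uint8_t type;
        uint8_t version;
        uint16_t size;
        uint32_t attr;
    } param_header_t;

    typedef struct psci_lib_args {
        param_header_t h;	/* checked by the library at initialisation */
        uintptr_t mailbox_ep;	/* warm boot (mailbox) entry point */
    } psci_lib_args_t;

    /* psci_setup() can then refuse to initialise if h.version is unknown. */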
-
- 21 Sep, 2016 2 commits
-
-
Yatharth Kochar authored
This patch adds support in SP_MIN to receive generic and platform specific arguments from BL2. The new signature is as follows:

  void sp_min_early_platform_setup(void *from_bl2,
                                   void *plat_params_from_bl2);

ARM platforms have been modified to use this support.

Note: Platforms may break if using the old signature. The default value for RESET_TO_SP_MIN is changed to 0.

Change-Id: I008d4b09fd3803c7b6231587ebf02a047bdba8d0
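A minimal platform-side sketch of the new hook; the static variables used to stash the pointers are illustrative:

    static void *bl2_params;
    static void *bl2_plat_params;

    void sp_min_early_platform_setup(void *from_bl2, void *plat_params_from_bl2)
    {
        /* Keep the BL2-provided argument pointers for later platform setup. */
        bl2_params = from_bl2;
        bl2_plat_params = plat_params_from_bl2;

        /* Platform-specific early initialisation (console, etc.) goes here. */
    }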
-
Yatharth Kochar authored
This patch uses the `el3_entrypoint_common` macro to initialize CPU registers in the SP_MIN entrypoint.S file, in both the cold and warm boot paths. It also adds conditional compilation, in the cold and warm boot entry paths, based on RESET_TO_SP_MIN. Change-Id: Id493ca840dc7b9e26948dc78ee928e9fdb76b9e4
-
- 10 Aug, 2016 1 commit
-
-
Soby Mathew authored
This patch adds a minimal AArch32 secure payload SP_MIN. It relies on PSCI library to initialize the normal world context. It runs in Monitor mode and uses the runtime service framework to handle SMCs. It is added as a BL32 component in the Trusted Firmware source tree. Change-Id: Icc04fa6b242025a769c1f6c7022fde19459c43e9
-
- 09 Aug, 2016 1 commit
-
-
Soby Mathew authored
This patch moves the assembly exclusive lock library code `spinlock.S` into architecture specific folder `aarch64`. A stub file which includes the file from new location is retained at the original location for compatibility. The BL makefiles are also modified to include the file from the new location. Change-Id: Ide0b601b79c439e390c3a017d93220a66be73543
-
- 08 Jul, 2016 2 commits
-
-
Sandrine Bailleux authored
In debug builds, the TSP prints its image base address and size. The base address displayed corresponds to the start address of the read-only section, as defined in the linker script. This patch changes this to use the BL32_BASE address instead, which is the same address as __RO_START__ at the moment but has the advantage to be independent of the linker symbols defined in the linker script as well as the layout and order of the sections. Change-Id: I032d8d50df712c014cbbcaa84a9615796ec902cc
-
Sandrine Bailleux authored
At the moment, all BL images share a similar memory layout: they start with their code section, followed by their read-only data section. The two sections are contiguous in memory. Therefore, the end of the code section and the beginning of the read-only data one might share a memory page. This forces both to be mapped with the same memory attributes. As the code needs to be executable, this means that the read-only data stored on the same memory page as the code are executable as well. This could potentially be exploited as part of a security attack.

This patch introduces a new build flag called SEPARATE_CODE_AND_RODATA, which isolates the code and read-only data on separate memory pages. This in turn allows independent control of the access permissions for the code and read-only data.

This has an impact on memory footprint, as padding bytes need to be introduced between the code and read-only data to ensure the segregation of the two. To limit the memory cost, the memory layout of the read-only section has been changed in this case.

- When SEPARATE_CODE_AND_RODATA=0, the layout is unchanged, i.e. the read-only section still looks like this (padding omitted):

  |        ...        |
  +-------------------+
  | Exception vectors |
  +-------------------+
  |  Read-only data   |
  +-------------------+
  |       Code        |
  +-------------------+ BLx_BASE

  In this case, the linker script provides the limits of the whole read-only section.

- When SEPARATE_CODE_AND_RODATA=1, the exception vectors and read-only data are swapped, such that the code and exception vectors are contiguous, followed by the read-only data. This gives the following new layout (padding omitted):

  |        ...        |
  +-------------------+
  |  Read-only data   |
  +-------------------+
  | Exception vectors |
  +-------------------+
  |       Code        |
  +-------------------+ BLx_BASE

  In this case, the linker script now exports 2 sets of addresses instead: the limits of the code and the limits of the read-only data.

Refer to the Firmware Design guide for more details. This provides platform code with a finer-grained view of the image layout and allows it to map these 2 regions with the appropriate access permissions.

Note that SEPARATE_CODE_AND_RODATA applies to all BL images.

Change-Id: I936cf80164f6b66b6ad52b8edacadc532c935a49
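Platform code can then map the two regions with different attributes, along these lines; the linker-exported symbols and the mmap helper/attribute names used here are assumptions:

    static void map_bl_ro_regions(void)
    {
    #if SEPARATE_CODE_AND_RODATA
        /* Code: read-only and executable. */
        mmap_add_region(BL_CODE_BASE, BL_CODE_BASE,
                        BL_CODE_END - BL_CODE_BASE,
                        MT_CODE | MT_SECURE);
        /* Read-only data: read-only and never executable. */
        mmap_add_region(BL_RO_DATA_BASE, BL_RO_DATA_BASE,
                        BL_RO_DATA_END - BL_RO_DATA_BASE,
                        MT_RO_DATA | MT_SECURE);
    #else
        /* Single region covering both; must remain executable. */
        mmap_add_region(BL_CODE_BASE, BL_CODE_BASE,
                        BL_RO_END - BL_CODE_BASE,
                        MT_CODE | MT_SECURE);
    #endif
    }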
-
- 26 May, 2016 1 commit
-
-
Sandrine Bailleux authored
This patch introduces some assembler macros to simplify the declaration of the exception vectors. They abstract the section the exception code is put into as well as the alignment constraints mandated by the ARMv8 architecture. For all TF images, the exception code has been updated to make use of these macros. This patch also updates some invalid comments in the exception vector code. Change-Id: I35737b8f1c8c24b6da89b0a954c8152a4096fa95
-
- 01 Apr, 2016 1 commit
-
-
Evan Lloyd authored
As an initial stage of making Trusted Firmware build environment more portable, we remove most uses of the $(shell ) function and replace them with more portable make function based solutions. Note that the setting of BUILD_STRING still uses $(shell ) since it's not possible to reimplement this as a make function. Avoiding invocation of this on incompatible host platforms will be implemented separately. Change-Id: I768e2f9a265c78814a4adf2edee4cc46cda0f5b8
-
- 14 Mar, 2016 1 commit
-
-
Antonio Nino Diaz authored
Added a new platform porting function plat_panic_handler, to allow platforms to handle unexpected error situations. It must be implemented in assembly as it may be called before the C environment is initialized. A default implementation is provided, which simply spins. Corrected all dead loops in generic code to call this function instead. This includes the dead loop that occurs at the end of the call to panic(). All unnecessary wfis from bl32/tsp/aarch64/tsp_exceptions.S have been removed. Change-Id: I67cb85f6112fa8e77bd62f5718efcef4173d8134
-
- 14 Dec, 2015 1 commit
-
-
Juan Castillo authored
This patch removes the dash character from the image name, to follow the image terminology in the Trusted Firmware Wiki page: https://github.com/ARM-software/arm-trusted-firmware/wiki Changes apply to output messages, comments and documentation. non-ARM platform files have been left unmodified. Change-Id: Ic2a99be4ed929d52afbeb27ac765ceffce46ed76
-
- 09 Dec, 2015 1 commit
-
-
Soby Mathew authored
Earlier the TSP only ever expected to be preempted during Standard SMC processing. If an S-EL1 interrupt triggers while in the normal world, it is routed to S-EL1 `synchronously` for handling. The `synchronous` S-EL1 interrupt handler `tsp_sel1_intr_entry` used to panic if this S-EL1 interrupt was preempted by another higher priority pending interrupt which should be handled in EL3, e.g. a Group 0 interrupt in GICv3. With this patch, `tsp_sel1_intr_entry` now expects `TSP_PREEMPTED` as the return code from `tsp_common_int_handler` in addition to 0 (interrupt successfully handled) and in both cases it issues an SMC with id `TSP_HANDLED_S_EL1_INTR`. The TSPD switches the context and returns back to the normal world. In case a higher priority EL3 interrupt was pending, execution will be routed to EL3 where the interrupt will be handled. On return back to the normal world, the pending S-EL1 interrupt which was preempted will get routed to S-EL1 to be handled `synchronously` via `tsp_sel1_intr_entry`. Change-Id: I2087c7fedb37746fbd9200cdda9b6dba93e16201
-
- 04 Dec, 2015 1 commit
-
-
Soby Mathew authored
On a GICv2 system, interrupts that should be handled in the secure world are typically signalled as FIQs. On a GICv3 system, these interrupts are signalled as IRQs instead. The mechanism for handling both types of interrupts is the same in both cases. This patch enables the TSP to run on a GICv3 system by:

1. adding support for handling IRQs in the exception handling code.
2. removing use of "fiq" in the names of data structures, macros and functions.

The build option TSPD_ROUTE_IRQ_TO_EL3 is deprecated and is replaced with a new build flag TSP_NS_INTR_ASYNC_PREEMPT. For compatibility reasons, if the former build flag is defined, it will be used to define the value for the new build flag. The documentation is also updated accordingly.

Change-Id: I1807d371f41c3656322dd259340a57649833065e
-