- 02 Apr, 2020 3 commits
-
-
Masahiro Yamada authored
Move the bss section to the common header. This adds BAKERY_LOCK_NORMAL and PMF_TIMESTAMP, which previously existed only in BL31. This is not a big deal because unused data should not be compiled in the first place. I believe this should be controlled by BL*_SOURCES in Makefiles, not by linker scripts.

I investigated BL1, BL2, BL2U, BL31 for plat=fvp, and BL2-AT-EL3, BL31, BL32 for plat=uniphier. I did not see any more unexpected code addition.

The bss section has a bigger alignment requirement, so I added BSS_ALIGN for it.

Currently, SORT_BY_ALIGNMENT() is missing in sp_min.ld.S, so with this change the BSS symbols in SP_MIN will be sorted by alignment. This is not a big deal (it is even slightly better in terms of image size).

Change-Id: I680ee61f84067a559bac0757f9d03e73119beb33
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
-
Masahiro Yamada authored
The common section data are repeated in many linker scripts (often twice in each script to support SEPARATE_CODE_AND_RODATA), so adding a new read-only data section means touching lots of places. After this commit, you will only need to touch bl_common.ld.h when you add a new section to RODATA_COMMON.

Replace the series of RO sections with RODATA_COMMON, which contains 6 sections, some of which did not exist before. This is not a big deal because unneeded data should not be compiled in the first place. I believe this should be controlled by BL*_SOURCES in Makefiles, not by linker scripts.

When I was working on this commit, the BL1 image size increased due to the fconf_populator. Commit c452ba15 ("fconf: exclude fconf_dyn_cfg_getter.c from BL1_SOURCES") fixed this issue.

I investigated BL1, BL2, BL2U, BL31 for plat=fvp, and BL2-AT-EL3, BL31, BL32 for plat=uniphier. I did not see any more unexpected code addition.

Change-Id: I5d14d60dbe3c821765bce3ae538968ef266f1460
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
-
Masahiro Yamada authored
These sections are mostly used to collect data from special structures, and they are repeated in many linker scripts. To differentiate the alignment size between aarch32 and aarch64, I added a new macro, STRUCT_ALIGN.

While I moved PMF_SVC_DESCS, I dropped the #if ENABLE_PMF conditional. As you can see in include/lib/pmf/pmf_helpers.h, the PMF_REGISTER_SERVICE* macros are no-ops when ENABLE_PMF=0, so the pmf_svc_descs and pmf_timestamp_array data are not populated.

Change-Id: I3f4ab7fa18f76339f1789103407ba76bda7e56d0
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
-
- 20 Mar, 2020 1 commit
-
-
Madhukar Pappireddy authored
CPUs use the console to print debug/info messages. This critical section must be guarded by locks to avoid overlapping messages from multiple CPUs.

Change-Id: I786bf90072c1ed73c4f53d8c950979d95255e67e
Signed-off-by: Madhukar Pappireddy <madhukar.pappireddy@arm.com>
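As a rough illustration of the idea (not the actual patch; the lock and function names here are invented), serialising a multi-line print with TF-A's spinlock primitives looks like this:

    #include <common/debug.h>
    #include <lib/spinlock.h>

    /* Hypothetical lock guarding a multi-line diagnostic print. */
    static spinlock_t print_lock;

    static void report_cpu_state(unsigned int cpu, unsigned int count)
    {
            /* Hold the lock so lines from different CPUs cannot interleave. */
            spin_lock(&print_lock);
            INFO("CPU %u: resuming\n", cpu);
            INFO("CPU %u: resume count = %u\n", cpu, count);
            spin_unlock(&print_lock);
    }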
-
- 11 Mar, 2020 1 commit
-
-
Masahiro Yamada authored
TF-A has many linker scripts: at least one for each BL image, and some platforms have their own. They duplicate quite similar code (and comments), so a change to the linker scripts ends up touching many files. This is not nice from a maintainability perspective.

In the Linux kernel, the common code is macrofied in include/asm-generic/vmlinux.lds.h, which is included from each arch linker script, arch/*/kernel/vmlinux.lds.S. TF-A can follow this approach. Let's factor out the common code into include/common/bl_common.ld.h.

As a starting point, this commit factors out the xlat_table section.

Change-Id: Ifa369e9b48e8e12702535d721cc2a16d12397895
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
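For illustration, the pattern looks roughly like this: a C-preprocessed header holding a linker-script fragment that each image's .ld.S file can expand (a minimal sketch; the real macro body in bl_common.ld.h may differ):

    /* include/common/bl_common.ld.h (sketch) */
    #ifndef BL_COMMON_LD_H
    #define BL_COMMON_LD_H

    /* Expanded inside the SECTIONS block of each BL image's .ld.S file. */
    #define XLAT_TABLE_SECTION                      \
            xlat_table (NOLOAD) : {                 \
                    *(xlat_table)                   \
            }

    #endif /* BL_COMMON_LD_H */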
-
- 06 Mar, 2020 1 commit
-
-
Manish Pandey authored
In the CPU resume function, the CPU suspend count was printed instead of the CPU resume count.

Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
Change-Id: I0c081dc03a4ccfb2129687f690667c5ceed00a5f
-
- 24 Jan, 2020 1 commit
-
-
Masahiro Yamada authored
This implementation simply mimics that of BL31.

Change-Id: Ibbaa4ca012d38ac211c52b0b3e97449947160e07
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
-
- 22 Jan, 2020 1 commit
-
-
Anthony Steinhauser authored
Even though ERET always causes a jump to another address, AArch64 CPUs speculatively execute the following instructions as if the ERET instruction were not a jump instruction. The speculative execution does not cross privilege levels (to the jump target, as one would expect), but it continues at the kernel privilege level as if the ERET instruction did not change the control flow, thus speculatively executing anything that is accidentally linked after the ERET instruction. The results of this speculative execution are always architecturally discarded later; however, they can leak data using microarchitectural side channels. This speculative execution is very reliable (it seems to be unconditional) and it manages to complete even relatively performance-heavy operations (e.g. multiple dependent fetches from uncached memory).

This was fixed in Linux, FreeBSD, OpenBSD and OP-TEE OS:
https://github.com/torvalds/linux/commit/679db70801da9fda91d26caf13bf5b5ccc74e8e8
https://github.com/freebsd/freebsd/commit/29fb48ace4186a41c409fde52bcf4216e9e50b61
https://github.com/openbsd/src/commit/3a08873ece1cb28ace89fd65e8f3c1375cc98de2
https://github.com/OP-TEE/optee_os/commit/abfd092aa19f9c0251e3d5551e2d68a9ebcfec8a

It is demonstrated in a SafeSide example:
https://github.com/google/safeside/blob/master/demos/eret_hvc_smc_wrapper.cc
https://github.com/google/safeside/blob/master/kernel_modules/kmod_eret_hvc_smc/eret_hvc_smc_module.c

Signed-off-by: Anthony Steinhauser <asteinhauser@google.com>
Change-Id: Iead39b0b9fb4b8d8b5609daaa8be81497ba63a0f
-
- 13 Sep, 2019 1 commit
-
-
Alexei Fedorov authored
This patch provides the following features and makes the modifications listed below:

- Individual APIAKey generation for each CPU.
- New key generation on every BL31 warm boot and TSP CPU On event.
- Per-CPU storage of APIAKey added in percpu_data[] of the cpu_data structure.
- `plat_init_apiakey()` function replaced with `plat_init_apkey()`, which returns a 128-bit value and uses the Generic Timer physical counter value to increase the randomness of the generated key. The new function can be used for generation of all ARMv8.3-PAuth keys.
- ARMv8.3-PAuth specific code placed in `lib/extensions/pauth`.
- New `pauth_init_enable_el1()` and `pauth_init_enable_el3()` functions generate, program and enable APIAKey_EL1 for EL1 and EL3 respectively; `pauth_disable_el1()` and `pauth_disable_el3()` functions disable PAuth for EL1 and EL3 respectively; `pauth_load_bl31_apiakey()` loads the saved per-CPU APIAKey_EL1 from the cpu_data structure.
- Combined `save_gp_pauth_registers()` function replaces calls to `save_gp_registers()` and `pauth_context_save()`; `restore_gp_pauth_registers()` replaces the `pauth_context_restore()` and `restore_gp_registers()` calls.
- `restore_gp_registers_eret()` function removed, with the corresponding code placed in `el3_exit()`.
- Fixed the issue where the `pauth_t pauth_ctx` structure allocated space for 12 uint64_t PAuth registers instead of 10, by removing the macro CTX_PACGAKEY_END from `include/lib/el3_runtime/aarch64/context.h` and assigning its value to CTX_PAUTH_REGS_END.
- Use of the MODE_SP_ELX and MODE_SP_EL0 macro definitions in the `msr spsel` instruction instead of hard-coded values.
- Changes in documentation related to ARMv8.3-PAuth and ARMv8.5-BTI.

Change-Id: Id18b81cc46f52a783a7e6a09b9f149b6ce803211
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
-
- 09 Sep, 2019 1 commit
-
-
Justin Chadwell authored
This patch adds support for the new Memory Tagging Extension arriving in ARMv8.5. MTE support is now enabled by default on systems that support it at EL0. To enable it at ELx for both the non-secure and the secure world, the build flag CTX_INCLUDE_MTE_REGS is required; it includes the saving and restoring of the MTE registers where necessary in order to prevent register leakage between the worlds.

Change-Id: I2d4ea993d6b11654ea0d4757d00ca20d23acf36c
Signed-off-by: Justin Chadwell <justin.chadwell@arm.com>
-
- 01 Aug, 2019 1 commit
-
-
Julius Werner authored
NOTE: The __ASSEMBLY__ macro is now deprecated in favor of __ASSEMBLER__.

All common C compilers predefine a macro called __ASSEMBLER__ when preprocessing a .S file. There is no reason for TF-A to define its own __ASSEMBLY__ macro for this purpose instead. To unify code with the export headers (which use __ASSEMBLER__ to avoid one extra dependency), let's deprecate __ASSEMBLY__ and switch the code base over to the predefined standard.

Change-Id: Id7d0ec8cf330195da80499c68562b65cb5ab7417
Signed-off-by: Julius Werner <jwerner@chromium.org>
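A sketch of the resulting header pattern (file and symbol names made up):

    /* foo.h -- shared between C and assembly sources */
    #ifndef FOO_H
    #define FOO_H

    #define FOO_STACK_SIZE  0x1000  /* visible to both languages */

    #ifndef __ASSEMBLER__
    /* C-only declarations: skipped when preprocessing .S files, where
     * the toolchain predefines __ASSEMBLER__. */
    #include <stdint.h>
    extern uint32_t foo_flags;
    #endif /* __ASSEMBLER__ */

    #endif /* FOO_H */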
-
- 24 May, 2019 1 commit
-
-
Alexei Fedorov authored
This patch adds the functionality needed for platforms to provide the Branch Target Identification (BTI) extension, introduced to AArch64 in Armv8.5-A, by adding the BTI instruction used to mark valid targets for indirect branches. The patch sets the new GP bit [50] in the stage 1 Translation Table Block and Page entries to denote guarded EL3 code pages, which will cause the processor to trap instructions in protected pages that try to perform an indirect branch to any instruction other than BTI.

The BTI feature is selected by the BRANCH_PROTECTION option, which supersedes the previous ENABLE_PAUTH option used for Armv8.3-A Pointer Authentication, and is disabled by default. Enabling BTI requires compiler support and was tested with GCC versions 9.0.0, 9.0.1 and 10.0.0. The assembly macros and helpers are modified to accommodate the BTI instruction.

This is an experimental feature.

Note: the previous ENABLE_PAUTH build option to enable PAuth in EL3 is now an internal flag, and the BRANCH_PROTECTION flag should be used instead to enable Pointer Authentication.

Note: the USE_ROMLIB=1 option is currently not supported.

Change-Id: Ifaf4438609b16647dc79468b70cd1f47a623362e
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
-
- 12 Mar, 2019 1 commit
-
-
John Tsichritzis authored
The SCTLR.DSSBS bit is zero by default, thus disabling speculative loads. However, we also explicitly set it to zero for BL2 and TSP images when each image initialises its context. This is done to ensure that the image environment is initialised in a safe state, regardless of the reset value of the bit.

Change-Id: If25a8396641edb640f7f298b8d3309d5cba3cd79
Signed-off-by: John Tsichritzis <john.tsichritzis@arm.com>
-
- 27 Feb, 2019 1 commit
-
-
Antonio Nino Diaz authored
The size increase after enabling options related to ARMv8.3-PAuth is:

    +----------------------------+-------+-------+-------+--------+
    |                            | text  |  bss  | data  | rodata |
    +----------------------------+-------+-------+-------+--------+
    | CTX_INCLUDE_PAUTH_REGS = 1 |  +40  |  +0   |  +0   |  +0    |
    |                            | 0.4%  |       |       |        |
    +----------------------------+-------+-------+-------+--------+
    | ENABLE_PAUTH = 1           | +352  |  +0   |  +16  |  +0    |
    |                            | 3.1%  |       | 15.8% |        |
    +----------------------------+-------+-------+-------+--------+

Results calculated with the following build configuration:

    make PLAT=fvp SPD=tspd DEBUG=1 \
        SDEI_SUPPORT=1 \
        EL3_EXCEPTION_HANDLING=1 \
        TSP_NS_INTR_ASYNC_PREEMPT=1 \
        CTX_INCLUDE_PAUTH_REGS=1 \
        ENABLE_PAUTH=1

Change-Id: I6cc1fe0b2345c547dcef66f98758c4eb55fe5ee4
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
- 01 Feb, 2019 1 commit
-
-
Antonio Nino Diaz authored
Many parts of the code were duplicating symbols that are defined in include/common/bl_common.h. It is better to only use the definitions in this header.

As all the symbols refer to virtual addresses, they have to be uintptr_t, not unsigned long. This has also been fixed in bl_common.h.

Change-Id: I204081af78326ced03fb05f69846f229d324c711
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
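The underlying pattern, sketched with a made-up symbol name: a linker-defined symbol is declared as an array so only its address is used, and that address is cast to uintptr_t because it is a virtual address, not an object:

    #include <stdint.h>

    /* Defined by the linker script; only the symbol's address is valid. */
    extern char __EXAMPLE_END__[];

    static inline uintptr_t example_end(void)
    {
            /* Take the address; never read the "variable" itself. */
            return (uintptr_t)__EXAMPLE_END__;
    }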
-
- 15 Jan, 2019 1 commit
-
-
Paul Beesley authored
Corrects typos in core code, documentation files, drivers, Arm platforms and services. None of the corrections affect code; changes are limited to comments and other documentation.

Change-Id: I5c1027b06ef149864f315ccc0ea473e2a16bfd1d
Signed-off-by: Paul Beesley <paul.beesley@arm.com>
-
- 04 Jan, 2019 1 commit
-
-
Antonio Nino Diaz authored
Enforce full include path for includes. Deprecate old paths.

The following folders inside include/lib have been left unchanged:
- include/lib/cpus/${ARCH}
- include/lib/el3_runtime/${ARCH}

The reason for this change is that having a global namespace for includes isn't a good idea. It defeats one of the advantages of having folders and it introduces problems that are sometimes subtle (because you may not know the header you are actually including if there are two of them).

For example, this patch had to be created because two headers were called the same way: e0ea0928 ("Fix gpio includes of mt8173 platform to avoid collision."). More recently, this patch has had similar problems: 46f9b2c3 ("drivers: add tzc380 support").

This problem was introduced in commit 4ecca339 ("Move include and source files to logical locations"). At that time, there weren't too many headers so it wasn't a real issue. However, time has shown that this creates problems.

Platforms that want to preserve the way they include headers may add the removed paths to PLAT_INCLUDES, but this is discouraged.

Change-Id: I39dc53ed98f9e297a5966e723d1936d6ccf2fc8f
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
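The difference in practice, using the gpio header from the commit referenced above as an example:

    /* Deprecated: bare file name, resolved in one global namespace and
     * ambiguous if two headers share the same name. */
    #include <gpio.h>

    /* Enforced style: full path below include/, unambiguous. */
    #include <drivers/gpio.h>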
-
- 08 Nov, 2018 1 commit
-
-
Antonio Nino Diaz authored
All identifiers, regardless of use, that start with two underscores are reserved. This means they can't be used in header guards.

The style that this project now uses is the full name of the file in capital letters, followed by 'H'. For example, for a file called "uart_example.h", the header guard is UART_EXAMPLE_H.

The exceptions are files that are imported from other projects:
- CryptoCell driver
- dt-bindings folders
- zlib headers

Change-Id: I50561bf6c88b491ec440d0c8385c74650f3c106e
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
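Following the example given above, the guard for "uart_example.h" looks like this (the declaration inside is made up):

    /* uart_example.h */
    #ifndef UART_EXAMPLE_H          /* full file name, no leading underscores */
    #define UART_EXAMPLE_H

    void uart_example_init(void);   /* hypothetical declaration */

    #endif /* UART_EXAMPLE_H */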
-
- 11 Jul, 2018 2 commits
-
-
Roberto Vargas authored
The check_vector_size macro checks whether the size of a vector fits in the space reserved for it. This check creates problems for the Clang assembler. A new macro, end_vector_entry, is added, and check_vector_size is deprecated. The new macro fills the current exception vector until the next exception vector; if the size of the current vector is bigger than 32 instructions, it raises an error.

Change-Id: Ie8545cf1003a1e31656a1018dd6b4c28a4eaf671
Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
-
Roberto Vargas authored
The Clang linker doesn't support NEXT. As we are not using the MEMORY command to define discontinuous memory for the output file in any of the linker scripts, ALIGN and NEXT are equivalent.

Change-Id: I867ffb9c9a76d4e81c9ca7998280b2edf10efea0
Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
-
- 27 Jun, 2018 1 commit
-
-
Jeenu Viswambharan authored
Previously, data caches were disabled while enabling the MMU only because of the active stack. Now that we can enable the MMU without using the stack, we can enable both the MMU and the data caches at the same time.

Change-Id: I73f3b8bae5178610e17e9ad06f81f8f6f97734a6
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
-
- 27 Apr, 2018 1 commit
-
-
Masahiro Yamada authored
Since commit 031dbb12 ("AArch32: Add essential Arch helpers"), it is difficult to use consistent format strings for the printf() family between aarch32 and aarch64. For example, uint64_t is defined as 'unsigned long long' for aarch32 and as 'unsigned long' for aarch64. Likewise, uintptr_t is defined as 'unsigned int' for aarch32, and as 'unsigned long' for aarch64.

A problem typically arises when you use printf() in common code. One solution could be to cast the arguments to a type long enough for both architectures. For example, if 'val' is of uint64_t type:

    printf("val = %llx\n", (unsigned long long)val);

Or, somebody may suggest using a macro provided by <inttypes.h>:

    printf("val = %" PRIx64 "\n", val);

But both would make the code ugly.

The solution adopted in the Linux kernel is to use the same typedefs for all architectures. The fixed integer types in the kernel-space have been unified into int-ll64, as follows:

    typedef signed char        int8_t;
    typedef unsigned char      uint8_t;
    typedef signed short       int16_t;
    typedef unsigned short     uint16_t;
    typedef signed int         int32_t;
    typedef unsigned int       uint32_t;
    typedef signed long long   int64_t;
    typedef unsigned long long uint64_t;

    [ Linux commit: 0c79a8e29b5fcbcbfd611daf9d500cfad8370fcf ]

This gets along with the codebase shared between 32 bit and 64 bit, with the data models called ILP32 and LP64, respectively. The width of the primitive types is defined as follows:

                ILP32    LP64
    int           32      32
    long          32      64
    long long     64      64
    pointer       32      64

'long long' is 64 bit for both, so it is used for defining uint64_t. 'long' has the same width as a pointer, so it is used for uintptr_t.

We still need an ifdef conditional for (s)size_t. All 64 bit architectures use "unsigned long" size_t, and most 32 bit architectures use "unsigned int" size_t. H8/300 and S/390 are known exceptions; they use "unsigned long" size_t even though they are 32 bit architectures.

One idea for simplification might be to define size_t as 'unsigned long' across architectures, then forbid the use of the "%z" format string. However, this would cause a distortion between size_t and the sizeof() operator. We have no knowledge of the native type of sizeof(), so we need to guess it anyway. I want the following formula to always return 1:

    __builtin_types_compatible_p(size_t, typeof(sizeof(int)))

Fortunately, ARM is probably the majority case. As far as I know, all 32 bit ARM compilers use "unsigned int" size_t.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
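The practical effect, as a sketch that assumes the int-ll64 typedefs described above are in force:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    void show(uint64_t val, uintptr_t ptr, size_t len)
    {
            /* With int-ll64, uint64_t is 'unsigned long long' and uintptr_t
             * is 'unsigned long' on aarch32 and aarch64 alike, so one format
             * string works everywhere, without casts or PRIx64. */
            printf("val = %llx\n", val);
            printf("ptr = %lx\n", ptr);
            printf("len = %zu\n", len);
    }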
-
- 13 Apr, 2018 2 commits
-
-
Roberto Vargas authored
Rule 8.4: A compatible declaration shall be visible when an object or function with external linkage is defined.

Fixed for:

    make DEBUG=1 PLAT=fvp SPD=tspd all

Change-Id: I0a16cf68fef29cf00ec0a52e47786f61d02ca4ae
Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
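A minimal illustration of what Rule 8.4 requires (file and function names invented); including the header is what makes the compatible declaration visible at the definition:

    /* counter.h: declaration of a function with external linkage. */
    unsigned int counter_next(void);

    /* counter.c would #include "counter.h", so the declaration above is
     * visible when the definition below is compiled (Rule 8.4). */
    static unsigned int count;

    unsigned int counter_next(void)
    {
            return ++count;
    }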
-
Roberto Vargas authored
Rule 8.3: All declarations of an object or function shall use the same names and type qualifiers.

Fixed for:

    make DEBUG=1 PLAT=fvp SPD=tspd all

Change-Id: I4e31c93d502d433806dfc521479d5d428468b37c
Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
-
- 27 Feb, 2018 1 commit
-
-
Antonio Nino Diaz authored
When the MMU is enabled and the translation tables are mapped, data read/writes to the translation tables are made using the attributes specified in the translation tables themselves. However, the MMU performs table walks with the attributes specified in TCR_ELx. They are completely independent, so special care has to be taken to make sure that they are the same.

This has to be done manually because it is not practical to have a test in the code. Such a test would need to know the virtual memory region that contains the translation tables and check that, for all of the tables, the attributes match the ones in TCR_ELx. As the tables may not even be mapped at all, this isn't a test that can be made generic.

The flags used by enable_mmu_xxx() have been moved to the same header where the functions are. Also, some comments in the linker scripts related to the translation tables have been fixed.

Change-Id: I1754768bffdae75f53561b1c4a5baf043b45a304
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
- 29 Nov, 2017 1 commit
-
-
Antonio Nino Diaz authored
When defining different sections in linker scripts, they need to be aligned to multiples of the page size. In most linker scripts this is done by aligning to the hardcoded value 4096 instead of PAGE_SIZE. This may be confusing when taking a look at the whole codebase, as 4096 is also used in some places that aren't meant to be a multiple of the page size.

Change-Id: I36c6f461c7782437a58d13d37ec8b822a1663ec1
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
- 21 Aug, 2017 1 commit
-
-
Julius Werner authored
Some error paths that lead to a crash dump will overwrite the value in the x30 register by calling functions with the no_ret macro, which resolves to a BL instruction. This is not very useful and not what the reader would expect, since a crash dump should usually show all registers in the state they were in when the exception happened. This patch replaces the offending function calls with a B instruction to preserve the value in x30.

Change-Id: I2a3636f2943f79bab0cd911f89d070012e697c2a
Signed-off-by: Julius Werner <jwerner@chromium.org>
-
- 15 Aug, 2017 1 commit
-
-
Julius Werner authored
Assembler programmers are used to being able to define functions with a specific alignment with a pattern like this:

    .align X
    myfunction:

However, this pattern is subtly broken when instead of a direct label like 'myfunction:', you use the 'func myfunction' macro that's standard in Trusted Firmware. Since the func macro declares a new section for the function, the .align directive written above it actually applies to the *previous* section in the assembly file, and the function it was supposed to apply to is linked with default alignment.

An extreme case can be seen in Rockchip's plat_helpers.S, which contains this code:

    [...]
    endfunc plat_crash_console_putc

    .align 16
    func platform_cpu_warmboot
    [...]

This assembles into the following plat_helpers.o:

    Sections:
    Idx Name                            Size     [...] Algn
      9 .text.plat_crash_console_putc   00010000 [...] 2**16
     10 .text.platform_cpu_warmboot     00000080 [...] 2**3

As can be seen, the *previous* function actually got the alignment constraint, and it is also 64KB big even though it contains only two instructions, because the .align directive at the end of its section forces the assembler to insert a giant sled of NOPs. The function we actually wanted to align has the default constraint.

This code only works at all because the linker just happens to put the two functions right behind each other when linking the final image, and since the end of plat_crash_console_putc is aligned, the start of platform_cpu_warmboot will also be. But it still wastes almost 64KB of image space unnecessarily, and it will break under certain circumstances (e.g. if the plat_crash_console_putc function becomes unused and its section gets garbage-collected out).

There's no real way to fix this with the existing func macro. Code like

    func myfunc
    .align X

happens to do the right thing, but is still not really correct code (because the function label is inserted before the .align directive, so the assembler is technically allowed to insert padding at the beginning of the function, which would then get executed as instructions if the function was called).

Therefore, this patch adds a new parameter with a default value to the func macro that allows overriding its alignment. Also fix up all existing instances of this dangerous antipattern.

Change-Id: I5696a07e2fde896f21e0e83644c95b7b6ac79a10
Signed-off-by: Julius Werner <jwerner@chromium.org>
-
- 03 May, 2017 1 commit
-
-
dp-arm authored
To make software license auditing simpler, use SPDX [0] license identifiers instead of duplicating the license text in every file.

NOTE: Files that have been imported from FreeBSD have not been modified.

[0]: https://spdx.org/

Change-Id: I80a00e1f641b8cc075ca5a95b10607ed9ed8761a
Signed-off-by: dp-arm <dimitris.papastamos@arm.com>
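TF-A file headers therefore take this shape (BSD-3-Clause being the project's license):

    /*
     * Copyright (c) 2017, ARM Limited and Contributors. All rights reserved.
     *
     * SPDX-License-Identifier: BSD-3-Clause
     */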
-
- 26 Apr, 2017 1 commit
-
-
David Cunado authored
Since Issue B (November 2016) of the SMC Calling Convention document, standard SMC calls are renamed to yielding SMC calls to help avoid confusion with the standard service SMC range, which remains unchanged.

http://infocenter.arm.com/help/topic/com.arm.doc.den0028b/ARM_DEN0028B_SMC_Calling_Convention.pdf

This patch adds a new define for the yielding SMC call type and deprecates the current standard SMC call type. The TSP is migrated to use this new terminology and, additionally, the documentation and code comments are updated to use it as well.

Change-Id: I0d7cc0224667ee6c050af976745f18c55906a793
Signed-off-by: David Cunado <david.cunado@arm.com>
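Conceptually, handlers distinguish the two call types from bit 31 of the SMC function ID; a sketch using TF-A's macro names (the header they live in has been renamed across releases, so treat the include as approximate):

    #include <stdbool.h>
    #include <stdint.h>
    #include <smcc.h>       /* spelling at the time; later renamed smccc.h */

    static bool is_yielding_call(uint32_t smc_fid)
    {
            /* Bit 31 set => fast call; clear => yielding (ex-standard). */
            return GET_SMC_TYPE(smc_fid) == SMC_TYPE_YIELD;
    }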
-
- 31 Mar, 2017 1 commit
-
-
Douglas Raillard authored
Introduce the new build option ENABLE_STACK_PROTECTOR. It enables compilation of all BL images with one of the GCC -fstack-protector-* options.

A new platform function plat_get_stack_protector_canary() is introduced. It returns a value that is used to initialize the canary for stack corruption detection. Returning a random value will prevent an attacker from predicting the value and greatly increase the effectiveness of the protection.

A message is printed at the ERROR level when a stack corruption is detected.

To be effective, the global data must be stored at an address lower than the base of the stacks. Failure to do so would allow an attacker to overwrite the canary as part of an attack, which would void the protection.

The FVP implementation of plat_get_stack_protector_canary() is weak as there is no real source of entropy on the FVP. It therefore relies on a timer's value, which could be predictable.

Change-Id: Icaaee96392733b721fa7c86a81d03660d3c1bc06
Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
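A sketch of the weak FVP-style fallback described above (header names and the mixing constant are approximate; a production platform would return fresh TRNG output instead):

    #include <arch_helpers.h>
    #include <platform.h>

    u_register_t plat_get_stack_protector_canary(void)
    {
            /* Timer-derived and therefore predictable: a placeholder only. */
            return 0xdeadbeef00c0ffeeULL ^ read_cntpct_el0();
    }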
-
- 08 Mar, 2017 1 commit
-
-
Antonio Nino Diaz authored
The files affected by this patch don't really depend on `xlat_tables.h`. By changing the included file it becomes easier to switch between the two versions of the translation tables library.

Change-Id: Idae9171c490e0865cb55883b19eaf942457c4ccc
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
- 06 Feb, 2017 1 commit
-
-
Douglas Raillard authored
Introduce the zeromem_dczva function on AArch64 that can handle unaligned addresses and make use of the DC ZVA instruction to zero a whole block at a time. This zeroing takes place directly in the cache to speed it up without doing external memory accesses.

Remove the zeromem16 function on AArch64 and replace it with an alias to zeromem. This zeromem16 function is now deprecated.

Remove the 16-byte alignment constraint on __BSS_START__ in firmware-design.md as it is no longer mandatory (it used to comply with zeromem16 requirements).

Change the 16-byte alignment constraints in SP min's linker script to an 8-byte alignment constraint, as the AArch32 zeromem implementation is now more efficient on 8-byte aligned addresses.

Introduce zero_normalmem and zeromem helpers in a platform-agnostic header that are implemented this way:

* AArch32:
  * zero_normalmem: zero using usual data access
  * zeromem: alias for zero_normalmem
* AArch64:
  * zero_normalmem: zero normal memory using the DC ZVA instruction (needs MMU enabled)
  * zeromem: zero using usual data access

Usage guidelines: in most cases, zero_normalmem should be preferred. There are 2 scenarios where zeromem (or memset) must be used instead:

* Code that must run with the MMU disabled (which means all memory is considered device memory for data accesses).
* Code that fills device memory with null bytes.

Optionally, the following rule can be applied if performance is important:

* Code zeroing small areas (few bytes) that are not secrets should use memset to take advantage of compiler optimizations.

Note: code zeroing security-related critical information should use zero_normalmem/zeromem instead of memset, to avoid removal by compilers' optimizations in some cases or misbehaving versions of GCC.

Fixes ARM-software/tf-issues#408

Change-Id: Iafd9663fc1070413c3e1904e54091cf60effaa82
Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
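Usage per the guidelines above, as a sketch (the prototypes are approximated here; TF-A declares these helpers in its utils header):

    #include <stddef.h>

    /* Approximated prototypes of the helpers introduced above. */
    void zero_normalmem(void *mem, size_t length);
    void zeromem(void *mem, size_t length);

    void scrub_buffer(void *buf, size_t len, int mmu_enabled)
    {
            if (mmu_enabled) {
                    /* Normal memory: takes the DC ZVA fast path on AArch64. */
                    zero_normalmem(buf, len);
            } else {
                    /* MMU off: data accesses are device-like, use plain stores. */
                    zeromem(buf, len);
            }
    }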
-
- 23 Dec, 2016 1 commit
-
-
Douglas Raillard authored
Standard SMC requests that are handled in the secure world by the Secure Payload can be preempted by interrupts that must be handled in the normal world. When the TSP is preempted, the secure context is stored and control is passed to the normal world to handle the non-secure interrupt. Once completed, the preempted secure context is restored.

When restoring the preempted context, the dispatcher assumes that the TSP preempted context is still stored as the SECURE context by the context management library. However, PSCI power management operations cause synchronous entry into the TSP, which overwrites the preempted SECURE context in the context management library. When restoring back the SECURE context, the Secure Payload crashes because this context is not the preempted context anymore.

This patch avoids corruption of the preempted SECURE context by aborting any preempted SMC during PSCI power management calls. The abort_std_smc_entry hook of the TSP is called when aborting the SMC request. It also exposes this feature as a FAST SMC callable from the normal world to abort a preempted SMC, with FID TSP_FID_ABORT.

Change-Id: I7a70347e9293f47d87b5de20484b4ffefb56b770
Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
-
- 05 Dec, 2016 1 commit
-
-
Jeenu Viswambharan authored
There are many instances in ARM Trusted Firmware where control is transferred to functions from which return isn't expected. Such jumps are made using the 'bl' instruction to provide the callee with the location from which it was jumped to. Additionally, debuggers infer the caller by examining where the 'lr' register points to. If a 'bl' of the nature described above falls at the end of an assembly function, 'lr' will be left pointing to a location outside of the function range. This misleads the debugger's back trace.

This patch defines a 'no_ret' macro to be used when jumping to functions from which return isn't expected. The macro ensures that the 'bl' instruction is used for the jump and, for debug builds, also places a 'nop' instruction immediately thereafter (unless instructed otherwise) so as to leave 'lr' pointing within the function range.

Change-Id: Ib34c69fc09197cfd57bc06e147cc8252910e01b0
Co-authored-by: Douglas Raillard <douglas.raillard@arm.com>
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
-
- 09 Aug, 2016 1 commit
-
-
Soby Mathew authored
This patch moves the assembly exclusive lock library code `spinlock.S` into the architecture-specific folder `aarch64`. A stub file which includes the file from the new location is retained at the original location for compatibility. The BL makefiles are also modified to include the file from the new location.

Change-Id: Ide0b601b79c439e390c3a017d93220a66be73543
-
- 08 Jul, 2016 2 commits
-
-
Sandrine Bailleux authored
In debug builds, the TSP prints its image base address and size. The base address displayed corresponds to the start address of the read-only section, as defined in the linker script.

This patch changes this to use the BL32_BASE address instead, which is the same address as __RO_START__ at the moment but has the advantage of being independent of the linker symbols defined in the linker script, as well as of the layout and order of the sections.

Change-Id: I032d8d50df712c014cbbcaa84a9615796ec902cc
-
Sandrine Bailleux authored
At the moment, all BL images share a similar memory layout: they start with their code section, followed by their read-only data section. The two sections are contiguous in memory. Therefore, the end of the code section and the beginning of the read-only data one might share a memory page. This forces both to be mapped with the same memory attributes. As the code needs to be executable, this means that the read-only data stored on the same memory page as the code are executable as well. This could potentially be exploited as part of a security attack.

This patch introduces a new build flag called SEPARATE_CODE_AND_RODATA, which isolates the code and read-only data on separate memory pages. This in turn allows independent control of the access permissions for the code and read-only data. This has an impact on memory footprint, as padding bytes need to be introduced between the code and read-only data to ensure the segregation of the two. To limit the memory cost, the memory layout of the read-only section has been changed in this case.

- When SEPARATE_CODE_AND_RODATA=0, the layout is unchanged, i.e. the read-only section still looks like this (padding omitted):

        |        ...        |
        +-------------------+
        | Exception vectors |
        +-------------------+
        |  Read-only data   |
        +-------------------+
        |       Code        |
        +-------------------+ BLx_BASE

  In this case, the linker script provides the limits of the whole read-only section.

- When SEPARATE_CODE_AND_RODATA=1, the exception vectors and read-only data are swapped, such that the code and exception vectors are contiguous, followed by the read-only data. This gives the following new layout (padding omitted):

        |        ...        |
        +-------------------+
        |  Read-only data   |
        +-------------------+
        | Exception vectors |
        +-------------------+
        |       Code        |
        +-------------------+ BLx_BASE

  In this case, the linker script now exports 2 sets of addresses instead: the limits of the code and the limits of the read-only data.

Refer to the Firmware Design guide for more details. This provides platform code with a finer-grained view of the image layout and allows it to map these 2 regions with the appropriate access permissions.

Note that SEPARATE_CODE_AND_RODATA applies to all BL images.

Change-Id: I936cf80164f6b66b6ad52b8edacadc532c935a49
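For illustration, platform code mapping the two regions with distinct attributes might look like the sketch below (symbol and attribute names follow TF-A conventions but are approximated; the real limits come from the image's linker script):

    #include <xlat_tables_v2.h>

    static void map_bl_image(void)
    {
            /* Code: read-only and executable. */
            mmap_add_region(BL_CODE_BASE, BL_CODE_BASE,
                            BL_CODE_END - BL_CODE_BASE,
                            MT_CODE | MT_SECURE);

            /* Read-only data: readable, never executable. */
            mmap_add_region(BL_RO_DATA_BASE, BL_RO_DATA_BASE,
                            BL_RO_DATA_END - BL_RO_DATA_BASE,
                            MT_RO_DATA | MT_SECURE);
    }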
-
- 26 May, 2016 1 commit
-
-
Sandrine Bailleux authored
This patch introduces some assembler macros to simplify the declaration of the exception vectors. They abstract the section the exception code is put into, as well as the alignment constraints mandated by the ARMv8 architecture. For all TF images, the exception code has been updated to make use of these macros.

This patch also updates some invalid comments in the exception vector code.

Change-Id: I35737b8f1c8c24b6da89b0a954c8152a4096fa95
-
- 01 Apr, 2016 1 commit
-
-
Evan Lloyd authored
As an initial stage of making the Trusted Firmware build environment more portable, we remove most uses of the $(shell ) function and replace them with more portable make-function-based solutions.

Note that the setting of BUILD_STRING still uses $(shell ) since it's not possible to reimplement this as a make function. Avoiding invocation of this on incompatible host platforms will be implemented separately.

Change-Id: I768e2f9a265c78814a4adf2edee4cc46cda0f5b8
-