- 11 Jun, 2021 1 commit
-
-
johpow01 authored
This patch enables BL2 to run in root world (EL3), which is required by the security model of RME-enabled systems. Signed-off-by: John Powell <john.powell@arm.com> Change-Id: I53ace51e326fcdd44d44c791a7cb9ffaa20ed3f5
-
- 21 Apr, 2021 2 commits
-
-
Yann Gautier authored
Only BL32 (SP_min) is supported at the moment; BL1 and BL2_AT_EL3 are just stubbed with _pie_fixup_size=0. The changes adapt for AARCH32 what has already been done for PIE support on AARCH64. The RELA_SECTION is redefined for AARCH32, as the created section is .rel.dyn and the symbols are .rel*. Change-Id: I92bafe70e6b77735f6f890f32f2b637b98cf01b9 Signed-off-by: Yann Gautier <yann.gautier@st.com>
-
Yann Gautier authored
The use of end addresses is preferred over the size of sections. This was done for some AARCH64 files for PIE with commit [1], and some extra explanations can be found in its commit message. Align the missing AARCH64 files. For AARCH32 files, this is required to prepare PIE support introduction. [1] f1722b69 ("PIE: Use PC relative adrp/adr for symbol reference") Change-Id: I8f1c06580182b10c680310850f72904e58a54d7d Signed-off-by: Yann Gautier <yann.gautier@st.com>
-
- 11 Dec, 2020 1 commit
-
-
Javier Almansa Sobrino authored
If FEAT_PMUv3 is implemented and the PMEVTYPER<n>(_EL0).MT bit is implemented as well, it is possible to control whether PMU counters take into account events happening on other threads. If FEAT_MTPMU is implemented, EL3 (or EL2) can override the MT bit, leaving it at an effective state of 0 regardless of any write to it. This patch introduces the DISABLE_MTPMU flag, which allows disabling multithreaded event counting from EL3 (or EL2). The flag is disabled by default so the behaviour is consistent with architectures that do not implement FEAT_MTPMU. Signed-off-by: Javier Almansa Sobrino <javier.almansasobrino@arm.com> Change-Id: Iee3a8470ae8ba13316af1bd40c8d4aa86e0cb85e
-
- 21 Jul, 2020 1 commit
-
-
Alexei Fedorov authored
This patch adds support for Measured Boot driver functionality in BL1 and BL2 code. Change-Id: I7239a94c3e32b0a3e9e73768a0140e0b52ab0361 Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
-
- 29 Jun, 2020 1 commit
-
-
Masahiro Yamada authored
The .rela.dyn section is the same for BL2-AT-EL3, BL31, TSP. Move it to the common header file. I slightly changed the definition so that we can do "RELA_SECTION >RAM". It still produced equivalent elf images. Please note I got rid of '.' from the VMA field. Otherwise, if the end of previous .data section is not 8-byte aligned, it fails to link:

    aarch64-linux-gnu-ld.bfd: warning: changing start of section .rela.dyn by 4 bytes
    aarch64-linux-gnu-ld.bfd: warning: changing start of section .rela.dyn by 4 bytes
    aarch64-linux-gnu-ld.bfd: warning: changing start of section .rela.dyn by 4 bytes
    make: *** [Makefile:1071: build/qemu/release/bl31/bl31.elf] Error 1

Change-Id: Iba7422d99c0374d4d9e97e6fd47bae129dba5cc9 Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
-
- 25 Apr, 2020 1 commit
-
-
Masahiro Yamada authored
Move the data section to the common header. I slightly tweaked some scripts as follows:

[1] bl1.ld.S has ALIGN(16). I added the DATA_ALIGN macro, which is 1 by default, but overridden by bl1.ld.S. Currently, the ALIGN(16) of the .data section is redundant because commit 41286590 ("Fix boot failures on some builds linked with ld.lld.") padded out the previous section to work around an issue in LLD version <= 10.0. This will be fixed in a future release of LLVM, so I am keeping the proper way to align the LMA.

[2] bl1.ld.S and bl2_el3.ld.S define __DATA_RAM_{START,END}__ instead of __DATA_{START,END}__. I put them outside of the .data section.

[3] SORT_BY_ALIGNMENT() is missing in tsp.ld.S, sp_min.ld.S, and mediatek/mt6795/bl31.ld.S. This commit adds SORT_BY_ALIGNMENT() for all images, so the symbol order in those three will change, but I do not think it is a big deal.

Change-Id: I215bb23c319f045cd88e6f4e8ee2518c67f03692 Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
-
- 24 Apr, 2020 1 commit
-
-
Masahiro Yamada authored
The stacks section is the same for all BL linker scripts. Move it to the common header file. Change-Id: Ibd253488667ab4f69702d56ff9e9929376704f6c Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
-
- 03 Apr, 2020 1 commit
-
-
John Powell authored
Attempts to address MISRA compliance issues in BL1, BL2, and BL31 code. Mainly issues like not using boolean expressions in conditionals, conflicting variable names, ignoring return values without a (void) cast, missing explicit casts, etc. Change-Id: If1fa18ab621b9c374db73fa6eaa6f6e5e55c146a Signed-off-by: John Powell <john.powell@arm.com>
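As a rough illustration of the patterns this commit targets (a minimal sketch; the variable, values and messages below are hypothetical, not taken from the TF-A sources):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t flags = 4U;

        /* Controlling expression is an explicit comparison rather
         * than a bare non-boolean value such as "if (flags & 4U)". */
        if ((flags & 4U) != 0U) {
            puts("bit 2 set");
        }

        /* A deliberately ignored return value is cast to (void). */
        (void)printf("done\n");

        /* Narrowing conversions are made explicit with a cast. */
        uint8_t low = (uint8_t)(flags & 0xFFU);
        (void)low;

        return 0;
    }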
-
- 02 Apr, 2020 3 commits
-
-
Masahiro Yamada authored
Move the bss section to the common header. This adds BAKERY_LOCK_NORMAL and PMF_TIMESTAMP, which previously existed only in BL31. This is not a big deal because unused data should not be compiled in the first place. I believe this should be controlled by BL*_SOURCES in Makefiles, not by linker scripts.

I investigated BL1, BL2, BL2U, BL31 for plat=fvp, and BL2-AT-EL3, BL31, BL31 for plat=uniphier. I did not see any more unexpected code addition.

The bss section has bigger alignment. I added BSS_ALIGN for this.

Currently, SORT_BY_ALIGNMENT() is missing in sp_min.ld.S, and with this change, the BSS symbols in SP_MIN will be sorted by the alignment. This is not a big deal (or, even better in terms of the image size).

Change-Id: I680ee61f84067a559bac0757f9d03e73119beb33 Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
-
Masahiro Yamada authored
The common section data are repeated in many linker scripts (often twice in each script to support SEPARATE_CODE_AND_RODATA). When you add a new read-only data section, you end up touching lots of places. After this commit, you will only need to touch bl_common.ld.h when you add a new section to RODATA_COMMON.

Replace a series of RO sections with RODATA_COMMON, which contains 6 sections, some of which did not exist before. This is not a big deal because unneeded data should not be compiled in the first place. I believe this should be controlled by BL*_SOURCES in Makefiles, not by linker scripts.

When I was working on this commit, the BL1 image size increased due to the fconf_populator. Commit c452ba15 ("fconf: exclude fconf_dyn_cfg_getter.c from BL1_SOURCES") fixed this issue.

I investigated BL1, BL2, BL2U, BL31 for plat=fvp, and BL2-AT-EL3, BL31, BL31 for plat=uniphier. I did not see any more unexpected code addition.

Change-Id: I5d14d60dbe3c821765bce3ae538968ef266f1460 Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
-
Masahiro Yamada authored
These are mostly used to collect data from special structures, and are repeated in many linker scripts. To differentiate the alignment size between aarch32/aarch64, I added a new macro STRUCT_ALIGN.

While I moved the PMF_SVC_DESCS, I dropped the #if ENABLE_PMF conditional. As you can see in include/lib/pmf/pmf_helpers.h, PMF_REGISTER_SERVICE* are no-ops when ENABLE_PMF=0, so the pmf_svc_descs and pmf_timestamp_array data are not populated.

Change-Id: I3f4ab7fa18f76339f1789103407ba76bda7e56d0 Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
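A minimal sketch of the pattern relied on here, assuming a hypothetical DEMO_REGISTER_SERVICE macro rather than the real PMF_REGISTER_SERVICE* helpers: when the feature flag is 0, the registration expands to nothing that emits data, so there is nothing for the linker script to collect in the first place.

    #include <stdio.h>

    #define ENABLE_DEMO_SVC 0    /* build-time flag, analogous to ENABLE_PMF */

    #if ENABLE_DEMO_SVC
    /* Flag on: emit a descriptor that a linker-script section could collect. */
    #define DEMO_REGISTER_SERVICE(name) \
        static const char demo_svc_##name[] = #name
    #else
    /* Flag off: expands to a harmless declaration, so no data is populated. */
    #define DEMO_REGISTER_SERVICE(name) \
        extern int demo_svc_unused_##name
    #endif

    DEMO_REGISTER_SERVICE(rt_instr);

    int main(void)
    {
        printf("ENABLE_DEMO_SVC = %d\n", ENABLE_DEMO_SVC);
        return 0;
    }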
-
- 11 Mar, 2020 1 commit
-
-
Masahiro Yamada authored
TF-A has so many linker scripts: at least one linker script for each BL image, and some platforms have their own ones. They duplicate quite similar code (and comments). When we add some changes to linker scripts, we end up touching so many files. This is not nice from a maintainability perspective.

When you look at the Linux kernel, the common code is macrofied in include/asm-generic/vmlinux.lds.h, which is included from each arch linker script, arch/*/kernel/vmlinux.lds.S. TF-A can follow this approach. Let's factor out the common code into include/common/bl_common.ld.h.

As a starting point, this commit factors out the xlat_table section.

Change-Id: Ifa369e9b48e8e12702535d721cc2a16d12397895 Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
-
- 01 Mar, 2020 1 commit
-
-
Madhukar Pappireddy authored
aarch32 CPUs speculatively execute instructions following an ERET as if it were not a jump instruction. This could lead to cache-based side channel vulnerabilities. The software fix is to place barrier instructions following ERET. The counterpart patch for aarch64 is merged: https://git.trustedfirmware.org/TF-A/trusted-firmware-a.git/commit/?id=f461fe346b728d0e88142fd7b8f2816415af18bc Change-Id: I2aa3105bee0b92238f389830b3a3b8650f33af3d Signed-off-by: Madhukar Pappireddy <madhukar.pappireddy@arm.com>
-
- 07 Feb, 2020 1 commit
-
-
Louis Mayencourt authored
Introduce the Firmware CONfiguration Framework (fconf). The fconf is an abstraction layer for platform specific data, allowing a "property" to be queried and a value retrieved without the requesting entity knowing what backing store is being used to hold the data.

The default backing store is a C structure. If another backing store has to be used, the platform integrator needs to provide a "populate()" function to fill the corresponding C structure. The "populate()" function must be registered to the fconf framework with "FCONF_REGISTER_POPULATOR()". This ensures that the function will be called inside the "fconf_populate()" function.

A two level macro is used as getter:
- the first macro takes 3 parameters and converts it to a function call: FCONF_GET_PROPERTY(a,b,c) -> a__b_getter(c).
- the second level defines a__b_getter(c) to the matching C structure, variable, array, function, etc.

Ex: Get a Chain of Trust property:
1) FCONF_GET_PROPERTY(tbbr, cot, BL2_id) -> tbbr__cot_getter(BL2_id)
2) tbbr__cot_getter(BL2_id) -> cot_desc_ptr[BL2_id]

Change-Id: Id394001353ed295bc680c3f543af0cf8da549469 Signed-off-by: Louis Mayencourt <louis.mayencourt@arm.com>
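A minimal, self-contained sketch of the two-level getter described above; the "demo" namespace and the backing array are hypothetical stand-ins for the real chain-of-trust descriptors:

    #include <stdio.h>

    /* Level 1: paste the namespace, property group and argument into a
     * getter call: FCONF_GET_PROPERTY(a, b, c) -> a__b_getter(c). */
    #define FCONF_GET_PROPERTY(a, b, c)    a##__##b##_getter(c)

    /* Level 2: map the getter onto a concrete C backing store, here a
     * plain array indexed by an image id. */
    #define demo__limits_getter(id)    demo_limit_table[(id)]

    static const unsigned int demo_limit_table[] = { 0x1000, 0x2000, 0x4000 };

    int main(void)
    {
        /* Expands to demo__limits_getter(1) -> demo_limit_table[1]. */
        printf("limit = 0x%x\n", FCONF_GET_PROPERTY(demo, limits, 1));
        return 0;
    }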
-
- 05 Feb, 2020 1 commit
-
-
Zelalem authored
Fix code that violates the MISRA rule: MISRA C-2012 Rule 11.9: Literal "0" shall not be used as null pointer constant. The fix explicitly checks whether a pointer is NULL. Change-Id: Ibc318dc0f464982be9a34783f24ccd1d44800551 Signed-off-by: Zelalem <zelalem.aweke@arm.com>
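For illustration, a tiny hedged example of the rule; the helper and its argument are hypothetical:

    #include <stddef.h>

    static int handle(const char *desc)
    {
        /* Non-compliant (MISRA C-2012 Rule 11.9): "if (desc == 0)"
         * uses the literal 0 as a null pointer constant. */

        /* Compliant: compare explicitly against NULL. */
        if (desc == NULL) {
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        return handle("bl2");
    }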
-
- 03 Feb, 2020 1 commit
-
-
Sandrine Bailleux authored
When Trusted Boot is enabled, images are loaded and authenticated by following the chain of trust up from the root of trust. This means that between the initial console message saying that an image is being loaded, and the final one saying that it failed to load, BL2 may print several messages about other images in the chain of trust being loaded, so it is not always clear which image actually failed to load. Change-Id: I3b189ec9d12c2a6203d16c8dbbb4fc117639c3c1 Signed-off-by: Sandrine Bailleux <sandrine.bailleux@arm.com>
-
- 24 Jan, 2020 1 commit
-
-
Masahiro Yamada authored
This implementation simply mimics that of BL31. I did not implement ENABLE_PIE support for the BL2_IN_XIP_MEM=1 case, as it would make the linker script a bit uglier. Change-Id: If3215abd99f2758dfb232e44b50320d04eba808b Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
-
- 22 Jan, 2020 1 commit
-
-
Anthony Steinhauser authored
Even though ERET always causes a jump to another address, aarch64 CPUs speculatively execute the following instructions as if the ERET instruction were not a jump instruction. The speculative execution does not cross privilege levels (to the jump target as one would expect), but it continues at the kernel privilege level as if the ERET instruction did not change the control flow - thus speculatively executing anything that happens to be linked after the ERET instruction. Later, the results of this speculative execution are always architecturally discarded, however they can leak data through microarchitectural side channels. This speculative execution is very reliable (it seems to be unconditional) and it manages to complete even relatively performance-heavy operations (e.g. multiple dependent fetches from uncached memory).

This was fixed in Linux, FreeBSD, OpenBSD and Optee OS:
https://github.com/torvalds/linux/commit/679db70801da9fda91d26caf13bf5b5ccc74e8e8
https://github.com/freebsd/freebsd/commit/29fb48ace4186a41c409fde52bcf4216e9e50b61
https://github.com/openbsd/src/commit/3a08873ece1cb28ace89fd65e8f3c1375cc98de2
https://github.com/OP-TEE/optee_os/commit/abfd092aa19f9c0251e3d5551e2d68a9ebcfec8a

It is demonstrated in a SafeSide example:
https://github.com/google/safeside/blob/master/demos/eret_hvc_smc_wrapper.cc
https://github.com/google/safeside/blob/master/kernel_modules/kmod_eret_hvc_smc/eret_hvc_smc_module.c

Signed-off-by: Anthony Steinhauser <asteinhauser@google.com> Change-Id: Iead39b0b9fb4b8d8b5609daaa8be81497ba63a0f
-
- 12 Dec, 2019 1 commit
-
-
Manish Pandey authored
When a firmware image is compiled as a Position Independent Executable it needs to request GDT fixup by passing the size of the memory region to the el3_entrypoint_common macro. The Global descriptor table fixup will be done early during the cold boot process of the primary core.

Currently only BL31 supports PIE, but in the future, when BL2_AT_EL3 is compiled as PIE, it can simply pass the fixup size to the common el3 entrypoint macro to fix up the GDT.

The reason for this patch was to overcome the bug introduced by SHA 330ead80, which called the fixup routine for each core, re-initialising global pointers and thus overwriting any changes done by the previous core.

Change-Id: I55c792cc3ea9e7eef34c2e4653afd04572c4f055 Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
-
- 04 Dec, 2019 1 commit
-
-
Samuel Holland authored
Currently, sections within .text/.rodata/.data/.bss are emitted in the order they are seen by the linker. This leads to wasted space, when a section with a larger alignment follows one with a smaller alignment. We can avoid this wasted space by sorting the sections. To take full advantage of this, we must disable generation of common symbols, so "common" data can be sorted along with the rest of .bss.

An example of the improvement, from `make DEBUG=1 PLAT=sun50i_a64 bl31`:

    .text   => no change
    .rodata => 16 bytes saved
    .data   => 11 bytes saved
    .bss    => 576 bytes saved

As a side effect, the addition of `-fno-common` in TF_CFLAGS makes it easier to spot bugs in header files.

Signed-off-by: Samuel Holland <samuel@sholland.org> Change-Id: I073630a9b0b84e7302a7a500d4bb4b547be01d51
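A sketch of the kind of header bug that `-fno-common` exposes (the file name and variable are hypothetical): a header that defines a variable instead of declaring it links silently when common symbols are allowed, but fails once `-fno-common` is in effect.

    /* bug.h (hypothetical): wrongly *defines* a variable in a header. */
    int shared_counter;            /* tentative definition; every .c file
                                    * including this header gets a copy   */

    /* With -fcommon the copies are silently merged into one "common"
     * symbol at link time; with -fno-common the link fails with a
     * multiple-definition error, pointing straight at the header bug.
     * The fix is the usual one: */
    extern int shared_counter;     /* declaration in the header            */
    /* int shared_counter; */      /* single definition in exactly one .c  */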
-
- 13 Sep, 2019 1 commit
-
-
Alexei Fedorov authored
This patch provides the following features and makes the modifications listed below:
- Individual APIAKey key generation for each CPU.
- New key generation on every BL31 warm boot and TSP CPU On event.
- Per-CPU storage of APIAKey added in percpu_data[] of the cpu_data structure.
- `plat_init_apiakey()` function replaced with `plat_init_apkey()`, which returns a 128-bit value and uses the Generic Timer physical counter value to increase the randomness of the generated key. The new function can be used for generation of all ARMv8.3-PAuth keys.
- ARMv8.3-PAuth specific code placed in `lib/extensions/pauth`.
- New `pauth_init_enable_el1()` and `pauth_init_enable_el3()` functions generate, program and enable APIAKey_EL1 for EL1 and EL3 respectively; `pauth_disable_el1()` and `pauth_disable_el3()` functions disable PAuth for EL1 and EL3 respectively; `pauth_load_bl31_apiakey()` loads the saved per-CPU APIAKey_EL1 from the cpu_data structure.
- Combined `save_gp_pauth_registers()` function replaces calls to `save_gp_registers()` and `pauth_context_save()`; `restore_gp_pauth_registers()` replaces `pauth_context_restore()` and `restore_gp_registers()` calls.
- `restore_gp_registers_eret()` function removed, with the corresponding code placed in `el3_exit()`.
- Fixed the issue where the `pauth_t pauth_ctx` structure allocated space for 12 uint64_t PAuth registers instead of 10, by removing the macro CTX_PACGAKEY_END from `include/lib/el3_runtime/aarch64/context.h` and assigning its value to CTX_PAUTH_REGS_END.
- Use of the MODE_SP_ELX and MODE_SP_EL0 macro definitions in the `msr spsel` instruction instead of hard-coded values.
- Changes in documentation related to ARMv8.3-PAuth and ARMv8.5-BTI.

Change-Id: Id18b81cc46f52a783a7e6a09b9f149b6ce803211 Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
-
- 01 Aug, 2019 1 commit
-
-
Julius Werner authored
NOTE: AARCH32/AARCH64 macros are now deprecated in favor of __aarch64__. All common C compilers pre-define the same macros to signal which architecture the code is being compiled for: __arm__ for AArch32 (or earlier versions) and __aarch64__ for AArch64. There's no need for TF-A to define its own custom macros for this. In order to unify code with the export headers (which use __aarch64__ to avoid another dependency), let's deprecate the AARCH32 and AARCH64 macros and switch the code base over to the pre-defined standard macro. (Since it is somewhat unintuitive that __arm__ only means AArch32, let's standardize on only using __aarch64__.) Change-Id: Ic77de4b052297d77f38fc95f95f65a8ee70cf200 Signed-off-by: Julius Werner <jwerner@chromium.org>
-
- 14 Jun, 2019 1 commit
-
-
Masahiro Yamada authored
This linker script is hard to read due to the sprinkled #ifdefs. Direct read-only data to 'ROM' and read-write data to 'RAM'. Both go to the same memory device when BL2_IN_XIP_MEM is disabled. Change-Id: Ieeac3f1a4e05e9e8599de2ec84260819c70f361e Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
-
- 24 May, 2019 1 commit
-
-
Alexei Fedorov authored
This patch adds the functionality needed for platforms to provide the Branch Target Identification (BTI) extension, introduced to AArch64 in Armv8.5-A, which adds the BTI instruction used to mark valid targets for indirect branches.

The patch sets the new GP bit [50] in the stage 1 Translation Table Block and Page entries to denote guarded EL3 code pages, which will cause the processor to trap instructions in protected pages that try to perform an indirect branch to any instruction other than BTI.

The BTI feature is selected by the BRANCH_PROTECTION option, which supersedes the previous ENABLE_PAUTH option used for Armv8.3-A Pointer Authentication, and is disabled by default. Enabling BTI requires compiler support and was tested with GCC versions 9.0.0, 9.0.1 and 10.0.0. The assembly macros and helpers are modified to accommodate the BTI instruction. This is an experimental feature.

Note. The previous ENABLE_PAUTH build option to enable PAuth in EL3 is now an internal flag and the BRANCH_PROTECTION flag should be used instead to enable Pointer Authentication.

Note. The USE_LIBROM=1 option is currently not supported.

Change-Id: Ifaf4438609b16647dc79468b70cd1f47a623362e Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
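To illustrate the compiler-support side mentioned above, a small hedged sketch: built with a BTI-aware toolchain (e.g. gcc -mbranch-protection=bti, the standard GCC/Clang option rather than a TF-A build flag), the compiler places BTI landing pads at functions reachable through indirect branches, such as the callback below.

    int callback(int x)
    {
        return x + 1;    /* with BTI enabled, a landing pad is emitted
                          * at the start of indirect-branch targets */
    }

    static int call_indirect(int (*fn)(int), int x)
    {
        return fn(x);    /* indirect branch: on a guarded page, the
                          * target must be a valid BTI landing pad */
    }

    int main(void)
    {
        return (call_indirect(callback, 41) == 42) ? 0 : 1;
    }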
-
- 13 Mar, 2019 1 commit
-
-
Bryan O'Donoghue authored
Prior to entry into BL32 we set the SPSR by way of msr spsr, r1. This unfortunately only writes the bits f->[31:24] and c->[7:0]. This patch updates the bl2 exit path to write the x->[15:8] and c->[7:0] fields of the SPSR. For the purposes of initial setup of the SPSR, the x and c fields should be sufficient and, importantly, will capture the necessary lower-order control bits that f:c alone do not. This is important to ensure the SPSR is set to the mode the platform intends prior to performing an eret. Fixes: b1d27b48 ("bl2-el3: Add BL2_EL3 image") Signed-off-by: Bryan O'Donoghue <bryan.odonoghue@linaro.org>
-
- 12 Mar, 2019 1 commit
-
-
John Tsichritzis authored
The SCTLR.DSSBS bit is zero by default thus disabling speculative loads. However, we also explicitly set it to zero for BL2 and TSP images when each image initialises its context. This is done to ensure that the image environment is initialised in a safe state, regardless of the reset value of the bit. Change-Id: If25a8396641edb640f7f298b8d3309d5cba3cd79 Signed-off-by: John Tsichritzis <john.tsichritzis@arm.com>
-
- 27 Feb, 2019 2 commits
-
-
Antonio Nino Diaz authored
The size increase after enabling options related to ARMv8.3-PAuth is:

    +----------------------------+-------+-------+-------+--------+
    |                            |  text |  bss  |  data | rodata |
    +----------------------------+-------+-------+-------+--------+
    | CTX_INCLUDE_PAUTH_REGS = 1 |  +44  |  +0   |  +0   |   +0   |
    |                            |  0.2% |       |       |        |
    +----------------------------+-------+-------+-------+--------+
    | ENABLE_PAUTH = 1           | +712  |  +0   |  +16  |   +0   |
    |                            |  3.1% |       |  0.9% |        |
    +----------------------------+-------+-------+-------+--------+

The results are valid for the following build configuration:

    make PLAT=fvp SPD=tspd DEBUG=1 \
        BL2_AT_EL3=1 \
        CTX_INCLUDE_PAUTH_REGS=1 \
        ENABLE_PAUTH=1

Change-Id: I1c0616e7dea30962a92b4fd113428bc30a018320 Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
Antonio Nino Diaz authored
The size increase after enabling options related to ARMv8.3-PAuth is:

    +----------------------------+-------+-------+-------+--------+
    |                            |  text |  bss  |  data | rodata |
    +----------------------------+-------+-------+-------+--------+
    | CTX_INCLUDE_PAUTH_REGS = 1 |  +40  |  +0   |  +0   |   +0   |
    |                            |  0.2% |       |       |        |
    +----------------------------+-------+-------+-------+--------+
    | ENABLE_PAUTH = 1           | +664  |  +0   |  +16  |   +0   |
    |                            |  3.1% |       |  0.9% |        |
    +----------------------------+-------+-------+-------+--------+

Results calculated with the following build configuration:

    make PLAT=fvp SPD=tspd DEBUG=1 \
        SDEI_SUPPORT=1 \
        EL3_EXCEPTION_HANDLING=1 \
        TSP_NS_INTR_ASYNC_PREEMPT=1 \
        CTX_INCLUDE_PAUTH_REGS=1 \
        ENABLE_PAUTH=1

The changes for BL2_AT_EL3 aren't done in this commit.

Change-Id: I8c803b40c7160525a06173bc6cdca21c4505837d Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
- 15 Jan, 2019 1 commit
-
-
Antonio Nino Diaz authored
The definitions in bl1/bl1_private.h and bl2/bl2_private.h are useful for platforms that may need to access them. Change-Id: Ifd1880f855ddafcb3bfcaf1ed4a4e0f121eda174 Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
- 04 Jan, 2019 1 commit
-
-
Antonio Nino Diaz authored
Enforce full include path for includes. Deprecate old paths.

The following folders inside include/lib have been left unchanged:
- include/lib/cpus/${ARCH}
- include/lib/el3_runtime/${ARCH}

The reason for this change is that having a global namespace for includes isn't a good idea. It defeats one of the advantages of having folders and it introduces problems that are sometimes subtle (because you may not know the header you are actually including if there are two of them). For example, this patch had to be created because two headers were called the same way: e0ea0928 ("Fix gpio includes of mt8173 platform to avoid collision."). More recently, this patch has had similar problems: 46f9b2c3 ("drivers: add tzc380 support").

This problem was introduced in commit 4ecca339 ("Move include and source files to logical locations"). At that time, there weren't too many headers so it wasn't a real issue. However, time has shown that this creates problems.

Platforms that want to preserve the way they include headers may add the removed paths to PLAT_INCLUDES, but this is discouraged.

Change-Id: I39dc53ed98f9e297a5966e723d1936d6ccf2fc8f Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
- 10 Dec, 2018 1 commit
-
-
Yann Gautier authored
This avoids the following warnings:

    no previous prototype for 'bl2_arch_setup' [-Wmissing-prototypes]
    no previous prototype for 'plat_log_get_prefix' [-Wmissing-prototypes]

Also correct a compilation issue if BL2_IN_XIP_MEM is enabled: uintptr_t is not defined.

Signed-off-by: Lionel Debieve <lionel.debieve@st.com> Signed-off-by: Yann Gautier <yann.gautier@st.com>
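A hedged sketch of the fix for the first warning: a prototype has to be in scope at the definition of an externally visible function, typically provided by the header that exports it (the surrounding header name and the stdint fix are assumptions, not taken from the actual patch).

    #include <stdint.h>    /* presumably the usual fix when uintptr_t
                            * is reported as undefined                */

    /* Normally placed in an exporting header, e.g. a hypothetical
     * bl2_arch.h: */
    void bl2_arch_setup(void);

    /* With the prototype above in scope, -Wmissing-prototypes is
     * satisfied for the definition below. */
    void bl2_arch_setup(void)
    {
        /* architecture-specific setup would go here */
    }

    int main(void)
    {
        bl2_arch_setup();
        return 0;
    }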
-
- 08 Nov, 2018 1 commit
-
-
Antonio Nino Diaz authored
All identifiers, regardless of use, that start with two underscores are reserved. This means they can't be used in header guards.

The style this project now uses is the full name of the file in capital letters followed by 'H'. For example, for a file called "uart_example.h", the header guard is UART_EXAMPLE_H.

The exceptions are files that are imported from other projects:
- CryptoCell driver
- dt-bindings folders
- zlib headers

Change-Id: I50561bf6c88b491ec440d0c8385c74650f3c106e Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
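For reference, the guard style described above applied to the commit's own example file name (the declared function is just a hypothetical placeholder):

    /* uart_example.h */
    #ifndef UART_EXAMPLE_H        /* full file name in capitals, no leading
                                   * underscores, followed by H */
    #define UART_EXAMPLE_H

    void uart_example_init(void); /* hypothetical API for the sketch */

    #endif /* UART_EXAMPLE_H */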
-
- 29 Oct, 2018 1 commit
-
-
Soby Mathew authored
This patch fixes up the AArch64 assembly code to use adrp/adr instructions instead of the ldr instruction for references to symbols. This allows these assembly sequences to be Position Independent. Note that the references to sizes have been replaced with calculation of the size at runtime. This is because a size is a constant value that does not depend on the execution address, and using PC-relative instructions to load it would make it relative to the execution address. Also, we cannot use the `ldr` instruction to load a size, as it generates a dynamic relocation entry which must *not* be fixed up, and it is difficult for a dynamic loader to differentiate which entries need to be skipped. Change-Id: I8bf4ed5c58a9703629e5498a27624500ef40a836 Signed-off-by: Soby Mathew <soby.mathew@arm.com>
-
- 28 Sep, 2018 1 commit
-
-
Roberto Vargas authored
The code of LOAD_IMAGE_V2=0 has been removed. Change-Id: Iea03e5bebb90c66889bdb23f85c07d0c9717fffe Co-authored-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com> Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
- 17 Aug, 2018 1 commit
-
-
John Tsichritzis authored
If the system is in near idle conditions, this erratum could cause a deadlock or data corruption. This patch applies the workaround that prevents this. This DSU erratum affects only the DSUs that contain the ACP interface and it was fixed in r2p0. The workaround is applied only to the DSUs that are actually affected. Link to respective Arm documentation: http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.epm138168/index.html Change-Id: I033213b3077685130fc1e3f4f79c4d15d7483ec9 Signed-off-by: John Tsichritzis <john.tsichritzis@arm.com>
-
- 03 Aug, 2018 1 commit
-
-
Roberto Vargas authored
The TF Makefile was linking all the object files generated for the Mbed TLS library instead of creating a static library that could be used in the linking stage. Change-Id: I8e4cd843ef56033c9d3faeee71601d110b7e4c12 Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
-
- 11 Jul, 2018 3 commits
-
-
Roberto Vargas authored
Check_vector_size checks if the size of the vector fits in the size reserved for it. This check creates problems in the Clang assembler. A new macro, end_vector_entry, is added and check_vector_size is deprecated. This new macro fills the current exception vector until the next exception vector. If the size of the current vector is bigger than 32 instructions then it gives an error. Change-Id: Ie8545cf1003a1e31656a1018dd6b4c28a4eaf671 Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
-
Roberto Vargas authored
These sections are required by clang when the code is compiled for aarch32. They are related to stack unwinding in exceptions but, in the way that clang defines and uses them, the garbage collector cannot get rid of them. Change-Id: I085efc0cf77eae961d522472f72c4b5bad2237ab Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
-
Roberto Vargas authored
Clang linker doesn't support NEXT. As we are not using the MEMORY command to define discontinuous memory for the output file in any of the linker scripts, ALIGN and NEXT are equivalent. Change-Id: I867ffb9c9a76d4e81c9ca7998280b2edf10efea0 Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
-