- 19 Apr, 2017 1 commit
-
-
Soby Mathew authored
This patch introduces a build option to enable the D-cache early on the CPU after warm boot. This is applicable to platforms which do not require interconnect programming to enable cache coherency (e.g. single-cluster platforms). If this option is enabled, the warm boot path enables the D-cache immediately after enabling the MMU. Fixes ARM-Software/tf-issues#456 Change-Id: I44c8787d116d7217837ced3bcf0b1d3441c8d80e Signed-off-by: Soby Mathew <soby.mathew@arm.com>
-
- 31 Mar, 2017 3 commits
-
-
Douglas Raillard authored
Introduce a new build option, ENABLE_STACK_PROTECTOR. It enables compilation of all BL images with one of the GCC -fstack-protector-* options.

A new platform function plat_get_stack_protector_canary() is introduced. It returns a value that is used to initialize the canary for stack corruption detection. Returning a random value will prevent an attacker from predicting the canary and greatly increases the effectiveness of the protection.

A message is printed at the ERROR level when a stack corruption is detected.

To be effective, the global data must be stored at an address lower than the base of the stacks. Failure to do so would allow an attacker to overwrite the canary as part of an attack, which would void the protection.

The FVP implementation of plat_get_stack_protector_canary() is weak, as there is no real source of entropy on the FVP. It therefore relies on a timer's value, which could be predictable.

Change-Id: Icaaee96392733b721fa7c86a81d03660d3c1bc06
Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
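As a rough illustration of the platform hook described above, a hypothetical port might derive the canary from a compile-time seed mixed with a counter value, much like the FVP fallback. The seed constant, the header names and the read_cntpct_el0()/u_register_t helpers are assumptions for this sketch, not details quoted from the commit.

    #include <arch_helpers.h>   /* assumed to declare read_cntpct_el0() */
    #include <platform_def.h>   /* assumed to pull in u_register_t */

    #define CANARY_SEED 0x0123456789abcdefULL  /* illustrative constant only */

    u_register_t plat_get_stack_protector_canary(void)
    {
        /*
         * NOT a real entropy source: a counter value is predictable, so a
         * production platform should mix in a TRNG value instead.
         */
        return (u_register_t)(CANARY_SEED ^ read_cntpct_el0());
    }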
-
Antonio Nino Diaz authored
The desired behaviour is to call `plat_panic_handler()`, and to use `no_ret` to do so from ASM. Change-Id: I88b2feefa6e6c8f9bf057fd51ee0d2e9fb551e4f Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
Antonio Nino Diaz authored
Call console_flush() before execution either terminates or leaves an exception level. Fixes: ARM-software/tf-issues#123 Change-Id: I64eeb92effb039f76937ce89f877b68e355588e3 Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
- 28 Mar, 2017 1 commit
-
-
Summer Qin authored
This patch adds an additional flag `XLAT_TABLE_NC` which marks the translation tables as Non-cacheable for MMU accesses. Change-Id: I7c28ab87f0ce67da237fadc3627beb6792860fd4 Signed-off-by: Summer Qin <summer.qin@arm.com>
-
- 20 Mar, 2017 2 commits
-
-
Andre Przywara authored
ARM erratum 855873 applies to all Cortex-A53 CPUs. The recommended workaround is to promote "data cache clean" instructions to "data cache clean and invalidate" instructions. For core revisions of r0p3 and later this can be done by setting a bit in the CPUACTLR_EL1 register, so that hardware takes care of the promotion. As CPUACTLR_EL1 is both IMPLEMENTATION DEFINED and can be trapped to EL3, we set the bit in firmware. Also we dump this register upon crashing to provide more debug information. Enable the workaround for the Juno boards. Change-Id: I3840114291958a406574ab6c49b01a9d9847fec8 Signed-off-by: Andre Przywara <andre.przywara@arm.com>
-
Douglas Raillard authored
ge, lt, gt and le condition codes in assembly provide a signed test, whereas hs, lo, hi and ls provide the unsigned counterpart. Signed tests should only be used when strictly necessary, as using them on logically unsigned values can invert the test result for large enough values. All offsets, addresses and, usually, counters are actually unsigned values and should be tested as such. Replace signed condition codes with their unsigned counterparts wherever the signed test was unnecessary, so that the full range of unsigned values can be used without large operands inverting the result. Change-Id: I58b7e98d03e3a4476dfb45230311f296d224980a Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
-
- 08 Mar, 2017 4 commits
-
-
Antonio Nino Diaz authored
TLBI instructions for EL3 won't have the desired effect under specific circumstances in Cortex-A57 r0p0. The workaround is to execute DSB and TLBI twice each time. Even though the workaround is only needed on r0p0, the current errata framework is not prepared to apply run-time workarounds: if compiled in, the workaround is always applied, regardless of the CPU or its revision. The workaround has been enabled for Juno. The `DSB` instruction used when initializing the translation tables has been changed to `DSB ISH` as an optimization and to be consistent with the barriers used for the workaround. Change-Id: Ifc1d70b79cb5e0d87e90d88d376a59385667d338 Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
Antonio Nino Diaz authored
Added APIs to add regions to and remove regions from the translation tables dynamically while the MMU is enabled. Only static regions are allowed to overlap other static regions (for backwards compatibility). A new private attribute (MT_DYNAMIC / MT_STATIC) has been added to flag each region as such. The dynamic mapping functionality can be enabled or disabled at compile time by setting the build option PLAT_XLAT_TABLES_DYNAMIC to 1 or 0; this can be done per image. TLB maintenance code for dynamic table mapping and unmapping has also been added. Fixes ARM-software/tf-issues#310 Change-Id: I19e8992005c4292297a382824394490c5387aa3b Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
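A sketch of how such a runtime mapping might be used. The mmap_add_dynamic_region()/mmap_remove_dynamic_region() names, the header and the attribute flags are assumptions based on the library's conventions rather than text quoted from the commit message.

    #include <xlat_tables_v2.h>  /* assumed header for the new library */

    #define SCRATCH_PA   0x80000000ULL
    #define SCRATCH_VA   0x80000000UL
    #define SCRATCH_SIZE 0x1000UL

    static void map_scratch_buffer(void)
    {
        /* Map a 4 KB buffer while the MMU is running. */
        int ret = mmap_add_dynamic_region(SCRATCH_PA, SCRATCH_VA, SCRATCH_SIZE,
                                          MT_MEMORY | MT_RW | MT_SECURE);
        if (ret != 0)
            return;  /* e.g. the region overlaps an existing mapping */

        /* ... use the buffer through SCRATCH_VA ... */

        (void)mmap_remove_dynamic_region(SCRATCH_VA, SCRATCH_SIZE);
    }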
-
Antonio Nino Diaz authored
The printed output has been improved in two ways:
- Whenever multiple invalid descriptors are found, only the first one is printed, and a line is added to report how many descriptors have been omitted.
- The beginning of each line now indicates the table level the entry belongs to.

Example of the new output:

    [LV3] VA:0x1000 PA:0x1000 size:0x1000 MEM-RO-S-EXEC

Change-Id: Ib6f1cd8dbd449452f09258f4108241eb11f8d445
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
Antonio Nino Diaz authored
The folder lib/xlat_tables_v2 has been created to store a new version of the translation tables library, for further modifications in patches to follow. At the moment it only contains a basic implementation that supports static regions.

This library allows different translation tables to be modified by using different 'contexts'. For now, the implementation defaults to the translation tables used by the current image, but it is possible to modify tables other than the ones in use.

Added a new API to print debug information for the current state of the translation tables, rather than printing the information while the tables are being created. This allows subsequent debug printing of the xlat tables after they have been changed, which will be useful when dynamic regions are implemented in a patch to follow.

The common definitions stored in the `xlat_tables.h` header have been moved to a new file common to both versions, `xlat_tables_defs.h`. All headers related to the translation tables library have been moved to the subfolder `xlat_tables`.

Change-Id: Ia55962c33e0b781831d43a548e505206dffc5ea9
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
- 02 Mar, 2017 3 commits
-
-
Soby Mathew authored
This patch fixes a compilation issue with bakery locks when PSCI library is compiled with USE_COHERENT_MEM = 0 build option. Change-Id: Ic7f6cf9f2bb37f8a946eafbee9cbc3bf0dc7e900 Signed-off-by: Soby Mathew <soby.mathew@arm.com>
-
Jeenu Viswambharan authored
The current PSCI implementation can apply certain optimizations upon the assumption that all PSCI participants are cache-coherent:

- Skip cache maintenance during power-up.

- Skip cache maintenance during power-down: At present, on the power-down path, the CPU driver disables caches and the MMU, and performs cache maintenance in preparation for powering down the CPU. This means that PSCI must perform additional cache maintenance on the extant stack for correct functioning. If all participating CPUs are cache-coherent, the CPU driver would neither disable the MMU nor perform cache maintenance. The CPU being powered down would therefore remain cache-coherent throughout all PSCI call paths, which in turn means that PSCI cache maintenance operations are not required during power-down.

- Use spin locks instead of bakery locks: The current PSCI implementation must synchronize both cache-coherent and non-cache-coherent participants, and mutual exclusion primitives are not guaranteed to function on non-coherent memory, so the implementation had to resort to bakery locks. If all participants are cache-coherent, the implementation can enable the MMU and data caches early and replace bakery locks with spin locks. Spin locks make use of architectural mutual exclusion primitives, and are lighter and faster.

The optimizations are applied when the HW_ASSISTED_COHERENCY build option is enabled, as all PSCI participants are expected to be cache-coherent on those systems.

Change-Id: Iac51c3ed318ea7e2120f6b6a46fd2db2eae46ede
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
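The lock substitution could look roughly like the build-time selection below. The wrapper names are invented for illustration, and the spin/bakery lock calls are assumed to follow the usual ARM Trusted Firmware naming; this is not the actual PSCI internal code.

    #if HW_ASSISTED_COHERENCY
    /* All PSCI participants are cache-coherent: plain spin locks suffice. */
    typedef spinlock_t psci_lock_t;
    #define psci_lock_get(l)      spin_lock(l)
    #define psci_lock_release(l)  spin_unlock(l)
    #else
    /* Non-coherent participants are possible: fall back to bakery locks. */
    typedef bakery_lock_t psci_lock_t;
    #define psci_lock_get(l)      bakery_lock_get(l)
    #define psci_lock_release(l)  bakery_lock_release(l)
    #endif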
-
Jeenu Viswambharan authored
The PSCI implementation performs cache maintenance operations on its data structures to ensure their visibility to both cache-coherent and non-cache-coherent participants. These cache maintenance operations can be skipped if all PSCI participants are cache-coherent. When the HW_ASSISTED_COHERENCY build option is enabled, we assume that PSCI participants are cache-coherent. As a usage abstraction, this patch introduces wrappers for the PSCI cache maintenance and barrier operations used for state coordination: they are effectively NOPs when HW_ASSISTED_COHERENCY is enabled, but are applied otherwise. Also refactor local state usage and the associated cache operations to make them clearer. Change-Id: I77f17a90cba41085b7188c1345fe5731c99fad87 Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
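A minimal sketch of the kind of wrapper the patch describes, assuming a flush_dcache_range() helper is available; the wrapper name is illustrative, not the actual PSCI symbol.

    #if HW_ASSISTED_COHERENCY
    /* Coherent participants: the wrapper compiles away to nothing. */
    #define psci_flush_obj(obj)
    #else
    /* Make the object visible to non-coherent observers. */
    #define psci_flush_obj(obj) \
        flush_dcache_range((uintptr_t)&(obj), sizeof(obj))
    #endif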
-
- 28 Feb, 2017 1 commit
-
-
Varun Wadekar authored
This patch removes unnecessary `isb` from the enable DCO sequence as there is no need to synchronize this operation. Change-Id: I0191e684bbc7fdba635c3afbc4e4ecd793b6f06f Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
-
- 23 Feb, 2017 2 commits
-
-
Varun Wadekar authored
This patch moves the code to disable DCO operations out of the common CPU files. This allows platform code to call this API as and when required. There are certain CPU power-down states which require the DCO to be kept ON, and platforms can now decide selectively. Change-Id: Icb946fe2545a7d8c5903c420d1ee169c4921a2d1 Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
-
Douglas Raillard authored
The documentation says the erratum is enabled by default on r0p4, which is confusing given that we state that we do not enable errata workarounds by default. This patch clarifies the sentence by saying that it is enabled in hardware by default. Change-Id: I70a062d93e1da2416d5f6d5776a77a659da737aa Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
-
- 22 Feb, 2017 1 commit
-
-
Varun Wadekar authored
This patch adds support for all variants of the Denver CPUs. The variants export their cpu_ops to allow all Denver platforms to run the Trusted Firmware stack. Change-Id: I1488813ddfd506ffe363d8a32cda1b575e437035 Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
-
- 14 Feb, 2017 1 commit
-
-
Jeenu Viswambharan authored
The ARMv8.1 architecture extension has introduced support for far atomics, which include compare-and-swap. The Compare and Swap instruction is only available for AArch64.

Introduce build options to choose the architecture version that ARM Trusted Firmware targets:

- ARM_ARCH_MAJOR: selects the major version of the target ARM architecture. Default value is 8.
- ARM_ARCH_MINOR: selects the minor version of the target ARM architecture. Default value is 0.

When (ARM_ARCH_MAJOR > 8) || ((ARM_ARCH_MAJOR == 8) && (ARM_ARCH_MINOR >= 1)), the Compare and Swap instruction is used to implement spin locks for AArch64. Otherwise, the implementation falls back to using load-/store-exclusive instructions.

Update the user guide, and introduce a section in the Firmware Design guide to summarize support for features introduced in the ARMv8 architecture extensions.

Change-Id: I73096a0039502f7aef9ec6ab3ae36680da033f16
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
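The version test quoted above might be applied in the spin lock sources roughly as follows; the USE_CAS_SPINLOCK guard name is an assumption. The options themselves are set on the build command line, e.g. make ARM_ARCH_MAJOR=8 ARM_ARCH_MINOR=1.

    #if (ARM_ARCH_MAJOR > 8) || \
        ((ARM_ARCH_MAJOR == 8) && (ARM_ARCH_MINOR >= 1))
    /* ARMv8.1 or later (AArch64): spin locks can use compare-and-swap. */
    #define USE_CAS_SPINLOCK    1
    #else
    /* Earlier architectures: fall back to load-/store-exclusive. */
    #define USE_CAS_SPINLOCK    0
    #endif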
-
- 13 Feb, 2017 2 commits
-
-
dp-arm authored
Perform stat accounting for retention/standby states also when requested at multiple power levels. Change-Id: I2c495ea7cdff8619bde323fb641cd84408eb5762 Signed-off-by: dp-arm <dimitris.papastamos@arm.com>
-
dp-arm authored
This patch introduces the following three platform interfaces:

* void plat_psci_stat_accounting_start(const psci_power_state_t *state_info)
  An optional hook that platforms can implement in order to perform accounting before entering a low power state. This typically involves capturing a timestamp.

* void plat_psci_stat_accounting_stop(const psci_power_state_t *state_info)
  An optional hook that platforms can implement in order to perform accounting after exiting from a low power state. This typically involves capturing a timestamp.

* u_register_t plat_psci_stat_get_residency(unsigned int lvl, const psci_power_state_t *state_info, unsigned int last_cpu_index)
  An optional hook that platforms can implement in order to calculate the PSCI stat residency.

If any of these interfaces is overridden by the platform, it is recommended that all of them are.

By default `ENABLE_PSCI_STAT` is disabled. If `ENABLE_PSCI_STAT` is set but `ENABLE_PMF` is not, then an alternative PSCI stat collection backend must be provided. If both are set, default weak definitions of these functions are provided, using PMF to calculate the residency.

NOTE: Previously, platforms did not have to explicitly set `ENABLE_PMF` since this was automatically done by the top-level Makefile.

Change-Id: I17b47804dea68c77bc284df15ee1ccd66bc4b79b
Signed-off-by: dp-arm <dimitris.papastamos@arm.com>
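A bare-bones sketch of a platform providing all three hooks with raw counter timestamps. The per-CPU arrays, the read_cntpct_el0()/plat_my_core_pos() helpers and PLATFORM_CORE_COUNT are illustrative assumptions, and a real port would convert ticks to a meaningful residency unit.

    /* psci.h and the platform/arch helpers are assumed to provide the
     * psci_power_state_t, u_register_t and counter/core-index helpers. */

    static uint64_t entry_ts[PLATFORM_CORE_COUNT];
    static uint64_t exit_ts[PLATFORM_CORE_COUNT];

    void plat_psci_stat_accounting_start(const psci_power_state_t *state_info)
    {
        /* Timestamp taken just before entering the low power state. */
        entry_ts[plat_my_core_pos()] = read_cntpct_el0();
    }

    void plat_psci_stat_accounting_stop(const psci_power_state_t *state_info)
    {
        /* Timestamp taken right after exiting the low power state. */
        exit_ts[plat_my_core_pos()] = read_cntpct_el0();
    }

    u_register_t plat_psci_stat_get_residency(unsigned int lvl,
            const psci_power_state_t *state_info,
            unsigned int last_cpu_index)
    {
        /* Residency in raw counter ticks for the last CPU in the domain. */
        return (u_register_t)(exit_ts[last_cpu_index] -
                              entry_ts[last_cpu_index]);
    }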
-
- 06 Feb, 2017 2 commits
-
-
Douglas Raillard authored
Replace all uses of memset by zeromem when zeroing moderately-sized structures, by applying the following transformation: memset(x, 0, sizeof(x)) => zeromem(x, sizeof(x)). As the Trusted Firmware is compiled with -ffreestanding, the compiler is forbidden from using __builtin_memset and is forced to generate calls to the slow memset implementation. zeromem is a near drop-in replacement for this use case, with a more efficient implementation on both AArch32 and AArch64. Change-Id: Ia7f3a90e888b96d056881be09f0b4d65b41aa79e Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
-
Douglas Raillard authored
Introduce a zeromem_dczva function on AArch64 that can handle unaligned addresses and makes use of the DC ZVA instruction to zero a whole block at a time. This zeroing takes place directly in the cache to speed it up without doing external memory accesses.

Remove the zeromem16 function on AArch64 and replace it with an alias to zeromem. The zeromem16 function is now deprecated.

Remove the 16-byte alignment constraint on __BSS_START__ in firmware-design.md as it is no longer mandatory (it used to comply with zeromem16 requirements). Change the 16-byte alignment constraint in SP_MIN's linker script to an 8-byte alignment constraint, as the AArch32 zeromem implementation is now more efficient on 8-byte aligned addresses.

Introduce zero_normalmem and zeromem helpers in a platform-agnostic header that are implemented as follows:
* AArch32:
  * zero_normalmem: zero using usual data accesses
  * zeromem: alias for zero_normalmem
* AArch64:
  * zero_normalmem: zero normal memory using the DC ZVA instruction (needs the MMU enabled)
  * zeromem: zero using usual data accesses

Usage guidelines: in most cases, zero_normalmem should be preferred. There are two scenarios where zeromem (or memset) must be used instead:
* Code that must run with the MMU disabled (which means all memory is considered device memory for data accesses).
* Code that fills device memory with null bytes.

Optionally, the following rule can be applied if performance is important:
* Code zeroing small areas (a few bytes) that are not secrets should use memset to take advantage of compiler optimizations.

Note: code zeroing security-critical information should use zero_normalmem/zeromem instead of memset, to avoid removal by compiler optimizations in some cases or by misbehaving versions of GCC.

Fixes ARM-software/tf-issues#408
Change-Id: Iafd9663fc1070413c3e1904e54091cf60effaa82
Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
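Restating the guidelines above as code, under the assumption that zero_normalmem()/zeromem() are declared in a Trusted Firmware utility header; the structure and buffer names are made up for the example.

    #include <stdint.h>
    #include <string.h>
    /* zero_normalmem()/zeromem() assumed to come from a TF utility header. */

    struct boot_args {
        uint64_t entrypoint;
        uint64_t spsr;
    };

    static struct boot_args args;   /* normal, cacheable memory */
    static int small_flag;

    void zeroing_examples(uintptr_t device_buf)
    {
        /* Preferred in most cases (AArch64 uses DC ZVA; needs the MMU on). */
        zero_normalmem(&args, sizeof(args));

        /* Device memory, or code running with the MMU off: use zeromem. */
        zeromem((void *)device_buf, 64U);

        /* Tiny, non-secret area: memset lets the compiler optimize freely. */
        memset(&small_flag, 0, sizeof(small_flag));
    }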
-
- 30 Jan, 2017 1 commit
-
-
Jeenu Viswambharan authored
The errata reporting policy is as follows:

- If an errata workaround is enabled:
  - If it applies (i.e. the CPU is affected by the erratum), an INFO message is printed, confirming that the errata workaround has been applied.
  - If it does not apply, a VERBOSE message is printed, confirming that the errata workaround has been skipped.
- If an errata workaround is not enabled, but would have applied had it been, a WARN message is printed, alerting that the errata workaround is missing.

The CPU errata messages are printed by both BL1 (primary CPU only) and the runtime firmware on debug builds, once for each CPU/errata combination.

Relevant output from the Juno r1 console when ARM Trusted Firmware is built with PLAT=juno LOG_LEVEL=50 DEBUG=1:

    VERBOSE: BL1: cortex_a57: errata workaround for 806969 was not applied
    VERBOSE: BL1: cortex_a57: errata workaround for 813420 was not applied
    INFO:    BL1: cortex_a57: errata workaround for disable_ldnp_overread was applied
    WARNING: BL1: cortex_a57: errata workaround for 826974 was missing!
    WARNING: BL1: cortex_a57: errata workaround for 826977 was missing!
    WARNING: BL1: cortex_a57: errata workaround for 828024 was missing!
    WARNING: BL1: cortex_a57: errata workaround for 829520 was missing!
    WARNING: BL1: cortex_a57: errata workaround for 833471 was missing!
    ...
    VERBOSE: BL31: cortex_a57: errata workaround for 806969 was not applied
    VERBOSE: BL31: cortex_a57: errata workaround for 813420 was not applied
    INFO:    BL31: cortex_a57: errata workaround for disable_ldnp_overread was applied
    WARNING: BL31: cortex_a57: errata workaround for 826974 was missing!
    WARNING: BL31: cortex_a57: errata workaround for 826977 was missing!
    WARNING: BL31: cortex_a57: errata workaround for 828024 was missing!
    WARNING: BL31: cortex_a57: errata workaround for 829520 was missing!
    WARNING: BL31: cortex_a57: errata workaround for 833471 was missing!
    ...
    VERBOSE: BL31: cortex_a53: errata workaround for 826319 was not applied
    INFO:    BL31: cortex_a53: errata workaround for disable_non_temporal_hint was applied

Also update documentation.

Change-Id: Iccf059d3348adb876ca121cdf5207bdbbacf2aba
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
-
- 24 Jan, 2017 2 commits
-
-
Antonio Nino Diaz authored
Some side-channel attacks involve an attacker inferring something from the time taken for a memory compare operation to complete, for example when comparing hashes during image authentication. To mitigate this, timingsafe_bcmp() must be used for such operations instead of the standard memcmp(). This function executes in constant time and so does not leak any timing information to the caller. Change-Id: I470a723dc3626a0ee6d5e3f7fd48d0a57b8aa5fd Signed-off-by: dp-arm <dimitris.papastamos@arm.com> Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
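Hash comparison during image authentication would then follow the pattern below. The function name and digest length are illustrative, but the timingsafe_bcmp() return convention (zero on equality) matches bcmp().

    #include <stdint.h>

    #define HASH_LEN 32U  /* e.g. a SHA-256 digest size */

    static int hash_matches(const uint8_t *computed, const uint8_t *expected)
    {
        /*
         * Constant-time: the execution time does not depend on where the
         * first differing byte is, unlike memcmp().
         */
        return timingsafe_bcmp(computed, expected, HASH_LEN) == 0;
    }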
-
Sandrine Bailleux authored
This code has been imported and slightly adapted from FreeBSD: https://github.com/freebsd/freebsd/blob/6253393ad8df55730481bf2aafd76bdd6182e2f5/lib/libc/string/strnlen.c Change-Id: Ie5ef5f92e6e904adb88f8628077fdf1d27470eb3 Signed-off-by: Sandrine Bailleux <sandrine.bailleux@arm.com>
-
- 23 Jan, 2017 1 commit
-
-
Masahiro Yamada authored
One nasty part of ATF is that some boolean macros are always defined as 1 or 0, while the rest are only defined under certain conditions. For the former group, "#if FOO" or "#if !FOO" must be used, because "#ifdef FOO" is always true. (Options passed by $(call add_define,) fall into this group.) For the latter, "#ifdef FOO" or "#ifndef FOO" should be used, because checking the value of an undefined macro is strange.

Here, IMAGE_BL* is handled by make_helpers/build_macro.mk as follows:

    $(eval IMAGE := IMAGE_BL$(call uppercase,$(3)))

    $(OBJ): $(2)
        @echo "  CC      $$<"
        $$(Q)$$(CC) $$(TF_CFLAGS) $$(CFLAGS) -D$(IMAGE) -c $$< -o $$@

This means IMAGE_BL* is defined when building the corresponding image, but *undefined* for the other images. So IMAGE_BL* belongs to the latter group, where we should use #ifdef or #ifndef.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
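The two idioms side by side, in a hypothetical configuration function (ENABLE_FOO and the setup calls are placeholders, not real build options or APIs):

    void configure(void)
    {
        /* Always defined to 0 or 1 (via $(call add_define,...)): test the value. */
    #if ENABLE_FOO
        setup_foo();
    #endif

        /* IMAGE_BL31 is only defined when building BL31: test for definition. */
    #ifdef IMAGE_BL31
        bl31_specific_setup();
    #endif
    }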
-
- 17 Jan, 2017 1 commit
-
-
David Cunado authored
NOTE: this patch does not address all occurrences of system includes not being in alphabetical order, just this one case. Change-Id: I3cd23702d69b1f60a4a9dd7fd4ae27418f15b7a3
-
- 16 Jan, 2017 3 commits
-
-
Antonio Nino Diaz authored
Delete old version of libfdt at lib/libfdt. Move new libfdt API headers to include/lib/libfdt and all other files to lib/libfdt. Change-Id: I32b7888f1f20d62205310e363accbef169ad7b1b Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
Antonio Nino Diaz authored
* Add libfdt.mk helper makefile
* Remove unused libfdt files
* Minor changes to fdt.h and libfdt.h to make them C99 compliant

Adapted from 754d78b1.
Change-Id: I0847f1c2e6e11f0c899b0b7ecc522c0ad7de210c
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
Antonio Nino Diaz authored
Import libfdt code from https://git.kernel.org/cgit/utils/dtc/dtc.git tag "v1.4.2" commit ec02b34c05be04f249ffaaca4b666f5246877dea. This version includes commit d0b3ab0a0f46ac929b4713da46f7fdcd893dd3bd, which fixes a buffer overflow in fdt_offset_ptr(). Change-Id: I05a30511ea68417ee7ff26477da3f99e0bd4e06b Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
- 15 Dec, 2016 1 commit
-
-
Jeenu Viswambharan authored
Various CPU drivers in ARM Trusted Firmware register functions to handle power-down operations. At present, separate functions are registered to power down individual cores and clusters. This scheme operates on the basis of core and cluster, and doesn't cater for extending the hierarchy of power-down operations. For example, future CPUs might support multiple threads which might need powering down individually.

This patch therefore reworks the CPU operations framework to allow power-down handlers to be registered on a per-level basis. Henceforth:

- Generic code invokes CPU power-down operations by the level required.
- CPU drivers explicitly mention CPU_NO_RESET_FUNC when the CPU has no reset function.
- CPU drivers register power-down handlers as a list: a mandatory handler for level 0, and optional handlers for higher levels.

All existing CPU drivers are adapted to the new CPU operations framework without needing any functional changes within.

Also update the firmware design guide.

Change-Id: I1826842d37a9e60a9e85fdcee7b4b8f6bc1ad043
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
-
- 14 Dec, 2016 2 commits
-
-
Douglas Raillard authored
Unsigned conditions should be used instead of signed ones when comparing addresses or sizes in assembly. Signed-off-by: Douglas Raillard <douglas.raillard@arm.com> Change-Id: Id3bd9ccaf58c37037761af35ac600907c4bb0580
-
dp-arm authored
Testing showed that the time spent in a cluster power down operation is dominated by cache flushes. Add two more timestamps in runtime instrumentation to keep track of the time spent flushing the L1/L2 caches. Change-Id: I4c5a04e7663543225a85d3c6b271d7b706deffc4 Signed-off-by: dp-arm <dimitris.papastamos@arm.com>
-
- 13 Dec, 2016 4 commits
-
-
Antonio Nino Diaz authored
In AArch64, depending on the granularity of the translation tables, level 0 and/or level 1 of the translation tables may not support block descriptors, only table descriptors. This patch introduces a check to make sure that, even if theoretically it could be possible to create a block descriptor to map a big memory region, a new subtable will be created to describe its mapping. Change-Id: Ieb9c302206bfa33fbaf0cdc6a5a82516d32ae2a7 Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
Antonio Nino Diaz authored
Added the definitions `PLAT_PHY_ADDR_SPACE_SIZE` and `PLAT_VIRT_ADDR_SPACE_SIZE`, which specify respectively the physical and virtual address space sizes a platform can use. `ADDR_SPACE_SIZE` is now deprecated. To maintain compatibility, if either of the new defines isn't present, the value of `ADDR_SPACE_SIZE` is used instead. For AArch64, the register ID_AA64MMFR0_EL1 is checked to calculate the maximum PA supported by the hardware and to verify that the previously mentioned definition is valid. For AArch32, a 40-bit physical address space is considered. Added asserts to check for overflows. Porting guide updated. Change-Id: Ie8ce1da5967993f0c94dbd4eb9841fc03d5ef8d6 Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
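A platform_def.h fragment using the new definitions might look like this; the 4 GB sizes are purely illustrative and not taken from any real platform.

    /* 4 GB physical and virtual address spaces (illustrative values). */
    #define PLAT_PHY_ADDR_SPACE_SIZE    (1ULL << 32)
    #define PLAT_VIRT_ADDR_SPACE_SIZE   (1ULL << 32)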
-
Antonio Nino Diaz authored
Each translation table level entry can only map a given virtual address onto physical addresses of the same granularity. For example, with the current configuration, a level 2 entry maps blocks of 2 MB, so the physical address must be aligned to 2 MB. If the address is not aligned, the MMU will just ignore the lower bits. This patch adds an assertion to make sure that physical addresses are always aligned to the correct boundary. Change-Id: I0ab43df71829d45cdbe323301b3053e08ca99c2c Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
dp-arm authored
There is no guarantee on the signedness of char. It can be either signed or unsigned. On ARM it is unsigned and hence this memcmp() implementation works as intended. On other machines, char can be signed (x86 for example). In that case (and assuming a 2's complement implementation), interpreting a bit-pattern of 0xFF as signed char can yield -1. If *s1 is 0 and *s2 is 255 then the difference *s1 - *s2 should be negative. The C integer promotion rules guarantee that the unsigned chars will be converted to int before the operation takes place. The current implementation will return a positive value (0 - (-1)) instead, which is wrong. Fix it by changing the signedness to unsigned to avoid surprises for anyone using this code on non-ARM systems. Change-Id: Ie222fcaa7c0c4272d7a521a6f2f51995fd5130cc Signed-off-by: dp-arm <dimitris.papastamos@arm.com>
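A sketch of the corrected pattern (not necessarily the exact code in the tree): comparing through unsigned char so that a 0xFF byte is treated as 255 rather than -1.

    #include <stddef.h>

    int memcmp(const void *s1, const void *s2, size_t len)
    {
        const unsigned char *s = s1;
        const unsigned char *d = s2;

        while (len-- != 0U) {
            unsigned char sc = *s++;
            unsigned char dc = *d++;

            if (sc != dc)
                return sc - dc; /* promoted to int, e.g. 0 - 255 = -255 */
        }

        return 0;
    }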
-
- 12 Dec, 2016 1 commit
-
-
Soby Mathew authored
The AArch32 Procedure Call Standard mandates that the stack must be aligned to an 8-byte boundary at external interfaces. This patch does the required changes. This problem was detected when a crash was encountered in `psci_print_power_domain_map()` while printing 64-bit values; aligning the stack to an 8-byte boundary resolved the problem. Fixes ARM-Software/tf-issues#437 Change-Id: I517bd8203601bb88e9311bd36d477fb7b3efb292 Signed-off-by: Soby Mathew <soby.mathew@arm.com>
-
- 05 Dec, 2016 1 commit
-
-
Jeenu Viswambharan authored
There are many instances in ARM Trusted Firmware where control is transferred to functions from which return isn't expected. Such jumps are made using the 'bl' instruction to provide the callee with the location from which it was jumped to. Additionally, debuggers infer the caller by examining where the 'lr' register points. If a 'bl' of the nature described above falls at the end of an assembly function, 'lr' will be left pointing to a location outside the function range, which misleads the debugger's back trace. This patch defines a 'no_ret' macro to be used when jumping to functions from which return isn't expected. The macro ensures that the 'bl' instruction is used for the jump and, for debug builds, places a 'nop' instruction immediately thereafter (unless instructed otherwise) so as to leave 'lr' pointing within the function range. Change-Id: Ib34c69fc09197cfd57bc06e147cc8252910e01b0 Co-authored-by: Douglas Raillard <douglas.raillard@arm.com> Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
-