- 18 Feb, 2020 1 commit
-
-
Zelalem authored
Fixes for the following MISRA violations:
- Missing explicit parentheses on a sub-expression
- An identifier or macro name beginning with an underscore shall not be declared
- Type mismatch in BL1 SMC handlers and tspd_main.c
Change-Id: I7a92abf260da95acb0846b27c2997b59b059efc4
Signed-off-by: Zelalem <zelalem.aweke@arm.com>
-
- 29 Jan, 2020 1 commit
-
-
Andrew Walbran authored
This is based on the rpi implementation from https://review.trustedfirmware.org/c/TF-A/trusted-firmware-a/+/2746
Signed-off-by: Andrew Walbran <qwandor@google.com>
Change-Id: I5fe324fcd9d5e232091e01267ea12147c46bc9c1
-
- 10 Jan, 2020 1 commit
-
-
Deepika Bhavnani authored
NOTE for platform integrators: the third argument of the `plat_psci_stat_get_residency()` API, `last_cpu_idx`, is changed from `signed int` to `unsigned int`.

Issue / trouble points:
1. cpu_idx is used as a mix of `unsigned int` and `signed int` in the code, with typecasting in some places, leading to Coverity issues.
2. The underlying platform APIs return cpu_idx as `unsigned int`, while comparisons are performed against platform-specific `PLATFORM_xxx` defines, which is not consistent.

MISRA Rule 10.4: The value of a complex expression of integer type may only be cast to a type that is narrower and of the same signedness as the underlying type of the expression.

Based on the above points, cpu_idx is kept as `unsigned int` to match the APIs and low-level functions, and the platform defines are updated wherever required.
Signed-off-by: Deepika Bhavnani <deepika.bhavnani@arm.com>
Change-Id: Ib26fd16e420c35527204b126b9b91e8babcc3a5c
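For platform integrators, a minimal sketch of the affected hook with the new signedness. The prototype follows the description above; treat the exact return type and header path as assumptions and check `include/plat/common/platform.h` in your tree.

```c
#include <lib/psci/psci.h>	/* psci_power_state_t */

/* last_cpu_idx is now unsigned, matching plat_my_core_pos() and the
 * PLATFORM_* defines it is compared against. */
u_register_t plat_psci_stat_get_residency(unsigned int lvl,
					  const psci_power_state_t *state_info,
					  unsigned int last_cpu_idx)
{
	/* Platform-specific residency bookkeeping goes here. */
	return 0U;
}
```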
-
- 26 Nov, 2019 1 commit
-
-
Pankaj Gupta authored
The same SoC can have different personalities, created with different numbers of:
- cores
- clusters
As a result, the platform-specific power domain tree is created after identifying the personality of the SoC, so the power domain tree may not be the same for every personality of the SoC. The PSCI library code therefore deduces 'plat_core_count' while populating the power domain tree topology and returns the number of cores. PLATFORM_CORE_COUNT remains valid for a SoC, such that psci_plat_core_count <= PLATFORM_CORE_COUNT. PLATFORM_CORE_COUNT continues to be defined by the platform to create the data structures.
Signed-off-by: Pankaj Gupta <pankaj.gupta@nxp.com>
Change-Id: I1f5c47647631cae2dcdad540d64cf09757db7185
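As a rough illustration of what the PSCI library parses, here is a hedged sketch of a platform topology descriptor. The API name matches the one used elsewhere in TF-A, but the per-personality values are hypothetical.

```c
/* Power domain tree descriptor: one root/system node, then the number
 * of clusters, then the cores in each cluster. Values are illustrative
 * for one particular personality of the SoC. */
static const unsigned char power_domain_tree_desc[] = {
	1,	/* one system power domain */
	2,	/* two clusters in this personality */
	4, 4	/* four cores per cluster */
};

const unsigned char *plat_get_power_domain_tree_desc(void)
{
	return power_domain_tree_desc;
}
```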
-
- 26 Sep, 2019 1 commit
-
-
Madhukar Pappireddy authored
This PSCI hook is similar to pwr_domain_on_finish but is guaranteed to be invoked when the respective core and cluster are participating in coherency. This will be necessary to safely invoke the new GICv3 API, which modifies shared GIC data structures concurrently.
Change-Id: I8e54f05c9d4ef5712184c9c18ba45ac97a29eb7a
Signed-off-by: Madhukar Pappireddy <madhukar.pappireddy@arm.com>
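A minimal sketch of wiring the new hook into a platform's `plat_psci_ops`. The member name `pwr_domain_on_finish_late` follows the description above; the handler body and the ops structure shown are hypothetical.

```c
#include <lib/psci/psci.h>

/* Hypothetical handler: safe to touch shared GIC state here, because
 * the core and cluster are already participating in coherency. */
static void plat_pwr_domain_on_finish_late(const psci_power_state_t *target_state)
{
	/* e.g. per-CPU GICv3 redistributor / interrupt configuration */
}

static const plat_psci_ops_t plat_psci_pm_ops = {
	.pwr_domain_on_finish_late = plat_pwr_domain_on_finish_late,
	/* ... other handlers omitted in this sketch ... */
};
```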
-
- 13 Sep, 2019 1 commit
-
-
Alexei Fedorov authored
This patch provides the following features and makes the modifications listed below:
- Individual APIAKey generation for each CPU.
- New key generation on every BL31 warm boot and TSP CPU On event.
- Per-CPU storage of APIAKey added in percpu_data[] of the cpu_data structure.
- `plat_init_apiakey()` function replaced with `plat_init_apkey()`, which returns a 128-bit value and uses the Generic Timer physical counter value to increase the randomness of the generated key. The new function can be used for generation of all ARMv8.3-PAuth keys.
- ARMv8.3-PAuth specific code placed in `lib/extensions/pauth`.
- New `pauth_init_enable_el1()` and `pauth_init_enable_el3()` functions generate, program and enable APIAKey_EL1 for EL1 and EL3 respectively; `pauth_disable_el1()` and `pauth_disable_el3()` functions disable PAuth for EL1 and EL3 respectively; `pauth_load_bl31_apiakey()` loads the saved per-CPU APIAKey_EL1 from the cpu_data structure.
- Combined `save_gp_pauth_registers()` function replaces calls to `save_gp_registers()` and `pauth_context_save()`; `restore_gp_pauth_registers()` replaces `pauth_context_restore()` and `restore_gp_registers()` calls.
- `restore_gp_registers_eret()` function removed, with the corresponding code placed in `el3_exit()`.
- Fixed the issue where the `pauth_t pauth_ctx` structure allocated space for 12 uint64_t PAuth registers instead of 10, by removing the macro CTX_PACGAKEY_END from `include/lib/el3_runtime/aarch64/context.h` and assigning its value to CTX_PAUTH_REGS_END.
- Use of the MODE_SP_ELX and MODE_SP_EL0 macro definitions in the `msr spsel` instruction instead of hard-coded values.
- Changes in documentation related to ARMv8.3-PAuth and ARMv8.5-BTI.
Change-Id: Id18b81cc46f52a783a7e6a09b9f149b6ce803211
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
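A hedged sketch of the kind of `plat_init_apkey()` described above, mixing the Generic Timer physical counter into a 128-bit value. This is illustrative only, not the shipped default: the typedef, the constants and the mixing are assumptions, and a real platform should also fold in a proper entropy source.

```c
#include <stdint.h>
#include <arch_helpers.h>	/* read_cntpct_el0() */

typedef unsigned __int128 uint128_t;	/* or TF-A's own 128-bit type */

uint128_t plat_init_apkey(void)
{
	uint64_t counter = read_cntpct_el0();

	/* Derive two differing halves from the counter; the constants
	 * below are arbitrary and only break the symmetry. */
	uint64_t key_lo = counter ^ 0x6A09E667F3BCC908ULL;
	uint64_t key_hi = (counter << 1) ^ 0xBB67AE8584CAA73BULL;

	return ((uint128_t)key_hi << 64) | key_lo;
}
```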
-
- 12 Sep, 2019 1 commit
-
-
Deepika Bhavnani authored
cpu_idx is used as a mix of `unsigned int` and `signed int` in the code, with typecasting in some places. This change unifies cpu_idx as `unsigned int`, since the underlying API `plat_my_core_pos()` returns `unsigned int`. This was discovered via Coverity issue CID 354715.
Signed-off-by: Deepika Bhavnani <deepika.bhavnani@arm.com>
Change-Id: I4f0adb0c596ff1177210c5fe803bff853f2e54ce
-
- 09 Sep, 2019 1 commit
-
-
Deepika Bhavnani authored
Signed-off-by: Deepika Bhavnani <deepika.bhavnani@arm.com> Change-Id: I4a496d5a8e7a9a127cd6224c968539eb74932fca
-
- 16 Aug, 2019 1 commit
-
-
Deepika Bhavnani authored
GCC diagnostics were previously added to ignore array-bounds warnings. Instead of ignoring the GCC warning, the code now checks the array bounds and performs the array update only for valid elements.
Resolves: `CID 246574` `CID 246710` `CID 246651`
Signed-off-by: Deepika Bhavnani <deepika.bhavnani@arm.com>
Change-Id: I7530ecf7a1707351c6ee87e90cc3d33574088f57
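A minimal sketch of the bounds-checked pattern that replaces a diagnostic-suppression pragma. The array name, its size and the helper are illustrative, not the exact lines from the patch.

```c
#include <stddef.h>

#define ARRAY_SIZE(a)	(sizeof(a) / sizeof((a)[0]))

static int last_cpu_in_non_cpu_pd[8];	/* illustrative size */

static void record_last_cpu(size_t pd_idx, int cpu_idx)
{
	/* Update only elements that are actually inside the array,
	 * instead of silencing the compiler's bounds warning. */
	if (pd_idx < ARRAY_SIZE(last_cpu_in_non_cpu_pd)) {
		last_cpu_in_non_cpu_pd[pd_idx] = cpu_idx;
	}
}
```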
-
- 01 Aug, 2019 1 commit
-
-
Julius Werner authored
NOTE: AARCH32/AARCH64 macros are now deprecated in favor of __aarch64__.

All common C compilers pre-define the same macros to signal which architecture the code is being compiled for: __arm__ for AArch32 (or earlier versions) and __aarch64__ for AArch64. There's no need for TF-A to define its own custom macros for this. In order to unify code with the export headers (which use __aarch64__ to avoid another dependency), let's deprecate the AARCH32 and AARCH64 macros and switch the code base over to the pre-defined standard macro. (Since it is somewhat unintuitive that __arm__ only means AArch32, let's standardize on only using __aarch64__.)

Change-Id: Ic77de4b052297d77f38fc95f95f65a8ee70cf200
Signed-off-by: Julius Werner <jwerner@chromium.org>
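A small illustration of the resulting convention, selecting per-architecture code with the compiler-provided macro instead of a TF-A specific AARCH64 define. The type names here are invented for the example.

```c
#include <stdint.h>

/* Compile-time architecture selection via the standard predefined macro. */
#ifdef __aarch64__
typedef uint64_t example_register_t;	/* AArch64 general-purpose register */
#else
typedef uint32_t example_register_t;	/* AArch32 general-purpose register */
#endif
```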
-
- 06 Jun, 2019 1 commit
-
-
Andrew F. Davis authored
When acquiring or releasing the power domain locks for a given CPU, the parent nodes are looked up by walking up the PD tree on both the acquire and release paths, but only one set of lookups is needed. Fetch the parent nodes first and pass this list into both the acquire and release functions to avoid the double lookup. This also allows us to avoid doing this lookup after coherency has been exited during the core power-down sequence.

The shared struct psci_cpu_pd_nodes is not placed in coherent memory as is done for psci_non_cpu_pd_nodes, and doing so would negatively affect performance. With this patch we remove the need to have it in coherent memory by moving the access out of psci_release_pwr_domain_locks().

Signed-off-by: Andrew F. Davis <afd@ti.com>
Change-Id: I7b9cfa9d31148dea0f5e21091c8b45ef7fe4c4ab
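A sketch of the resulting call pattern. The helper names follow TF-A's internal PSCI code (declared in psci_private.h), but treat the exact signatures as assumptions; the wrapping function is hypothetical.

```c
/* Hypothetical caller: look the parent node indices up once, then
 * reuse the same list on both the acquire and release paths. */
static void example_power_op(unsigned int idx, unsigned int end_pwrlvl)
{
	unsigned int parent_nodes[PLAT_MAX_PWR_LVL] = {0U};

	psci_get_parent_pwr_domain_nodes(idx, end_pwrlvl, parent_nodes);

	psci_acquire_pwr_domain_locks(end_pwrlvl, parent_nodes);
	/* ... state changes protected by the locks ... */
	psci_release_pwr_domain_locks(end_pwrlvl, parent_nodes);
}
```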
-
- 04 Jan, 2019 1 commit
-
-
Antonio Nino Diaz authored
Enforce full include path for includes. Deprecate old paths.

The following folders inside include/lib have been left unchanged:
- include/lib/cpus/${ARCH}
- include/lib/el3_runtime/${ARCH}

The reason for this change is that having a global namespace for includes isn't a good idea. It defeats one of the advantages of having folders and it introduces problems that are sometimes subtle (because you may not know the header you are actually including if there are two of them).

For example, this patch had to be created because two headers were called the same way: e0ea0928 ("Fix gpio includes of mt8173 platform to avoid collision."). More recently, this patch has had similar problems: 46f9b2c3 ("drivers: add tzc380 support").

This problem was introduced in commit 4ecca339 ("Move include and source files to logical locations"). At that time, there weren't too many headers so it wasn't a real issue. However, time has shown that this creates problems.

Platforms that want to preserve the way they include headers may add the removed paths to PLAT_INCLUDES, but this is discouraged.

Change-Id: I39dc53ed98f9e297a5966e723d1936d6ccf2fc8f
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
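For illustration, includes written against the full path rooted at the top-level include/ directory. The specific headers shown are examples; adjust to the ones your platform actually uses.

```c
/* Full-path includes instead of relying on a flat include namespace. */
#include <common/debug.h>	/* previously: #include <debug.h> */
#include <lib/psci/psci.h>	/* previously: #include <psci.h>  */
```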
-
- 26 Nov, 2018 1 commit
-
-
Joel Hutton authored
Initial Spectre Variant 1 mitigations (CVE-2017-5753).

A potential speculative data leak was found in PSCI code; it depends on a non-robust implementation of the `plat_get_core_pos_by_mpidr()` function. This is considered very low risk. This patch adds a macro to mitigate it. Note that not all code paths could be analyzed with current tools.

Add a macro which makes a variable 'speculation safe', using the __builtin_speculation_safe_value function of GCC and llvm. This will be available in GCC 9, and is planned for llvm, but is not currently in mainline GCC or llvm. In order to implement this mitigation the compiler must support this builtin. Support is indicated by the __HAVE_SPECULATION_SAFE_VALUE flag.

The -mtrack-speculation option maintains a 'tracker' register, which determines if the processor is in false speculation at any point. This adds instructions and increases code size, but avoids the performance impact of a hard barrier. Without the -mtrack-speculation option, __builtin_speculation_safe_value expands to an ISB; DSB SY sequence after a conditional branch, before the speculation-safe variable is used. With -mtrack-speculation, a CSEL tracker, tracker, XZR, [cond]; AND safeval, tracker; CSDB sequence is added instead, clearing the vulnerable variable by AND'ing it with the tracker register, which is zero during speculative execution. [cond] are the status flags which will only be true during speculative execution.

For more information on __builtin_speculation_safe_value and the -mtrack-speculation option see https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/compiler-support-for-mitigations

The -mtrack-speculation option was not added, as the performance impact of the mitigation is low and there is only one occurrence.

Change-Id: Ic9e66d1f4a5155e42e3e4055594974c230bfba3c
Signed-off-by: Joel Hutton <Joel.Hutton@Arm.com>
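A minimal sketch of such a macro, guarded by the compiler's support flag. TF-A's macro is along these lines, but treat the exact name and its location in the tree as assumptions.

```c
/* Use the builtin when the toolchain provides it; otherwise the macro
 * expands to nothing and no mitigation is applied. */
#if defined(__HAVE_SPECULATION_SAFE_VALUE)
#define SPECULATION_SAFE_VALUE(var) \
	((var) = __builtin_speculation_safe_value(var))
#else
#define SPECULATION_SAFE_VALUE(var)
#endif

/* Typical use: sanitise an index derived from caller-controlled input
 * before it is used to address an array, e.g.
 *	SPECULATION_SAFE_VALUE(cpu_idx);
 */
```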
-
- 11 Oct, 2018 1 commit
-
-
ldts authored
Some platforms can only resume from system suspend from the boot CPU, hence they should only enter that state from that same core. This commit presents an interface that allows the platform to reject system suspend entry near its very last stage (last CPU).
-
- 10 Oct, 2018 2 commits
-
-
Andrew F. Davis authored
When a platform enables its caches before it accesses the psci_non_cpu_pd_nodes structure, explicit cache maintenance is not needed.
Signed-off-by: Andrew F. Davis <afd@ti.com>
-
Andrew F. Davis authored
The MMU is not disabled in this path; update the comment to reflect this. Also clarify that both paths call prepare_cpu_pwr_dwn(), but the second path does stack cache maintenance.
Signed-off-by: Andrew F. Davis <afd@ti.com>
-
- 03 Oct, 2018 1 commit
-
-
Daniel Boulby authored
Mark the initialization functions in BL31, such as context management, EHF, RAS and PSCI, as __init so that they can be reclaimed by the platform when no longer needed.
Change-Id: I7446aeee3dde8950b0f410cb766b7a2312c20130
Signed-off-by: Daniel Boulby <daniel.boulby@arm.com>
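Conceptually, `__init` tags a function so the linker groups it into a reclaimable section. A hedged sketch of the idea follows; the section name, the macro definition and the function are illustrative and may differ from what TF-A actually defines.

```c
/* Place init-only code in a dedicated section that the platform can
 * unmap or reuse once BL31 initialization is complete. */
#define __init	__attribute__((__section__(".text.init")))

void __init bl31_example_setup(void)
{
	/* one-shot initialization; never called again after boot */
}
```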
-
- 28 Sep, 2018 3 commits
-
-
Antonio Nino Diaz authored
Change-Id: Icd1cdd42afdc78895a9be6c46b414b0a155cfa63 Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
Antonio Nino Diaz authored
Change-Id: I9fd8016527ad7706494f34356fdae8efacef5f72 Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
Roberto Vargas authored
Change-Id: I40d040aa05bcbf11536a96ce59827711456b93a8 Co-authored-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com> Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
- 07 Aug, 2018 1 commit
-
-
Antonio Nino Diaz authored
During cold boot, the initial translation tables are created with data caches disabled, so all modifications go to memory directly. After the MMU is enabled and the data cache is enabled, any modification to the tables goes to the data cache, and eventually may get flushed to memory.

If CPU0 modifies the tables while CPU1 is off, CPU0 will have the modified tables in its data cache. When CPU1 is powered on, the MMU is enabled, then it enables coherency, and then it enables the data cache. Until this is done, CPU1 isn't in coherency, and the translation tables it sees can be outdated if CPU0 still has some modified entries in its data cache.

This can be a problem in some cases. For example, the warm boot code uses only the tables mapped during cold boot, which don't normally change. However, if they are modified (and a RO page is made RW, or an XN page is made executable) the CPU will see the old attributes and crash when it tries to access it.

This doesn't happen in systems with HW_ASSISTED_COHERENCY or WARMBOOT_ENABLE_DCACHE_EARLY. In these systems, the data cache is enabled at the same time as the MMU. As soon as this happens, the CPU is in coherency.

There was an attempt at a fix in psci_helpers.S, but it didn't solve the problem. That code has been deleted. The code was introduced in commit 26441030 ("Invalidate TLB entries during warm boot").

Now, during a map or unmap operation, the memory associated with each modified table is flushed. Traversing a table will also flush its memory, as there is no way to tell in the current implementation whether the table that has been traversed has also been modified.

Change-Id: I4b520bca27502f1018878061bc5fb82af740bb92
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
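A hedged sketch of the flush-after-write idea. The cache helper `clean_dcache_range()` follows TF-A's arch helpers, but the wrapper function itself is hypothetical and not the actual xlat tables code.

```c
#include <stdint.h>
#include <arch_helpers.h>	/* clean_dcache_range() */

/* Hypothetical wrapper: write a descriptor, then clean that memory so
 * a CPU that is not yet coherent reads the new value from RAM rather
 * than seeing a stale copy left in another CPU's data cache. */
static void xlat_write_descriptor(uint64_t *entry, uint64_t desc)
{
	*entry = desc;
	clean_dcache_range((uintptr_t)entry, sizeof(*entry));
}
```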
-
- 02 Aug, 2018 1 commit
-
-
Antonio Nino Diaz authored
Change-Id: I77c9cd2d1d6d0122cc49917fa686014bee154589 Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
- 26 Jul, 2018 1 commit
-
-
Andrew F. Davis authored
If either USE_COHERENT_MEM or HW_ASSISTED_COHERENCY being true should cause us to not enter the ifdef block, then the logic is not correct here. Possibly a bad use of De Morgan's law? Fix this.
Signed-off-by: Andrew F. Davis <afd@ti.com>
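For illustration only (not the exact lines from the patch): the intent is "enter this block only when neither flag is set", which by De Morgan's law is the negation of the disjunction, not the disjunction of the negations.

```c
/* Incorrect: true whenever at least one of the two flags is 0. */
#if !USE_COHERENT_MEM || !HW_ASSISTED_COHERENCY
/* ...cache maintenance... */
#endif

/* Correct: true only when both flags are 0. */
#if !(USE_COHERENT_MEM || HW_ASSISTED_COHERENCY)
/* ...cache maintenance... */
#endif
```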
-
- 24 Jul, 2018 4 commits
-
-
Antonio Nino Diaz authored
MISRA C-2012 Rules 10.1, 10.3, 17.8 and 20.7. Change-Id: I3980bd2a1d845559af4bbe2887a0250d0506a064 Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
Antonio Nino Diaz authored
MISRA C-2012 Rules 10.1, 10.3 and 20.7. Change-Id: I972ce63f0d8fa157ed17e826b84f218fe498c517 Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
Antonio Nino Diaz authored
MISRA C-2012 Rules 10.1 and 10.3. Change-Id: I88cd5f56cda5780f2e0ba541c0f5b561309ab3af Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
Antonio Nino Diaz authored
Fix violations of MISRA C-2012 Rules 8.13, 10.1, 10.3, 17.7 and 20.7. Change-Id: I6f45a1069b742aebf9e1d6a403717b1522083f51 Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
- 20 Jul, 2018 3 commits
-
-
Antonio Nino Diaz authored
Also change header guards to fix defects of MISRA C-2012 Rule 21.1. Change-Id: Ied0d4b0e557ef6119ab669d106d2ac5d99620c57 Acked-by: Sumit Garg <sumit.garg@linaro.org> Acked-by: Anson Huang <Anson.Huang@nxp.com> Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
Antonio Nino Diaz authored
Fix MISRA C-2012 Directive 4.9 defects. Change-Id: Ibd5364d8f138ddcf59c8074c32b35769366807dc Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
Antonio Nino Diaz authored
Fix MISRA C-2012 Directive 4.9 and Rule 21.1 defects. Change-Id: I96c216317d38741ee632d2640cd7b36e6723d5c2 Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
- 12 Jun, 2018 1 commit
-
-
Daniel Boulby authored
Use a _ prefix for macro arguments to prevent the argument from hiding variables of the same name in the outer scope.

Rule 5.3: An identifier declared in an inner scope shall not hide an identifier declared in an outer scope.

Fixed for: make LOG_LEVEL=50 PLAT=fvp

Change-Id: I67b6b05cbad4aeca65ce52981b4679b340604708
Signed-off-by: Daniel Boulby <daniel.boulby@arm.com>
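A hypothetical example of the convention; the macro and its argument are invented for illustration.

```c
#include <assert.h>

/* Without the underscore, a parameter named "idx" would be reported as
 * hiding any outer-scope identifier also named "idx" (MISRA Rule 5.3);
 * the leading underscore gives the macro argument a distinct name. */
#define VALIDATE_CPU_INDEX(_idx)	assert((_idx) < PLATFORM_CORE_COUNT)
```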
-
- 27 Mar, 2018 1 commit
-
-
Jonathan Wright authored
Initializes each element of the last_cpu_in_non_cpu_pd array in the PSCI stat implementation to -1, the reset value. This satisfies MISRA rule 9.3. Previously, only the first element of the array was initialized to -1.
Change-Id: I666c71e6c073710c67c6d24c07a219b1feb5b773
Signed-off-by: Jonathan Wright <jonathan.wright@arm.com>
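One way to express this, using GCC's range-designated initializer, which TF-A's toolchains accept. This is a sketch: the array size macro and the exact form used by the PSCI stat code are assumptions.

```c
/* Every element starts at -1 ("no CPU recorded yet"), not just [0]. */
static int last_cpu_in_non_cpu_pd[PSCI_NUM_NON_CPU_PWR_DOMAINS] = {
	[0 ... PSCI_NUM_NON_CPU_PWR_DOMAINS - 1] = -1
};
```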
-
- 26 Mar, 2018 1 commit
-
-
Jonathan Wright authored
Ensure (where possible) that switch statements in lib comply with MISRA rules 16.1 - 16.7. Change-Id: I52bc896fb7094d2b7569285686ee89f39f1ddd84 Signed-off-by: Jonathan Wright <jonathan.wright@arm.com>
-
- 21 Mar, 2018 1 commit
-
-
Antonio Nino Diaz authored
When the source code says 'SMCC' it is talking about the SMC Calling Convention. The correct acronym is SMCCC. This affects a few definitions and file names. Some files have been renamed (smcc.h, smcc_helpers.h and smcc_macros.S), but the old files have been kept for compatibility; they include the new ones with an ERROR_DEPRECATED guard.
Change-Id: I78f94052a502436fdd97ca32c0fe86bd58173f2f
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
-
- 28 Feb, 2018 1 commit
-
-
Roberto Vargas authored
Rule 8.3: All declarations of an object or function shall use the same names and type qualifiers. Change-Id: Iff384187c74a598a4e73f350a1893b60e9d16cec Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
-
- 27 Feb, 2018 1 commit
-
-
Antonio Nino Diaz authored
During the warm boot sequence:
1. The MMU is enabled with the data cache disabled. The MMU table walker is set up to access the translation tables as cacheable memory, but its accesses are non-cacheable because SCTLR_EL3.C controls them as well.
2. The interconnect is set up and the CPU enters coherency with the rest of the system.
3. The data cache is enabled.

If support for dynamic translation tables is enabled and another CPU makes changes to a region, the changes may only be present in the data cache, not in RAM. The CPU that is booting isn't in coherency with the rest of the system, so the table walker of that CPU isn't either. This means that it may read old entries from RAM and it may have invalid TLB entries corresponding to the dynamic mappings.

This is not a problem for the boot code because the mapping is 1:1 and the regions are static. However, the code that runs after the boot sequence may need to access the dynamically mapped regions.

This patch invalidates all TLBs during warm boot when dynamic translation tables support is enabled to prevent this problem.

Change-Id: I80264802dc0aa1cb3edd77d0b66b91db6961af3d
Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
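A hedged sketch of the invalidation step. The helper names (`tlbialle3()`, `dsbish()`, `isb()`) and the PLAT_XLAT_TABLES_DYNAMIC build flag follow TF-A's arch helpers and build options, but the wrapping function and its exact placement on the warm boot path are assumptions.

```c
#include <arch_helpers.h>

/* Hypothetical placement: early in the EL3 warm boot path, before any
 * dynamically mapped region is accessed. */
static void warm_boot_invalidate_tlbs(void)
{
#if PLAT_XLAT_TABLES_DYNAMIC
	tlbialle3();	/* drop every EL3 TLB entry */
	dsbish();	/* wait for completion across the shareability domain */
	isb();
#endif
}
```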
-
- 29 Jan, 2018 1 commit
-
-
Dimitris Papastamos authored
On some platforms it may be necessary to discover the SMCCC version via a PSCI features call. Change-Id: I95281ac2263ca9aefda1809eb03464fbdb8ac24d Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
-
- 11 Jan, 2018 1 commit
-
-
Dimitris Papastamos authored
The suspend hook is published at the start of a CPU powerdown operation. The resume hook is published at the end of a CPU powerup operation. Change-Id: I50c05e2dde0d33834095ac41b4fcea4c161bb434 Signed-off-by: Dimitris Papastamos <dimitris.papastamos@arm.com>
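A hedged sketch of consuming these hooks through TF-A's pubsub framework. The header path, event name and handler signature follow the framework's usual shape, but treat all of them as assumptions; the handler itself is hypothetical.

```c
#include <lib/el3_runtime/pubsub.h>

/* Hypothetical subscriber: runs when the suspend event is published at
 * the start of a CPU powerdown operation. */
static void *example_suspend_handler(const void *arg)
{
	/* save any EL3 or peripheral state before the core powers down */
	return (void *)0;
}
SUBSCRIBE_TO_EVENT(psci_suspend_pwrdown_start, example_suspend_handler);
```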
-
- 20 Nov, 2017 1 commit
-
-
Roberto Vargas authored
There is an edge case where the cache maintenance done in psci_do_cpu_off may not be seen by some cores. This case is handled in psci_cpu_on_start, but it wasn't handled in psci_affinity_info.
Change-Id: I4d64f3d1ca9528e364aea8d04e2d254f201e1702
Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
-
- 08 Nov, 2017 1 commit
-
-
Etienne Carriere authored
If an ARMv7-based platform does not set ARM_CORTEX_Ax=yes, the platform shall define ARMV7_SUPPORTS_GENERIC_TIMER to enable generic timer support.
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
-