- 26 Jul, 2017 1 commit
-
-
David Cunado authored
There is a theoretical edge case during CPU_ON where the cache may contain stale data for the target CPU's data structure. This can occur under the following conditions:
- the target CPU is in a different cluster from the current CPU
- the target CPU was the last CPU to shut down in its cluster
- the cluster was removed from coherency as part of that CPU's shutdown
In this case the cache maintenance performed as part of the target CPU's shutdown was not seen by the current CPU's cluster, so the cache may contain stale data for the target CPU. This patch adds a cache maintenance operation (flush) on the cache line containing the target CPU data, ensuring that the data is read from main memory.
Change-Id: If8cfd42639b03174f60669429b7f7a757027d0fb
Signed-off-by: David Cunado <david.cunado@arm.com>
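As a rough illustration (not the commit's diff), the fix amounts to flushing the line that holds the target CPU's PSCI state before reading it; the accessor names below are assumptions modelled on the TF per-cpu data helpers:

    /* Sketch: flush the line holding the target CPU's PSCI state so the
     * subsequent read is serviced from main memory rather than a stale
     * line in this cluster's cache. */
    flush_cpu_data_by_index(target_idx, psci_svc_cpu_data.aff_info_state);
    target_aff_state = psci_get_aff_info_state_by_idx(target_idx);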
-
- 12 Jul, 2017 1 commit
-
-
Isla Mitchell authored
This fix modifies the order of system includes to meet the ARM TF coding standard. Some exceptions are made in order to retain header groupings, to minimise changes to imported headers, and where headers sit within #if and #ifndef statements. Change-Id: I65085a142ba6a83792b26efb47df1329153f1624 Signed-off-by: Isla Mitchell <isla.mitchell@arm.com>
-
- 03 May, 2017 1 commit
-
-
dp-arm authored
To make software license auditing simpler, use SPDX[0] license identifiers instead of duplicating the license text in every file. NOTE: Files that have been imported from FreeBSD have not been modified. [0]: https://spdx.org/ Change-Id: I80a00e1f641b8cc075ca5a95b10607ed9ed8761a Signed-off-by: dp-arm <dimitris.papastamos@arm.com>
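For illustration, a file header under this scheme carries a single machine-readable tag in place of the duplicated license text (the copyright line here is an example; BSD-3-Clause is the project's license):

    /*
     * Copyright (c) 2017, ARM Limited and Contributors. All rights reserved.
     *
     * SPDX-License-Identifier: BSD-3-Clause
     */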
-
- 19 Apr, 2017 1 commit
-
-
Soby Mathew authored
This patch introduces a build option to enable the D-cache early on the CPU after warm boot. This is applicable for platforms which do not require interconnect programming to enable cache coherency (e.g. single-cluster platforms). If this option is enabled, then the warm boot path enables the D-cache immediately after enabling the MMU. Fixes ARM-Software/tf-issues#456 Change-Id: I44c8787d116d7217837ced3bcf0b1d3441c8d80e Signed-off-by: Soby Mathew <soby.mathew@arm.com>
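A minimal sketch of what the warm boot path does under such an option; the flag name WARMBOOT_ENABLE_DCACHE_EARLY and the enable_mmu_el3()/DISABLE_DCACHE helpers are quoted from memory and should be treated as assumptions:

    void warm_boot_enable_mmu(void)
    {
    #if WARMBOOT_ENABLE_DCACHE_EARLY
        /* No interconnect programming needed (e.g. single-cluster platform):
         * safe to enable the D-cache together with the MMU. */
        enable_mmu_el3(0U);
    #else
        /* Keep data accesses non-cacheable until coherency is established. */
        enable_mmu_el3(DISABLE_DCACHE);
    #endif
    }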
-
- 02 Mar, 2017 1 commit
-
-
Jeenu Viswambharan authored
The current PSCI implementation can apply certain optimizations upon the assumption that all PSCI participants are cache-coherent.
- Skip performing cache maintenance during power-up.
- Skip performing cache maintenance during power-down: at present, on the power-down path, the CPU driver disables caches and MMU, and performs cache maintenance in preparation for powering down the CPU. This means that PSCI must perform additional cache maintenance on the extant stack for correct functioning. If all participating CPUs are cache-coherent, the CPU driver neither disables the MMU nor performs cache maintenance. The CPU being powered down therefore remains cache-coherent throughout all PSCI call paths, which in turn means that PSCI cache maintenance operations are not required during power-down.
- Choose spin locks instead of bakery locks: the current PSCI implementation must synchronize both cache-coherent and non-cache-coherent participants. Mutual exclusion primitives are not guaranteed to function on non-coherent memory, so the current implementation had to resort to bakery locks. If all participants are cache-coherent, the implementation can enable the MMU and data caches early and replace bakery locks with spin locks. Spin locks make use of architectural mutual exclusion primitives, and are lighter and faster.
The optimizations are applied when the HW_ASSISTED_COHERENCY build option is enabled, as all PSCI participants are expected to be cache-coherent on such systems.
Change-Id: Iac51c3ed318ea7e2120f6b6a46fd2db2eae46ede
Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
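A hedged sketch of the lock substitution; the typedef and macro names are illustrative, while spinlock_t/bakery_lock_t and their lock/unlock calls follow the TF lock libraries:

    #if HW_ASSISTED_COHERENCY
    /* All participants are cache-coherent: lighter architectural spin locks. */
    typedef spinlock_t psci_lock_t;
    #define psci_lock_get(l)       spin_lock(l)
    #define psci_lock_release(l)   spin_unlock(l)
    #else
    /* Mixed coherency: bakery locks work without coherent mutual exclusion. */
    typedef bakery_lock_t psci_lock_t;
    #define psci_lock_get(l)       bakery_lock_get(l)
    #define psci_lock_release(l)   bakery_lock_release(l)
    #endif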
-
- 19 Jul, 2016 1 commit
-
-
Soby Mathew authored
This patch introduces the PSCI Library interface. The major changes are as follows:
* Earlier, BL31 was responsible for architectural initialization during cold boot via bl31_arch_setup(), whereas PSCI was responsible for the same during warm boot. This functionality is now consolidated in the PSCI library, which performs architectural initialization via psci_arch_setup() during both cold and warm boots.
* Earlier, the warm boot entry point was always `psci_entrypoint()`. This was not flexible enough as a library interface. Now PSCI expects the runtime firmware to provide the entry point via `psci_setup()`. A new function `bl31_warm_entrypoint` is introduced in BL31 and the previous `psci_entrypoint()` is deprecated.
* The `smc_helpers.h` header is reorganized to separate the SMC Calling Convention defines from the Trusted Firmware SMC helpers. The former are now in a new header file `smcc.h` and the SMC helpers are moved to an architecture-specific header.
* The CPU context is used by PSCI for context initialization and restoration after power down (PSCI context). It is also used by BL31 for SMC handling and context management during Normal-Secure world switches (SMC context). The `psci_smc_handler()` interface is redefined to not use SMC helper macros, making it possible to decouple the PSCI context from the EL3 runtime firmware's SMC context. This enables PSCI to be integrated with other runtime firmware using a different SMC context.
NOTE: With this patch the architectural setup done in `bl31_arch_setup()` is done as part of `psci_setup()`, and hence `bl31_platform_setup()` is invoked prior to architectural setup. It is highly unlikely that the platform setup will depend on architectural setup and cause any failure. Please be aware of this change in sequence.
Change-Id: I7f497a08d33be234bbb822c28146250cb20dab73
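A hypothetical sketch of the new split of responsibilities: the runtime firmware (BL31) owns the warm boot entry point and hands it to the PSCI library at setup time. The argument shown for psci_setup() is an assumption about the interface, not its exact signature:

    /* BL31 side (sketch): register the warm boot entry point with the
     * PSCI library instead of PSCI hard-wiring psci_entrypoint(). */
    void bl31_warm_entrypoint(void);    /* new BL31-provided entry point */

    void bl31_psci_init(void)
    {
        /* Assumed interface: the library records the entry point and also
         * performs the architectural setup formerly in bl31_arch_setup(). */
        psci_setup((uintptr_t)&bl31_warm_entrypoint);
    }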
-
- 18 Jul, 2016 1 commit
-
-
Soby Mathew authored
This patch moves the PSCI services and the BL31 frameworks, such as context management and per-cpu data, into new library components `PSCI` and `el3_runtime` respectively. This enables PSCI to be built independently from BL31. A new `psci_lib.mk` makefile is introduced which adds the relevant PSCI library sources and gets included by `bl31.mk`. Other changes made as part of this patch are:
* The runtime services framework is moved to the `common/` folder to enable reuse.
* The `asm_macros.S` and `assert_macros.S` helpers are moved to an architecture-specific folder.
* `plat_psci_common.c` is moved from the `plat/common/aarch64/` folder to the `plat/common` folder. The original file location now has a stub which simply includes the file from its new location, to maintain platform compatibility.
Most of these changes do not affect platform builds, as they only involve changes to the generic bl1.mk and bl31.mk makefiles.
NOTE: THE `plat_psci_common.c` FILE HAS MOVED LOCATION AND THE STUB FILE AT THE ORIGINAL LOCATION IS NOW DEPRECATED. PLATFORMS SHOULD MODIFY THEIR MAKEFILES TO INCLUDE THE FILE FROM THE NEW LOCATION.
Change-Id: I6bd87d5b59424995c6a65ef8076d4fda91ad5e86
-
- 25 Apr, 2016 2 commits
-
-
Sandrine Bailleux authored
The "end power level" value passed as the 3rd argument to the psci_cpu_on_start() function is not used so this patch removes it. Change-Id: Icaa68b8c4ecd94507287970455fbff354faaa41e
-
Sandrine Bailleux authored
This patch introduces some debug assertions in the function psci_cpu_on_start() to check that the arguments it receives are valid. Change-Id: If4d23c9f668fb46f2d18c5e2ed1929498cc6736b
-
- 01 Feb, 2016 1 commit
-
-
Soby Mathew authored
When a CPU is powered down using the PSCI CPU_OFF API, it disables its caches and updates its `aff_info_state` to OFF. The corresponding cache line is invalidated by the CPU so that the update will be observed by other CPUs running with caches enabled. There is a possibility that another CPU, which has been trying to turn this CPU on via the PSCI CPU_ON API, has already seen the update to `aff_info_state` and proceeds to update the state to ON_PENDING prior to the cache invalidation. This may result in the update of the state to ON_PENDING being discarded. This patch fixes the issue by making sure that the update of `aff_info_state` to ON_PENDING sticks: the value is read back after the cache flush and the update is retried if it did not take effect. The patch also adds a dsbish() to `psci_do_cpu_off()` to ensure ordering of the update to `aff_info_state` prior to the cache line invalidation. Fixes ARM-software/tf-issues#349 Change-Id: I225de99957fe89871f8c57bcfc243956e805dcca
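In pseudocode terms, the fix turns the single write into a write-flush-read-back loop; the accessor names are assumptions modelled on the per-cpu data helpers and are shown only to illustrate the retry:

    do {
        psci_set_aff_info_state_by_idx(target_idx, AFF_STATE_ON_PENDING);

        /* Push the update to memory; if the target CPU's own
         * invalidate-on-OFF raced with us, the read-back sees OFF
         * again and the write is retried. */
        flush_cpu_data_by_index(target_idx, psci_svc_cpu_data.aff_info_state);
    } while (psci_get_aff_info_state_by_idx(target_idx) != AFF_STATE_ON_PENDING);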
-
- 14 Sep, 2015 1 commit
-
-
Achin Gupta authored
On the ARMv8 architecture, cache maintenance operations by set/way on the last level of integrated cache do not affect the system cache. This means that such a flush or clean operation could result in the data being pushed out to the system cache rather than main memory. Another CPU could access this data before it enables its data cache or MMU. Such accesses could be serviced from main memory instead of the system cache. If the data in the system cache has not yet been flushed or evicted to main memory, there could be a loss of coherency. The only mechanism to guarantee that main memory will be updated is to use cache maintenance operations to the PoC by MVA (see section D3.4.11, "System level caches", of the ARMv8-A Architecture Reference Manual, Issue A.g/ARM DDI 0487A.G). This patch removes the reliance of Trusted Firmware on the flush by set/way operation to ensure visibility of data in main memory. Cache maintenance operations by MVA are now used instead. The broad categories of changes are:
1. The RW areas of BL2/BL31/BL32 are invalidated by MVA before the C runtime is initialised. This ensures that any stale cache lines at any level of cache are removed.
2. Updates to global data in runtime firmware (BL31) by the primary CPU are made visible to secondary CPUs using a cache clean operation by MVA.
3. Cache maintenance by set/way operations are only used prior to power down.
NOTE: NON-UPSTREAM TRUSTED FIRMWARE CODE SHOULD MAKE EQUIVALENT CHANGES IN ORDER TO FUNCTION CORRECTLY ON PLATFORMS WITH SUPPORT FOR SYSTEM CACHES.
Fixes ARM-software/tf-issues#205
Change-Id: I64f1b398de0432813a0e0881d70f8337681f6e9a
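As an illustration of point 2 above, a primary-CPU update to a shared global can be made visible with a flush by address rather than by set/way; flush_dcache_range() is the TF helper operating by MVA to the PoC, and the global variable here is made up for the example:

    static entry_point_info_t *next_image_ep;   /* example shared global */

    void publish_next_image_ep(entry_point_info_t *ep)
    {
        next_image_ep = ep;
        /* Clean/invalidate by MVA to the PoC so a secondary CPU that has not
         * yet enabled its MMU/D-cache reads the new value from main memory. */
        flush_dcache_range((uintptr_t)&next_image_ep, sizeof(next_image_ep));
    }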
-
- 13 Aug, 2015 6 commits
-
-
Soby Mathew authored
This patch reworks the PSCI generic implementation to conform to the ARM Trusted Firmware coding guidelines as described here: https://github.com/ARM-software/arm-trusted-firmware/wiki This patch also reviews the use of signed data types within PSCI generic code and replaces them with their unsigned counterparts wherever signed types are not appropriate. The PSCI_INVALID_DATA macro, which was defined to -1, is now replaced with the PSCI_INVALID_PWR_LVL macro, which is defined to PLAT_MAX_PWR_LVL + 1. Change-Id: Iaea422d0e46fc314e0b173c2b4c16e0d56b2515a
-
Soby Mathew authored
This commit does the switch to the new PSCI framework implementation, replacing the existing files in the PSCI folder with the ones in the PSCI1.0 folder. The corresponding makefiles are modified as required for the new implementation. The platform.h header file is also switched to the new one, as required by the new frameworks. The build flag ENABLE_PLAT_COMPAT defaults to 1 to enable the compatibility layer, which lets existing platform ports continue to build and run with minimal changes. The default weak implementation of platform_get_core_pos() is now removed from platform_helpers.S and is provided by the compatibility layer. Note: The Secure Payloads and their dispatchers still use the old platform and framework APIs, and hence it is expected that the ENABLE_PLAT_COMPAT build flag will remain enabled in subsequent patches. The compatibility for SPDs using the older APIs on platforms migrated to the new APIs will be added in the following patch. Change-Id: I18c51b3a085b564aa05fdd98d11c9f3335712719
-
Sandrine Bailleux authored
There used to be two warm reset entry points:
- the "on finisher", for when the core has been turned on using a PSCI CPU_ON call;
- the "suspend finisher", entered upon resumption from a previous PSCI CPU_SUSPEND call.
The appropriate warm reset entry point used to be programmed into the mailboxes by the power management hooks. However, it is not required to provide this information to the PSCI entry point code, as it can figure it out by itself: by querying the affinity info state, a core is able to determine which execution path it is on. If the state is ON_PENDING it has just been turned on; otherwise it is resuming from suspend. This patch unifies the two warm reset entry points into a single one: psci_entrypoint(). The patch also implements the necessary logic to distinguish between the two types of warm reset in the power-up finisher. The plat_setup_psci_ops() API now takes the secure entry point as an additional parameter to enable platforms to configure their mailbox. The platform hooks `pwr_domain_on` and `pwr_domain_suspend` no longer take the secure entry point as a parameter.
Change-Id: I7d1c93787b54213aefdbc046b8cd66a555dfbfd9
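The distinguishing logic can be sketched as below; the state accessor and finisher names are assumptions that simply label the two pre-existing code paths:

    /* Sketch of the unified power-up finisher: the affinity info state of
     * the calling core tells it which warm reset path it is on. */
    static void psci_power_up_finish(unsigned int cpu_idx, unsigned int end_pwrlvl)
    {
        if (psci_get_aff_info_state() == AFF_STATE_ON_PENDING)
            psci_cpu_on_finisher(cpu_idx, end_pwrlvl);      /* woken by CPU_ON */
        else
            psci_cpu_suspend_finisher(cpu_idx, end_pwrlvl); /* resuming from CPU_SUSPEND */
    }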
-
Soby Mathew authored
The state-id field in the power-state parameter of a CPU_SUSPEND call can be used to describe composite power states specific to a platform. The current PSCI implementation does not interpret the state-id field; it relies on the target power level and the state type fields in the power-state parameter to perform state coordination and power management operations. The framework introduced in this patch allows the PSCI implementation to interpret generic global states like RUN, RETENTION or OFF from the state-id, to make global state coordination decisions and reduce the complexity of platform ports. It adds support to involve the platform in state coordination, which facilitates the use of composite power states and improves support for entering standby states at multiple power domains. The patch also includes support for the extended state-id format for the power-state parameter as specified by PSCIv1.0.
The PSCI implementation now defines a generic representation of the power-state parameter. It depends on the platform port to convert the power-state parameter (possibly encoding a composite power state) passed in a CPU_SUSPEND call to this representation via the `validate_power_state()` plat_psci_ops handler. It is an array where each index corresponds to a power level; each entry contains the local power state the power domain at that power level could enter. The meaning of the local power state values is platform defined and may vary between levels in a single platform. The PSCI implementation constrains the values only so that it can classify the state as RUN, RETENTION or OFF as required by the specification:
* zero means RUN;
* all OFF state values at all levels must be higher than all RETENTION state values at all levels;
* the platform provides the PLAT_MAX_RET_STATE and PLAT_MAX_OFF_STATE values to the framework.
The platform must define the macros PLAT_MAX_RET_STATE and PLAT_MAX_OFF_STATE, which let the PSCI implementation find out which power domains have been requested to enter a retention or power-down state. The PSCI implementation does not otherwise interpret the local power states defined by the platform; the only constraint is that PLAT_MAX_RET_STATE < PLAT_MAX_OFF_STATE.
For a power domain tree, the generic implementation maintains an array of local power states: the states requested for each power domain by all the cores contained within the domain. During a request to place multiple power domains in a low power state, the platform is passed an array of requested power states for each power domain through the plat_get_target_pwr_state() API. It coordinates amongst these states to determine a target local power state for each power domain. A default weak implementation of this API is provided in the platform layer which returns the minimum of the requested power states back to the PSCI state coordination.
Finally, the plat_psci_ops power management handlers are passed the target local power states for each affected power domain using the generic representation described above. The platform executes operations specific to these target states. The platform power management handler for placing a power domain in a standby state (plat_pm_ops_t.pwr_domain_standby()) is now only used as a fast path for placing a core power domain into a standby or retention state.
The extended state-id power-state format can be enabled by setting the build flag PSCI_EXTENDED_STATE_ID=1; it is disabled by default. Change-Id: I9d4123d97e179529802c1f589baaa4101759d80c
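A hypothetical platform-side example of the `validate_power_state()` conversion described above, for a simple encoding that packs one local-state nibble per power level; the EX_* constants and the encoding are invented for the example, and the psci_power_state_t field name follows my reading of the generic representation:

    #define EX_RET_STATE 1U   /* platform retention state, <= PLAT_MAX_RET_STATE */
    #define EX_OFF_STATE 2U   /* platform off state,       <= PLAT_MAX_OFF_STATE */

    static int ex_validate_power_state(unsigned int power_state,
                                       psci_power_state_t *req_state)
    {
        unsigned int lvl;

        for (lvl = 0U; lvl <= PLAT_MAX_PWR_LVL; lvl++) {
            unsigned int s = (power_state >> (4U * lvl)) & 0xfU;

            if (s > EX_OFF_STATE)
                return PSCI_E_INVALID_PARAMS;
            /* 0 = RUN, EX_RET_STATE = retention, EX_OFF_STATE = off. */
            req_state->pwr_domain_state[lvl] = s;
        }
        return PSCI_E_SUCCESS;
    }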
-
Soby Mathew authored
This patch removes the assumption in the current PSCI implementation that MPIDR-based affinity levels map directly to levels in a power domain tree. This enables PSCI generic code to support complex power domain topologies as envisaged by the PSCIv1.0 specification. The platform interface for querying the power domain topology has been changed such that:
1. The generic PSCI code no longer generates MPIDRs and uses them to query the platform about the number of power domains at a particular power level. The platform now provides a description of the power domain tree on the SoC through a data structure. The existing platform APIs that provided the same information have been removed.
2. The linear indices returned by plat_core_pos_by_mpidr() and plat_my_core_pos() are used to retrieve core power domain nodes from the power domain tree. Power domains above the core level are accessed using a 'parent' field in the tree node descriptors.
The platform describes the power domain tree in an array of 'unsigned char's. The first entry in the array specifies the number of power domains at the highest power level implemented in the system. Each subsequent entry corresponds to a power domain and contains the number of power domains that are its direct children. This array is exported to the generic PSCI implementation via the new `plat_get_power_domain_tree_desc()` platform API. The PSCI generic code uses this array to populate its internal power domain tree using a breadth-first-search-like algorithm. The tree is split into two arrays:
1. an array that contains all the core power domain nodes;
2. an array that contains all the other power domain nodes.
A separate array for core nodes allows certain core-specific optimisations to be implemented, e.g. removing the bakery lock and re-using the per-cpu data framework for storing some information. Entries in the core power domain array are allocated such that the array index of a domain is equal to the linear index returned by plat_core_pos_by_mpidr() and plat_my_core_pos() for the MPIDR corresponding to that domain. This relationship is key to being able to use an MPIDR to find the corresponding core power domain node, traverse to higher power domain nodes and index into arrays that contain core-specific information. An introductory document has been added to briefly describe the new interface.
Change-Id: I4b444719e8e927ba391cae48a23558308447da13
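For example, a platform with one system-level power domain containing two clusters of four cores each could describe its topology as follows, assuming the array convention and API prototype described above:

    /* Breadth-first description: number of domains at the top level first,
     * then the child count of each domain in turn. */
    static const unsigned char ex_power_domain_tree_desc[] = {
        1,      /* one system-level power domain          */
        2,      /* ... containing two cluster domains     */
        4, 4    /* ... each containing four core domains  */
    };

    const unsigned char *plat_get_power_domain_tree_desc(void)
    {
        return ex_power_domain_tree_desc;
    }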
-
Soby Mathew authored
This patch introduces new platform APIs and context management helper APIs to support the new topology framework based on linear core position. This framework is introduced in the following patch and removes the assumption that MPIDR-based affinity levels map directly to levels in a power domain tree. The new platform APIs and context management helpers based on core position are described below.
* plat_my_core_pos() and plat_core_pos_by_mpidr(): these two new mandatory platform APIs replace the existing 'platform_get_core_pos()' API. The 'plat_my_core_pos()' API returns the linear index of the calling core and 'plat_core_pos_by_mpidr()' returns the linear index of a core specified by its MPIDR. The latter API also validates the MPIDR passed as an argument and returns an error code (-1) if an invalid MPIDR is passed. This enables the caller to safely convert the MPIDR of another core to its linear index without querying the PSCI topology tree, e.g. during a call to PSCI CPU_ON. Since 'plat_core_pos_by_mpidr()' verifies an MPIDR, which is always platform specific, it is no longer possible to maintain a default implementation of this API. Also, it might not be possible for a platform port to verify an MPIDR before the C runtime has been set up or the topology has been initialized; this would prevent 'plat_core_pos_by_mpidr()' from being callable prior to topology setup. As a result, the generic Trusted Firmware code does not call this API before the topology setup has been done. The 'plat_my_core_pos' API should be able to run without a C runtime. Since this API needs to return a core position equal to the one returned by 'plat_core_pos_by_mpidr()' for the corresponding MPIDR, it too cannot have a default implementation and is a mandatory API for platform ports. These APIs will be implemented by the ARM reference platform ports later in the patch stack.
* plat_get_my_stack() and plat_set_my_stack(): these are stack management APIs which set/return stack addresses appropriate for the calling core. They replace the 'platform_get_stack()' and 'platform_set_stack()' APIs. A default weak MP version and a global UP version of these APIs are provided for the platforms.
* Context management helpers based on linear core position: a set of new context management (CM) helpers, viz cm_get_context_by_index(), cm_set_context_by_index(), cm_init_my_context() and cm_init_context_by_index(), are defined to replace the old helpers which took an MPIDR as argument. The old CM helpers are implemented on top of the new helpers to allow for code consolidation and will be deprecated once the switch to the new framework is done.
Change-Id: I89758632b370c2812973a4b2efdd9b81a41f9b69
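A hypothetical implementation for a two-cluster, four-cores-per-cluster platform gives a feel for the contract between the two core-position APIs; the MPIDR shifts/masks, read_mpidr_el1() and the u_register_t argument type follow my recollection of the TF helpers and porting guide, and in practice plat_my_core_pos() is written in assembly so it can run without a C runtime:

    int plat_core_pos_by_mpidr(u_register_t mpidr)
    {
        unsigned int cluster = (unsigned int)((mpidr >> MPIDR_AFF1_SHIFT) &
                                              MPIDR_AFFLVL_MASK);
        unsigned int cpu = (unsigned int)(mpidr & MPIDR_AFFLVL_MASK);

        if (cluster >= 2U || cpu >= 4U)
            return -1;                        /* invalid MPIDR */
        return (int)((cluster * 4U) + cpu);   /* linear core index */
    }

    unsigned int plat_my_core_pos(void)
    {
        /* Must agree with plat_core_pos_by_mpidr() for this core's MPIDR. */
        return (unsigned int)plat_core_pos_by_mpidr(read_mpidr_el1());
    }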
-
- 05 Aug, 2015 3 commits
-
-
Soby Mathew authored
As per Section 4.2.2 in the PSCI specification, the term "affinity" is used in the context of describing the hierarchical arrangement of cores. This often, but not always, maps directly to the processor power domain topology of the system. The current PSCI implementation assumes that this is always the case, i.e. MPIDR-based levels of affinity always map to levels in a power domain topology tree. This patch is the first in a series of patches which remove this assumption. It removes all occurrences of the terms "affinity instances and levels" when used to describe the power domain topology. Only the terminology is changed in this patch. Subsequent patches will implement functional changes to remove the above mentioned assumption. Change-Id: Iee162f051b228828310610c5a320ff9d31009b4e
-
Soby Mathew authored
This patch optimizes the invocation of the platform power management hooks for ON, OFF and SUSPEND such that they are called only for the highest affinity level which will be powered off/on. Earlier, the hooks were being invoked for all the intermediate levels as well. This patch requires that the platforms migrate to the new semantics of the PM hooks. It also removes the `state` parameter from the pm hooks as the `afflvl` parameter now indicates the highest affinity level for which power management operations are required. Change-Id: I57c87931d8a2723aeade14acc710e5b78ac41732
-
Soby Mathew authored
This patch creates a copy of the existing PSCI files and related psci.h and platform.h header files in a new `PSCI1.0` directory. The changes for the new PSCI power domain topology and extended state-ID frameworks will be added incrementally to these files. This incremental approach will aid in review and in understanding the changes better. Once all the changes have been introduced, these files will replace the existing PSCI files. Change-Id: Ibb8a52e265daa4204e34829ed050bddd7e3316ff
-
- 13 May, 2015 1 commit
-
-
Soby Mathew authored
In the debug build of the function get_power_on_target_afflvl(), there is a check to ensure that the CPU is emerging from a SUSPEND or ON_PENDING state. The state is checked without acquiring the lock for the CPU node. The state could be updated to ON_PENDING in psci_afflvl_on() after the target CPU has been powered up. This results in a race condition which could cause the check for the ON_PENDING state in get_power_on_target_afflvl() to fail. This patch resolves this race condition by setting the state of the target CPU to ON_PENDING before the platform port attempts to power it on. The target CPU is thus guaranteed to read the correct state. In case the power on operation fails, the state of the CPU is restored to OFF. Fixes ARM-software/tf-issues#302 Change-Id: I3f2306a78c58d47b1a0fb7e33ab04f917a2d5044
-
- 26 Jan, 2015 1 commit
-
-
Soby Mathew authored
This patch implements conditional checks in psci_smc_handler() to verify that the psci function invoked by the caller is supported by the platform or SPD implementation. The level of support is saved in the 'psci_caps' variable. This check allows the PSCI implementation to return an error early. As a result of the above verification, the checks performed within the psci handlers for the pm hooks are now removed and replaced with assertions. Change-Id: I9b5b646a01d8566dc28c4d77dd3aa54e9bf3981a
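The shape of the check is roughly as follows; psci_caps as a bitmask and the define_psci_cap() helper match my reading of the description, so treat the exact names as assumptions:

    /* Early in psci_smc_handler(): bail out before dispatching if the
     * platform/SPD did not register support for the requested function. */
    static int psci_check_cap(unsigned int smc_fid)
    {
        if ((psci_caps & define_psci_cap(smc_fid)) == 0U)
            return PSCI_E_NOT_SUPPORTED;
        return PSCI_E_SUCCESS;
    }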
-
- 23 Jan, 2015 4 commits
-
-
Soby Mathew authored
This patch allows the platform to validate the power_state and entrypoint information from the normal world early on in PSCI calls so that the error can be returned safely. New optional pm_ops hooks `validate_power_state` and `validate_ns_entrypoint` are introduced to do this. As a result of these changes, all the other pm_ops handlers except the PSCI_ON handler are expected to be successful. Also, the PSCI implementation will now assert if a PSCI API is invoked without the corresponding pm_ops handler being registered by the platform. NOTE: PLATFORM PORTS WILL BREAK ON MERGE OF THIS COMMIT. The pm hooks have 2 additional optional callbacks and the return types of the other hooks have changed. Fixes ARM-Software/tf-issues#229 Change-Id: I036bc0cff2349187c7b8b687b9ee0620aa7e24dc
-
Soby Mathew authored
This patch replaces the internal psci_save_ns_entry() API with a psci_get_ns_ep_info() API. The new function splits the work done by the previous one such that it populates and returns an 'entry_point_info_t' structure with the information to enter the normal world upon completion of the CPU_SUSPEND or CPU_ON call. This information is used to populate the non-secure context structure separately. This allows the new internal API `psci_get_ns_ep_info` to return error and enable the code to return safely. Change-Id: Ifd87430a4a3168eac0ebac712f59c93cbad1b231
-
Soby Mathew authored
This patch moves the check for valid CPU state during PSCI_CPU_ON to before the non secure entry point is programmed so as to enable it to return early on error. Change-Id: I1b1a21be421e2b2a6e33db236e91dee8688efffa
-
Soby Mathew authored
This patch removes the non-secure entry point information being passed to the platform pm_ops which is not needed. Also, it removes the `mpidr` parameter for platform pm hooks which are meant to do power management operations only on the current cpu. NOTE: PLATFORM PORTS MUST BE UPDATED AFTER MERGING THIS COMMIT. Change-Id: If632376a990b7f3b355f910e78771884bf6b12e7
-
- 19 Aug, 2014 2 commits
-
-
Achin Gupta authored
This patch implements the following cleanups in PSCI generic code:
1. It reworks the affinity level specific handlers in the PSCI implementation such that:
   a. usage of the 'rc' local variable is restricted to only where it is absolutely needed;
   b. the 'plat_state' local variable is defined only when a direct invocation of plat_get_phys_state() does not suffice;
   c. if a platform handler is not registered then the level specific handler returns early.
2. It limits the use of the mpidr_aff_map_nodes_t typedef to declaration of arrays of that type instead of using it in function prototypes as well.
3. It removes dangling declarations of __psci_cpu_off() and __psci_cpu_suspend(). The definitions of these functions were removed in earlier patches.
Change-Id: I51e851967c148be9c2eeda3a3c41878f7b4d6978
-
Achin Gupta authored
This patch pulls out state management from the affinity level specific handlers into the top level functions specific to the operation i.e. psci_afflvl_suspend(), psci_afflvl_on() etc. In the power down path this patch will allow an affinity instance at level X to determine the state that an affinity instance at level X+1 will enter before the level specific handlers are called. This will be useful to determine whether a CPU is the last in the cluster during a suspend/off request and so on. Similarly, in the power up path this patch will allow an affinity instance at level X to determine the state that an affinity instance at level X+1 has emerged from, even after the level specific handlers have been called. This will be useful in determining whether a CPU is the first in the cluster during a on/resume request and so on. As before, while powering down, state is updated before the level specific handlers are invoked so that they can perform actions based upon their target state. While powering up, state is updated after the level specific handlers have been invoked so that they can perform actions based upon the state they emerged from. Change-Id: I40fe64cb61bb096c66f88f6d493a1931243cfd37
-
- 19 Jul, 2014 2 commits
-
-
Achin Gupta authored
This patch uses stacks allocated in normal memory to enable the MMU early in the warm boot path, thus removing the dependency on stacks allocated in coherent memory. Necessary cache and stack maintenance is performed when a cpu is being powered down and up. This avoids any coherency issues that can arise from reading speculatively fetched stale stack memory from another CPU's cache. These changes affect the warm boot path in both BL3-1 and BL3-2. The EL3 system registers responsible for preserving the MMU state are not saved and restored any longer. Static values are used to program these system registers when a cpu is powered on or resumed from suspend. Change-Id: I8357e2eb5eb6c5f448492c5094b82b8927603784
-
Achin Gupta authored
This patch adds a 'flags' parameter to each exception level specific function responsible for enabling the MMU. At present only a single flag which indicates whether the data cache should also be enabled is implemented. Subsequent patches will use this flag when enabling the MMU in the warm boot paths. Change-Id: I0eafae1e678c9ecc604e680851093f1680e9cefa
-
- 25 Jun, 2014 1 commit
-
-
Andrew Thoelke authored
Many of the interfaces internal to PSCI pass the current CPU MPIDR_EL1 value from function to function. This is not required, and with inline access to the system registers it is less efficient than requiring the code to read that register whenever required. This patch removes the mpidr parameter from the affected interfaces and reduces FVP BL3-1 code size by 160 bytes. Change-Id: I16120a7c6944de37232016d7e109976540775602
-
- 23 Jun, 2014 1 commit
-
-
Andrew Thoelke authored
Consolidate all BL3-1 CPU context initialization for cold boot, PSCI and SPDs into two functions:
* The first uses entry_point_info to initialize the relevant cpu_context for first entry into a lower exception level on a CPU.
* The second populates the EL1 and EL2 system registers as needed from the cpu_context to ensure correct entry into the lower EL.
This patch alters the way that BL3-1 determines which exception level is used when first entering EL1 or EL2 during cold boot - this is now fully determined by the SPSR value in the entry_point_info for BL3-3, as set up by the platform code in BL2 (or otherwise provided to BL3-1). In the situation that EL1 (or svc mode) is selected for a processor that supports EL2, the context management code will now configure all essential EL2 register state to ensure correct execution of EL1. This allows the platform code to run non-secure EL1 payloads directly without requiring a small EL2 stub or OS loader.
Change-Id: If9fbb2417e82d2226e47568203d5a369f39d3b0f
-
- 17 Jun, 2014 1 commit
-
-
Andrew Thoelke authored
The crash reporting support and early initialisation of the cpu_data allow the runtime_exception vectors to be used from the start in BL3-1, removing the need for the additional early_exception vectors and 2KB of code from BL3-1. Change-Id: I5f8997dabbaafd8935a7455910b7db174a25d871
-
- 16 Jun, 2014 1 commit
-
-
Andrew Thoelke authored
This patch prepares the per-cpu pointer cache for wider use by:
* renaming the structure to cpu_data and placing it in a new header;
* providing accessors for this CPU, or other CPUs;
* splitting the initialization of the TPIDR pointer from the initialization of the cpu_data content;
* moving the crash stack initialization to a crash stack function;
* setting the TPIDR pointer very early during boot.
Change-Id: Icef9004ff88f8eb241d48c14be3158087d7e49a3
-
- 11 Jun, 2014 1 commit
-
-
Andrew Thoelke authored
All callers of cm_get_context() pass the calling CPU MPIDR to the function. Providing a specialised version for the current CPU results in a reduction in code size and better readability. The current function has been renamed to cm_get_context_by_mpidr() and the existing name is now used for the current-CPU version. The same treatment has been done to cm_set_context(), although both forms are used at present in the PSCI and TSPD code. Change-Id: I91cb0c2f7bfcb950a045dbd9ff7595751c0c0ffb
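In caller terms the change looks roughly like this; the NON_SECURE security-state argument is as I recall the context management API, so treat the exact prototypes as assumptions:

    /* Before: every caller passed its own MPIDR just to look itself up. */
    cpu_context_t *ctx = cm_get_context_by_mpidr(read_mpidr_el1(), NON_SECURE);

    /* After: the common current-CPU case takes no MPIDR at all. */
    ctx = cm_get_context(NON_SECURE);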
-
- 23 May, 2014 2 commits
-
-
Dan Handley authored
Previously, the enable_mmu_elX() functions were implicitly part of the platform porting layer since they were included by generic code. These functions have been placed behind 2 new platform functions, bl31_plat_enable_mmu() and bl32_plat_enable_mmu(). These are weakly defined so that they can be optionally overridden by platform ports. Also, the enable_mmu_elX() functions have been moved to lib/aarch64/xlat_tables.c for optional re-use by platform ports. These functions are tightly coupled with the translation table initialization code. Fixes ARM-software/tf-issues#152 Change-Id: I0a2251ce76acfa3c27541f832a9efaa49135cc1c
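The weak defaults can be pictured as thin wrappers like the sketch below; the #pragma weak style and the flags parameter follow common TF usage, but the exact prototypes are an assumption:

    #pragma weak bl31_plat_enable_mmu
    #pragma weak bl32_plat_enable_mmu

    void bl31_plat_enable_mmu(uint32_t flags)
    {
        enable_mmu_el3(flags);      /* BL3-1 runs at EL3 */
    }

    void bl32_plat_enable_mmu(uint32_t flags)
    {
        enable_mmu_el1(flags);      /* BL3-2 runs at Secure-EL1 */
    }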
-
Dan Handley authored
Previously, platform.h contained many declarations and definitions used for different purposes. This file has been split so that:
* Platform definitions used by common code that must be defined by the platform are now in platform_def.h. The exact include path is exported through $PLAT_INCLUDES in the platform makefile.
* Platform definitions specific to the FVP platform are now in /plat/fvp/fvp_def.h.
* Platform API declarations specific to the FVP platform are now in /plat/fvp/fvp_private.h.
* The remaining platform API declarations that must be ported by each platform are still in platform.h, but this file has been moved to /include/plat/common since it can be shared by all platforms.
Change-Id: Ieb3bb22fbab3ee8027413c6b39a783534aee474a
-
- 16 May, 2014 1 commit
-
-
Soby Mathew authored
This patch implements the register reporting when unhandled exceptions are taken in BL3-1. Unhandled exceptions will result in a dump of registers to the console before halting execution on that CPU. The Crash Stack, previously called the Exception Stack, is used for this activity. This stack is used to preserve the CPU context and runtime stack contents for debugging and analysis. This also introduces the per_cpu_ptr_cache, referenced by tpidr_el3, to provide easy access to some of the BL3-1 per-cpu data structures. Initially, this is used to provide a pointer to the Crash Stack. panic() now prints the error file and line number in debug mode and prints the PC value in release mode. The Exception Stack is renamed to Crash Stack with this patch: the original intention of an exception stack is no longer valid, since we intend to support several valid exceptions like IRQ and FIQ in the Trusted Firmware context. This stack is now utilized for dumping and reporting the system state when a crash happens, hence the rename. Fixes ARM-software/tf-issues#79 (Improve reporting of unhandled exception) Change-Id: I260791dc05536b78547412d147193cdccae7811a
-
- 09 May, 2014 1 commit
-
-
Sandrine Bailleux authored
Instead of having a single version of the MMU setup functions for all bootloader images that can execute either in EL3 or in EL1, provide separate functions for EL1 and EL3. Each bootloader image can then call the appropriate version of these functions. The aim is to reduce the amount of code compiled in each BL image by embedding only what's needed (e.g. BL1 to embed only EL3 variants). Change-Id: Ib86831d5450cf778ae78c9c1f7553fe91274c2fa
-
- 06 May, 2014 1 commit
-
-
Dan Handley authored
Reduce the number of header files included from other header files as much as possible without splitting the files. Use forward declarations where possible. This allows removal of some unnecessary "#ifndef __ASSEMBLY__" statements. Also, review the .c and .S files for which header files really need including and reorder the #include statements alphabetically. Fixes ARM-software/tf-issues#31 Change-Id: Iec92fb976334c77453e010b60bcf56f3be72bd3e
-