- 25 Apr, 2016 1 commit
Sandrine Bailleux authored
This patch introduces some debug assertions in the function psci_cpu_on_start() to check that the arguments it receives are valid. Change-Id: If4d23c9f668fb46f2d18c5e2ed1929498cc6736b
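For illustration, a minimal sketch of the kind of checks this describes; the function name, arguments and the PLAT_MAX_PWR_LVL value below are assumptions, not the actual TF code:
```
#include <assert.h>
#include <stddef.h>

#define PLAT_MAX_PWR_LVL	2U	/* hypothetical platform value */

/* Argument sanity checks of the kind the commit describes, assuming the
 * target power level and an entry-point descriptor are the arguments. */
static void check_cpu_on_args(unsigned int end_pwrlvl, const void *ep)
{
	assert(end_pwrlvl <= PLAT_MAX_PWR_LVL);	/* power level in range */
	assert(ep != NULL);			/* entry point info provided */
}
```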
-
- 01 Apr, 2016 1 commit
Ashutosh Singh authored
In the new communication protocol between the OP-TEE OS and the Linux driver, registers r0-r6 are used. opteed needs to copy these registers as well when the OP-TEE context registers are initialized. Change-Id: Ifb47b73f847c61746cb58ea78411c1c71f208030 Signed-off-by: Ashutosh Singh <ashutosh.singh@arm.com>
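A sketch of what propagating those registers into the initialised OP-TEE context might look like, using TF's context helpers (get_gpregs_ctx, write_ctx_reg, CTX_GPREG_Xn); treat the exact call site and names in opteed as assumptions:
```
#include <stdint.h>
#include <context.h>	/* cpu_context_t and the ctx register helpers */

/* Copy x0-x6 of the incoming SMC into the freshly initialised OP-TEE CPU
 * context so the new protocol's arguments survive the world switch. */
static void pass_optee_args(cpu_context_t *optee_ctx, const uint64_t args[7])
{
	gp_regs_t *gpregs = get_gpregs_ctx(optee_ctx);

	write_ctx_reg(gpregs, CTX_GPREG_X0, args[0]);
	write_ctx_reg(gpregs, CTX_GPREG_X1, args[1]);
	write_ctx_reg(gpregs, CTX_GPREG_X2, args[2]);
	write_ctx_reg(gpregs, CTX_GPREG_X3, args[3]);
	write_ctx_reg(gpregs, CTX_GPREG_X4, args[4]);
	write_ctx_reg(gpregs, CTX_GPREG_X5, args[5]);
	write_ctx_reg(gpregs, CTX_GPREG_X6, args[6]);
}
```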
-
- 08 Feb, 2016 1 commit
Soby Mathew authored
When BL31 is compiled at `-O3` optimization level using Linaro GCC 4.9 AArch64 toolchain, it reports the following error:
```
services/std_svc/psci/psci_common.c: In function 'psci_do_state_coordination':
services/std_svc/psci/psci_common.c:220:27: error: array subscript is above array bounds [-Werror=array-bounds]
 psci_req_local_pwr_states[pwrlvl - 1][cpu_idx] = req_pwr_state;
                           ^
```
This error is a false positive and this patch resolves the error by asserting the array bounds in `psci_do_state_coordination()`. Fixes ARM-software/tf-issues#347 Change-Id: I3584ed7b2e28faf455b082cb3281d6e1d11d6495
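The shape of the fix, as a self-contained sketch (array names and the platform constants below are placeholders, not the real PSCI data structures):
```
#include <assert.h>

#define PLAT_MAX_PWR_LVL	2U	/* hypothetical platform value */
#define PLATFORM_CORE_COUNT	8U	/* hypothetical platform value */

static unsigned char req_states[PLAT_MAX_PWR_LVL][PLATFORM_CORE_COUNT];

/* Assert the index range the optimiser cannot prove on its own; this both
 * documents the invariant and addresses the -Warray-bounds false positive. */
static void set_req_state(unsigned int pwrlvl, unsigned int cpu_idx,
			  unsigned char state)
{
	assert((pwrlvl > 0U) && (pwrlvl <= PLAT_MAX_PWR_LVL));
	assert(cpu_idx < PLATFORM_CORE_COUNT);

	req_states[pwrlvl - 1U][cpu_idx] = state;
}
```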
-
- 01 Feb, 2016 1 commit
Soby Mathew authored
When a CPU is powered down using the PSCI CPU_OFF API, it disables its caches and updates its `aff_info_state` to OFF. The corresponding cache line is invalidated by the CPU so that the update will be observed by other CPUs running with caches enabled. There is a possibility that another CPU, which has been trying to turn this CPU on via the PSCI CPU_ON API, has already seen the update to `aff_info_state` and proceeds to update the state to ON_PENDING prior to the cache invalidation. This may result in the update of the state to ON_PENDING being discarded. This patch fixes the issue by making sure that the update of `aff_info_state` to ON_PENDING sticks: the value is read back after the cache flush and the update is retried if it did not stick. The patch also adds a dsbish() to `psci_do_cpu_off()` to ensure ordering of the update to `aff_info_state` prior to the cache line invalidation. Fixes ARM-software/tf-issues#349 Change-Id: I225de99957fe89871f8c57bcfc243956e805dcca
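A sketch of the write/flush/read-back retry pattern being described; the helper names and the state value are hypothetical stand-ins for the PSCI internals:
```
/* Hypothetical helpers standing in for the PSCI internals referred to above. */
extern void set_aff_info_state(unsigned int cpu_idx, unsigned int state);
extern unsigned int get_aff_info_state(unsigned int cpu_idx);
extern void flush_aff_info_state(unsigned int cpu_idx);

#define AFF_STATE_ON_PENDING	1U	/* illustrative value */

/* Keep writing ON_PENDING until the value read back after the cache flush
 * confirms it stuck, i.e. it was not lost to the target CPU's own
 * cache-disabled update and invalidate of the same line. */
static void mark_cpu_on_pending(unsigned int cpu_idx)
{
	do {
		set_aff_info_state(cpu_idx, AFF_STATE_ON_PENDING);
		flush_aff_info_state(cpu_idx);
	} while (get_aff_info_state(cpu_idx) != AFF_STATE_ON_PENDING);
}
```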
-
- 14 Jan, 2016 1 commit
Soren Brinkmann authored
Migrate all direct usage of __attribute__ to the corresponding macros from cdefs.h, e.g. __attribute__((unused)) -> __unused. Signed-off-by: Soren Brinkmann <soren.brinkmann@xilinx.com>
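A small before/after illustration of the migration; the fallback #define below exists only so the snippet stands alone, since in TF the macro comes from cdefs.h:
```
#ifndef __unused
#define __unused __attribute__((unused))	/* stand-in; TF provides this via cdefs.h */
#endif

/* Before: static void platform_setup(int arg __attribute__((unused)));
 * After, using the cdefs.h macro: */
static void platform_setup(int arg __unused)
{
	(void)arg;	/* nothing to do in this illustrative stub */
}
```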
-
- 21 Dec, 2015 1 commit
Sandrine Bailleux authored
Change-Id: I6f49bd779f2a4d577c6443dd160290656cdbc59b
-
- 14 Dec, 2015 1 commit
Juan Castillo authored
This patch removes the dash character from the image name, to follow the image terminology in the Trusted Firmware Wiki page: https://github.com/ARM-software/arm-trusted-firmware/wiki Changes apply to output messages, comments and documentation. Non-ARM platform files have been left unmodified. Change-Id: Ic2a99be4ed929d52afbeb27ac765ceffce46ed76
-
- 09 Dec, 2015 1 commit
Soby Mathew authored
Earlier, the TSP only ever expected to be preempted during Standard SMC processing. If an S-EL1 interrupt triggered while in the normal world, it is routed to S-EL1 `synchronously` for handling. The `synchronous` S-EL1 interrupt handler `tsp_sel1_intr_entry` used to panic if this S-EL1 interrupt was preempted by another higher priority pending interrupt which should be handled in EL3, e.g. a Group 0 interrupt in GICv3. With this patch, `tsp_sel1_intr_entry` now expects `TSP_PREEMPTED` as a return code from `tsp_common_int_handler` in addition to 0 (interrupt successfully handled) and in both cases it issues an SMC with id `TSP_HANDLED_S_EL1_INTR`. The TSPD switches the context and returns back to the normal world. In case a higher priority EL3 interrupt was pending, execution will be routed to EL3 where the interrupt will be handled. On return to the normal world, the pending S-EL1 interrupt which was preempted will be routed to S-EL1 to be handled `synchronously` via `tsp_sel1_intr_entry`. Change-Id: I2087c7fedb37746fbd9200cdda9b6dba93e16201
-
- 04 Dec, 2015 2 commits
Soby Mathew authored
On a GICv2 system, interrupts that should be handled in the secure world are typically signalled as FIQs. On a GICv3 system, these interrupts are signalled as IRQs instead. The mechanism for handling both types of interrupts is the same in both cases. This patch enables the TSP to run on a GICv3 system by: 1. adding support for handling IRQs in the exception handling code. 2. removing use of "fiq" in the names of data structures, macros and functions. The build option TSPD_ROUTE_IRQ_TO_EL3 is deprecated and is replaced with a new build flag TSP_NS_INTR_ASYNC_PREEMPT. For compatibility reasons, if the former build flag is defined, it will be used to define the value for the new build flag. The documentation is also updated accordingly. Change-Id: I1807d371f41c3656322dd259340a57649833065e
-
Soby Mathew authored
The TSP is expected to pass control back to EL3 if it gets preempted due to an interrupt while handling a Standard SMC in the following scenarios: 1. An FIQ preempts Standard SMC execution and that FIQ is not a TSP Secure timer interrupt or is preempted by a higher priority interrupt by the time the TSP acknowledges it. In this case, the TSP issues an SMC with the ID `TSP_EL3_FIQ`. Currently this case is never expected to happen as only the TSP Secure Timer is expected to generate an FIQ. 2. An IRQ preempts Standard SMC execution and in this case the TSP issues an SMC with the ID `TSP_PREEMPTED`. In both cases, the TSPD hands control back to the normal world and returns an error code to indicate that the Standard SMC it had issued has been preempted but not completed. This patch unifies the handling of these two cases in the TSPD and ensures that the TSP only uses TSP_PREEMPTED instead of separate SMC IDs. Also, instead of 2 separate error codes, SMC_PREEMPTED and TSP_EL3_FIQ, only SMC_PREEMPTED is returned as the error code back to the normal world. Background information: On a GICv3 system, when the secure world has affinity routing enabled, in case 2 an FIQ will preempt TSP execution instead of an IRQ. The FIQ could be a result of a Group 0 or a Group 1 NS interrupt. In both cases, the TSPD passes control back to the normal world upon receipt of the TSP_PREEMPTED SMC. A Group 0 interrupt will immediately preempt execution to EL3 where it will be handled. This allows for unified interrupt handling in the TSP for both GICv3 and GICv2 systems. Change-Id: I9895344db74b188021e3f6a694701ad272fb40d4
-
- 26 Nov, 2015 1 commit
Soby Mathew authored
The IMF_READ_INTERRUPT_ID build option enables a feature where the interrupt ID of the highest priority pending interrupt is passed as a parameter to the interrupt handler registered for that type of interrupt. This additional read of the highest pending interrupt ID from the GIC is problematic, as it is possible that the original interrupt gets deasserted and another interrupt of a different type becomes the highest priority pending interrupt in the meantime. Hence it is safer to prevent such behaviour by removing the IMF_READ_INTERRUPT_ID build option. The `id` parameter of the interrupt handler `interrupt_type_handler_t` is now made a reserved parameter with this patch. It will always contain INTR_ID_UNAVAILABLE. Fixes ARM-software/tf-issues#307 Change-Id: I2173aae1dd37edad7ba6bdfb1a99868635fa34de
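A sketch of a handler written against the revised contract; the handler signature and the plat_ic_* prototypes are assumptions about TF's interrupt management and GIC driver interfaces:
```
#include <stdint.h>

/* Assumed prototypes for the platform GIC helpers used below. */
extern uint32_t plat_ic_acknowledge_interrupt(void);
extern void plat_ic_end_of_interrupt(uint32_t id);

static uint64_t example_intr_handler(uint32_t id, uint32_t flags,
				     void *handle, void *cookie)
{
	uint32_t live_id;

	(void)id;	/* reserved: always INTR_ID_UNAVAILABLE now */
	(void)flags;
	(void)handle;
	(void)cookie;

	/* Acknowledge at the GIC to learn which interrupt is actually the
	 * highest priority pending one at this point in time. */
	live_id = plat_ic_acknowledge_interrupt();

	/* ... handle or route 'live_id' ... */

	plat_ic_end_of_interrupt(live_id);
	return 0U;
}
```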
-
- 09 Oct, 2015 1 commit
Varun Wadekar authored
TLK sends the "preempted" event to the NS world along with an identifier for certain use cases. The NS world driver is then expected to take appropriate action depending on the identifier value. Upon completion, the NS world driver then sends the results to TLK (via x1-x3) with the TLK_RESUME_FID function ID. This patch uses the already present code to pass the results from the NS world to TLK for the TLK_RESUME_FID function ID. Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
-
- 06 Oct, 2015 1 commit
Soby Mathew authored
This patch fixes an issue in the PSCI framework where the affinity info state of a core was being set to OFF even when the SPD had denied the CPU_OFF request. Now, the state remains set to ON instead. Fixes ARM-software/tf-issues#323 Change-Id: Ia9042aa41fae574eaa07fd2ce3f50cf8cae1b6fc
-
- 30 Sep, 2015 1 commit
Varun Wadekar authored
This patch adds PM handlers to TLKD for the system suspend/resume and system poweroff/reset cases. TLK expects all SMCs through a single handler, which then forks out into multiple handlers depending on the SMC. We tap into the same single entrypoint by restoring the S-EL1 context before passing the PM event via register 'x0'. On completion of the PM event, TLK sends a completion SMC and TLKD then moves on with the PM process. Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
-
- 14 Sep, 2015 1 commit
Achin Gupta authored
On the ARMv8 architecture, cache maintenance operations by set/way on the last level of integrated cache do not affect the system cache. This means that such a flush or clean operation could result in the data being pushed out to the system cache rather than main memory. Another CPU could access this data before it enables its data cache or MMU. Such accesses could be serviced from main memory instead of the system cache. If the data in the system cache has not yet been flushed or evicted to main memory then there could be a loss of coherency. The only mechanism to guarantee that main memory will be updated is to use cache maintenance operations to the PoC by MVA (see section D3.4.11, System level caches, of the ARMv8-A Reference Manual, Issue A.g/ARM DDI 0487A.G). This patch removes the reliance of Trusted Firmware on the flush by set/way operation to ensure visibility of data in main memory. Cache maintenance operations by MVA are now used instead. The following are the broad categories of changes: 1. The RW areas of BL2/BL31/BL32 are invalidated by MVA before the C runtime is initialised. This ensures that any stale cache lines at any level of cache are removed. 2. Updates to global data in runtime firmware (BL31) by the primary CPU are made visible to secondary CPUs using a cache clean operation by MVA. 3. Cache maintenance by set/way operations is only used prior to power down. NOTE: NON-UPSTREAM TRUSTED FIRMWARE CODE SHOULD MAKE EQUIVALENT CHANGES IN ORDER TO FUNCTION CORRECTLY ON PLATFORMS WITH SUPPORT FOR SYSTEM CACHES. Fixes ARM-software/tf-issues#205 Change-Id: I64f1b398de0432813a0e0881d70f8337681f6e9a
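For illustration, the by-MVA publication pattern this change moves to; the flush_dcache_range() prototype is assumed from TF's cache helpers and is declared here only to keep the snippet self-contained:
```
#include <stddef.h>

/* Assumed prototype for TF's clean+invalidate-by-VA helper. */
extern void flush_dcache_range(unsigned long addr, unsigned long size);

static volatile int boot_flag;

/* Publish an update so that a CPU running with caches/MMU off sees it: clean
 * the affected addresses by VA to the PoC instead of flushing by set/way,
 * which may stop at a system cache. */
static void publish_boot_flag(int value)
{
	boot_flag = value;
	flush_dcache_range((unsigned long)&boot_flag, sizeof(boot_flag));
}
```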
-
- 11 Sep, 2015 1 commit
Andrew Thoelke authored
This patch unifies the bakery lock APIs across the coherent and normal memory implementations of locks by using the same data type `bakery_lock_t` and similar arguments to functions. A separate section `bakery_lock` has been created and used to allocate memory for bakery locks using `DEFINE_BAKERY_LOCK`. When locks are allocated in normal memory, each lock for a core has to be spread across multiple cache lines. By using the total size allocated in a separate cache line for a single core at compile time, the memory for other cores' locks is allocated at link time by multiplying the single core lock size with (PLATFORM_CORE_COUNT - 1). The normal memory lock algorithm now uses the lock address instead of the `id` in the per_cpu_data. For locks allocated in coherent memory, the patch moves locks from tzfw_coherent_memory to the bakery_lock section. The bakery locks are allocated as part of bss or in coherent memory depending on the usage of coherent memory. Both these regions are initialised to zero as part of run_time_init before locks are used. Hence, bakery_lock_init() is made an empty function as the lock memory is already initialised to zero. The above design led to the removal of psci bakery locks from non_cpu_power_pd_node to psci_locks. NOTE: THE BAKERY LOCK API WHEN USE_COHERENT_MEM IS NOT SET HAS CHANGED. THIS IS A BREAKING CHANGE FOR ALL PLATFORM PORTS THAT ALLOCATE BAKERY LOCKS IN NORMAL MEMORY. Change-Id: Ic3751c0066b8032dcbf9d88f1d4dc73d15f61d8b
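A usage sketch of the unified API; DEFINE_BAKERY_LOCK is named above, while the bakery_lock_get/bakery_lock_release call names and the header path are assumptions:
```
#include <bakery_lock.h>

/* The lock lands in the 'bakery_lock' section (normal memory) or in coherent
 * memory depending on USE_COHERENT_MEM; callers no longer need to care. */
DEFINE_BAKERY_LOCK(example_lock);

static void example_critical_section(void)
{
	bakery_lock_get(&example_lock);
	/* ... touch state shared between cores ... */
	bakery_lock_release(&example_lock);
}
```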
-
- 10 Sep, 2015 1 commit
Achin Gupta authored
In certain Trusted OS implementations it is a requirement to pass them the highest power level which will enter a power down state during a PSCI CPU_SUSPEND or SYSTEM_SUSPEND API invocation. This patch passes this power level to the SPD in the "max_off_pwrlvl" parameter of the svc_suspend() hook. Currently, the highest power level which was requested to be placed in a low power state (retention or power down) is passed to the SPD svc_suspend_finish() hook. This hook is called after emerging from the low power state. It is more useful to pass the highest power level which was powered down instead. This patch does this by changing the semantics of the parameter passed to an SPD's svc_suspend_finish() hook. The name of the parameter has been changed from "suspend_level" to "max_off_pwrlvl" as well. The same change has been made to the parameter passed to the tsp_cpu_resume_main() function. NOTE: THIS PATCH CHANGES THE SEMANTICS OF THE EXISTING "svc_suspend_finish()" API BETWEEN THE PSCI AND SPD/SP IMPLEMENTATIONS. THE LATTER MIGHT NEED UPDATES TO ENSURE CORRECT BEHAVIOUR. Change-Id: If3a9d39b13119bbb6281f508a91f78a2f46a8b90
-
- 13 Aug, 2015 9 commits
Soby Mathew authored
This patch reworks the PSCI generic implementation to conform to ARM Trusted Firmware coding guidelines as described here: https://github.com/ARM-software/arm-trusted-firmware/wiki This patch also reviews the use of signed data types within PSCI Generic code and replaces them with their unsigned counterparts wherever they are not appropriate. The PSCI_INVALID_DATA macro which was defined to -1 is now replaced with PSCI_INVALID_PWR_LVL macro which is defined to PLAT_MAX_PWR_LVL + 1. Change-Id: Iaea422d0e46fc314e0b173c2b4c16e0d56b2515a
-
Soby Mathew authored
As per PSCI1.0 specification, the error code to be returned when an invalid non secure entrypoint address is specified by the PSCI client for CPU_SUSPEND, CPU_ON or SYSTEM_SUSPEND must be PSCI_E_INVALID_ADDRESS. The current PSCI implementation returned PSCI_E_INVAL_PARAMS. This patch rectifies this error and also implements a common helper function to validate the entrypoint information to be used across these PSCI API implementations. Change-Id: I52d697d236c8bf0cd3297da4008c8e8c2399b170
-
Soby Mathew authored
The new PSCI framework mandates that the platform APIs and the various frameworks in Trusted Firmware migrate away from MPIDR-based core identification to one based on core index. Deprecated versions of the old APIs are still present to provide compatibility but their implementations are not optimal. This patch migrates the various SPDs existing within the Trusted Firmware tree and the TSP to the new APIs. Change-Id: Ifc37e7071c5769b5ded21d0b6a071c8c4cab7836
-
Soby Mathew authored
This commit does the switch to the new PSCI framework implementation, replacing the existing files in the PSCI folder with the ones in the PSCI1.0 folder. The corresponding makefiles are modified as required for the new implementation. The platform.h header file is also switched to the new one as required by the new framework. The build flag ENABLE_PLAT_COMPAT defaults to 1 to enable the compatibility layer, which lets the existing platform ports continue to build and run with minimal changes. The default weak implementation of platform_get_core_pos() is now removed from platform_helpers.S and is provided by the compatibility layer. Note: The Secure Payloads and their dispatchers still use the old platform and framework APIs and hence it is expected that the ENABLE_PLAT_COMPAT build flag will remain enabled in subsequent patches. The compatibility for SPDs using the older APIs on platforms migrated to the new APIs will be added in the following patch. Change-Id: I18c51b3a085b564aa05fdd98d11c9f3335712719
-
Soby Mathew authored
The new PSCI topology framework and PSCI extended state-ID framework introduce a breaking change in the platform port APIs. To ease the migration of the platform ports to the new porting interface, a compatibility layer is introduced which essentially defines the new platform API in terms of the old API. The old PSCI helpers to retrieve the power-state, its associated fields and the highest coordinated physical OFF affinity level of a core are also implemented for compatibility. This allows the existing platform ports to work with the new PSCI framework without significant rework. This layer will be enabled by default once the switch to the new PSCI framework is done and is controlled by the build flag ENABLE_PLAT_COMPAT. Change-Id: I4b17cac3a4f3375910a36dba6b03d8f1700d07e3
-
Sandrine Bailleux authored
There used to be 2 warm reset entry points: - the "on finisher", for when the core has been turned on using a PSCI CPU_ON call; - the "suspend finisher", entered upon resumption from a previous PSCI CPU_SUSPEND call. The appropriate warm reset entry point used to be programmed into the mailboxes by the power management hooks. However, it is not required to provide this information to the PSCI entry point code, as it can figure it out by itself: by querying the affinity info state, a core can determine which execution path it is on. If the state is ON_PENDING then it has been turned on, otherwise it is resuming from suspend. This patch unifies the 2 warm reset entry points into a single one: psci_entrypoint(). The patch also implements the necessary logic to distinguish between the 2 types of warm reset in the power up finisher. The plat_setup_psci_ops() API now takes the secure entry point as an additional parameter to enable the platforms to configure their mailbox. The platform hooks `pwr_domain_on` and `pwr_domain_suspend` no longer take the secure entry point as a parameter. Change-Id: I7d1c93787b54213aefdbc046b8cd66a555dfbfd9
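A sketch of the decision the unified entry point makes; all helper names and the state value are hypothetical stand-ins for the PSCI internals:
```
/* Hypothetical helpers standing in for the PSCI internals. */
extern unsigned int get_aff_info_state(unsigned int cpu_idx);
extern void cpu_on_finish(unsigned int cpu_idx);
extern void cpu_suspend_finish(unsigned int cpu_idx);

#define AFF_STATE_ON_PENDING	1U	/* illustrative value */

/* ON_PENDING means the core was just turned on; anything else means it is
 * resuming from a previous CPU_SUSPEND. */
static void warm_boot_finish(unsigned int cpu_idx)
{
	if (get_aff_info_state(cpu_idx) == AFF_STATE_ON_PENDING)
		cpu_on_finish(cpu_idx);
	else
		cpu_suspend_finish(cpu_idx);
}
```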
-
Soby Mathew authored
The state-id field in the power-state parameter of a CPU_SUSPEND call can be used to describe composite power states specific to a platform. The current PSCI implementation does not interpret the state-id field. It relies on the target power level and the state type fields in the power-state parameter to perform state coordination and power management operations. The framework introduced in this patch allows the PSCI implementation to interpret generic global states like RUN, RETENTION or OFF from the state-id to make global state coordination decisions and reduce the complexity of platform ports. It adds support to involve the platform in state coordination, which facilitates the use of composite power states and improves the support for entering standby states at multiple power domains. The patch also includes support for the extended state-id format for the power state parameter as specified by PSCIv1.0. The PSCI implementation now defines a generic representation of the power-state parameter. It depends on the platform port to convert the power-state parameter (possibly encoding a composite power state) passed in a CPU_SUSPEND call to this representation via the `validate_power_state()` plat_psci_ops handler. It is an array where each index corresponds to a power level. Each entry contains the local power state the power domain at that power level could enter. The meaning of the local power state values is platform defined, and may vary between levels in a single platform. The PSCI implementation constrains the values only so that it can classify the state as RUN, RETENTION or OFF as required by the specification: * zero means RUN * all OFF state values at all levels must be higher than all RETENTION state values at all levels * the platform provides PLAT_MAX_RET_STATE and PLAT_MAX_OFF_STATE values to the framework The platform must define the macros PLAT_MAX_RET_STATE and PLAT_MAX_OFF_STATE, which let the PSCI implementation find out which power domains have been requested to enter a retention or power down state. The PSCI implementation does not interpret the local power states defined by the platform. The only constraint is that PLAT_MAX_RET_STATE < PLAT_MAX_OFF_STATE. For a power domain tree, the generic implementation maintains an array of local power states. These are the states requested for each power domain by all the cores contained within the domain. During a request to place multiple power domains in a low power state, the platform is passed an array of requested power-states for each power domain through the plat_get_target_pwr_state() API. It coordinates amongst these states to determine a target local power state for the power domain. A default weak implementation of this API is provided in the platform layer which returns the minimum of the requested power-states back to the PSCI state coordination. Finally, the plat_psci_ops power management handlers are passed the target local power states for each affected power domain using the generic representation described above. The platform executes operations specific to these target states. The platform power management handler for placing a power domain in a standby state (plat_pm_ops_t.pwr_domain_standby()) is now only used as a fast path for placing a core power domain into a standby or retention state. The extended state-id power state format can be enabled by setting the build flag PSCI_EXTENDED_STATE_ID=1; it is disabled by default. Change-Id: I9d4123d97e179529802c1f589baaa4101759d80c
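For illustration, a hypothetical platform's conversion of a power_state value into the per-level local state array described above. Every name, value and bit position below is an assumption made for the example; real platforms define their own encoding and local state values, subject only to the RUN < RETENTION < OFF ordering rule:
```
#define MY_MAX_PWR_LVL	1U	/* level 0 = core, level 1 = cluster */
#define MY_STATE_RUN	0U
#define MY_STATE_RET	1U	/* analogue of PLAT_MAX_RET_STATE */
#define MY_STATE_OFF	2U	/* analogue of PLAT_MAX_OFF_STATE */

static int my_validate_power_state(unsigned int power_state,
				   unsigned char req_state[MY_MAX_PWR_LVL + 1U])
{
	unsigned int target_lvl = (power_state >> 24) & 0x3U;	/* assumed field */
	unsigned int powerdown = (power_state >> 16) & 0x1U;	/* assumed field */
	unsigned int lvl;

	if (target_lvl > MY_MAX_PWR_LVL)
		return -1;	/* tell the framework the parameter is invalid */

	/* Levels up to the target get the requested state; levels above it run. */
	for (lvl = 0U; lvl <= MY_MAX_PWR_LVL; lvl++)
		req_state[lvl] = (lvl <= target_lvl) ?
			(powerdown ? MY_STATE_OFF : MY_STATE_RET) :
			MY_STATE_RUN;

	return 0;
}
```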
-
Soby Mathew authored
This patch removes the assumption in the current PSCI implementation that MPIDR-based affinity levels map directly to levels in a power domain tree. This enables PSCI generic code to support complex power domain topologies as envisaged by the PSCIv1.0 specification. The platform interface for querying the power domain topology has been changed such that: 1. The generic PSCI code does not generate MPIDRs and use them to query the platform about the number of power domains at a particular power level. The platform now provides a description of the power domain tree on the SoC through a data structure. The existing platform APIs to provide the same information have been removed. 2. The linear indices returned by plat_core_pos_by_mpidr() and plat_my_core_pos() are used to retrieve core power domain nodes from the power domain tree. Power domains above the core level are accessed using a 'parent' field in the tree node descriptors. The platform describes the power domain tree in an array of 'unsigned char's. The first entry in the array specifies the number of power domains at the highest power level implemented in the system. Each subsequent entry corresponds to a power domain and contains the number of power domains that are its direct children. This array is exported to the generic PSCI implementation via the new `plat_get_power_domain_tree_desc()` platform API. The PSCI generic code uses this array to populate its internal power domain tree using a Breadth First Search-like algorithm. The tree is split into two arrays: 1. An array that contains all the core power domain nodes 2. An array that contains all the other power domain nodes A separate array for core nodes allows certain core-specific optimisations to be implemented, e.g. removing the bakery lock and re-using the per-cpu data framework for storing some information. Entries in the core power domain array are allocated such that the array index of the domain is equal to the linear index returned by plat_core_pos_by_mpidr() and plat_my_core_pos() for the MPIDR corresponding to that domain. This relationship is key to being able to use an MPIDR to find the corresponding core power domain node, traverse to higher power domain nodes and index into arrays that contain core-specific information. An introductory document has been added to briefly describe the new interface. Change-Id: I4b444719e8e927ba391cae48a23558308447da13
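An illustrative descriptor for a hypothetical SoC with two clusters of four cores each, following the array format described above; the return type of plat_get_power_domain_tree_desc() is assumed from the 'array of unsigned char' description:
```
/* First entry: number of domains at the highest level; each subsequent
 * entry: number of direct children of one domain. */
static const unsigned char my_pd_tree_desc[] = {
	2,	/* two cluster power domains at the top level */
	4,	/* cores that are children of cluster 0 */
	4	/* cores that are children of cluster 1 */
};

const unsigned char *plat_get_power_domain_tree_desc(void)
{
	return my_pd_tree_desc;
}
```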
-
Soby Mathew authored
This patch introduces new platform APIs and context management helper APIs to support the new topology framework based on linear core position. This framework will be introduced in the following patch and it removes the assumption that the MPIDR-based affinity levels map directly to levels in a power domain tree. The new platform APIs and context management helpers based on core position are as described below: * plat_my_core_pos() and plat_core_pos_by_mpidr() These 2 new mandatory platform APIs are meant to replace the existing 'platform_get_core_pos()' API. The 'plat_my_core_pos()' API returns the linear index of the calling core and 'plat_core_pos_by_mpidr()' returns the linear index of a core specified by its MPIDR. The latter API will also validate the MPIDR passed as an argument and will return an error code (-1) if an invalid MPIDR is passed. This enables the caller to safely convert an MPIDR of another core to its linear index without querying the PSCI topology tree, e.g. during a call to PSCI CPU_ON. Since the 'plat_core_pos_by_mpidr()' API verifies an MPIDR, which is always platform specific, it is no longer possible to maintain a default implementation of this API. Also, it might not be possible for a platform port to verify an MPIDR before the C runtime has been set up or the topology has been initialized. This would prevent 'plat_core_pos_by_mpidr()' from being callable prior to topology setup. As a result, the generic Trusted Firmware code does not call this API before the topology setup has been done. The 'plat_my_core_pos' API should be able to run without a C runtime. Since this API needs to return a core position which is equal to the one returned by the 'plat_core_pos_by_mpidr()' API for the corresponding MPIDR, this too cannot have a default implementation and is a mandatory API for platform ports. These APIs will be implemented by the ARM reference platform ports later in the patch stack. * plat_get_my_stack() and plat_set_my_stack() These APIs are the stack management APIs which set/return stack addresses appropriate for the calling core. They replace the 'platform_get_stack()' and 'platform_set_stack()' APIs. A default weak MP version and a global UP version of these APIs are provided for the platforms. * Context management helpers based on linear core position A set of new context management (CM) helpers, viz. cm_get_context_by_index(), cm_set_context_by_index(), cm_init_my_context() and cm_init_context_by_index(), is defined to replace the old helpers which took an MPIDR as argument. The old CM helpers are implemented based on the new helpers to allow for code consolidation and will be deprecated once the switch to the new framework is done. Change-Id: I89758632b370c2812973a4b2efdd9b81a41f9b69
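A sketch of the two mandatory APIs for the same hypothetical 2-cluster/4-core topology used above. The prototypes are simplified to plain C types and the MPIDR accessor is assumed; as the message notes, plat_my_core_pos() is normally implemented in assembly so it can run before the C runtime is set up:
```
#define MY_NR_CLUSTERS		2U
#define MY_CORES_PER_CLUSTER	4U

extern unsigned long read_mpidr(void);	/* assumed MPIDR accessor */

int plat_core_pos_by_mpidr(unsigned long mpidr)
{
	unsigned long cluster = (mpidr >> 8) & 0xffU;	/* Aff1 */
	unsigned long core = mpidr & 0xffU;		/* Aff0 */

	if ((cluster >= MY_NR_CLUSTERS) || (core >= MY_CORES_PER_CLUSTER))
		return -1;	/* invalid MPIDR */

	return (int)((cluster * MY_CORES_PER_CLUSTER) + core);
}

unsigned int plat_my_core_pos(void)
{
	/* The calling core's MPIDR is valid by construction. */
	return (unsigned int)plat_core_pos_by_mpidr(read_mpidr());
}
```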
-
- 05 Aug, 2015 3 commits
Soby Mathew authored
As per Section 4.2.2 in the PSCI specification, the term "affinity" is used in the context of describing the hierarchical arrangement of cores. This often, but not always, maps directly to the processor power domain topology of the system. The current PSCI implementation assumes that this is always the case, i.e. MPIDR-based levels of affinity always map to levels in a power domain topology tree. This patch is the first in a series of patches which remove this assumption. It removes all occurrences of the terms "affinity instances and levels" when used to describe the power domain topology. Only the terminology is changed in this patch. Subsequent patches will implement functional changes to remove the above mentioned assumption. Change-Id: Iee162f051b228828310610c5a320ff9d31009b4e
-
Soby Mathew authored
This patch optimizes the invocation of the platform power management hooks for ON, OFF and SUSPEND such that they are called only for the highest affinity level which will be powered off/on. Earlier, the hooks were being invoked for all the intermediate levels as well. This patch requires that the platforms migrate to the new semantics of the PM hooks. It also removes the `state` parameter from the pm hooks as the `afflvl` parameter now indicates the highest affinity level for which power management operations are required. Change-Id: I57c87931d8a2723aeade14acc710e5b78ac41732
-
Soby Mathew authored
This patch creates a copy of the existing PSCI files and related psci.h and platform.h header files in a new `PSCI1.0` directory. The changes for the new PSCI power domain topology and extended state-ID frameworks will be added incrementally to these files. This incremental approach will aid in review and in understanding the changes better. Once all the changes have been introduced, these files will replace the existing PSCI files. Change-Id: Ibb8a52e265daa4204e34829ed050bddd7e3316ff
-
- 24 Jul, 2015 1 commit
Varun Wadekar authored
Remove the 'NEED_BL32' flag from the makefile. TLK compiles using a completely different build system and is present on the device as a binary blob. The NEED_BL32 flag does not influence the TLK load/boot sequence at all. Moreover, it expects the TLK binary to be present on the host before we can compile BL31 support for Tegra. This patch removes the flag from the makefile and thus decouples both build systems. Tested by booting TLK without the NEED_BL32 flag. Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
-
- 22 Jun, 2015 1 commit
Soby Mathew authored
This patch adds support for the SYSTEM_SUSPEND API as mentioned in the PSCI 1.0 specification. This API, on being invoked on the last running core on a supported platform, will put the system into a low power mode with memory retention. The psci_afflvl_suspend() internal API has been reused as most of the actions to suspend a system are the same as invoking the PSCI CPU_SUSPEND API with the target affinity level as 'system'. This API needs the 'power state' parameter for the target low power state. This parameter is not passed by the caller of the SYSTEM_SUSPEND API. Hence, the platform needs to implement the get_sys_suspend_power_state() platform function to provide this information. The platform also needs to add support for suspending the system to the existing 'plat_pm_ops' functions: affinst_suspend() and affinst_suspend_finish(). Change-Id: Ib6bf10809cb4e9b92f463755608889aedd83cef5
-
- 19 Jun, 2015 1 commit
Andrew Thoelke authored
mpidr_set_aff_inst() is left shifting an int constant and an unsigned char value to construct an MPIDR. For affinity level 3 a shift of 32 would result in shifting out of the 32-bit type and have no effect on the MPIDR. These values need to be extended to unsigned long before shifting to ensure correct results for affinity level 3. Change-Id: I1ef40afea535f14cfd820c347a065a228e8f4536
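A minimal illustration of the class of bug being fixed; this is not the exact mpidr_set_aff_inst() definition, just the widen-before-shift pattern it needs:
```
#include <stdint.h>

static uint64_t set_aff_field(uint64_t mpidr, uint8_t aff_inst,
			      unsigned int aff_shift)
{
	/* Wrong: evaluating (aff_inst << aff_shift) in 32 bits with
	 * aff_shift == 32 (affinity level 3) shifts the field out entirely
	 * and is undefined behaviour.
	 * Right: widen the operand to 64 bits before shifting. */
	return mpidr | ((uint64_t)aff_inst << aff_shift);
}
```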
-
- 04 Jun, 2015 2 commits
Sandrine Bailleux authored
This patch introduces a new platform build option, called PROGRAMMABLE_RESET_ADDRESS, which tells whether the platform has a programmable or fixed reset vector address. If the reset vector address is fixed then the code relies on the platform_get_entrypoint() mailbox mechanism to figure out where it is supposed to jump. On the other hand, if it is programmable then it is assumed that the platform code will program directly the right address into the RVBAR register (instead of using the mailbox redirection) so the mailbox is ignored in this case. Change-Id: If59c3b11fb1f692976e1d8b96c7e2da0ebfba308
-
Sandrine Bailleux authored
The attempt to run the CPU reset code as soon as possible after reset results in highly complex conditional code relating to the RESET_TO_BL31 option. This patch relaxes this requirement a little. In the BL1, BL3-1 and PSCI entrypoints code, the sequence of operations is now as follows: 1) Detect whether it is a cold or warm boot; 2) For cold boot, detect whether it is the primary or a secondary CPU. This is needed to handle multiple CPUs entering cold reset simultaneously; 3) Run the CPU init code. This patch also abstracts the EL3 registers initialisation done by the BL1, BL3-1 and PSCI entrypoints into common code. This improves code re-use and consolidates the code flows for different types of systems. NOTE: THE FUNCTION plat_secondary_cold_boot() IS NOW EXPECTED TO NEVER RETURN. THIS PATCH FORCES PLATFORM PORTS THAT RELIED ON THE FORMER RETRY LOOP AT THE CALL SITE TO MODIFY THEIR IMPLEMENTATION. OTHERWISE, SECONDARY CPUS WILL PANIC. Change-Id: If5ecd74d75bee700b1bd718d23d7556b8f863546
-
- 13 May, 2015 1 commit
Soby Mathew authored
In the debug build of the function get_power_on_target_afflvl(), there is a check to ensure that the CPU is emerging from a SUSPEND or ON_PENDING state. The state is checked without acquiring the lock for the CPU node. The state could be updated to ON_PENDING in psci_afflvl_on() after the target CPU has been powered up. This results in a race condition which could cause the check for the ON_PENDING state in get_power_on_target_afflvl() to fail. This patch resolves the race condition by setting the state of the target CPU to ON_PENDING before the platform port attempts to power it on. The target CPU is thus guaranteed to read the correct state. In case the power on operation fails, the state of the CPU is restored to OFF. Fixes ARM-software/tf-issues#302 Change-Id: I3f2306a78c58d47b1a0fb7e33ab04f917a2d5044
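A sketch of the ordering this fix establishes; the helper names, state values and platform hook are hypothetical stand-ins:
```
/* Hypothetical helpers standing in for the PSCI internals and platform hook. */
extern void set_aff_info_state(unsigned int cpu_idx, unsigned int state);
extern int plat_power_on_cpu(unsigned int cpu_idx);

#define AFF_STATE_OFF		0U	/* illustrative values */
#define AFF_STATE_ON_PENDING	1U

/* Publish ON_PENDING before the platform powers the core up, and roll it
 * back if the power-on request is refused. */
static int turn_on_cpu(unsigned int target_idx)
{
	int rc;

	set_aff_info_state(target_idx, AFF_STATE_ON_PENDING);

	rc = plat_power_on_cpu(target_idx);
	if (rc != 0)
		set_aff_info_state(target_idx, AFF_STATE_OFF);

	return rc;
}
```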
-
- 13 Apr, 2015 1 commit
Varun Wadekar authored
This patch removes the need for a shared buffer between the EL3 and S-EL1 levels. We now use the CPU registers, x0-x7, while passing data between the two levels. Since TLK is a 32-bit Trusted OS, tlkd has to unpack the arguments in the x0-x7 registers. TLK in turn gets these values via r0-r7. Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
-
- 08 Apr, 2015 1 commit
Kévin Petit authored
In order for the symbol table in the ELF file to contain the size of functions written in assembly, it is necessary to report it to the assembler using the .size directive. To fulfil the above requirements, this patch introduces an 'endfunc' macro which contains the .endfunc and .size directives. It also adds a .func directive to the 'func' assembler macro. The .func/.endfunc have been used so the assembler can fail if endfunc is omitted. Fixes ARM-Software/tf-issues#295 Change-Id: If8cb331b03d7f38fe7e3694d4de26f1075b278fc Signed-off-by: Kévin Petit <kevin.petit@arm.com>
-
- 31 Mar, 2015 3 commits
Varun Wadekar authored
This patch adds support to open/close secure sessions with Trusted Apps and later send commands/events. Modify TLK_NUM_FID to indicate the total number of FIDs available to the NS world. Change-Id: I3f1153dfa5510bd44fc25f1fee85cae475b1abf1 Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
-
Varun Wadekar authored
This patch allows servicing of non-secure world IRQs when the CPU is in the secure world. Once the interrupt is handled, the non-secure world issues the Resume FID to allow the secure payload to complete the preempted Standard FID. Change-Id: Ia52c41adf45014ab51d8447bed6605ca2f935587 Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
-
Varun Wadekar authored
This patch adds functionality to translate virtual addresses from secure or non-secure worlds. This functionality helps Trusted Apps to share virtual addresses directly and allows the NS world to pass virtual addresses to TLK directly. Change-Id: I77b0892963e0e839c448b5d0532920fb7e54dc8e Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
-