1. 25 Jul, 2016 1 commit
    • Validate psci_find_target_suspend_lvl() result · a1c3faa6
      Sandrine Bailleux authored
      This patch adds a runtime check that psci_find_target_suspend_lvl()
      returns a valid value back to psci_cpu_suspend() and psci_get_stat().
      If it is invalid, BL31 will now panic.
      
      Note that on the PSCI CPU suspend path there is already a debug
      assertion checking the validity of the target composite power state,
      which effectively also checks the validity of the target suspend level.
      Therefore, the error condition would already be caught in debug builds,
      but in a release build this assertion would be compiled out.
      
      On the PSCI stat path, there is currently no debug assertion checking
      the validity of the power state before using it as an index into
      the power domain state array.
      
      Although BL31 platform ports are responsible for validating the
      power state parameter, the security impact (i.e. an out-of-bounds
      array access) of a potential platform port bug in this code would
      be quite high, given that this parameter comes from an untrusted
      source. The cost of checking it at runtime in generic code is low.
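
      A minimal sketch of the kind of runtime check this adds (the sentinel
      name PSCI_INVALID_PWR_LVL and the exact call site are assumptions based
      on the description above, not the literal diff):

          int target_pwrlvl;

          target_pwrlvl = psci_find_target_suspend_lvl(&state_info);
          if (target_pwrlvl == PSCI_INVALID_PWR_LVL) {
                  /* An invalid level indicates a platform port bug; panic
                   * rather than index power domain arrays out of bounds. */
                  ERROR("Invalid target power level for suspend operation\n");
                  panic();
          }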
      
      Change-Id: Icea85b8020e39928ac03ec0cd49805b5857b3906
  2. 19 Jul, 2016 1 commit
    • Introduce PSCI Library Interface · cf0b1492
      Soby Mathew authored
      This patch introduces the PSCI Library interface. The major changes
      introduced are as follows:
      
      * Earlier, BL31 was responsible for architectural initialization during cold
      boot via bl31_arch_setup(), whereas PSCI was responsible for the same during
      warm boot. This functionality is now consolidated in the PSCI library, which
      performs architectural initialization via psci_arch_setup() during both
      cold and warm boots.
      
      * Earlier the warm boot entry point was always `psci_entrypoint()`. This was
      not flexible enough as a library interface. Now PSCI expects the runtime
      firmware to provide the entry point via `psci_setup()`. A new function
      `bl31_warm_entrypoint` is introduced in BL31 and the previous
      `psci_entrypoint()` is deprecated.
      
      * The `smc_helpers.h` header is reorganized to separate the SMC Calling
      Convention defines from the Trusted Firmware SMC helpers. The former are now
      in a new header file `smcc.h` and the SMC helpers are moved to an
      architecture-specific header.
      
      * The CPU context is used by PSCI for context initialization and
      restoration after power down (PSCI context). It is also used by BL31 for SMC
      handling and for context management during Normal-Secure world switches (SMC
      context). The `psci_smc_handler()` interface is redefined to not use SMC
      helper macros, thus decoupling the PSCI context from the EL3 runtime
      firmware SMC context. This enables PSCI to be integrated with other runtime
      firmware using a different SMC context.
      
      NOTE: With this patch the architectural setup done in `bl31_arch_setup()`
      is performed as part of `psci_setup()`, and hence `bl31_platform_setup()`
      will now be invoked prior to architectural setup. It is highly unlikely that
      the platform setup will depend on architectural setup and cause any failure.
      Please be aware of this change in sequence.
      
      Change-Id: I7f497a08d33be234bbb822c28146250cb20dab73
  3. 18 Jul, 2016 2 commits
    • Introduce `el3_runtime` and `PSCI` libraries · 532ed618
      Soby Mathew authored
      This patch moves the PSCI services and BL31 frameworks like context
      management and per-cpu data into new library components `PSCI` and
      `el3_runtime` respectively. This enables PSCI to be built independently from
      BL31. A new `psci_lib.mk` makefile is introduced which adds the relevant
      PSCI library sources and gets included by `bl31.mk`. Other changes which
      are done as part of this patch are:
      
      * The runtime services framework is now moved to the `common/` folder to
        enable reuse.
      * The `asm_macros.S` and `assert_macros.S` helpers are moved to an
        architecture-specific folder.
      * `plat_psci_common.c` is moved from the `plat/common/aarch64/` folder
        to the `plat/common` folder. The original file location now has a stub which
        just includes the file from its new location to maintain platform compatibility.
      
      Most of these changes do not affect platform builds, as they only involve
      changes to the generic bl1.mk and bl31.mk makefiles.
      
      NOTE: THE `plat_psci_common.c` FILE HAS MOVED LOCATION AND THE STUB FILE AT
      THE ORIGINAL LOCATION IS NOW DEPRECATED. PLATFORMS SHOULD MODIFY THEIR
      MAKEFILES TO INCLUDE THE FILE FROM THE NEW LOCATION.
      
      Change-Id: I6bd87d5b59424995c6a65ef8076d4fda91ad5e86
    • Rework type usage in Trusted Firmware · 4c0d0390
      Soby Mathew authored
      This patch reworks type usage in generic code, drivers and ARM platform files
      to make them more portable. The major changes to type usage are
      listed below:
      
      * Use uintptr_t for storing addresses instead of uint64_t or unsigned long.
      * Review usage of unsigned long, as it can no longer be assumed to be 64-bit.
      * Use u_register_t for register values whose width varies depending on
        whether the architecture is AArch64 or AArch32.
      * Use generic C types wherever possible (see the sketch after this list).
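
      A small illustrative fragment of the intended style (hypothetical
      variables, not taken from the patch itself):

          #include <stdint.h>

          uintptr_t    image_base;  /* an address: width follows the architecture */
          u_register_t saved_spsr;  /* a register value: 64-bit on AArch64, 32-bit on AArch32 */
          size_t       image_size;  /* a generic C type used wherever possible */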
      
      In addition to the above changes, this patch also modifies format specifiers
      in print invocations so that they are AArch64/AArch32 agnostic. Only files
      related to upcoming feature development have been reworked.
      
      Change-Id: I9f8c78347c5a52ba7027ff389791f1dad63ee5f8
  4. 12 Jul, 2016 1 commit
  5. 08 Jul, 2016 3 commits
    • Introduce utils.h header file · ed81f3eb
      Sandrine Bailleux authored
      This patch introduces a new header file: include/lib/utils.h.
      Its purpose is to provide generic macros and helper functions that are
      independent of any BL image, architecture or platform, and that are not
      even specific to Trusted Firmware.
      
      For now, it contains only 2 macros: ARRAY_SIZE() and
      IS_POWER_OF_TWO(). These were previously defined in bl_common.h and
      xlat_tables.c respectively.
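
      Both are the usual C idioms; a sketch of their likely shape (check
      utils.h for the exact definitions):

          #define ARRAY_SIZE(a)      (sizeof(a) / sizeof((a)[0]))
          #define IS_POWER_OF_TWO(x) (((x) & ((x) - 1)) == 0)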
      
      bl_common.h includes utils.h to retain compatibility for platforms
      that relied on bl_common.h for the ARRAY_SIZE() macro. Upstream
      platform ports that use this macro have been updated to include
      utils.h.
      
      Change-Id: I960450f54134f25d1710bfbdc4184f12c049a9a9
    • xlat lib: Introduce MT_EXECUTE/MT_EXECUTE_NEVER attributes · b9161469
      Sandrine Bailleux authored
      This patch introduces the MT_EXECUTE/MT_EXECUTE_NEVER memory mapping
      attributes in the translation table library to specify the
      access permissions for instruction execution of a memory region.
      These new attributes should be used only for normal, read-only
      memory regions. For other types of memory, the translation table
      library still enforces the following rules, regardless of the
      MT_EXECUTE/MT_EXECUTE_NEVER attribute:
      
       - Device memory is always marked as execute-never.
       - Read-write normal memory is always marked as execute-never.
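
      A hedged usage sketch, assuming the existing mmap_add_region(pa, va,
      size, attr) interface (the region names and sizes are placeholders):

          /* Read-only code that must remain executable. */
          mmap_add_region(CODE_BASE, CODE_BASE, CODE_SIZE,
                          MT_MEMORY | MT_RO | MT_EXECUTE);

          /* Read-only data that must never be executed. */
          mmap_add_region(RODATA_BASE, RODATA_BASE, RODATA_SIZE,
                          MT_MEMORY | MT_RO | MT_EXECUTE_NEVER);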
      
      Change-Id: I8bd27800a8c1d8ac1559910caf4a4840cf25b8b0
    • xlat lib: Refactor mmap_desc() function · bcbe19af
      Sandrine Bailleux authored
      This patch clarifies the mmap_desc() function by adding some comments
      and reorganising its code. No functional change has been introduced.
      
      Change-Id: I873493be17b4e60a89c1dc087dd908b425065401
  6. 16 Jun, 2016 1 commit
    • Add Performance Measurement Framework(PMF) · a31d8983
      Yatharth Kochar authored
      This patch adds the Performance Measurement Framework (PMF) to the
      ARM Trusted Firmware. PMF is implemented as a library, and an SMC
      interface to it is provided through the ARM SiP service.
      
      PMF provides APIs for capturing, storing, dumping and retrieving
      timestamps, enabling services from different providers to be easily
      integrated into ARM Trusted Firmware. The PMF capture and retrieval
      APIs can also perform the appropriate cache maintenance operations on
      the timestamp memory when the caller requests it.
      
      `pmf_main.c` contains the core functions that implement service
      registration, initialization, and the storing, dumping and retrieving
      of timestamps.
      `pmf_smc.c` contains the SMC handling for registered PMF services.
      `pmf.h` contains the macros that PMF service providers can use to
      register a service and declare timestamp capture functions.
      `pmf_helpers.h` contains internal macros that are used by `pmf.h`.
      
      By default this feature is disabled in the ARM Trusted Firmware.
      To enable it, set the boolean build flag `ENABLE_PMF` to 1.
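
      A hedged sketch of how a provider might use the pmf.h macros (the
      service name, IDs and flags below are hypothetical; see pmf.h for the
      exact macro parameters):

          /* Register a service with storage for MY_SVC_TOTAL_IDS timestamps. */
          PMF_REGISTER_SERVICE(my_svc, MY_SVC_ID, MY_SVC_TOTAL_IDS,
                               PMF_STORE_ENABLE)

          /* Capture a timestamp at a point of interest. */
          PMF_CAPTURE_TIMESTAMP(my_svc, MY_SVC_TS_ENTRY, PMF_NO_CACHE_MAINT)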
      
      NOTE: The caller is responsible for specifying the appropriate cache
      maintenance flags and for acquiring/releasing appropriate locks
      before/after capturing/retrieving the time-stamps.
      
      Change-Id: Ib45219ac07c2a81b9726ef6bd9c190cc55e81854
  7. 03 Jun, 2016 3 commits
    • Minor libfdt changes to enable TF integration · 754d78b1
      Dan Handley authored
      
      
      * Move libfdt API headers to include/lib/libfdt
      * Add libfdt.mk helper makefile
      * Remove unused libfdt files
      * Minor changes to fdt.h and libfdt.h to make them C99 compliant
      Co-Authored-By: Jens Wiklander <jens.wiklander@linaro.org>
      
      Change-Id: I425842c2b111dcd5fb6908cc698064de4f77220e
    • Import libfdt v1.4.1 · 91176bc6
      Dan Handley authored
      Imports libfdt code from https://git.kernel.org/cgit/utils/dtc/dtc.git
      tag "v1.4.1" commit 302fca9f4c283e1994cf0a5a9ce1cf43ca15e6d2.
      
      Change-Id: Ia0d966058beee55a9047e80d8a05bbe4f71d8446
    • Move stdlib header files to include/lib/stdlib · f0b489c1
      Dan Handley authored
      * Move stdlib header files from include/stdlib to include/lib/stdlib for
        consistency with other library headers.
      * Fix checkpatch paths to continue excluding stdlib files.
      * Create stdlib.mk to define the stdlib source files and include directories.
      * Include stdlib.mk from the top level Makefile.
      * Update stdlib header path in the fip_create Makefile.
      * Update porting-guide.md with the new paths.
      
      Change-Id: Ia92c2dc572e9efb54a783e306b5ceb2ce24d27fa
  8. 01 Jun, 2016 1 commit
    • Add support for ARM Cortex-A73 MPCore Processor · 2460ac18
      Yatharth Kochar authored
      This patch adds ARM Cortex-A73 MPCore Processor support
      in the CPU specific operations framework. It also includes
      this support for the Base FVP port.
      
      Change-Id: I0e26b594f2ec1d28eb815db9810c682e3885716d
  9. 26 Apr, 2016 1 commit
    • Fix computation of L1 bitmask in the translation table lib · aa447b9c
      Sandrine Bailleux authored
      This patch fixes the computation of the bitmask used to isolate
      the level 1 field of a virtual address. The whole computation needs
      to work on 64-bit values to produce the correct bitmask value.
      XLAT_TABLE_ENTRIES_MASK, being a C constant, is a 32-bit value,
      so it needs to be extended to a 64-bit value before it takes part
      in any other computation.

      This patch fixes the bug by casting XLAT_TABLE_ENTRIES_MASK to
      an unsigned long long.
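
      Purely as an illustration of why the widening cast matters (the shift
      value below assumes a 4KB granule, 512-entry tables and a level 1 field
      at bits [38:30]; it is not the library's actual expression):

          /* Without the cast, the shift is done in 32-bit arithmetic and the
           * upper bits of the mask are silently lost. */
          uint64_t l1_mask  = (unsigned long long)XLAT_TABLE_ENTRIES_MASK << 30;
          uint64_t l1_index = (virt_addr & l1_mask) >> 30;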
      
      Note that this bug doesn't manifest itself in practice because
      address spaces larger than 39 bits are not yet supported in the
      Trusted Firmware.
      
      Change-Id: I955fd263ecb691ca94b29b9c9f576008ce1d87ee
  10. 21 Apr, 2016 6 commits
  11. 15 Apr, 2016 1 commit
    • Limit support for region overlaps in xlat_tables · e1ea9290
      Antonio Nino Diaz authored
      The only case in which regions may now overlap is if they are
      identity mapped or have the same virtual-to-physical address offset
      (identity mapping is just a particular case of the latter).
      They must overlap completely (i.e. one of them must be entirely
      contained inside the other) and must not cover exactly the same area.

      This allows future enhancements to the xlat_tables library without
      having to support unnecessarily complex edge cases.
      
      Outer regions are now sorted by mmap_add_region() before inner
      regions with the same base virtual address, for consistency: all
      regions contained inside another one must be placed after the outer
      one in the list.

      If an inner region has the same attributes as the outer one, it will
      be merged when the tables are created by init_xlation_table(). This
      cannot be done as regions are added, because adding a region may make
      previously mergeable regions no longer mergeable.

      If the attributes of an inner region differ from those of the outer
      region, new pages will be generated regardless of how "restrictive"
      they are. For example, RO memory is more restrictive than RW. The
      old implementation would give priority to RO if there was an overlap;
      the new one does not.
      
      NOTE: THIS IS THEORETICALLY A COMPATIBILITY BREAK FOR PLATFORMS THAT
      USE THE XLAT_TABLES LIBRARY IN AN UNEXPECTED WAY. PLEASE RAISE A
      TF-ISSUE IF YOUR PLATFORM IS AFFECTED.
      
      Change-Id: I75fba5cf6db627c2ead70da3feb3cc648c4fe2af
  12. 13 Apr, 2016 1 commit
    • Refactor the xlat_tables library code · 3ca9928d
      Soby Mathew authored
      The AArch32 Long-descriptor format and the AArch64 descriptor format
      correspond to each other, which allows the xlat_tables library code to be
      shared between AArch64 and AArch32. This patch refactors the
      xlat_tables library code to separate the common functionality from the
      architecture-specific code. Prior to this patch, all of the xlat_tables
      library code was in the `lib/aarch64/xlat_tables.c` file. The refactored code
      is now in the `lib/xlat_tables/` directory. The AArch64-specific programming
      for xlat_tables is in `lib/xlat_tables/aarch64/xlat_tables.c` and the rest
      of the code, common to AArch64 and AArch32, is in
      `lib/xlat_tables/xlat_tables_common.c`. The data types used in the
      xlat_tables library APIs are also reworked to make them compatible with both
      AArch64 and AArch32.
      
      The `lib/aarch64/xlat_tables.c` file now includes the new xlat_tables
      library files to retain compatibility for existing platform ports.
      The macros related to the xlat_tables library are also moved from
      `include/lib/aarch64/arch.h` to the header `include/lib/xlat_tables.h`.
      
      NOTE: THE `lib/aarch64/xlat_tables.c` FILE IS DEPRECATED AND PLATFORM PORTS
      ARE EXPECTED TO INCLUDE THE NEW XLAT_TABLES LIBRARY FILES IN THEIR MAKEFILES.
      
      Change-Id: I3d17217d24aaf3a05a4685d642a31d4d56255a0f
  13. 31 Mar, 2016 1 commit
    • Remove xlat_helpers.c · f33fbb2f
      Antonio Nino Diaz authored
      lib/aarch64/xlat_helpers.c defines helper functions to build
      translation descriptors, but no common code or upstream platform
      port uses them. As the rest of the xlat_tables code evolves, there
      may be conflicts with these helpers, therefore this code should be
      removed.
      
      Change-Id: I9f5be99720f929264818af33db8dada785368711
  14. 22 Mar, 2016 1 commit
    • Make cpu operations warning a VERBOSE print · 1319e7b1
      Soby Mathew authored
      The assembler helper function `print_revision_warning` is used when a
      CPU specific operation is enabled in the debug build (e.g. an errata
      workaround) but doesn't apply to the executing CPU's revision/part number.
      However, in some cases the system integrator may want a single binary to
      support multiple platforms with different IP versions, only some of which
      contain a specific erratum.  In this case, the warning can be emitted very
      frequently when CPUs are being powered on/off.
      
      This patch modifies this warning print behaviour so that it is emitted only
      when LOG_LEVEL >= LOG_LEVEL_VERBOSE. The `debug.h` header file now contains
      guard macros so that it can be included in assembly code.
      
      Change-Id: Ic6e7a07f128dcdb8498a5bfdae920a8feeea1345
  15. 07 Mar, 2016 1 commit
    • Initialize all translation table entries · 2af926dd
      Kristina Martsenko authored
      The current translation table code maps in a series of regions, zeroing
      the unmapped table entries before and in between the mapped regions. It
      doesn't, however, zero the unmapped entries after the last mapped
      region, leaving those entries at whatever value that memory initially
      holds.
      
      This is bad because those values can look like valid translation table
      entries, pointing to valid physical addresses. The CPU is allowed to do
      speculative reads from any such addresses. If the addresses point to
      device memory, the results can be unpredictable.
      
      This patch zeroes the translation table entries following the last
      mapped region, ensuring all table entries are either valid or zero
      (invalid).
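
      A minimal sketch of the idea (illustrative loop, not the literal patch):

          /* After the last mapped region, clear the remaining descriptors so
           * that stale memory contents cannot be interpreted as valid entries. */
          while (table_idx < XLAT_TABLE_ENTRIES) {
                  table[table_idx] = INVALID_DESC;
                  table_idx++;
          }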
      
      In addition, it limits the value of ADDR_SPACE_SIZE to those allowed by
      the architecture and supported by the current code (see D4.2.5 in the
      Architecture Reference Manual). This simplifies this patch a lot and
      ensures existing code doesn't do unexpected things.
      
      Change-Id: Ic28b6c3f89d73ef58fa80319a9466bb2c7131c21
  16. 03 Mar, 2016 1 commit
    • Extend memory attributes to map non-cacheable memory · 5f654975
      Sandrine Bailleux authored
      At the moment, the memory translation library allows the creation of
      two types of memory mapping:
      
       - Device nGnRE memory (named MT_DEVICE in the library);
      
       - Normal, Inner Write-back non-transient, Outer Write-back
         non-transient memory (named MT_MEMORY in the library).
      
      As a consequence, the library code treats the memory type field as a
      boolean: everything that is not device memory is normal memory and
      vice-versa.
      
      In reality, the ARMv8 architecture allows up to 8 types of memory to
      be used at a single time for a given exception level. This patch
      reworks the memory attributes such that the memory type is now defined
      as an integer ranging from 0 to 7 instead of a boolean. This makes it
      possible to extend the list of memory types supported by the memory
      translation library.
      
      The priority system dictating memory attributes for overlapping
      memory regions has been extended to cope with these changes but the
      algorithm at its core has been preserved. When a memory region is
      re-mapped with different memory attributes, the memory translation
      library examines the former attributes and updates them only if
      the new attributes create a more restrictive mapping. This behaviour
      is unchanged, only the manipulation of the value has been modified
      to cope with the new format.
      
      This patch also introduces a new type of memory mapping in the memory
      translation library: MT_NON_CACHEABLE, meaning Normal, Inner
      Non-cacheable, Outer Non-cacheable memory. This can be useful to map
      a non-cacheable memory region, such as a DMA buffer for example.
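
      A hedged usage sketch (the DMA buffer region below is a placeholder,
      not an upstream mapping):

          /* Map a non-secure DMA buffer as Normal Non-cacheable, read-write. */
          mmap_add_region(DMA_BUF_BASE, DMA_BUF_BASE, DMA_BUF_SIZE,
                          MT_NON_CACHEABLE | MT_RW | MT_NS);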
      
      The rules around the Execute-Never (XN) bit in a translation table
      for an MT_NON_CACHEABLE memory mapping have been aligned on the rules
      used for MT_MEMORY mappings:
       - If the memory is read-only then it is also executable (XN = 0);
       - If the memory is read-write then it is not executable (XN = 1).
      
      The shareability field for MT_NON_CACHEABLE mappings is always set as
      'Outer-Shareable'. Note that this is not strictly needed since
      shareability is only relevant if the memory is a Normal Cacheable
      memory type, but this is to align with the existing device memory
      mappings setup. All Device and Normal Non-cacheable memory regions
      are always treated as Outer Shareable, regardless of the translation
      table shareability attributes.
      
      This patch also removes the 'ATTR_SO' and 'ATTR_SO_INDEX' #defines.
      They were introduced to map memory as Device nGnRnE (formerly called
      "Strongly-Ordered" memory in the ARMv7 architecture) but were not
      used anywhere in the code base. Removing them avoids any confusion
      about the memory types supported by the library.
      
      Upstream platforms do not currently use the MT_NON_CACHEABLE memory
      type.
      
      NOTE: THIS CHANGE IS SOURCE COMPATIBLE BUT PLATFORMS THAT RELY ON THE
      BINARY VALUES OF `mmap_attr_t` or the `attr` argument of
      `mmap_add_region()` MAY BE BROKEN.
      
      Change-Id: I717d6ed79b4c845a04e34132432f98b93d661d79
  17. 26 Feb, 2016 1 commit
    • Compile stdlib C files individually · 191a0088
      Antonio Nino Diaz authored
      All C files of stdlib were included into std.c, which was the file
      that the Makefile actually compiled. This is a poor way of compiling
      all the files and, while it may work fine most of the time, it is
      discouraged.

      In this particular case, each C file included its own headers, which
      were later included into std.c. This caused problems, for example a
      duplicated typedef of u_short in both subr_prf.c and types.h. While
      that may warrant an issue of its own, this kind of problem is avoided
      if all C files are as independent as possible.
      
      Change-Id: I9a7833fd2933003f19a5d7db921ed8542ea2d04a
  18. 08 Feb, 2016 2 commits
    • Cortex-Axx: Unconditionally apply CPU reset operations · c66fad93
      Sandrine Bailleux authored
      In the Cortex-A35/A53/A57 CPUs library code, some of the CPU specific
      reset operations are skipped if they have already been applied in a
      previous invocation of the reset handler. This precaution is not
      required, as all these operations can be reapplied safely.
      
      This patch removes the unneeded test-before-set instructions in
      the reset handler for these CPUs.
      
      Change-Id: Ib175952c814dc51f1b5125f76ed6c06a22b95167
    • Disable non-temporal hint on Cortex-A53/57 · 54035fc4
      Sandrine Bailleux authored
      The LDNP/STNP instructions as implemented on Cortex-A53 and
      Cortex-A57 do not behave in a way most programmers expect, and will
      most probably result in a significant speed degradation to any code
      that employs them. The ARMv8-A architecture (see Document ARM DDI
      0487A.h, section D3.4.3) allows cores to ignore the non-temporal hint
      and treat LDNP/STNP as LDP/STP instead.
      
      This patch introduces 2 new build flags:
      A53_DISABLE_NON_TEMPORAL_HINT and A57_DISABLE_NON_TEMPORAL_HINT
      to enforce this behaviour on Cortex-A53 and Cortex-A57. They are
      enabled by default.
      
      The string printed in debug builds when a specific CPU errata
      workaround is compiled in but skipped at runtime has been
      generalised, so that it can be reused for the non-temporal hint use
      case as well.
      
      Change-Id: I3e354f4797fd5d3959872a678e160322b13867a1
  19. 01 Feb, 2016 1 commit
    • Use tf_printf() for debug logs from xlat_tables.c · d30ac1c3
      Soby Mathew authored
      The debug prints used to debug the translation table setup in xlat_tables.c
      used the `printf()` standard library function instead of the stack-optimized
      `tf_printf()` API. The DEBUG_XLAT_TABLE option was used to enable debug logs
      within xlat_tables.c and it configured a much larger stack size for the
      platform when enabled. This patch modifies these debug prints within
      xlat_tables.c to use tf_printf() and modifies the format specifiers to be
      compatible with tf_printf(). The debug prints are now enabled when VERBOSE
      prints are enabled in the Trusted Firmware via the LOG_LEVEL build option.
      
      The much larger stack size definition when DEBUG_XLAT_TABLE is defined
      is no longer required and the platform ports are modified to remove this
      stack size definition.
      
      Change-Id: I2f7d77ea12a04b827fa15e2adc3125b1175e4c23
  20. 14 Jan, 2016 1 commit
  21. 12 Jan, 2016 1 commit
  22. 14 Sep, 2015 1 commit
    • Make generic code work in presence of system caches · 54dc71e7
      Achin Gupta authored
      On the ARMv8 architecture, cache maintenance operations by set/way on the
      last level of integrated cache do not affect the system cache. This means
      that such a flush or clean operation could result in the data being pushed
      out to the system cache rather than main memory. Another CPU could access
      this data before it enables its data cache or MMU. Such accesses could be
      serviced from main memory instead of the system cache. If the data in the
      system cache has not yet been flushed or evicted to main memory then there
      could be a loss of coherency. The only mechanism to guarantee that main
      memory is updated is to use cache maintenance operations to the PoC by MVA
      (see section D3.4.11, "System level caches", of the ARMv8-A Architecture
      Reference Manual, Issue A.g / ARM DDI 0487A.G).
      
      This patch removes the reliance of Trusted Firmware on the flush by set/way
      operation to ensure visibility of data in the main memory. Cache maintenance
      operations by MVA are now used instead. The following are the broad category of
      changes:
      
      1. The RW areas of BL2/BL31/BL32 are invalidated by MVA before the C runtime is
         initialised. This ensures that any stale cache lines at any level of cache
         are removed.
      
      2. Updates to global data in runtime firmware (BL31) by the primary CPU are
         made visible to secondary CPUs using a cache clean operation by MVA (see
         the sketch after this list).
      
      3. Cache maintenance by set/way operations are only used prior to power down.
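
      For point 2, a minimal sketch of the pattern (the variable is
      hypothetical; flush_dcache_range() is the existing helper for cache
      maintenance by MVA to the PoC):

          /* Primary CPU publishes an update to shared runtime data... */
          shared_flag = 1;
          /* ...and cleans/invalidates the affected addresses to the PoC so that
           * secondary CPUs whose caches/MMU are still off read the new value. */
          flush_dcache_range((uintptr_t)&shared_flag, sizeof(shared_flag));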
      
      NOTE: NON-UPSTREAM TRUSTED FIRMWARE CODE SHOULD MAKE EQUIVALENT CHANGES IN
      ORDER TO FUNCTION CORRECTLY ON PLATFORMS WITH SUPPORT FOR SYSTEM CACHES.
      
      Fixes ARM-software/tf-issues#205
      
      Change-Id: I64f1b398de0432813a0e0881d70f8337681f6e9a
  23. 11 Sep, 2015 1 commit
    • Re-design bakery lock memory allocation and algorithm · ee7b35c4
      Andrew Thoelke authored
      This patch unifies the bakery lock APIs across the coherent and normal
      memory implementations of locks by using the same data type, `bakery_lock_t`,
      and similar function arguments.
      
      A separate section, `bakery_lock`, has been created and is used to allocate
      memory for bakery locks using `DEFINE_BAKERY_LOCK`. When locks are
      allocated in normal memory, each lock for a core has to be spread
      across multiple cache lines. By using the total size allocated in a
      separate cache line for a single core at compile time, the memory for
      the other cores' locks is allocated at link time by multiplying the single
      core lock size by (PLATFORM_CORE_COUNT - 1). The normal memory lock
      algorithm now uses the lock address instead of the `id` in the per_cpu_data.
      For locks allocated in coherent memory, the locks are moved from
      tzfw_coherent_memory to the bakery_lock section.
      
      The bakery locks are allocated as part of .bss or in coherent memory,
      depending on whether coherent memory is used. Both of these regions are
      initialised to zero as part of run_time_init before locks are used.
      Hence, bakery_lock_init() is made an empty function, as the lock memory
      is already initialised to zero.
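
      A hedged sketch of the unified API usage after this change (the lock
      name is hypothetical):

          DEFINE_BAKERY_LOCK(console_lock);

          bakery_lock_get(&console_lock);
          /* ... critical section ... */
          bakery_lock_release(&console_lock);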
      
      The above design led to the PSCI bakery locks being moved out of
      non_cpu_power_pd_node and into psci_locks.
      
      NOTE: THE BAKERY LOCK API WHEN USE_COHERENT_MEM IS NOT SET HAS CHANGED.
      THIS IS A BREAKING CHANGE FOR ALL PLATFORM PORTS THAT ALLOCATE BAKERY
      LOCKS IN NORMAL MEMORY.
      
      Change-Id: Ic3751c0066b8032dcbf9d88f1d4dc73d15f61d8b
  24. 13 Aug, 2015 1 commit
    • PSCI: Migrate TF to the new platform API and CM helpers · 85a181ce
      Soby Mathew authored
      This patch migrates the rest of Trusted Firmware excluding Secure Payload and
      the dispatchers to the new platform and context management API. The per-cpu
      data framework APIs which took MPIDRs as their arguments are deleted and only
      the ones which take core index as parameter are retained.
      
      Change-Id: I839d05ad995df34d2163a1cfed6baa768a5a595d
  25. 05 Aug, 2015 1 commit
  26. 24 Jul, 2015 1 commit
    • Add "Project Denver" CPU support · 3a8c55f6
      Varun Wadekar authored
      
      
      Denver is NVIDIA's own custom-designed, 64-bit, dual-core CPU which is
      fully ARMv8 architecture compatible.  Each of the two Denver cores
      implements a 7-way superscalar microarchitecture (up to 7 concurrent
      micro-ops can be executed per clock), and includes a 128KB 4-way L1
      instruction cache, a 64KB 4-way L1 data cache, and a 2MB 16-way L2
      cache, which services both cores.
      
      Denver implements an innovative process called Dynamic Code Optimization,
      which optimizes frequently used software routines at runtime into dense,
      highly tuned microcode-equivalent routines. These are stored in a
      dedicated, 128MB main-memory-based optimization cache. After being read
      into the instruction cache, the optimized micro-ops are executed,
      re-fetched and executed from the instruction cache as long as needed and
      capacity allows.
      
      Effectively, this reduces the need to re-optimize the software routines.
      Instead of using hardware to extract the instruction-level parallelism
      (ILP) inherent in the code, Denver extracts the ILP once via software
      techniques, and then executes those routines repeatedly, thus amortizing
      the cost of ILP extraction over the many execution instances.
      
      Denver also features new low latency power-state transitions, in addition
      to extensive power-gating and dynamic voltage and clock scaling based on
      workloads.
      Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
  27. 16 Jul, 2015 1 commit
    • Fix bug in semihosting write function · 31833aff
      Juan Castillo authored
      The return value from the SYS_WRITE semihosting operation is 0 if
      the call is successful, or the number of bytes not written if there
      is an error. The implementation of the write function in the
      semihosting driver treats the return value as the number of bytes
      written, which is wrong. This patch fixes it.
      
      Change-Id: Id39dac3d17b5eac557408b8995abe90924c85b85
  28. 13 Apr, 2015 1 commit
    • Fix recursive crash prints on FVP AEM model · 6fa11a5e
      Soby Mathew authored
      This patch fixes an issue in the CPU-specific register reporting for
      the FVP AEM model, whereby crash reporting itself triggered an exception,
      resulting in recursive crash prints. The input to
      'size_controlled_print' in the crash reporting framework should
      be a NULL-terminated string. As there are no CPU-specific registers
      to be reported on the FVP AEM model, the issue was caused by passing 0
      instead of a NULL-terminated string to the above-mentioned function.
      
      Change-Id: I664427b22b89977b389175dfde84c815f02c705a
  29. 08 Apr, 2015 1 commit
    • Add support to indicate size and end of assembly functions · 8b779620
      Kévin Petit authored
      
      
      In order for the symbol table in the ELF file to contain the size of
      functions written in assembly, it is necessary to report it to the
      assembler using the .size directive.

      To fulfil the above requirement, this patch introduces an 'endfunc'
      macro which contains the .endfunc and .size directives. It also adds
      a .func directive to the 'func' assembler macro.

      The .func/.endfunc directives are used so that the assembler fails if
      endfunc is omitted.
      
      Fixes ARM-Software/tf-issues#295
      
      Change-Id: If8cb331b03d7f38fe7e3694d4de26f1075b278fc
      Signed-off-by: Kévin Petit <kevin.petit@arm.com>