1. 03 Jun, 2016 2 commits
    • Import libfdt v1.4.1 · 91176bc6
      Dan Handley authored
      Imports libfdt code from https://git.kernel.org/cgit/utils/dtc/dtc.git
      tag "v1.4.1" commit 302fca9f4c283e1994cf0a5a9ce1cf43ca15e6d2.
      
      Change-Id: Ia0d966058beee55a9047e80d8a05bbe4f71d8446
      91176bc6
    • Move stdlib header files to include/lib/stdlib · f0b489c1
      Dan Handley authored
      * Move stdlib header files from include/stdlib to include/lib/stdlib for
        consistency with other library headers.
      * Fix checkpatch paths to continue excluding stdlib files.
      * Create stdlib.mk to define the stdlib source files and include directories.
      * Include stdlib.mk from the top level Makefile.
      * Update stdlib header path in the fip_create Makefile.
      * Update porting-guide.md with the new paths.
      
      Change-Id: Ia92c2dc572e9efb54a783e306b5ceb2ce24d27fa
      f0b489c1
  2. 26 Apr, 2016 1 commit
    • Fix computation of L1 bitmask in the translation table lib · aa447b9c
      Sandrine Bailleux authored
      This patch fixes the computation of the bitmask used to isolate
      the level 1 field of a virtual address. The whole computation needs
      to work on 64-bit values to produce the correct bitmask value.
      XLAT_TABLE_ENTRIES_MASK being a C constant, it is a 32-bit value
      so it needs to be extended to a 64-bit value before it takes part
      in any other computation.
      
      This patch fixes this bug by casting XLAT_TABLE_ENTRIES_MASK to
      an unsigned long long.
      
      Note that this bug doesn't manifest itself in practice because
      address spaces larger than 39 bits are not yet supported in the
      Trusted Firmware.
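      
      A minimal sketch of the kind of fix described; the macro names and
      values below are illustrative, not the actual library code:
      
          /* Illustrative values only. */
          #define XLAT_TABLE_ENTRIES_MASK  0x1ffU   /* 32-bit constant */
          #define L1_SHIFT                 30U      /* start of the level 1 field */

          /* Without a cast the shift is evaluated in 32-bit arithmetic and the
           * upper bits of the level 1 mask are lost. */
          unsigned long long l1_mask_wrong =
                  XLAT_TABLE_ENTRIES_MASK << L1_SHIFT;

          /* Promoting the constant to 64 bits first yields the intended mask. */
          unsigned long long l1_mask_fixed =
                  (unsigned long long)XLAT_TABLE_ENTRIES_MASK << L1_SHIFT;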
      
      Change-Id: I955fd263ecb691ca94b29b9c9f576008ce1d87ee
      aa447b9c
  3. 21 Apr, 2016 6 commits
  4. 15 Apr, 2016 1 commit
    • Limit support for region overlaps in xlat_tables · e1ea9290
      Antonio Nino Diaz authored
      The only case in which regions can now overlap is if they are
      identity mapped or they have the same virtual to physical address
      offset (identity mapping is just a particular case of the latter).
      They must overlap completely (i.e. one of them must be entirely
      contained within the other) and must not cover exactly the same area.
      
      This allows future enhancements to the xlat_tables library without
      having to support unnecessarily complex edge cases.
      
      Outer regions are now sorted by mmap_add_region() before inner
      regions with the same base virtual address for consistency: all
      regions contained inside another one must be placed after the outer
      one in the list.
      
      If an inner region has the same attributes as the outer one, it will
      be merged when creating the tables with init_xlation_table(). This
      cannot be done as regions are added because there may be cases where
      adding a region makes previously mergeable regions no longer
      mergeable.
      
      If the attributes of an inner region differ from those of the outer
      region, new pages will be generated regardless of how "restrictive"
      they are. For example, RO memory is more restrictive than RW. The
      old implementation gave priority to RO in case of an overlap; the
      new one does not.
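      
      A hedged usage example of the only overlap pattern that remains
      supported (addresses, sizes and attribute choices are illustrative):
      
          /* Outer region: 256 MiB of DRAM, identity mapped, read-write. */
          mmap_add_region(0x80000000, 0x80000000, 0x10000000,
                          MT_MEMORY | MT_RW | MT_SECURE);

          /* Inner region: fully contained in the outer one, same
           * virtual-to-physical offset, more restrictive attributes. */
          mmap_add_region(0x80100000, 0x80100000, 0x00100000,
                          MT_MEMORY | MT_RO | MT_SECURE);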
      
      NOTE: THIS IS THEORETICALLY A COMPATIBILITY BREAK FOR PLATFORMS THAT
      USE THE XLAT_TABLES LIBRARY IN AN UNEXPECTED WAY. PLEASE RAISE A
      TF-ISSUE IF YOUR PLATFORM IS AFFECTED.
      
      Change-Id: I75fba5cf6db627c2ead70da3feb3cc648c4fe2af
      e1ea9290
  5. 13 Apr, 2016 1 commit
    • Refactor the xlat_tables library code · 3ca9928d
      Soby Mathew authored
      The AArch32 long descriptor format and the AArch64 descriptor format
      correspond to each other, which allows possible sharing of xlat_tables
      library code between AArch64 and AArch32. This patch refactors the
      xlat_tables library code to separate the common functionality from
      architecture specific code. Prior to this patch, all of the xlat_tables
      library code was in the `lib/aarch64/xlat_tables.c` file. The refactored
      code is now in the `lib/xlat_tables/` directory. The AArch64 specific
      programming for xlat_tables is in `lib/xlat_tables/aarch64/xlat_tables.c`
      and the rest of the code common to AArch64 and AArch32 is in
      `lib/xlat_tables/xlat_tables_common.c`. The data types used in the
      xlat_tables library APIs are also reworked to make them compatible
      between AArch64 and AArch32.
      
      The `lib/aarch64/xlat_tables.c` file now includes the new xlat_tables
      library files to retain compatibility for existing platform ports.
      The macros related to xlat_tables library are also moved from
      `include/lib/aarch64/arch.h` to the header `include/lib/xlat_tables.h`.
      
      NOTE: THE `lib/aarch64/xlat_tables.c` FILE IS DEPRECATED AND PLATFORM PORTS
      ARE EXPECTED TO INCLUDE THE NEW XLAT_TABLES LIBRARY FILES IN THEIR MAKEFILES.
      
      Change-Id: I3d17217d24aaf3a05a4685d642a31d4d56255a0f
      3ca9928d
  6. 31 Mar, 2016 1 commit
    • Remove xlat_helpers.c · f33fbb2f
      Antonio Nino Diaz authored
      lib/aarch64/xlat_helpers.c defines helper functions to build
      translation descriptors, but no common code or upstream platform
      port uses them. As the rest of the xlat_tables code evolves, there
      may be conflicts with these helpers, therefore this code should be
      removed.
      
      Change-Id: I9f5be99720f929264818af33db8dada785368711
      f33fbb2f
  7. 22 Mar, 2016 1 commit
    • Make cpu operations warning a VERBOSE print · 1319e7b1
      Soby Mathew authored
      The assembler helper function `print_revision_warning` is used when a
      CPU specific operation is enabled in the debug build (e.g. an errata
      workaround) but doesn't apply to the executing CPU's revision/part number.
      However, in some cases the system integrator may want a single binary to
      support multiple platforms with different IP versions, only some of which
      contain a specific erratum.  In this case, the warning can be emitted very
      frequently when CPUs are being powered on/off.
      
      This patch modifies this warning print behaviour so that it is emitted only
      when LOG_LEVEL >= LOG_LEVEL_VERBOSE. The `debug.h` header file now contains
      guard macros so that it can be included in assembly code.
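      
      A rough sketch of the guard pattern described, with illustrative
      contents (the actual `debug.h` may differ):
      
          /* Numeric constants usable from both C and assembly. */
          #define LOG_LEVEL_NONE       0
          #define LOG_LEVEL_ERROR     10
          #define LOG_LEVEL_NOTICE    20
          #define LOG_LEVEL_WARNING   30
          #define LOG_LEVEL_INFO      40
          #define LOG_LEVEL_VERBOSE   50

          #ifndef __ASSEMBLY__
          /* C-only declarations live behind the guard. */
          void tf_printf(const char *fmt, ...);
          #endif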
      
      Change-Id: Ic6e7a07f128dcdb8498a5bfdae920a8feeea1345
      1319e7b1
  8. 07 Mar, 2016 1 commit
    • Initialize all translation table entries · 2af926dd
      Kristina Martsenko authored
      The current translation table code maps in a series of regions, zeroing
      the unmapped table entries before and in between the mapped regions. It
      doesn't, however, zero the unmapped entries after the last mapped
      region, leaving those entries at whatever value that memory has
      initially.
      
      This is bad because those values can look like valid translation table
      entries, pointing to valid physical addresses. The CPU is allowed to do
      speculative reads from any such addresses. If the addresses point to
      device memory, the results can be unpredictable.
      
      This patch zeroes the translation table entries following the last
      mapped region, ensuring all table entries are either valid or zero
      (invalid).
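      
      A simplified sketch of the new behaviour (identifier names are
      illustrative, not the actual library code):
      
          /* After the last mapped region has been written, clear the remaining
           * entries so that they are invalid descriptors rather than whatever
           * the memory happened to contain. */
          while (table_idx < XLAT_TABLE_ENTRIES) {
                  table[table_idx] = 0ULL;        /* invalid descriptor */
                  table_idx++;
          }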
      
      In addition, it limits the value of ADDR_SPACE_SIZE to those allowed by
      the architecture and supported by the current code (see D4.2.5 in the
      Architecture Reference Manual). This simplifies this patch a lot and
      ensures existing code doesn't do unexpected things.
      
      Change-Id: Ic28b6c3f89d73ef58fa80319a9466bb2c7131c21
      2af926dd
  9. 03 Mar, 2016 1 commit
    • Extend memory attributes to map non-cacheable memory · 5f654975
      Sandrine Bailleux authored
      At the moment, the memory translation library allows the creation of
      memory mappings of 2 types:
      
       - Device nGnRE memory (named MT_DEVICE in the library);
      
       - Normal, Inner Write-back non-transient, Outer Write-back
         non-transient memory (named MT_MEMORY in the library).
      
      As a consequence, the library code treats the memory type field as a
      boolean: everything that is not device memory is normal memory and
      vice-versa.
      
      In reality, the ARMv8 architecture allows up to 8 types of memory to
      be used at a single time for a given exception level. This patch
      reworks the memory attributes such that the memory type is now defined
      as an integer ranging from 0 to 7 instead of a boolean. This makes it
      possible to extend the list of memory types supported by the memory
      translation library.
      
      The priority system dictating memory attributes for overlapping
      memory regions has been extended to cope with these changes but the
      algorithm at its core has been preserved. When a memory region is
      re-mapped with different memory attributes, the memory translation
      library examines the former attributes and updates them only if
      the new attributes create a more restrictive mapping. This behaviour
      is unchanged, only the manipulation of the value has been modified
      to cope with the new format.
      
      This patch also introduces a new type of memory mapping in the memory
      translation library: MT_NON_CACHEABLE, meaning Normal, Inner
      Non-cacheable, Outer Non-cacheable memory. This can be useful to map
      a non-cacheable memory region, such as a DMA buffer for example.
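      
      A hedged usage sketch: mapping such a buffer might look roughly like
      this (the base and size macros are illustrative):
      
          mmap_add_region(DMA_BUF_BASE, DMA_BUF_BASE, DMA_BUF_SIZE,
                          MT_NON_CACHEABLE | MT_RW | MT_NS);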
      
      The rules around the Execute-Never (XN) bit in a translation table
      for an MT_NON_CACHEABLE memory mapping have been aligned on the rules
      used for MT_MEMORY mappings:
       - If the memory is read-only then it is also executable (XN = 0);
       - If the memory is read-write then it is not executable (XN = 1).
      
      The shareability field for MT_NON_CACHEABLE mappings is always set as
      'Outer-Shareable'. Note that this is not strictly needed since
      shareability is only relevant if the memory is a Normal Cacheable
      memory type, but this is to align with the existing device memory
      mappings setup. All Device and Normal Non-cacheable memory regions
      are always treated as Outer Shareable, regardless of the translation
      table shareability attributes.
      
      This patch also removes the 'ATTR_SO' and 'ATTR_SO_INDEX' #defines.
      They were introduced to map memory as Device nGnRnE (formerly called
      "Strongly-Ordered" memory in the ARMv7 architecture) but were not
      used anywhere in the code base. Removing them avoids any confusion
      about the memory types supported by the library.
      
      Upstream platforms do not currently use the MT_NON_CACHEABLE memory
      type.
      
      NOTE: THIS CHANGE IS SOURCE COMPATIBLE BUT PLATFORMS THAT RELY ON THE
      BINARY VALUES OF `mmap_attr_t` or the `attr` argument of
      `mmap_add_region()` MAY BE BROKEN.
      
      Change-Id: I717d6ed79b4c845a04e34132432f98b93d661d79
      5f654975
  10. 26 Feb, 2016 1 commit
    • Compile stdlib C files individually · 191a0088
      Antonio Nino Diaz authored
      All C files of stdlib were included into std.c, which was the file
      that the Makefile actually compiled. This is a poor way of compiling
      all the files and, while it may work fine most times, it's
      discouraged.
      
      In this particular case, each C file included its own headers, which
      were later included into std.c. For example, this caused problems
      because of a duplicated typedef of u_short in both subr_prf.c and
      types.h. While that may warrant a fix of its own, this kind of
      problem is avoided if all C files are as independent as possible.
      
      Change-Id: I9a7833fd2933003f19a5d7db921ed8542ea2d04a
      191a0088
  11. 08 Feb, 2016 2 commits
    • Cortex-Axx: Unconditionally apply CPU reset operations · c66fad93
      Sandrine Bailleux authored
      In the Cortex-A35/A53/A57 CPUs library code, some of the CPU specific
      reset operations are skipped if they have already been applied in a
      previous invocation of the reset handler. This precaution is not
      required, as all these operations can be reapplied safely.
      
      This patch removes the unneeded test-before-set instructions in
      the reset handler for these CPUs.
      
      Change-Id: Ib175952c814dc51f1b5125f76ed6c06a22b95167
      c66fad93
    • Disable non-temporal hint on Cortex-A53/57 · 54035fc4
      Sandrine Bailleux authored
      The LDNP/STNP instructions as implemented on Cortex-A53 and
      Cortex-A57 do not behave in a way most programmers expect, and will
      most probably result in a significant speed degradation to any code
      that employs them. The ARMv8-A architecture (see Document ARM DDI
      0487A.h, section D3.4.3) allows cores to ignore the non-temporal hint
      and treat LDNP/STNP as LDP/STP instead.
      
      This patch introduces 2 new build flags:
      A53_DISABLE_NON_TEMPORAL_HINT and A57_DISABLE_NON_TEMPORAL_HINT
      to enforce this behaviour on Cortex-A53 and Cortex-A57. They are
      enabled by default.
      
      The string printed in debug builds when a specific CPU errata
      workaround is compiled in but skipped at runtime has been
      generalised, so that it can be reused for the non-temporal hint use
      case as well.
      
      Change-Id: I3e354f4797fd5d3959872a678e160322b13867a1
      54035fc4
  12. 01 Feb, 2016 1 commit
    • Use tf_printf() for debug logs from xlat_tables.c · d30ac1c3
      Soby Mathew authored
      The prints used to debug translation table setup in xlat_tables.c
      used the `printf()` standard library function instead of the
      stack-optimized `tf_printf()` API. The DEBUG_XLAT_TABLE option was used
      to enable debug logs within xlat_tables.c, and it configured a much
      larger stack size for the platform when enabled. This patch modifies
      these debug prints within xlat_tables.c to use tf_printf() and updates
      the format specifiers to be compatible with tf_printf(). The debug prints
      are now enabled when VERBOSE prints are enabled in the Trusted Firmware
      via the LOG_LEVEL build option.
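      
      A hedged illustration of the resulting style of debug print (message
      text and format specifiers are illustrative):
      
          VERBOSE("Mapping region: base VA 0x%x, size 0x%x\n",
                  (unsigned int)base_va, (unsigned int)size);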
      
      The much larger stack size definition when DEBUG_XLAT_TABLE is defined
      is no longer required and the platform ports are modified to remove this
      stack size definition.
      
      Change-Id: I2f7d77ea12a04b827fa15e2adc3125b1175e4c23
      d30ac1c3
  13. 14 Jan, 2016 1 commit
  14. 12 Jan, 2016 1 commit
  15. 14 Sep, 2015 1 commit
    • Make generic code work in presence of system caches · 54dc71e7
      Achin Gupta authored
      On the ARMv8 architecture, cache maintenance operations by set/way on the last
      level of integrated cache do not affect the system cache. This means that such a
      flush or clean operation could result in the data being pushed out to the system
      cache rather than main memory. Another CPU could access this data before it
      enables its data cache or MMU. Such accesses could be serviced from the main
      memory instead of the system cache. If the data in the system cache has not yet
      been flushed or evicted to main memory then there could be a loss of
      coherency. The only mechanism to guarantee that the main memory will be updated
      is to use cache maintenance operations to the PoC by MVA (see section D3.4.11,
      "System level caches", of the ARMv8-A Architecture Reference Manual,
      issue A.g / ARM DDI 0487A.G).
      
      This patch removes the reliance of Trusted Firmware on the flush by set/way
      operation to ensure visibility of data in the main memory. Cache maintenance
      operations by MVA are now used instead. The following are the broad category of
      changes:
      
      1. The RW areas of BL2/BL31/BL32 are invalidated by MVA before the C runtime is
         initialised. This ensures that any stale cache lines at any level of cache
         are removed.
      
      2. Updates to global data in runtime firmware (BL31) by the primary CPU are made
         visible to secondary CPUs using a cache clean operation by MVA.
      
      3. Cache maintenance by set/way operations are only used prior to power down.
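      
      As an illustration of category 2 above, a sketch of making an update
      visible with a clean by MVA (the variable name is illustrative;
      flush_dcache_range() is assumed to be the clean/invalidate-by-MVA
      helper used for this):
      
          shared_flag = 1U;
          flush_dcache_range((uintptr_t)&shared_flag, sizeof(shared_flag));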
      
      NOTE: NON-UPSTREAM TRUSTED FIRMWARE CODE SHOULD MAKE EQUIVALENT CHANGES IN
      ORDER TO FUNCTION CORRECTLY ON PLATFORMS WITH SUPPORT FOR SYSTEM CACHES.
      
      Fixes ARM-software/tf-issues#205
      
      Change-Id: I64f1b398de0432813a0e0881d70f8337681f6e9a
      54dc71e7
  16. 11 Sep, 2015 1 commit
    • Re-design bakery lock memory allocation and algorithm · ee7b35c4
      Andrew Thoelke authored
      This patch unifies the bakery lock APIs across the coherent and normal
      memory implementations of locks by using the same data type
      `bakery_lock_t` and similar function arguments.
      
      A separate section `bakery_lock` has been created and is used to allocate
      memory for bakery locks using `DEFINE_BAKERY_LOCK`. When locks are
      allocated in normal memory, each lock for a core has to be spread
      across multiple cache lines. The total size allocated in a separate
      cache line for a single core is determined at compile time, and the
      memory for the other cores' locks is allocated at link time by
      multiplying the single-core lock size by (PLATFORM_CORE_COUNT - 1).
      The normal memory lock algorithm now uses the lock address instead of
      the `id` in the per_cpu_data. Locks allocated in coherent memory are
      moved from tzfw_coherent_memory to the bakery_lock section.
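      
      A hedged sketch of the unified API usage (the lock name is
      illustrative):
      
          DEFINE_BAKERY_LOCK(my_pwr_lock);

          bakery_lock_get(&my_pwr_lock);
          /* ... critical section ... */
          bakery_lock_release(&my_pwr_lock);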
      
      The bakery locks are allocated as part of bss or in coherent memory
      depending on usage of coherent memory. Both these regions are
      initialised to zero as part of run_time_init before locks are used.
      Hence, bakery_lock_init() is made an empty function as the lock memory
      is already initialised to zero.
      
      The above design led to the psci bakery locks being moved out of
      non_cpu_power_pd_node into psci_locks.
      
      NOTE: THE BAKERY LOCK API WHEN USE_COHERENT_MEM IS NOT SET HAS CHANGED.
      THIS IS A BREAKING CHANGE FOR ALL PLATFORM PORTS THAT ALLOCATE BAKERY
      LOCKS IN NORMAL MEMORY.
      
      Change-Id: Ic3751c0066b8032dcbf9d88f1d4dc73d15f61d8b
      ee7b35c4
  17. 13 Aug, 2015 1 commit
    • PSCI: Migrate TF to the new platform API and CM helpers · 85a181ce
      Soby Mathew authored
      This patch migrates the rest of Trusted Firmware excluding Secure Payload and
      the dispatchers to the new platform and context management API. The per-cpu
      data framework APIs which took MPIDRs as their arguments are deleted and only
      the ones which take core index as parameter are retained.
      
      Change-Id: I839d05ad995df34d2163a1cfed6baa768a5a595d
      85a181ce
  18. 05 Aug, 2015 1 commit
  19. 24 Jul, 2015 1 commit
    • Add "Project Denver" CPU support · 3a8c55f6
      Varun Wadekar authored
      
      
      Denver is NVIDIA's own custom-designed, 64-bit, dual-core CPU which is
      fully ARMv8 architecture compatible.  Each of the two Denver cores
      implements a 7-way superscalar microarchitecture (up to 7 concurrent
      micro-ops can be executed per clock), and includes a 128KB 4-way L1
      instruction cache, a 64KB 4-way L1 data cache, and a 2MB 16-way L2
      cache, which services both cores.
      
      Denver implements an innovative process called Dynamic Code Optimization,
      which optimizes frequently used software routines at runtime into dense,
      highly tuned microcode-equivalent routines. These are stored in a
      dedicated, 128MB main-memory-based optimization cache. After being read
      into the instruction cache, the optimized micro-ops are executed,
      re-fetched and executed from the instruction cache as long as needed and
      capacity allows.
      
      Effectively, this reduces the need to re-optimize the software routines.
      Instead of using hardware to extract the instruction-level parallelism
      (ILP) inherent in the code, Denver extracts the ILP once via software
      techniques, and then executes those routines repeatedly, thus amortizing
      the cost of ILP extraction over the many execution instances.
      
      Denver also features new low latency power-state transitions, in addition
      to extensive power-gating and dynamic voltage and clock scaling based on
      workloads.
      Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
      3a8c55f6
  20. 16 Jul, 2015 1 commit
    • Fix bug in semihosting write function · 31833aff
      Juan Castillo authored
      The return value from the SYS_WRITE semihosting operation is 0 if
      the call is successful, or the number of bytes not written if there
      is an error. The implementation of the write function in the
      semihosting driver treats the return value as the number of bytes
      written, which is wrong. This patch fixes it.
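      
      A hedged sketch of the corrected interpretation (function and variable
      names are illustrative, not the actual driver code):
      
          size_t bytes_remaining, bytes_written;

          /* SYS_WRITE returns the number of bytes NOT written; 0 means the
           * whole buffer was written successfully. */
          bytes_remaining = semihosting_sys_write(handle, buf, length);
          bytes_written = length - bytes_remaining;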
      
      Change-Id: Id39dac3d17b5eac557408b8995abe90924c85b85
      31833aff
  21. 13 Apr, 2015 1 commit
    • Fix recursive crash prints on FVP AEM model · 6fa11a5e
      Soby Mathew authored
      This patch fixes an issue in the cpu specific register reporting
      on the FVP AEM model, whereby crash reporting itself triggers an
      exception, thus resulting in recursive crash prints. The input to the
      'size_controlled_print' function in the crash reporting framework should
      be a NULL-terminated string. As there are no cpu specific registers
      to be reported on the FVP AEM model, the issue was caused by passing 0
      instead of a NULL-terminated string to the above mentioned function.
      
      Change-Id: I664427b22b89977b389175dfde84c815f02c705a
      6fa11a5e
  22. 08 Apr, 2015 1 commit
    • Add support to indicate size and end of assembly functions · 8b779620
      Kévin Petit authored
      
      
      In order for the symbol table in the ELF file to contain the size of
      functions written in assembly, it is necessary to report it to the
      assembler using the .size directive.
      
      To fulfil the above requirement, this patch introduces an 'endfunc'
      macro which contains the .endfunc and .size directives. It also adds
      a .func directive to the 'func' assembler macro.
      
      The .func/.endfunc directives are used so that the assembler fails if
      endfunc is omitted.
      
      Fixes ARM-Software/tf-issues#295
      
      Change-Id: If8cb331b03d7f38fe7e3694d4de26f1075b278fc
      Signed-off-by: Kévin Petit <kevin.petit@arm.com>
      8b779620
  23. 27 Mar, 2015 2 commits
    • Remove the `owner` field in bakery_lock_t data structure · 548579f5
      Soby Mathew authored
      This patch removes the `owner` field in bakery_lock_t structure which
      is the data structure used in the bakery lock implementation that uses
      coherent memory. The assertions to protect against recursive lock
      acquisition were based on the 'owner' field. They are now based
      on the bakery lock ticket number. These assertions have also been added
      to the bakery lock implementation that uses normal memory.
      
      Change-Id: If4850a00dffd3977e218c0f0a8d145808f36b470
      548579f5
    • Optimize the bakery lock structure for coherent memory · 1c9573a1
      Soby Mathew authored
      This patch optimizes the data structure used with the bakery lock
      implementation for coherent memory to save memory and minimize memory
      accesses. These optimizations were already part of the bakery lock
      implementation for normal memory, and this patch now implements
      them for the coherent memory implementation as well. Also
      included in the patch is a cleanup to use a do-while loop while
      waiting for other contenders to finish choosing their tickets.
      
      Change-Id: Iedb305473133dc8f12126726d8329b67888b70f1
      1c9573a1
  24. 18 Mar, 2015 1 commit
  25. 16 Mar, 2015 1 commit
  26. 13 Mar, 2015 1 commit
    • Initialise cpu ops after enabling data cache · 12e7c4ab
      Vikram Kanigiri authored
      The cpu-ops pointer was initialized before enabling the data cache in the cold
      and warm boot paths. This required a DCIVAC cache maintenance operation to
      invalidate any stale cache lines resident in other cpus.
      
      This patch moves this initialization to the bl31_arch_setup() function
      which is always called after the data cache and MMU have been enabled.
      
      This change removes the need:
       1. for the DCIVAC cache maintenance operation.
       2. to initialise the CPU ops upon resumption from a PSCI CPU_SUSPEND
          call since memory contents are always preserved in this case.
      
      Change-Id: Ibb2fa2f7460d1a1f1e721242025e382734c204c6
      12e7c4ab
  27. 30 Jan, 2015 1 commit
    • Fix the Cortex-A57 reset handler register usage · 683f788f
      Soby Mathew authored
      The CPU specific reset handlers no longer have the freedom
      to use any general purpose register, because they are invoked
      by the BL3-1 entry point in addition to BL1. The Cortex-A57 CPU
      specific reset handler was overwriting the x20 register, which was
      being used by the BL3-1 entry point to save the entry point information.
      This patch fixes this bug by reworking the register allocation in the
      Cortex-A57 reset handler to avoid using x20. The patch also
      explicitly mentions the register clobber list for each of the
      callee functions invoked by the reset handler.
      
      Change-Id: I28fcff8e742aeed883eaec8f6c4ee2bd3fce30df
      683f788f
  28. 28 Jan, 2015 1 commit
    • stdlib: add missing features to build PolarSSL · e509d057
      Juan Castillo authored
      This patch adds the missing features to the C library included
      in the Trusted Firmware to build PolarSSL:
      
        - strcasecmp() function
        - exit() function
        - sscanf()* function
        - time.h header file (and its dependencies)
      
      * NOTE: the sscanf() function is not a real implementation. It just
      returns the number of expected arguments by counting the number of
      '%' characters present in the format string. This return value is
      good enough for PolarSSL because during certificate parsing
      only the return value is checked. The certificate validity period
      is ignored.
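      
      A hedged sketch of the placeholder behaviour described (not
      necessarily the exact code added to the library):
      
          int sscanf(const char *s, const char *fmt, ...)
          {
                  int count = 0;

                  /* Count conversion specifiers instead of parsing 's'. */
                  while (*fmt != '\0') {
                          if (*fmt++ == '%')
                                  count++;
                  }
                  return count;
          }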
      
      Change-Id: I43bb3742f26f0bd458272fccc3d72a7f2176ab3d
      e509d057
  29. 26 Jan, 2015 1 commit
    • Call reset handlers upon BL3-1 entry. · 79a97b2e
      Yatharth Kochar authored
      This patch adds support to call the reset_handler() function in BL3-1 in the
      cold and warm boot paths when another Boot ROM reset_handler() has already run.
      
      This means the BL1 and BL3-1 versions of the CPU and platform specific reset
      handlers may execute different code to each other. This enables a developer to
      perform additional actions or undo actions already performed during the first
      call of the reset handlers e.g. apply additional errata workarounds.
      
      Typically, the reset handler will be first called from the BL1 Boot ROM. Any
      additional functionality can be added to the reset handler when it is called
      from BL3-1 resident in RW memory. The constant FIRST_RESET_HANDLER_CALL is used
      to identify whether this is the first version of the reset handler code to be
      executed or an overridden version of the code.
      
      The Cortex-A57 errata workarounds are applied only if they have not already been
      applied.
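      
      A hedged sketch of how the constant might be used to differentiate the
      two cases (the guarded actions are illustrative):
      
          #if FIRST_RESET_HANDLER_CALL
                  /* Code reached via the BL1 Boot ROM reset handler. */
                  apply_boot_rom_errata_workarounds();
          #else
                  /* Code reached via the overriding BL3-1 reset handler:
                   * only perform actions not already done in the first call. */
                  apply_runtime_only_workarounds();
          #endif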
      
      Fixes ARM-software/tf-issues#275
      
      Change-Id: Id295f106e4fda23d6736debdade2ac7f2a9a9053
      79a97b2e
  30. 22 Jan, 2015 2 commits
    • Move bakery algorithm implementation out of coherent memory · 8c5fe0b5
      Soby Mathew authored
      This patch moves the bakery locks out of coherent memory to normal memory.
      This implies that the lock information needs to be placed on a separate cache
      line for each cpu. Hence the bakery_lock_info_t structure is allocated in the
      per-cpu data so as to minimize memory wastage. A similar platform per-cpu
      data is introduced for the platform locks.
      
      As a result of the above changes, the bakery lock API is completely changed.
      Earlier, a reference to the lock structure was passed to the lock
      implementation. Now a unique id (essentially an index into the per-cpu data
      array) and an offset into the per-cpu data for bakery_info_t need to be
      passed to the lock implementation.
      
      Change-Id: I1e76216277448713c6c98b4c2de4fb54198b39e0
      8c5fe0b5
    • Remove the wfe() for bounded wait in bakery_lock · d4f4ad90
      Soby Mathew authored
      This patch is an optimization in the bakery_lock_get() function
      which removes the wfe() when waiting for other contenders to choose
      their ticket, i.e. when their `entering` flag is set. Since the time
      taken by other contenders to execute bakery_get_ticket() is bounded,
      this wait is a bounded-time wait. Hence the removal of wfe() and the
      corresponding sev() and dsb() in bakery_get_ticket() may result
      in better performance during lock acquisition.
      
      Change-Id: I141bb21294226b54cb6e89e7cac0175c553afd8d
      d4f4ad90
  31. 13 Jan, 2015 1 commit
    • Invalidate the dcache after initializing cpu-ops · 09997346
      Soby Mathew authored
      This patch fixes a crash due to corruption of cpu_ops
      data structure. During the secondary CPU boot, after the
      cpu_ops has been initialized in the per cpu-data, the
      dcache lines need to be invalidated so that the update in
      memory can be seen later on when the dcaches are turned ON.
      Also, after initializing the psci per cpu data, the dcache
      lines are flushed so that they are written back to memory
      and dirty dcache lines are avoided.
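      
      A hedged sketch of the maintenance described (structure and field names
      are illustrative; inv_dcache_range() is assumed to be the
      invalidate-by-MVA helper):
      
          /* cpu_ops pointer written while the data cache is still disabled:
           * invalidate the line so no stale copy hides the update once the
           * cache is enabled. */
          cpu_data->cpu_ops_ptr = ops;
          inv_dcache_range((uintptr_t)&cpu_data->cpu_ops_ptr,
                           sizeof(cpu_data->cpu_ops_ptr));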
      
      Fixes ARM-Software/tf-issues#271
      
      Change-Id: Ia90f55e9882690ead61226eea5a5a9146d35f313
      09997346