1. 28 Mar, 2017 1 commit
  2. 20 Mar, 2017 2 commits
    • Add workaround for ARM Cortex-A53 erratum 855873 · b75dc0e4
      Andre Przywara authored
      
      
      ARM erratum 855873 applies to all Cortex-A53 CPUs.
      The recommended workaround is to promote "data cache clean"
      instructions to "data cache clean and invalidate" instructions.
      For core revisions of r0p3 and later this can be done by setting a bit
      in the CPUACTLR_EL1 register, so that hardware takes care of the promotion.
      As CPUACTLR_EL1 is both IMPLEMENTATION DEFINED and can be trapped to EL3,
      we set the bit in firmware.
      We also dump this register upon a crash to provide more debug
      information.
      
      Enable the workaround for the Juno boards.
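
      For illustration only, setting an IMPLEMENTATION DEFINED CPUACTLR_EL1
      bit from EL3 C code could look like the sketch below. The real
      workaround is expected to live in the Cortex-A53 CPU support code; the
      S3_1_C15_C2_0 encoding and the bit position used here are assumptions
      that should be checked against the Cortex-A53 TRM.

        /* Hypothetical sketch: promote DC clean to clean+invalidate. */
        #define CPUACTLR_EL1_ENDCCASCI_BIT  (1ULL << 44)  /* assumed */

        static inline void a53_855873_workaround(void)
        {
                unsigned long long val;

                /* CPUACTLR_EL1 is IMPLEMENTATION DEFINED: S3_1_C15_C2_0 */
                __asm__ volatile("mrs %0, S3_1_C15_C2_0" : "=r"(val));
                val |= CPUACTLR_EL1_ENDCCASCI_BIT;
                __asm__ volatile("msr S3_1_C15_C2_0, %0" :: "r"(val));
                __asm__ volatile("isb");
        }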
      
      Change-Id: I3840114291958a406574ab6c49b01a9d9847fec8
      Signed-off-by: Andre Przywara <andre.przywara@arm.com>
    • Replace ASM signed tests with unsigned · 355a5d03
      Douglas Raillard authored
      
      
      The ge, lt, gt and le condition codes in assembly provide signed tests,
      whereas hs, lo, hi and ls provide their unsigned counterparts. Signed
      tests should only be used when strictly necessary; using them on
      logically unsigned values can invert the outcome of the test for
      sufficiently large values. Offsets, addresses and, usually, counters
      are unsigned values and should be tested as such.
      
      Replace signed condition codes with their unsigned counterparts
      wherever the signed test was unnecessary, so that the full range of
      unsigned values can be used without the test result being inverted for
      large operands.
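
      As a host-side C illustration (not code from the patch), interpreting
      a large unsigned operand as signed inverts the outcome of a comparison,
      which is exactly the difference between the gt and hi condition codes:

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
                uint64_t addr = 0x9000000000000000ULL; /* large address */
                uint64_t base = 0x1000ULL;

                /* Unsigned view (hi/hs): addr is above base, as expected. */
                printf("unsigned: %d\n", addr > base);

                /* Signed view (gt/ge): addr appears negative, test inverts. */
                printf("signed:   %d\n", (int64_t)addr > (int64_t)base);

                return 0;
        }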
      
      Change-Id: I58b7e98d03e3a4476dfb45230311f296d224980a
      Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
  3. 08 Mar, 2017 4 commits
    • Apply workaround for errata 813419 of Cortex-A57 · ccbec91c
      Antonio Nino Diaz authored
      
      
      TLBI instructions for EL3 won't have the desired effect under specific
      circumstances in Cortex-A57 r0p0. The workaround is to execute DSB and
      TLBI twice each time.
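
      A sketch of one possible shape of such a workaround is shown below;
      this is illustrative only and not the actual ARM Trusted Firmware code,
      which applies the sequence where EL3 TLB maintenance is performed:

        /* Illustrative: repeat the EL3 TLB invalidation and barrier. */
        static inline void tlbi_alle3_errata_813419(void)
        {
                __asm__ volatile("tlbi alle3\n\t"
                                 "dsb  ish\n\t"
                                 "tlbi alle3\n\t"
                                 "dsb  ish\n\t"
                                 "isb" : : : "memory");
        }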
      
      Even though this erratum only affects r0p0, the current errata
      framework is not prepared to apply workarounds at run time: a
      workaround is always applied if compiled in, regardless of the CPU or
      its revision.

      This erratum workaround has been enabled for Juno.
      
      The `DSB` instruction used when initializing the translation tables has
      been changed to `DSB ISH` as an optimization and to be consistent with
      the barriers used for the workaround.
      
      Change-Id: Ifc1d70b79cb5e0d87e90d88d376a59385667d338
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
    • Add dynamic region support to xlat tables lib v2 · 0b64f4ef
      Antonio Nino Diaz authored
      
      
      Added APIs to dynamically add regions to, and remove regions from, the
      translation tables while the MMU is enabled. Only static regions are
      allowed to overlap other static ones (for backwards compatibility).
      
      A new private attribute (MT_DYNAMIC / MT_STATIC) has been added to
      flag each region as such.
      
      The dynamic mapping functionality can be enabled or disabled when
      compiling by setting the build option PLAT_XLAT_TABLES_DYNAMIC to 1
      or 0. This can be done per-image.
      
      TLB maintenance code during dynamic table mapping and unmapping has
      also been added.
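
      A usage sketch follows; the function names and argument order are
      assumptions based on the description above, not confirmed by this
      commit message:

        /* Hypothetical: map a device region at run time, then unmap it. */
        int ret = mmap_add_dynamic_region(0x80000000ULL, /* base PA */
                                          0x80000000UL,  /* base VA */
                                          0x10000U,      /* size */
                                          MT_DEVICE | MT_RW | MT_SECURE);
        if (ret == 0) {
                /* ... access the region ... */
                mmap_remove_dynamic_region(0x80000000UL, 0x10000U);
        }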
      
      Fixes ARM-software/tf-issues#310
      
      Change-Id: I19e8992005c4292297a382824394490c5387aa3b
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
    • Improve debug output of the translation tables · f10644c5
      Antonio Nino Diaz authored
      
      
      The printed output has been improved in two ways:
      
      - Whenever multiple invalid descriptors are found, only the first one
        is printed, and a line is added to inform about how many descriptors
        have been omitted.
      
      - At the beginning of each line there is an indication of the table
        level the entry belongs to. Example of the new output:
        `[LV3] VA:0x1000 PA:0x1000 size:0x1000 MEM-RO-S-EXEC`
      
      Change-Id: Ib6f1cd8dbd449452f09258f4108241eb11f8d445
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
    • Add version 2 of xlat tables library · 7bb01fb2
      Antonio Nino Diaz authored
      
      
      The folder lib/xlat_tables_v2 has been created to store a new version
      of the translation tables library for further modifications in patches
      to follow. At the moment it only contains a basic implementation that
      supports static regions.
      
      This library allows different translation tables to be modified by
      using different 'contexts'. For now, the implementation defaults to
      the translation tables used by the current image, but it is possible
      to modify other tables than the ones in use.
      
      Added a new API to print debug information for the current state of
      the translation tables, rather than printing the information while
      the tables are being created. This allows subsequent debug printing
      of the xlat tables after they have been changed, which will be useful
      when dynamic regions are implemented in a patch to follow.
      
      The common definitions stored in `xlat_tables.h` header have been moved
      to a new file common to both versions, `xlat_tables_defs.h`.
      
      All headers related to the translation tables library have been moved
      to the subfolder `xlat_tables`.
      
      Change-Id: Ia55962c33e0b781831d43a548e505206dffc5ea9
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
  4. 02 Mar, 2017 3 commits
    • AArch32: Fix normal memory bakery compilation · 61531a27
      Soby Mathew authored
      
      
      This patch fixes a compilation issue with bakery locks when the PSCI
      library is compiled with the USE_COHERENT_MEM=0 build option.
      
      Change-Id: Ic7f6cf9f2bb37f8a946eafbee9cbc3bf0dc7e900
      Signed-off-by: Soby Mathew <soby.mathew@arm.com>
    • PSCI: Optimize call paths if all participants are cache-coherent · b0408e87
      Jeenu Viswambharan authored
      
      
      The current PSCI implementation can apply certain optimizations upon the
      assumption that all PSCI participants are cache-coherent.
      
        - Skip performing cache maintenance during power-up.
      
        - Skip performing cache maintenance during power-down:
      
          At present, on the power-down path, the CPU driver disables the
          caches and MMU, and performs cache maintenance in preparation for
          powering down the CPU. This means that PSCI must perform additional
          cache maintenance on the extant stack for correct functioning.

          If all participating CPUs are cache-coherent, the CPU driver neither
          disables the MMU nor performs cache maintenance. The CPU being
          powered down therefore remains cache-coherent throughout all PSCI
          call paths, which in turn means that PSCI cache maintenance
          operations are not required during power-down.
      
        - Choose spin locks instead of bakery locks:
      
          The current PSCI implementation must synchronize both cache-coherent
          and non-cache-coherent participants. Mutual exclusion primitives are
          not guaranteed to function on non-coherent memory. For this reason,
          the current PSCI implementation had to resort to bakery locks.
      
          If all participants are cache-coherent, the implementation can
          enable the MMU and data caches early, and replace bakery locks with
          spin locks (a sketch follows below). Spin locks make use of
          architectural mutual exclusion primitives, and are lighter and
          faster.
      
      The optimizations are applied when the HW_ASSISTED_COHERENCY build
      option is enabled, as all PSCI participants are expected to be
      cache-coherent in those systems.
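
      A minimal sketch of how the lock choice could be keyed off the build
      option (the type and function names are illustrative and may differ
      from the ones used in the PSCI code):

        #if HW_ASSISTED_COHERENCY
        /* All participants are cache-coherent: spin locks are sufficient. */
        static spinlock_t psci_lock;
        #define psci_lock_get()      spin_lock(&psci_lock)
        #define psci_lock_release()  spin_unlock(&psci_lock)
        #else
        /* Mixed coherency: fall back to bakery locks in normal memory. */
        static bakery_lock_t psci_lock;
        #define psci_lock_get()      bakery_lock_get(&psci_lock)
        #define psci_lock_release()  bakery_lock_release(&psci_lock)
        #endif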
      
      Change-Id: Iac51c3ed318ea7e2120f6b6a46fd2db2eae46ede
      Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
    • PSCI: Introduce cache and barrier wrappers · a10d3632
      Jeenu Viswambharan authored
      
      
      The PSCI implementation performs cache maintenance operations on its
      data structures to ensure their visibility to both cache-coherent and
      non-cache-coherent participants. These cache maintenance operations
      can be skipped if all PSCI participants are cache-coherent. When the
      HW_ASSISTED_COHERENCY build option is enabled, PSCI participants are
      assumed to be cache-coherent.
      
      For usage abstraction, this patch introduces wrappers for PSCI cache
      maintenance and barrier operations used for state coordination: they are
      effectively NOPs when HW_ASSISTED_COHERENCY is enabled, but are
      applied otherwise.
      
      Local state usage and the associated cache operations have also been
      refactored for clarity.
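
      A sketch of what such wrappers could look like (the names are
      illustrative, not necessarily those used in the patch):

        #if HW_ASSISTED_COHERENCY
        /* Cache-coherent participants: maintenance/barriers not needed. */
        #define psci_flush_dcache_range(addr, size)
        #define psci_dsbish()
        #else
        #define psci_flush_dcache_range(addr, size) \
                flush_dcache_range((addr), (size))
        #define psci_dsbish()   dsbish()
        #endif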
      
      Change-Id: I77f17a90cba41085b7188c1345fe5731c99fad87
      Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
  5. 28 Feb, 2017 1 commit
  6. 23 Feb, 2017 2 commits
  7. 22 Feb, 2017 1 commit
  8. 14 Feb, 2017 1 commit
    • Introduce locking primitives using CAS instruction · c877b414
      Jeenu Viswambharan authored
      
      
      The ARMv8.1 architecture extension introduces support for far atomics,
      which include compare-and-swap. The Compare and Swap instruction is
      only available for AArch64.

      Introduce build options to choose the architecture version that ARM
      Trusted Firmware targets:
      
        - ARM_ARCH_MAJOR: selects the major version of target ARM
          Architecture. Default value is 8.
      
        - ARM_ARCH_MINOR: selects the minor version of target ARM
          Architecture. Default value is 0.
      
      When:
      
        (ARM_ARCH_MAJOR > 8) || ((ARM_ARCH_MAJOR == 8) && (ARM_ARCH_MINOR >= 1)),
      
      for AArch64, the Compare and Swap instruction is used to implement
      spin locks. Otherwise, the implementation falls back to using
      load-/store-exclusive instructions.
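
      In C preprocessor terms, the selection quoted above boils down to the
      following sketch (the helper macro name is illustrative):

        #if ARM_ARCH_MAJOR > 8 || \
            (ARM_ARCH_MAJOR == 8 && ARM_ARCH_MINOR >= 1)
        #define USE_CAS_SPINLOCK  1  /* AArch64: CAS-based spin locks */
        #else
        #define USE_CAS_SPINLOCK  0  /* load-/store-exclusive fallback */
        #endif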
      
      Update user guide, and introduce a section in Firmware Design guide to
      summarize support for features introduced in ARMv8 Architecture
      Extensions.
      
      Change-Id: I73096a0039502f7aef9ec6ab3ae36680da033f16
      Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
  9. 13 Feb, 2017 2 commits
    • PSCI: Do stat accounting for retention/standby states · e5bbd16a
      dp-arm authored
      
      
      Perform stat accounting for retention/standby states also when
      requested at multiple power levels.
      
      Change-Id: I2c495ea7cdff8619bde323fb641cd84408eb5762
      Signed-off-by: dp-arm <dimitris.papastamos@arm.com>
    • PSCI: Decouple PSCI stat residency calculation from PMF · 04c1db1e
      dp-arm authored
      
      
      This patch introduces the following three platform interfaces:
      
      * void plat_psci_stat_accounting_start(const psci_power_state_t *state_info)
      
        This is an optional hook that platforms can implement in order
        to perform accounting before entering a low power state.  This
        typically involves capturing a timestamp.
      
      * void plat_psci_stat_accounting_stop(const psci_power_state_t *state_info)
      
        This is an optional hook that platforms can implement in order
        to perform accounting after exiting from a low power state.  This
        typically involves capturing a timestamp.
      
      * u_register_t plat_psci_stat_get_residency(unsigned int lvl,
      	const psci_power_state_t *state_info,
      	unsigned int last_cpu_index)
      
        This is an optional hook that platforms can implement in order
        to calculate the PSCI stat residency.
      
      If any of these interfaces are overridden by the platform, it is
      recommended that all of them are.
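
      A minimal sketch of a platform providing these hooks, assuming the
      generic counter is used as the timestamp source (the storage and the
      returned unit are illustrative):

        static u_register_t stat_ts[PLATFORM_CORE_COUNT];

        void plat_psci_stat_accounting_start(
                const psci_power_state_t *state_info)
        {
                stat_ts[plat_my_core_pos()] = read_cntpct_el0();
        }

        void plat_psci_stat_accounting_stop(
                const psci_power_state_t *state_info)
        {
                stat_ts[plat_my_core_pos()] =
                        read_cntpct_el0() - stat_ts[plat_my_core_pos()];
        }

        u_register_t plat_psci_stat_get_residency(unsigned int lvl,
                const psci_power_state_t *state_info,
                unsigned int last_cpu_index)
        {
                /* Residency in counter ticks, for illustration only. */
                return stat_ts[last_cpu_index];
        }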
      
      By default `ENABLE_PSCI_STAT` is disabled.  If `ENABLE_PSCI_STAT`
      is set but `ENABLE_PMF` is not set then an alternative PSCI stat
      collection backend must be provided.  If both are set, then default
      weak definitions of these functions are provided, using PMF to
      calculate the residency.
      
      NOTE: Previously, platforms did not have to explicitly set
      `ENABLE_PMF` since this was automatically done by the top-level
      Makefile.
      
      Change-Id: I17b47804dea68c77bc284df15ee1ccd66bc4b79b
      Signed-off-by: dp-arm <dimitris.papastamos@arm.com>
  10. 06 Feb, 2017 2 commits
    • Replace some memset call by zeromem · 32f0d3c6
      Douglas Raillard authored
      
      
      Replace all uses of memset by zeromem when zeroing moderately-sized
      structures, by applying the following transformation:
      memset(x, 0, sizeof(x)) => zeromem(x, sizeof(x))
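
      For example (the structure name is illustrative):

        /* before */
        memset(&bl31_params, 0, sizeof(bl31_params));
        /* after */
        zeromem(&bl31_params, sizeof(bl31_params));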
      
      As the Trusted Firmware is compiled with -ffreestanding, the compiler
      cannot use __builtin_memset and is forced to generate calls to the
      slow memset implementation. zeromem is a near drop-in replacement for
      this use case, with a more efficient implementation on both AArch32
      and AArch64.
      
      Change-Id: Ia7f3a90e888b96d056881be09f0b4d65b41aa79e
      Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
    • Introduce unified API to zero memory · 308d359b
      Douglas Raillard authored
      
      
      Introduce zeromem_dczva function on AArch64 that can handle unaligned
      addresses and make use of DC ZVA instruction to zero a whole block at a
      time. This zeroing takes place directly in the cache to speed it up
      without doing external memory access.
      
      Remove the zeromem16 function on AArch64 and replace it with an alias to
      zeromem. This zeromem16 function is now deprecated.
      
      Remove the 16-byte alignment constraint on __BSS_START__ in
      firmware-design.md as it is no longer mandatory (it was there to comply
      with zeromem16's requirements).

      Change the 16-byte alignment constraints in SP_MIN's linker script to
      an 8-byte alignment constraint, as the AArch32 zeromem implementation
      is now more efficient on 8-byte aligned addresses.
      
      Introduce zero_normalmem and zeromem helpers in platform agnostic header
      that are implemented this way:
      * AArch32:
      	* zero_normalmem: zero using usual data access
      	* zeromem: alias for zero_normalmem
      * AArch64:
      	* zero_normalmem: zero normal memory  using DC ZVA instruction
      	                  (needs MMU enabled)
      	* zeromem: zero using usual data access
      
      Usage guidelines: in most cases, zero_normalmem should be preferred.
      
      There are 2 scenarios where zeromem (or memset) must be used instead:
      * Code that must run with MMU disabled (which means all memory is
        considered device memory for data accesses).
      * Code that fills device memory with null bytes.
      
      Optionally, the following rule can be applied if performance is
      important:
      * Code zeroing small areas (few bytes) that are not secrets should use
        memset to take advantage of compiler optimizations.
      
        Note: Code zeroing security-related critical information should use
        zero_normalmem/zeromem instead of memset, to avoid the zeroing being
        removed in some cases by compiler optimizations or by misbehaving
        versions of GCC.
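
      Applying these guidelines, the three cases could look as follows (all
      names below are placeholders, not real Trusted Firmware symbols):

        /* Secret in normal memory, MMU enabled: use zero_normalmem. */
        zero_normalmem(session_key, sizeof(session_key));

        /* Device memory, or code running with the MMU disabled: zeromem. */
        zeromem((void *)DEV_BUF_BASE, DEV_BUF_SIZE);

        /* Small, non-secret area where performance matters: memset. */
        memset(&small_cfg, 0, sizeof(small_cfg));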
      
      Fixes ARM-software/tf-issues#408
      
      Change-Id: Iafd9663fc1070413c3e1904e54091cf60effaa82
      Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
  11. 30 Jan, 2017 1 commit
    • Report errata workaround status to console · 10bcd761
      Jeenu Viswambharan authored
      
      
      The errata reporting policy is as follows:
      
        - If an errata workaround is enabled:
      
          - If it applies (i.e. the CPU is affected by the erratum), an INFO
            message is printed, confirming that the errata workaround has
            been applied.
      
          - If it does not apply, a VERBOSE message is printed, confirming
            that the errata workaround has been skipped.
      
        - If an errata workaround is not enabled, but would have applied had
          it been, a WARN message is printed, alerting that the errata
          workaround is missing.
      
      The CPU errata messages are printed by both BL1 (primary CPU only) and
      runtime firmware on debug builds, once for each CPU/errata combination.
      
      Relevant output from Juno r1 console when ARM Trusted Firmware is built
      with PLAT=juno LOG_LEVEL=50 DEBUG=1:
      
        VERBOSE: BL1: cortex_a57: errata workaround for 806969 was not applied
        VERBOSE: BL1: cortex_a57: errata workaround for 813420 was not applied
        INFO:    BL1: cortex_a57: errata workaround for disable_ldnp_overread was applied
        WARNING: BL1: cortex_a57: errata workaround for 826974 was missing!
        WARNING: BL1: cortex_a57: errata workaround for 826977 was missing!
        WARNING: BL1: cortex_a57: errata workaround for 828024 was missing!
        WARNING: BL1: cortex_a57: errata workaround for 829520 was missing!
        WARNING: BL1: cortex_a57: errata workaround for 833471 was missing!
        ...
        VERBOSE: BL31: cortex_a57: errata workaround for 806969 was not applied
        VERBOSE: BL31: cortex_a57: errata workaround for 813420 was not applied
        INFO:    BL31: cortex_a57: errata workaround for disable_ldnp_overread was applied
        WARNING: BL31: cortex_a57: errata workaround for 826974 was missing!
        WARNING: BL31: cortex_a57: errata workaround for 826977 was missing!
        WARNING: BL31: cortex_a57: errata workaround for 828024 was missing!
        WARNING: BL31: cortex_a57: errata workaround for 829520 was missing!
        WARNING: BL31: cortex_a57: errata workaround for 833471 was missing!
        ...
        VERBOSE: BL31: cortex_a53: errata workaround for 826319 was not applied
        INFO:    BL31: cortex_a53: errata workaround for disable_non_temporal_hint was applied
      
      Also update documentation.
      
      Change-Id: Iccf059d3348adb876ca121cdf5207bdbbacf2aba
      Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
  12. 24 Jan, 2017 2 commits
  13. 23 Jan, 2017 1 commit
    • Use #ifdef for IMAGE_BL* instead of #if · 3d8256b2
      Masahiro Yamada authored
      
      
      One nasty part of ATF is that some of the boolean macros are always
      defined as 1 or 0, while the rest are only defined under certain
      conditions.

      For the former group, "#if FOO" or "#if !FOO" must be used, because
      "#ifdef FOO" is always true.  (Options passed by $(call add_define,)
      are the cases.)

      For the latter, "#ifdef FOO" or "#ifndef FOO" should be used, because
      checking the value of an undefined macro is strange.
      
      Here, IMAGE_BL* is handled by make_helpers/build_macro.mk like
      follows:
      
        $(eval IMAGE := IMAGE_BL$(call uppercase,$(3)))
      
        $(OBJ): $(2)
                @echo "  CC      $$<"
                $$(Q)$$(CC) $$(TF_CFLAGS) $$(CFLAGS) -D$(IMAGE) -c $$< -o $$@
      
      This means that IMAGE_BL* is defined when building the corresponding
      image, but *undefined* for the other images.

      So IMAGE_BL* belongs to the latter group, where #ifdef or #ifndef
      should be used.
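
      In other words (sketch):

        /* Always defined as 0 or 1 via $(call add_define,): test the value */
        #if USE_COHERENT_MEM
        /* ... */
        #endif

        /* Only defined when building that image: test for definedness */
        #ifdef IMAGE_BL31
        /* ... */
        #endif
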
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
  14. 17 Jan, 2017 1 commit
    • Correct system include order · 55c70cb7
      David Cunado authored
      NOTE - this patch does not address all occurrences of system
      includes not being in alphabetical order, just this one case.
      
      Change-Id: I3cd23702d69b1f60a4a9dd7fd4ae27418f15b7a3
  15. 16 Jan, 2017 3 commits
  16. 15 Dec, 2016 1 commit
    • Add provision to extend CPU operations at more levels · 5dd9dbb5
      Jeenu Viswambharan authored
      
      
      Various CPU drivers in ARM Trusted Firmware register functions to handle
      power-down operations. At present, separate functions are registered to
      power down individual cores and clusters.
      
      This scheme operates on the basis of core and cluster, and doesn't cater
      for extending the hierarchy for power-down operations. For example,
      future CPUs might support multiple threads which might need powering
      down individually.
      
      This patch therefore reworks the CPU operations framework to allow
      power-down handlers to be registered on a per-level basis. Henceforth:
      
        - Generic code invokes CPU power down operations by the level
          required.
      
        - CPU drivers explicitly mention CPU_NO_RESET_FUNC when the CPU has no
          reset function.
      
        - CPU drivers register power down handlers as a list: a mandatory
          handler for level 0, and optional handlers for higher levels.
      
      All existing CPU drivers are adapted to the new CPU operations framework
      without needing any functional changes within.
      
      Also update firmware design guide.
      
      Change-Id: I1826842d37a9e60a9e85fdcee7b4b8f6bc1ad043
      Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
  17. 14 Dec, 2016 2 commits
  18. 13 Dec, 2016 4 commits
    • Forbid block descriptors in initial xlat table levels · 2240f45b
      Antonio Nino Diaz authored
      
      
      In AArch64, depending on the granularity of the translation tables,
      level 0 and/or level 1 of the translation tables may not support block
      descriptors, only table descriptors.
      
      This patch introduces a check to make sure that, even if theoretically
      it could be possible to create a block descriptor to map a big memory
      region, a new subtable will be created to describe its mapping.
      
      Change-Id: Ieb9c302206bfa33fbaf0cdc6a5a82516d32ae2a7
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
    • Add PLAT_xxx_ADDR_SPACE_SIZE definitions · 0029624f
      Antonio Nino Diaz authored
      
      
      Added the definitions `PLAT_PHY_ADDR_SPACE_SIZE` and
      `PLAT_VIRT_ADDR_SPACE_SIZE` which specify respectively the physical
      and virtual address space size a platform can use.
      
      `ADDR_SPACE_SIZE` is now deprecated. To maintain compatibility, if
      either of the new definitions isn't present, the value of
      `ADDR_SPACE_SIZE` will be used instead.
      
      For AArch64, the ID_AA64MMFR0_EL1 register is checked to calculate the
      maximum PA supported by the hardware and to verify that the previously
      mentioned definitions are valid. For AArch32, a 40-bit physical
      address space is assumed.
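
      A platform would then provide values along these lines in its
      platform_def.h (the sizes are an example only):

        /* Example: 4 GB physical and virtual address spaces. */
        #define PLAT_PHY_ADDR_SPACE_SIZE    (1ULL << 32)
        #define PLAT_VIRT_ADDR_SPACE_SIZE   (1ULL << 32)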
      
      Added asserts to check for overflows.
      
      Porting guide updated.
      
      Change-Id: Ie8ce1da5967993f0c94dbd4eb9841fc03d5ef8d6
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
    • Assert correct granularity when mapping a PA · d3d6c6e3
      Antonio Nino Diaz authored
      
      
      Each translation table level entry can only map a given virtual
      address onto physical addresses of the same granularity. For example,
      with the current configuration, a level 2 entry maps blocks of 2 MB,
      so the physical address must be aligned to 2 MB. If the address is not
      aligned, the MMU will just ignore the lower bits.
      
      This patch adds an assertion to make sure that physical addresses are
      always aligned to the correct boundary.
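
      For instance, with a 4 KB granule a level 2 block maps 2 MB, so a
      check along these lines (illustrative, not the exact code added)
      catches a misaligned physical address:

        #include <assert.h>
        #include <stdint.h>

        /* Level 2 block size with a 4 KB granule is 2 MB. */
        #define L2_BLOCK_SIZE   (1ULL << 21)

        static void check_level2_pa(uint64_t base_pa)
        {
                /* The PA mapped by a level 2 block must be 2 MB aligned. */
                assert((base_pa & (L2_BLOCK_SIZE - 1ULL)) == 0ULL);
        }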
      
      Change-Id: I0ab43df71829d45cdbe323301b3053e08ca99c2c
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
    • stdlib: Fix signedness issue in memcmp() · afc03aeb
      dp-arm authored
      
      
      There is no guarantee on the signedness of char.  It can be either
      signed or unsigned.  On ARM it is unsigned and hence this memcmp()
      implementation works as intended.
      
      On other machines, char can be signed (x86 for example).  In that case
      (and assuming a 2's complement implementation), interpreting a
      bit-pattern of 0xFF as signed char can yield -1.  If *s1 is 0 and *s2
      is 255 then the difference *s1 - *s2 should be negative.  The C
      integer promotion rules guarantee that the unsigned chars will be
      converted to int before the operation takes place.  The current
      implementation will return a positive value (0 - (-1)) instead, which
      is wrong.
      
      Fix it by changing the signedness to unsigned to avoid surprises for
      anyone using this code on non-ARM systems.
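
      A memcmp() written in terms of unsigned char, as described, looks like
      the sketch below (not necessarily byte-for-byte the patch):

        #include <stddef.h>

        int memcmp(const void *s1, const void *s2, size_t len)
        {
                const unsigned char *s = s1;
                const unsigned char *d = s2;

                while (len-- != 0U) {
                        if (*s != *d) {
                                /* Promoted to int: e.g. 0 - 255 == -255. */
                                return *s - *d;
                        }
                        s++;
                        d++;
                }

                return 0;
        }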
      
      Change-Id: Ie222fcaa7c0c4272d7a521a6f2f51995fd5130cc
      Signed-off-by: dp-arm <dimitris.papastamos@arm.com>
  19. 12 Dec, 2016 1 commit
    • AArch32: Fix the stack alignment issue · 9f3ee61c
      Soby Mathew authored
      
      
      The AArch32 Procedure Call Standard mandates that the stack must be
      aligned to an 8-byte boundary at external interfaces. This patch makes
      the required changes.

      This problem was detected when a crash was encountered in
      `psci_print_power_domain_map()` while printing 64-bit values. Aligning
      the stack to an 8-byte boundary resolved the problem.
      
      Fixes ARM-Software/tf-issues#437
      
      Change-Id: I517bd8203601bb88e9311bd36d477fb7b3efb292
      Signed-off-by: Soby Mathew <soby.mathew@arm.com>
  20. 05 Dec, 2016 1 commit
    • Define and use no_ret macro where no return is expected · a806dad5
      Jeenu Viswambharan authored
      
      
      There are many instances in ARM Trusted Firmware where control is
      transferred to functions from which return isn't expected. Such jumps
      are made using the 'bl' instruction to provide the callee with the
      location from which it was jumped to. Additionally, debuggers infer the
      caller by examining where the 'lr' register points. If a 'bl' of the
      nature described above falls at the end of an assembly function, 'lr'
      will be left pointing to a location outside of the function range. This
      misleads the debugger's back trace.
      
      This patch defines a 'no_ret' macro to be used when jumping to
      functions from which return isn't expected. The macro ensures that the
      'bl' instruction is used for the jump and, for debug builds, also
      places a 'nop' instruction immediately thereafter (unless instructed
      otherwise) so as to leave 'lr' pointing within the function range.
      
      Change-Id: Ib34c69fc09197cfd57bc06e147cc8252910e01b0
      Co-authored-by: Douglas Raillard <douglas.raillard@arm.com>
      Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
  21. 01 Dec, 2016 1 commit
    • Reset EL2 and EL3 configurable controls · 939f66d6
      David Cunado authored
      
      
      This patch resets EL2 and EL3 registers that have architecturally
      UNKNOWN values on reset and that also provide EL2/EL3 configuration
      and trap controls.
      
      Specifically, the EL2 physical timer is disabled to prevent timer
      interrupts into EL2 - CNTHP_CTL_EL2 and CNTHP_CTL for AArch64 and
      AArch32, respectively.
      
      Additionally, for AArch64, HSTR_EL2 is reset to avoid unexpected traps of
      non-secure access to certain system registers at EL1 or lower.
      
      For AArch32, the patch also reverts the reset to SDCR which was
      incorrectly added in a previous change.
      
      Change-Id: If00eaa23afa7dd36a922265194ccd6223187414f
      Signed-off-by: David Cunado <david.cunado@arm.com>
  22. 21 Nov, 2016 1 commit
    • Fix normal memory bakery lock implementation · 95c12559
      Soby Mathew authored
      
      
      This patch fixes an issue in the normal memory bakery lock
      implementation. When the lock status is asserted, the assertion could
      fail: the previous update to the lock status, made by the owning CPU
      while not participating in cache coherency, could leave stale data in
      the cache because cache maintenance operations do not propagate to all
      the caches. This patch fixes the issue by performing an extra read
      cache maintenance operation prior to the assertion.
      
      Fixes ARM-software/tf-issues#402
      
      Change-Id: I0f38a7c52476a4f58e17ebe0141d256d198be88d
      Signed-off-by: Soby Mathew <soby.mathew@arm.com>
  23. 09 Nov, 2016 1 commit
    • Reset debug registers MDCR-EL3/SDCR and MDCR_EL2/HDCR · 495f3d3c
      David Cunado authored
      
      
      In order to avoid unexpected traps into EL3/MON mode, this patch
      resets the debug registers, MDCR_EL3 and MDCR_EL2 for AArch64,
      and SDCR and HDCR for AArch32.
      
      MDCR_EL3/SDCR is zeroed when EL3/MON mode is entered, at the
      start of BL1 and BL31/SP_MIN.

      For MDCR_EL2/HDCR, this patch zeroes the bits that have
      architecturally UNKNOWN values on reset. This is done when
      exiting from EL3/MON mode, but only on platforms that support
      EL2/HYP mode yet choose to exit to EL1/SVC mode.
      
      Fixes ARM-software/tf-issues#430
      
      Change-Id: Idb992232163c072faa08892251b5626ae4c3a5b6
      Signed-off-by: David Cunado <david.cunado@arm.com>
  24. 14 Oct, 2016 1 commit
    • Unify SCTLR initialization for AArch32 normal world · b7b0787d
      Soby Mathew authored
      
      
      The values of the CP15BEN, nTWI & nTWE bits in SCTLR_EL1 are
      architecturally UNKNOWN if EL3 is AArch64, whereas they reset to 1 if
      EL3 is AArch32. This might be a compatibility break for legacy AArch32
      normal world software if these bits are not set to 1 when EL3 is
      AArch64. This patch enables the CP15BEN, nTWI and nTWE bits in
      SCTLR_EL1 if the lower non-secure EL is AArch32. This unifies the
      SCTLR settings for the lower non-secure EL in AArch32 mode for both
      AArch64 and AArch32 builds of Trusted Firmware.
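
      Conceptually, the change amounts to setting the following bits when
      preparing the SCTLR value for an AArch32 lower EL (bit positions per
      the architecture; the surrounding code and names are illustrative):

        /* CP15 barrier enable, and no trapping of WFI/WFE. */
        #define SCTLR_CP15BEN_BIT       (1U << 5)
        #define SCTLR_NTWI_BIT          (1U << 16)
        #define SCTLR_NTWE_BIT          (1U << 18)

        sctlr_val |= SCTLR_CP15BEN_BIT | SCTLR_NTWI_BIT | SCTLR_NTWE_BIT;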
      
      Fixes ARM-software/tf-issues#428
      
      Change-Id: I3152d1580e4869c0ea745c5bd9da765f9c254947
      Signed-off-by: Soby Mathew <soby.mathew@arm.com>