1. 12 May, 2017 2 commits
  2. 04 May, 2017 1 commit
    • Introduce ARM SiP service to switch execution state · b10d4499
      Jeenu Viswambharan authored
      
      In AArch64, privileged exception levels control the execution state
      (a.k.a. register width) of the immediate lower Exception Level; i.e.
      whether the lower exception level executes in AArch64 or AArch32 state.
      For an exception level to have its execution state changed at run time,
      it must request the change by raising a synchronous exception to the
      higher exception level.
      
      This patch implements and adds such a provision to the ARM SiP service,
      by which an immediate lower exception level can request to switch its
      execution state. The execution state is switched if the request is:
      
        - raised from non-secure world;
      
        - raised on the primary CPU, before any secondaries are brought online
          with CPU_ON PSCI call;
      
        - raised from an exception level immediately below EL3: EL2, if
          implemented; otherwise NS EL1.
      
      If successful, the SMC doesn't return to the caller, but to the entry
      point supplied with the call. Otherwise, the caller will observe the SMC
      returning with STATE_SW_E_DENIED code. If ARM Trusted Firmware is built
      for AArch32, the feature is not supported, and the call will always
      fail.
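      
      As an illustration, a caller at the exception level below EL3 might
      request the switch roughly as sketched below. This is a hedged sketch
      only: the SMC function ID value, the argument layout and the helper
      name are placeholders for illustration, not the interface defined by
      this patch (see the ARM SiP service documentation for the real one).
      
        #include <stdint.h>
        
        /* Hypothetical NS-EL2/EL1 caller; FID value and arguments are placeholders. */
        #define ARM_SIP_EXE_STATE_SWITCH_FID    0x82000020U     /* placeholder */
        
        static inline uint64_t sip_exe_state_switch(uint64_t entry_point,
                                                    uint64_t context)
        {
                register uint64_t x0 __asm__("x0") = ARM_SIP_EXE_STATE_SWITCH_FID;
                register uint64_t x1 __asm__("x1") = entry_point; /* new entry point */
                register uint64_t x2 __asm__("x2") = context;     /* opaque argument */
        
                __asm__ volatile("smc #0"
                                 : "+r"(x0), "+r"(x1), "+r"(x2) :: "x3", "memory");
        
                /* Only reached on failure, e.g. STATE_SW_E_DENIED in x0. */
                return x0;
        }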
      
      For the ARM SiP service:
      
        - Add SMC function IDs for both AArch32 and AArch64;
        - Increment the SiP service minor version to 2;
        - Adjust the number of supported SiP service calls.
      
      Add documentation for ARM SiP service.
      
      Fixes ARM-software/tf-issues#436
      
      Change-Id: I4347f2d6232e69fbfbe333b340fcd0caed0a4cea
      Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
  3. 03 May, 2017 1 commit
  4. 02 May, 2017 3 commits
    • Add macro to check whether the CPU implements an EL · f4c8aa90
      Jeenu Viswambharan authored
      
      Replace all existing instances of such checks with the new macro.
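      
      A hedged sketch of the idea (not necessarily the exact macro added by
      this patch): ID_AA64PFR0_EL1 carries one 4-bit field per exception
      level, and a non-zero field means that level is implemented. The
      read_id_aa64pfr0_el1() helper is assumed from arch_helpers.h.
      
        #include <arch_helpers.h>       /* read_id_aa64pfr0_el1() */
        
        /* EL0 lives in bits [3:0], EL1 in [7:4], EL2 in [11:8], EL3 in [15:12]. */
        static inline int el_is_implemented(unsigned int el)
        {
                return ((read_id_aa64pfr0_el1() >> (el * 4U)) & 0xfULL) != 0ULL;
        }
        
        /* e.g. choose the entry exception level for the next image: */
        /* unsigned int el = el_is_implemented(2U) ? 2U : 1U; */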
      
      Change-Id: I0eec39b9376475a1a9707a3115de9d36f88f8a2a
      Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
    • Fix execute-never permissions in xlat tables libs · a5640252
      Antonio Nino Diaz authored
      
      Translation regimes that only support one virtual address space (such
      as the ones for EL2 and EL3) can flag memory regions as execute-never
      by setting the XN bit to 1 in the Upper Attributes field of the
      translation table descriptors. Translation regimes that support two
      different virtual address spaces (such as the one shared by EL1 and
      EL0) use the PXN and UXN bits instead.
      
      The Trusted Firmware runs at EL3 and EL1, so it has to handle
      translation tables of both translation regimes, but the previous code
      handled both regimes the same way, as if both had only one VA range.
      
      When trying to mark a descriptor as execute-never, the code set the XN
      bit correctly in EL3, but it also set the XN bit in EL1. XN is at the
      same bit position as UXN, which means that EL0 was being prevented from
      executing code in this region, not EL1 as intended. The PXN bit,
      meanwhile, was never set. As a result, in AArch64 mode, the read-only
      data sections of BL2 were not protected from being executed.
      
      This patch adds support of translation regimes with two virtual address
      spaces to both versions of the translation tables library, fixing the
      execute-never permissions for translation tables in EL1.
      
      The library currently does not support initializing translation tables
      for EL0 software, therefore it does not set/unset the UXN bit. If EL1
      software needs to initialize translation tables for EL0 software, it
      should use a different library instead.
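      
      A minimal sketch of the distinction (bit positions are from the
      VMSAv8-64 stage 1 descriptor format; the macro and parameter names
      below are illustrative, not necessarily the library's):
      
        #include <stdint.h>
        
        /* Upper attributes: bit 53 is PXN, bit 54 is XN (single-VA-range
         * regimes such as EL3/EL2) or UXN (EL1&0 regime). */
        #define DESC_PXN        (1ULL << 53)
        #define DESC_XN_UXN     (1ULL << 54)
        
        static uint64_t set_execute_never(uint64_t desc, int regime_is_el1_el0)
        {
                if (regime_is_el1_el0)
                        desc |= DESC_PXN;       /* forbid execution at EL1 only */
                else
                        desc |= DESC_XN_UXN;    /* EL3/EL2: plain XN */
                return desc;
        }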
      
      Change-Id: If27588f9820ff42988851d90dc92801c8ecbe0c9
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
    • xlat lib: Don't set mmap_attr_t enum to be -1 · 7055e6fa
      Nishanth Menon authored
      
      -1 is not a defined mmap_attr_t value. Instead of using invalid enum
      values, we can either define an INVALID type or handle the condition
      specifically.
      
      Since the usage of mmap_region_attr is limited, it is easier to just
      handle the error condition specifically and return 0 or -1 depending on
      success or failure.
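      
      A hedged sketch of that shape (heavily simplified; the real lookup
      rules and prototype may differ): return a status code and hand the
      attributes back through a pointer, relying on the library's existing
      mmap_region_t and mmap_attr_t types.
      
        static int mmap_region_attr(const mmap_region_t *mm, uintptr_t base_va,
                                    size_t size, mmap_attr_t *attr)
        {
                for ( ; mm->size != 0U; mm++) {
                        if ((base_va >= mm->base_va) &&
                            ((base_va + size) <= (mm->base_va + mm->size))) {
                                *attr = mm->attr;
                                return 0;       /* success */
                        }
                }
                return -1;                      /* no region covers the range */
        }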
      
      Fixes: ARM-Software/tf-issues#473
      Fixes: 28fa2e9e ("xlat lib: Use mmap_attr_t type consistently")
      Signed-off-by: Nishanth Menon <nm@ti.com>
  5. 29 Apr, 2017 1 commit
    • Move defines in utils.h to utils_def.h to fix shared header compile issues · 53d9c9c8
      Scott Branden authored
      
      utils.h is included by various header files for the defines it
      provides. Some of those header files contain only defines, which allows
      them to be shared between host and target builds.
      
      Recently, types.h was included in utils.h, along with some function
      prototypes.
      
      Because of the inclusion of types.h, conflicts now arise when building
      the host tools with these header files. To solve this problem, move the
      defines to utils_def.h, have utils.h include it, and change the other
      header files to include only utils_def.h so they do not pick up the
      newly introduced types.h.
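      
      A purely illustrative sketch of the split (the contents shown here are
      hypothetical, not the actual headers):
      
        /* utils_def.h -- bare defines only, safe for host-tool builds */
        #define BIT(nr)         (1ULL << (nr))
        
        /* utils.h -- target-only: pulls in types and prototypes as well */
        #include <types.h>
        #include <utils_def.h>
        
        /* hypothetical prototype that host tools must not see */
        void dump_region(uintptr_t base, size_t size);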
      
      Fixes ARM-software/tf-issues#461
      Signed-off-by: Scott Branden <scott.branden@broadcom.com>
      
      Remove utils_def.h from utils.h
      
      This patch removes utils_def.h from utils.h as it is not required. It
      also makes a minor change to ensure the Juno platform compiles.
      
      Change-Id: I10cf1fb51e44a8fa6dcec02980354eb9ecc9fa29
  6. 20 Apr, 2017 4 commits
  7. 19 Apr, 2017 2 commits
    • Add `ENABLE_ASSERTIONS` build option · cc8b5632
      Antonio Nino Diaz authored
      
      Add the new build option `ENABLE_ASSERTIONS` that controls whether or
      not assert functions are compiled out. It defaults to 1 for debug builds
      and to 0 for release builds.
      
      Additionally, a follow-up patch will allow this build option to hide
      auxiliary code used for the checks done in an `assert()`. This code is
      currently under the DEBUG build flag.
      
      Assert messages are now only printed if LOG_LEVEL >= LOG_LEVEL_INFO,
      which is the default for debug builds.
      
      This patch also updates the User Guide.
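      
      A hedged sketch of how such a switch typically looks (not the exact
      definition used in the codebase):
      
        #if ENABLE_ASSERTIONS
        void __assert(const char *file, unsigned int line); /* prints and panics */
        # define assert(e)      ((e) ? (void)0 : __assert(__FILE__, __LINE__))
        #else
        # define assert(e)      ((void)0)       /* compiled out entirely */
        #endif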
      
      Change-Id: I1401530b56bab25561bb0f274529f1d12c5263bc
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
    • PSCI: Build option to enable D-Caches early in warmboot · bcc3c49c
      Soby Mathew authored
      
      This patch introduces a build option to enable the D-cache early on the
      CPU after warm boot. This is applicable to platforms which do not
      require interconnect programming to enable cache coherency (e.g. single
      cluster platforms). If this option is enabled, the warm boot path
      enables the D-cache immediately after enabling the MMU.
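      
      A hedged sketch of the effect (the option name is assumed here, and the
      real code sits in the warm boot path rather than in a standalone
      helper):
      
        #include <arch_helpers.h>
        
        static void warmboot_enable_dcache_early(void)
        {
        #if WARMBOOT_ENABLE_DCACHE_EARLY
                /* MMU is already on; coherency needs no interconnect setup,
                 * so the data cache can be enabled immediately. */
                write_sctlr_el3(read_sctlr_el3() | SCTLR_C_BIT);
                isb();
        #endif
        }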
      
      Fixes ARM-Software/tf-issues#456
      
      Change-Id: I44c8787d116d7217837ced3bcf0b1d3441c8d80e
      Signed-off-by: Soby Mathew <soby.mathew@arm.com>
  8. 31 Mar, 2017 3 commits
    • Add support for GCC stack protection · 51faada7
      Douglas Raillard authored
      
      Introduce new build option ENABLE_STACK_PROTECTOR. It enables
      compilation of all BL images with one of the GCC -fstack-protector-*
      options.
      
      A new platform function plat_get_stack_protector_canary() is introduced.
      It returns a value that is used to initialize the canary for stack
      corruption detection. Returning a random value will prevent an attacker
      from predicting the value and greatly increase the effectiveness of the
      protection.
      
      A message is printed at the ERROR level when a stack corruption is
      detected.
      
      To be effective, the global data must be stored at an address
      lower than the base of the stacks. Failure to do so would allow an
      attacker to overwrite the canary as part of an attack which would void
      the protection.
      
      The FVP implementation of plat_get_stack_protector_canary() is weak, as
      there is no real source of entropy on the FVP. It therefore relies on a
      timer's value, which could be predictable.
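      
      A hedged sketch of the two pieces involved (the entropy helper name is
      hypothetical, and the failure handler shown is illustrative rather than
      the exact one added by this patch):
      
        #include <debug.h>
        #include <platform.h>
        
        u_register_t plat_get_stack_protector_canary(void)
        {
                /* A real platform should use a hardware TRNG here;
                 * plat_read_trng() is a hypothetical helper. */
                return plat_read_trng();
        }
        
        void __stack_chk_fail(void)
        {
                ERROR("Stack corruption detected\n");
                panic();
        }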
      
      Change-Id: Icaaee96392733b721fa7c86a81d03660d3c1bc06
      Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
    • Remove dead loops in assert() in C and ASM · 1e09ff93
      Antonio Nino Diaz authored
      
      The desired behaviour is to call `plat_panic_handler()`, and to use
      `no_ret` to do so from ASM.
      
      Change-Id: I88b2feefa6e6c8f9bf057fd51ee0d2e9fb551e4f
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
    • Flush console where necessary · 0b32628e
      Antonio Nino Diaz authored
      
      Call console_flush() before execution either terminates or leaves an
      exception level.
      
      Fixes: ARM-software/tf-issues#123
      
      Change-Id: I64eeb92effb039f76937ce89f877b68e355588e3
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
  9. 28 Mar, 2017 1 commit
  10. 20 Mar, 2017 2 commits
    • Add workaround for ARM Cortex-A53 erratum 855873 · b75dc0e4
      Andre Przywara authored
      
      ARM erratum 855873 applies to all Cortex-A53 CPUs.
      The recommended workaround is to promote "data cache clean"
      instructions to "data cache clean and invalidate" instructions.
      For core revisions of r0p3 and later this can be done by setting a bit
      in the CPUACTLR_EL1 register, so that hardware takes care of the promotion.
      As CPUACTLR_EL1 is both IMPLEMENTATION DEFINED and can be trapped to
      EL3, we set the bit in firmware. We also dump this register upon
      crashing to provide more debug information.
      
      Enable the workaround for the Juno boards.
      
      Change-Id: I3840114291958a406574ab6c49b01a9d9847fec8
      Signed-off-by: Andre Przywara <andre.przywara@arm.com>
    • Replace ASM signed tests with unsigned · 355a5d03
      Douglas Raillard authored
      
      ge, lt, gt and le condition codes in assembly provide a signed test
      whereas hs, lo, hi and ls provide the unsigned counterpart. Signed tests
      should only be used when strictly necessary, as using them on logically
      unsigned values can lead to inverting the test for high enough values.
      All offsets, addresses and usually counters are actually unsigned
      values, and should be tested as such.
      
      Replace signed condition codes with their unsigned counterparts
      wherever the signed form was unnecessary, so that the full range of
      unsigned values can be used without the result being inverted for large
      operands.
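      
      A small C illustration of the inversion being guarded against (a hosted
      demo, not firmware code):
      
        #include <stdint.h>
        #include <stdio.h>
        
        int main(void)
        {
                uint64_t high_addr = 0xFFFFFFFFFFFF0000ULL; /* top of memory */
                uint64_t low_addr  = 0x0000000000001000ULL;
        
                /* Unsigned compare (b.hi/b.lo in assembly): prints 1. */
                printf("unsigned: high > low -> %d\n", high_addr > low_addr);
        
                /* Signed compare (b.gt/b.lt): the high address looks
                 * negative, so the result is inverted and this prints 0. */
                printf("signed:   high > low -> %d\n",
                       (int64_t)high_addr > (int64_t)low_addr);
                return 0;
        }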
      
      Change-Id: I58b7e98d03e3a4476dfb45230311f296d224980a
      Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
  11. 08 Mar, 2017 4 commits
    • Apply workaround for errata 813419 of Cortex-A57 · ccbec91c
      Antonio Nino Diaz authored
      
      TLBI instructions for EL3 won't have the desired effect under specific
      circumstances in Cortex-A57 r0p0. The workaround is to execute DSB and
      TLBI twice each time.
      
      Even though this workaround is only needed for r0p0, the current errata
      framework is not prepared to apply run-time workarounds, so the
      workaround is always applied if compiled in, regardless of the CPU or
      its revision.
      
      The workaround has been enabled for Juno.
      
      The `DSB` instruction used when initializing the translation tables has
      been changed to `DSB ISH` as an optimization and to be consistent with
      the barriers used for the workaround.
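      
      A hedged sketch of the pattern (helper names taken from arch_helpers.h;
      the build-option name is assumed):
      
        #include <arch_helpers.h>
        
        static inline void tlbi_all_el3_with_workaround(void)
        {
                tlbialle3();
                dsbish();
        #if ERRATA_A57_813419
                /* Cortex-A57 r0p0: repeat the sequence so the invalidation
                 * takes effect. */
                tlbialle3();
                dsbish();
        #endif
                isb();
        }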
      
      Change-Id: Ifc1d70b79cb5e0d87e90d88d376a59385667d338
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
    • Add dynamic region support to xlat tables lib v2 · 0b64f4ef
      Antonio Nino Diaz authored
      
      Added APIs to add regions to and remove regions from the translation
      tables dynamically while the MMU is enabled. Only static regions are
      allowed to overlap other static ones (for backwards compatibility).
      
      A new private attribute (MT_DYNAMIC / MT_STATIC) has been added to
      flag each region as such.
      
      The dynamic mapping functionality can be enabled or disabled when
      compiling by setting the build option PLAT_XLAT_TABLES_DYNAMIC to 1
      or 0. This can be done per-image.
      
      TLB maintenance code during dynamic table mapping and unmapping has
      also been added.
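      
      A hedged usage sketch (the addresses and sizes are hypothetical, and
      the prototypes may differ slightly from the library's):
      
        /* Map a device region at runtime, use it, then unmap it. Requires
         * the image to be built with PLAT_XLAT_TABLES_DYNAMIC=1. */
        int map_and_use_device(void)
        {
                int rc;
        
                rc = mmap_add_dynamic_region(DEV_BASE_PA, DEV_BASE_VA, DEV_SIZE,
                                             MT_DEVICE | MT_RW | MT_SECURE);
                if (rc != 0)
                        return rc;
        
                /* ... access the device through DEV_BASE_VA ... */
        
                return mmap_remove_dynamic_region(DEV_BASE_VA, DEV_SIZE);
        }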
      
      Fixes ARM-software/tf-issues#310
      
      Change-Id: I19e8992005c4292297a382824394490c5387aa3b
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
    • Improve debug output of the translation tables · f10644c5
      Antonio Nino Diaz authored
      
      The printed output has been improved in two ways:
      
      - Whenever multiple invalid descriptors are found, only the first one
        is printed, and a line is added to inform about how many descriptors
        have been omitted.
      
      - At the beginning of each line there is an indication of the table
        level the entry belongs to. Example of the new output:
        `[LV3] VA:0x1000 PA:0x1000 size:0x1000 MEM-RO-S-EXEC`
      
      Change-Id: Ib6f1cd8dbd449452f09258f4108241eb11f8d445
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
    • Add version 2 of xlat tables library · 7bb01fb2
      Antonio Nino Diaz authored
      
      The folder lib/xlat_tables_v2 has been created to store a new version
      of the translation tables library for further modifications in patches
      to follow. At the moment it only contains a basic implementation that
      supports static regions.
      
      This library allows different translation tables to be modified by
      using different 'contexts'. For now, the implementation defaults to
      the translation tables used by the current image, but it is possible
      to modify other tables than the ones in use.
      
      Added a new API to print debug information for the current state of
      the translation tables, rather than printing the information while
      the tables are being created. This allows subsequent debug printing
      of the xlat tables after they have been changed, which will be useful
      when dynamic regions are implemented in a patch to follow.
      
      The common definitions stored in `xlat_tables.h` header have been moved
      to a new file common to both versions, `xlat_tables_defs.h`.
      
      All headers related to the translation tables library have been moved
      to the subfolder `xlat_tables`.
      
      Change-Id: Ia55962c33e0b781831d43a548e505206dffc5ea9
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
  12. 02 Mar, 2017 3 commits
    • AArch32: Fix normal memory bakery compilation · 61531a27
      Soby Mathew authored
      
      This patch fixes a compilation issue with bakery locks when the PSCI
      library is compiled with the USE_COHERENT_MEM = 0 build option.
      
      Change-Id: Ic7f6cf9f2bb37f8a946eafbee9cbc3bf0dc7e900
      Signed-off-by: Soby Mathew <soby.mathew@arm.com>
    • PSCI: Optimize call paths if all participants are cache-coherent · b0408e87
      Jeenu Viswambharan authored
      
      The current PSCI implementation can apply certain optimizations upon the
      assumption that all PSCI participants are cache-coherent.
      
        - Skip performing cache maintenance during power-up.
      
        - Skip performing cache maintenance during power-down:
      
          At present, on the power-down path, the CPU driver disables the
          caches and MMU, and performs cache maintenance in preparation for
          powering down the CPU. This means that PSCI must perform additional
          cache maintenance on the extant stack for correct functioning.
      
          If all participating CPUs are cache-coherent, the CPU driver would
          neither disable the MMU nor perform cache maintenance. The CPU being
          powered down therefore remains cache-coherent throughout all PSCI
          call paths. This in turn means that PSCI cache maintenance
          operations are not required during power down.
      
        - Choose spin locks instead of bakery locks:
      
          The current PSCI implementation must synchronize both cache-coherent
          and non-cache-coherent participants. Mutual exclusion primitives are
          not guaranteed to function on non-coherent memory. For this reason,
          the current PSCI implementation had to resort to bakery locks.
      
          If all participants are cache-coherent, the implementation can
          enable the MMU and data caches early, and replace bakery locks with
          spin locks. Spin locks make use of architectural mutual exclusion
          primitives, and are lighter and faster.
      
      The optimizations are applied when HW_ASSISTED_COHERENCY build option is
      enabled, as it's expected that all PSCI participants are cache-coherent
      in those systems.
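      
      A hedged sketch of how the lock choice might be switched at build time
      (the psci_lock_* names and type are illustrative, not necessarily those
      used by the implementation):
      
        #if HW_ASSISTED_COHERENCY
        /* All participants are coherent: plain spin locks are sufficient. */
        typedef spinlock_t psci_lock_t;
        #define psci_lock_get(l)        spin_lock(l)
        #define psci_lock_release(l)    spin_unlock(l)
        #else
        /* Mixed coherency: fall back to bakery locks in normal memory. */
        typedef bakery_lock_t psci_lock_t;
        #define psci_lock_get(l)        bakery_lock_get(l)
        #define psci_lock_release(l)    bakery_lock_release(l)
        #endif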
      
      Change-Id: Iac51c3ed318ea7e2120f6b6a46fd2db2eae46ede
      Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
    • PSCI: Introduce cache and barrier wrappers · a10d3632
      Jeenu Viswambharan authored
      
      The PSCI implementation performs cache maintenance operations on its
      data structures to ensure their visibility to both cache-coherent and
      non-cache-coherent participants. These cache maintenance operations
      can be skipped if all PSCI participants are cache-coherent. When
      HW_ASSISTED_COHERENCY build option is enabled, we assume PSCI
      participants are cache-coherent.
      
      For usage abstraction, this patch introduces wrappers for PSCI cache
      maintenance and barrier operations used for state coordination: they are
      effectively NOPs when HW_ASSISTED_COHERENCY is enabled, but are
      applied otherwise.
      
      Also refactor local state usage and the associated cache operations to
      make the code clearer.
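      
      A hedged sketch of such wrappers (the psci_* names are illustrative;
      flush_dcache_range() and dsbish() are existing helpers):
      
        #if HW_ASSISTED_COHERENCY
        /* All PSCI participants are cache-coherent: no maintenance needed. */
        #define psci_flush_dcache_range(addr, size)
        #define psci_dsbish()
        #else
        #define psci_flush_dcache_range(addr, size) \
                        flush_dcache_range((addr), (size))
        #define psci_dsbish()           dsbish()
        #endif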
      
      Change-Id: I77f17a90cba41085b7188c1345fe5731c99fad87
      Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
  13. 28 Feb, 2017 1 commit
  14. 23 Feb, 2017 2 commits
  15. 22 Feb, 2017 1 commit
  16. 14 Feb, 2017 1 commit
    • Introduce locking primitives using CAS instruction · c877b414
      Jeenu Viswambharan authored
      
      The ARMv8.1 architecture extension has introduced support for far
      atomics, which include compare-and-swap. The Compare and Swap
      instruction is only available in AArch64.
      
      Introduce build options to choose the architecture version that ARM
      Trusted Firmware targets:
      
        - ARM_ARCH_MAJOR: selects the major version of target ARM
          Architecture. Default value is 8.
      
        - ARM_ARCH_MINOR: selects the minor version of target ARM
          Architecture. Default value is 0.
      
      When:
      
        (ARM_ARCH_MAJOR > 8) || ((ARM_ARCH_MAJOR == 8) && (ARM_ARCH_MINOR >= 1)),
      
      for AArch64, the Compare and Swap instruction is used to implement spin
      locks. Otherwise, the implementation falls back to using
      load-/store-exclusive instructions.
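      
      An illustrative C sketch of the selection. The actual Trusted Firmware
      spin lock is written in assembly; the GCC atomic builtins are used here
      only to show which instructions the compiler can map the two cases to.
      
        #include <stdint.h>
        
        typedef struct spinlock { volatile uint32_t lock; } spinlock_t;
        
        static void spin_lock(spinlock_t *l)
        {
        #if (ARM_ARCH_MAJOR > 8) || \
            ((ARM_ARCH_MAJOR == 8) && (ARM_ARCH_MINOR >= 1))
                /* With -march=armv8.1-a the compare-and-swap loop below can
                 * be emitted as a CAS instruction. */
                uint32_t expected;
                do {
                        expected = 0U;
                } while (!__atomic_compare_exchange_n(&l->lock, &expected, 1U,
                                0, __ATOMIC_ACQUIRE, __ATOMIC_RELAXED));
        #else
                /* Older targets: the compiler emits LDXR/STXR sequences. */
                while (__atomic_exchange_n(&l->lock, 1U, __ATOMIC_ACQUIRE) != 0U)
                        ;
        #endif
        }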
      
      Update user guide, and introduce a section in Firmware Design guide to
      summarize support for features introduced in ARMv8 Architecture
      Extensions.
      
      Change-Id: I73096a0039502f7aef9ec6ab3ae36680da033f16
      Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
  17. 13 Feb, 2017 2 commits
    • PSCI: Do stat accounting for retention/standby states · e5bbd16a
      dp-arm authored
      
      Perform stat accounting for retention/standby states also when
      requested at multiple power levels.
      
      Change-Id: I2c495ea7cdff8619bde323fb641cd84408eb5762
      Signed-off-by: dp-arm <dimitris.papastamos@arm.com>
    • PSCI: Decouple PSCI stat residency calculation from PMF · 04c1db1e
      dp-arm authored
      
      This patch introduces the following three platform interfaces:
      
      * void plat_psci_stat_accounting_start(const psci_power_state_t *state_info)
      
        This is an optional hook that platforms can implement in order
        to perform accounting before entering a low power state.  This
        typically involves capturing a timestamp.
      
      * void plat_psci_stat_accounting_stop(const psci_power_state_t *state_info)
      
        This is an optional hook that platforms can implement in order
        to perform accounting after exiting from a low power state.  This
        typically involves capturing a timestamp.
      
      * u_register_t plat_psci_stat_get_residency(unsigned int lvl,
      	const psci_power_state_t *state_info,
      	unsigned int last_cpu_index)
      
        This is an optional hook that platforms can implement in order
        to calculate the PSCI stat residency.
      
      If any of these interfaces are overridden by the platform, it is
      recommended that all of them are.
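      
      A hedged sketch of a platform implementation of all three hooks (the
      timestamp storage and the conversion to microseconds are illustrative;
      a real backend would be more careful about per-level accounting):
      
        static uint64_t stat_ts[PLATFORM_CORE_COUNT];   /* per-CPU timestamps */
        
        void plat_psci_stat_accounting_start(const psci_power_state_t *state_info)
        {
                stat_ts[plat_my_core_pos()] = read_cntpct_el0();
        }
        
        void plat_psci_stat_accounting_stop(const psci_power_state_t *state_info)
        {
                /* Store the tick delta spent in the low power state. */
                stat_ts[plat_my_core_pos()] =
                        read_cntpct_el0() - stat_ts[plat_my_core_pos()];
        }
        
        u_register_t plat_psci_stat_get_residency(unsigned int lvl,
                        const psci_power_state_t *state_info,
                        unsigned int last_cpu_index)
        {
                /* Convert ticks to microseconds (sketch). */
                return stat_ts[last_cpu_index] / (read_cntfrq_el0() / 1000000U);
        }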
      
      By default `ENABLE_PSCI_STAT` is disabled.  If `ENABLE_PSCI_STAT`
      is set but `ENABLE_PMF` is not set then an alternative PSCI stat
      collection backend must be provided.  If both are set, then default
      weak definitions of these functions are provided, using PMF to
      calculate the residency.
      
      NOTE: Previously, platforms did not have to explicitly set
      `ENABLE_PMF` since this was automatically done by the top-level
      Makefile.
      
      Change-Id: I17b47804dea68c77bc284df15ee1ccd66bc4b79b
      Signed-off-by: dp-arm <dimitris.papastamos@arm.com>
  18. 06 Feb, 2017 2 commits
    • Replace some memset call by zeromem · 32f0d3c6
      Douglas Raillard authored
      
      Replace all uses of memset by zeromem when zeroing moderately-sized
      structures, by applying the following transformation:
      memset(x, 0, sizeof(x)) => zeromem(x, sizeof(x))
      
      As the Trusted Firmware is compiled with -ffreestanding, the compiler
      is forbidden from using __builtin_memset and forced to generate calls
      to the slow memset implementation. zeromem is a near drop-in
      replacement for this use case, with a more efficient implementation on
      both AArch32 and AArch64.
      
      Change-Id: Ia7f3a90e888b96d056881be09f0b4d65b41aa79e
      Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
    • Introduce unified API to zero memory · 308d359b
      Douglas Raillard authored
      
      Introduce zeromem_dczva function on AArch64 that can handle unaligned
      addresses and make use of DC ZVA instruction to zero a whole block at a
      time. This zeroing takes place directly in the cache to speed it up
      without doing external memory access.
      
      Remove the zeromem16 function on AArch64 and replace it with an alias to
      zeromem. This zeromem16 function is now deprecated.
      
      Remove the 16-byte alignment constraint on __BSS_START__ in
      firmware-design.md as it is no longer mandatory (it used to exist to
      comply with zeromem16's requirements).
      
      Change the 16-byte alignment constraints in SP min's linker script to
      an 8-byte alignment constraint, as the AArch32 zeromem implementation
      is now more efficient on 8-byte aligned addresses.
      
      Introduce zero_normalmem and zeromem helpers in platform agnostic header
      that are implemented this way:
      * AArch32:
      	* zero_normalmem: zero using usual data access
      	* zeromem: alias for zero_normalmem
      * AArch64:
      	* zero_normalmem: zero normal memory  using DC ZVA instruction
      	                  (needs MMU enabled)
      	* zeromem: zero using usual data access
      
      Usage guidelines: in most cases, zero_normalmem should be preferred.
      
      There are 2 scenarios where zeromem (or memset) must be used instead:
      * Code that must run with MMU disabled (which means all memory is
        considered device memory for data accesses).
      * Code that fills device memory with null bytes.
      
      Optionally, the following rule can be applied if performance is
      important:
      * Code zeroing small areas (few bytes) that are not secrets should use
        memset to take advantage of compiler optimizations.
      
        Note: Code zeroing security-related critical information should use
        zero_normalmem/zeromem instead of memset to avoid removal by
        compilers' optimizations in some cases or misbehaving versions of GCC.
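      
      A short hedged illustration of the guideline (variable names are
      hypothetical; the declarations are assumed to live in utils.h):
      
        #include <stddef.h>
        #include <stdint.h>
        #include <utils.h>      /* zeromem() / zero_normalmem() */
        
        static uint8_t key_buf[32];     /* secret, in normal memory */
        
        void clear_buffers(void *dev_buf, size_t dev_buf_size)
        {
                /* MMU on, normal memory: fast DC ZVA path on AArch64 and no
                 * risk of the zeroing being optimized away. */
                zero_normalmem(key_buf, sizeof(key_buf));
        
                /* MMU off or device memory: plain data accesses. */
                zeromem(dev_buf, dev_buf_size);
        }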
      
      Fixes ARM-software/tf-issues#408
      
      Change-Id: Iafd9663fc1070413c3e1904e54091cf60effaa82
      Signed-off-by: Douglas Raillard <douglas.raillard@arm.com>
  19. 30 Jan, 2017 1 commit
    • Report errata workaround status to console · 10bcd761
      Jeenu Viswambharan authored
      
      The errata reporting policy is as follows:
      
        - If an errata workaround is enabled:
      
          - If it applies (i.e. the CPU is affected by the errata), an INFO
            message is printed, confirming that the errata workaround has been
            applied.
      
          - If it does not apply, a VERBOSE message is printed, confirming
            that the errata workaround has been skipped.
      
        - If an errata workaround is not enabled, but would have applied had
          it been, a WARN message is printed, alerting that the errata
          workaround is missing.
      
      The CPU errata messages are printed by both BL1 (primary CPU only) and
      runtime firmware on debug builds, once for each CPU/errata combination.
      
      Relevant output from Juno r1 console when ARM Trusted Firmware is built
      with PLAT=juno LOG_LEVEL=50 DEBUG=1:
      
        VERBOSE: BL1: cortex_a57: errata workaround for 806969 was not applied
        VERBOSE: BL1: cortex_a57: errata workaround for 813420 was not applied
        INFO:    BL1: cortex_a57: errata workaround for disable_ldnp_overread was applied
        WARNING: BL1: cortex_a57: errata workaround for 826974 was missing!
        WARNING: BL1: cortex_a57: errata workaround for 826977 was missing!
        WARNING: BL1: cortex_a57: errata workaround for 828024 was missing!
        WARNING: BL1: cortex_a57: errata workaround for 829520 was missing!
        WARNING: BL1: cortex_a57: errata workaround for 833471 was missing!
        ...
        VERBOSE: BL31: cortex_a57: errata workaround for 806969 was not applied
        VERBOSE: BL31: cortex_a57: errata workaround for 813420 was not applied
        INFO:    BL31: cortex_a57: errata workaround for disable_ldnp_overread was applied
        WARNING: BL31: cortex_a57: errata workaround for 826974 was missing!
        WARNING: BL31: cortex_a57: errata workaround for 826977 was missing!
        WARNING: BL31: cortex_a57: errata workaround for 828024 was missing!
        WARNING: BL31: cortex_a57: errata workaround for 829520 was missing!
        WARNING: BL31: cortex_a57: errata workaround for 833471 was missing!
        ...
        VERBOSE: BL31: cortex_a53: errata workaround for 826319 was not applied
        INFO:    BL31: cortex_a53: errata workaround for disable_non_temporal_hint was applied
      
      Also update documentation.
      
      Change-Id: Iccf059d3348adb876ca121cdf5207bdbbacf2aba
      Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
  20. 24 Jan, 2017 2 commits
  21. 23 Jan, 2017 1 commit
    • Use #ifdef for IMAGE_BL* instead of #if · 3d8256b2
      Masahiro Yamada authored
      
      One nasty part of ATF is that some boolean macros are always defined as
      1 or 0, while the rest are only defined under certain conditions.
      
      For the former group, "#if FOO" or "#if !FOO" must be used because
      "#ifdef FOO" is always true.  (Options passed by $(call add_define,)
      fall into this group.)
      
      For the latter, "#ifdef FOO" or "#ifndef FOO" should be used because
      checking the value of an undefined macro is strange.
      
      Here, IMAGE_BL* is handled by make_helpers/build_macro.mk like
      follows:
      
        $(eval IMAGE := IMAGE_BL$(call uppercase,$(3)))
      
        $(OBJ): $(2)
                @echo "  CC      $$<"
                $$(Q)$$(CC) $$(TF_CFLAGS) $$(CFLAGS) -D$(IMAGE) -c $$< -o $$@
      
      This means that IMAGE_BL* is defined when building the corresponding
      image, but is *undefined* for the other images.
      
      So, IMAGE_BL* belongs to the latter group where we should use #ifdef
      or #ifndef.
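      
      For example (illustrative snippet; the function name is hypothetical):
      
        /* Correct for conditionally-defined macros such as IMAGE_BL31: */
        #ifdef IMAGE_BL31
        register_bl31_only_feature();
        #endif
        
        /* "#if IMAGE_BL31" would silently evaluate an undefined macro as 0
         * in the other images -- the "strange" case described above, and one
         * that -Wundef flags. */
      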
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>