1. 07 Aug, 2018 1 commit
    • xlat v2: Flush xlat tables after being modified · 3e318e40
      Antonio Nino Diaz authored
      During cold boot, the initial translation tables are created with data
      caches disabled, so all modifications go directly to memory. After the
      MMU and data cache are enabled, any modification to the tables goes to
      the data cache and may eventually get flushed to memory.
      
      If CPU0 modifies the tables while CPU1 is off, CPU0 will have the
      modified tables in its data cache. When CPU1 is powered on, the MMU is
      enabled, then it enables coherency, and then it enables the data cache.
      Until this is done, CPU1 isn't coherent with the rest of the system, and
      the translation tables it sees may be outdated if CPU0 still has
      modified entries in its data cache.
      
      This can be a problem in some cases. For example, the warm boot code
      uses only the tables mapped during cold boot, which don't normally
      change. However, if they are modified (e.g. a read-only page is made
      read-write, or an execute-never page is made executable), the CPU will
      see the old attributes and crash when it tries to access the page.
      
      This doesn't happen in systems with HW_ASSISTED_COHERENCY or
      WARMBOOT_ENABLE_DCACHE_EARLY. In these systems, the data cache is
      enabled at the same time as the MMU. As soon as this happens, the CPU is
      in coherency.
      
      A fix was previously attempted in psci_helpers.S, but it didn't solve
      the problem, and that code has now been deleted. It was introduced in
      commit 26441030 ("Invalidate TLB entries during warm boot").
      
      Now, during a map or unmap operation, the memory associated with each
      modified table is flushed. Traversing a table also flushes its memory,
      as there is no way in the current implementation to tell whether a
      traversed table has also been modified.
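
      A minimal sketch of the approach (illustrative, not the actual xlat v2
      code), assuming TF-A's flush_dcache_range() cache maintenance helper
      and a hypothetical xlat_write_descriptor() wrapper:

        #include <stddef.h>
        #include <stdint.h>

        /* TF-A cache maintenance helper: cleans and invalidates the given
         * range to the point of coherency. */
        void flush_dcache_range(uintptr_t addr, size_t size);

        /* Illustrative wrapper: write one descriptor and push it to memory. */
        static void xlat_write_descriptor(uint64_t *table, unsigned int idx,
                                          uint64_t desc)
        {
            table[idx] = desc;

            /* Flush the modified entry so that the table walker of a CPU that
             * is not yet coherent (e.g. still warm booting with its data cache
             * disabled) reads the new value from memory, not a stale copy. */
            flush_dcache_range((uintptr_t)&table[idx], sizeof(desc));
        }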
      
      Change-Id: I4b520bca27502f1018878061bc5fb82af740bb92
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
  2. 27 Feb, 2018 1 commit
    • Invalidate TLB entries during warm boot · 26441030
      Antonio Nino Diaz authored
      
      
      During the warm boot sequence:
      
      1. The MMU is enabled with the data cache disabled. The MMU table walker
         is set up to access the translation tables as cacheable memory, but
         its accesses are effectively non-cacheable because SCTLR_EL3.C
         applies to them as well.
      2. The interconnect is set up and the CPU enters coherency with the
         rest of the system.
      3. The data cache is enabled.
      
      If support for dynamic translation tables is enabled and another CPU
      makes changes to a region, the changes may only be present in that
      CPU's data cache, not in RAM. The CPU that is booting isn't yet
      coherent with the rest of the system, and neither is its table walker,
      so it may read stale entries from RAM and hold stale TLB entries for
      the dynamic mappings.
      
      This is not a problem for the boot code because the mapping is 1:1 and
      the regions are static. However, the code that runs after the boot
      sequence may need to access the dynamically mapped regions.
      
      This patch invalidates all TLBs during warm boot when support for
      dynamic translation tables is enabled, to prevent this problem.
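
      A C sketch of the intent (the original change lived in assembly in
      psci_helpers.S); tlbialle3(), dsbish() and isb() are TF-A's wrappers
      for the corresponding AArch64 operations, and PLAT_XLAT_TABLES_DYNAMIC
      is the build option that enables dynamic translation tables:

        #include <arch_helpers.h>   /* tlbialle3(), dsbish(), isb() wrappers */

        void warmboot_invalidate_el3_tlbs(void)
        {
        #if PLAT_XLAT_TABLES_DYNAMIC
            /* Discard whatever the TLBs held before reset so that stale
             * entries for dynamic mappings cannot be hit once the MMU is
             * enabled. */
            tlbialle3();
            dsbish();   /* Ensure the invalidation has completed */
            isb();
        #endif
        }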
      
      Change-Id: I80264802dc0aa1cb3edd77d0b66b91db6961af3d
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
  3. 03 May, 2017 1 commit
  4. 15 Dec, 2016 1 commit
    • Add provision to extend CPU operations at more levels · 5dd9dbb5
      Jeenu Viswambharan authored
      
      
      Various CPU drivers in ARM Trusted Firmware register functions to handle
      power-down operations. At present, separate functions are registered to
      power down individual cores and clusters.
      
      This scheme operates on the basis of core and cluster, and doesn't cater
      for extending the hierarchy for power-down operations. For example,
      future CPUs might support multiple threads which might need powering
      down individually.
      
      This patch therefore reworks the CPU operations framework to allow
      power-down handlers to be registered on a per-level basis. Henceforth:
      
        - Generic code invokes CPU power down operations by the level
          required.
      
        - CPU drivers explicitly mention CPU_NO_RESET_FUNC when the CPU has no
          reset function.
      
        - CPU drivers register power down handlers as a list: a mandatory
          handler for level 0, and optional handlers for higher levels.
      
      All existing CPU drivers are adapted to the new CPU operations framework
      without needing any functional changes.
      
      The firmware design guide is also updated.
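
      A hypothetical C rendering of the reworked shape (the actual cpu_ops
      structures are declared from assembly; the names below are illustrative
      only, not the framework's real API):

        #include <stddef.h>

        #define MAX_PWR_DWN_LEVELS  2U  /* e.g. level 0 = core, level 1 = cluster */

        typedef void (*cpu_pwr_dwn_handler_t)(void);

        struct cpu_ops_example {
            unsigned long midr;
            void (*reset_func)(void);   /* NULL when the CPU has no reset function */
            /* Power-down handlers indexed by power level: the level 0 entry is
             * mandatory, higher-level entries are optional (NULL if absent). */
            cpu_pwr_dwn_handler_t pwr_dwn_ops[MAX_PWR_DWN_LEVELS];
        };

        /* Generic code requests a power down at a given level and uses the
         * deepest handler registered at or below that level. */
        void cpu_power_down(const struct cpu_ops_example *ops, unsigned int level)
        {
            if (level >= MAX_PWR_DWN_LEVELS)
                level = MAX_PWR_DWN_LEVELS - 1U;

            while (ops->pwr_dwn_ops[level] == NULL)
                level--;    /* terminates: the level 0 handler is mandatory */

            ops->pwr_dwn_ops[level]();
        }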
      
      Change-Id: I1826842d37a9e60a9e85fdcee7b4b8f6bc1ad043
      Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
  5. 12 Dec, 2016 1 commit
    • AArch32: Fix the stack alignment issue · 9f3ee61c
      Soby Mathew authored
      
      
      The AArch32 Procedure Call Standard mandates that the stack must be
      aligned to an 8-byte boundary at external interfaces. This patch makes
      the required changes.
      
      This problem was detected when a crash was encountered in
      `psci_print_power_domain_map()` while printing 64-bit values. Aligning
      the stack to an 8-byte boundary resolved the problem.
      
      Fixes ARM-Software/tf-issues#437
      
      Change-Id: I517bd8203601bb88e9311bd36d477fb7b3efb292
      Signed-off-by: Soby Mathew <soby.mathew@arm.com>
  6. 05 Dec, 2016 1 commit
    • Define and use no_ret macro where no return is expected · a806dad5
      Jeenu Viswambharan authored
      
      
      There are many instances in ARM Trusted Firmware where control is
      transferred to functions from which a return isn't expected. Such jumps
      are made using the 'bl' instruction to provide the callee with the
      location it was jumped from. Additionally, debuggers infer the caller
      by examining where the 'lr' register points. If a 'bl' of this nature
      falls at the end of an assembly function, 'lr' will be left pointing to
      a location outside the function's range, which misleads the debugger's
      back trace.
      
      This patch defines a 'no_ret' macro to be used when jumping to functions
      from which a return isn't expected. The macro ensures that the 'bl'
      instruction is used for the jump and, for debug builds, places a 'nop'
      instruction immediately thereafter (unless instructed otherwise) so
      that 'lr' is left pointing within the function's range.
      
      Change-Id: Ib34c69fc09197cfd57bc06e147cc8252910e01b0
      Co-authored-by: Douglas Raillard <douglas.raillard@arm.com>
      Signed-off-by: Jeenu Viswambharan <jeenu.viswambharan@arm.com>
  7. 10 Aug, 2016 1 commit
    • AArch32: Add support to PSCI lib · 727e5238
      Soby Mathew authored
      This patch adds AArch32 support to the PSCI library, as follows:
      
      * `psci_helpers.S` is implemented for AArch32.
      
      * An AArch32 version of the internal helper function
        `psci_get_ns_ep_info()` is defined.
      
      * The PSCI library is responsible for non-secure context initialization.
        Hence a library interface `psci_prepare_next_non_secure_ctx()` is introduced
        to enable EL3 runtime firmware to initialize the non-secure context without
        invoking context management library APIs (see the sketch below).
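
      A sketch of how AArch32 EL3 runtime firmware might use the new
      interface; the prototype and the surrounding helper shown here are
      assumptions for illustration, not code from the patch:

        #include <bl_common.h>  /* entry_point_info_t (assumed include path) */

        /* Assumed prototype of the new library interface. */
        void psci_prepare_next_non_secure_ctx(entry_point_info_t *next_image_info);

        /* Hypothetical helper in the EL3 runtime firmware: rather than calling
         * the context management library directly, hand the next image's entry
         * point information to the PSCI library so it can set up the
         * non-secure context. */
        static void runtime_prepare_ns_entry(entry_point_info_t *next_image_info)
        {
            psci_prepare_next_non_secure_ctx(next_image_info);
        }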
      
      Change-Id: I25595b0cc2dbfdf39dbf7c589b875cba33317b9d