1. 03 Feb, 2021 3 commits
  2. 29 Jan, 2021 1 commit
      Fix exception handlers in BL31: Use DSB to synchronize pending EA · c2d32a5f
      Madhukar Pappireddy authored
      For SoCs which do not implement RAS, use DSB as a barrier to
      synchronize pending external aborts at the entry and exit of
      exception handlers. This is needed to isolate the SErrors to
      appropriate context.
      
However, this introduces an unintended side effect, as discussed in
https://review.trustedfirmware.org/c/TF-A/trusted-firmware-a/+/3440
      
      
A summary of the side effect and a quick workaround is provided as
part of this patch:
      
      The explicit DSB at the entry of various exception vectors in BL31
      for handling exceptions from lower ELs can inadvertently trigger an
SError exception in EL3 due to pending asynchronous aborts in lower
      ELs. This will end up being handled by serror_sp_elx in EL3 which will
      ultimately panic and die.
      
The workaround is to update a flag to indicate whether the exception
truly came from EL3. This flag is allocated in the cpu_context
structure. This is not a bulletproof solution to the problem at hand
      because we assume the instructions following "isb" that help to update
      the flag (lines 100-102 & 139-141) execute without causing further
      exceptions.
      
      Change-Id: I4d345b07d746a727459435ddd6abb37fda24a9bf
Signed-off-by: Madhukar Pappireddy <madhukar.pappireddy@arm.com>
  3. 21 Jan, 2021 1 commit
  4. 20 Jan, 2021 3 commits
  5. 13 Jan, 2021 2 commits
  6. 12 Jan, 2021 1 commit
  7. 06 Jan, 2021 1 commit
      AArch64: Fix assertions in processing dynamic relocations · db9736e3
      Alexei Fedorov authored
      
      
      This patch provides the following changes in fixup_gdt_reloc()
      function:
- Fixes assertions in processing dynamic relocations when
relocation entries not matching the R_AARCH64_RELATIVE type are found.
The linker might generate entries of relocation type R_AARCH64_NONE
(code 0), which should be ignored for the code to boot. A similar
issue was fixed in OP-TEE (see optee_os/ldelf/ta_elf_rel.c,
commit 7a4dc765c133125428136a496a7644c6fec9b3c2)
- Fixes a bug where the "b.ge" (signed greater than or equal)
condition code was used instead of "b.hs" (unsigned higher or
same) for comparison of absolute addresses.
- Adds an optimisation which skips fixing Global Offset Table (GOT)
entries when the offset value is 0.
      
      Change-Id: I35e34e055b7476843903859be947b883a1feb1b5
Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
  8. 22 Dec, 2020 1 commit
      PSCI: fix limit of 256 CPUs caused by cast to unsigned char · a86865ac
      Graeme Gregory authored
      
      
In psci_setup.c, psci_init_pwr_domain_node() takes an unsigned
char as node_idx, which limits it to initialising only the first
256 CPUs. As the calling function does not check for a limit of
256, this is a bug, so change the unsigned char to uint16_t and
change the cast at the calling site in
populate_power_domain_tree().
      
Also update the non_cpu_pwr_domain_node structure's lock_index
to uint16_t and update the function signature of psci_lock_init()
accordingly.
      
      Finally add a define PSCI_MAX_CPUS_INDEX to psci_private.h and add
      a CASSERT to psci_setup.c to make sure PLATFORM_CORE_COUNT cannot
      exceed the index value.
Signed-off-by: Graeme Gregory <graeme@nuviainc.com>
      Change-Id: I9e26842277db7483fd698b46bbac62aa86e71b45
  9. 18 Dec, 2020 1 commit
  10. 11 Dec, 2020 1 commit
      Add support for FEAT_MTPMU for Armv8.6 · 0063dd17
      Javier Almansa Sobrino authored
      
      
      If FEAT_PMUv3 is implemented and PMEVTYPER<n>(_EL0).MT bit is implemented
      as well, it is possible to control whether PMU counters take into account
      events happening on other threads.
      
If FEAT_MTPMU is implemented, EL3 (or EL2) can override the MT bit,
forcing it to an effective state of 0 regardless of any write to it.
      
This patch introduces the DISABLE_MTPMU flag, which allows disabling
the multithreaded event count from EL3 (or EL2). The flag is disabled
by default so the behavior is consistent with architectures
that do not implement FEAT_MTPMU.
Signed-off-by: Javier Almansa Sobrino <javier.almansasobrino@arm.com>
      Change-Id: Iee3a8470ae8ba13316af1bd40c8d4aa86e0cb85e
  11. 02 Dec, 2020 1 commit
  12. 30 Nov, 2020 1 commit
  13. 12 Nov, 2020 2 commits
  14. 20 Oct, 2020 3 commits
  15. 15 Oct, 2020 1 commit
  16. 12 Oct, 2020 1 commit
      Increase type widths to satisfy width requirements · d7b5f408
      Jimmy Brisson authored
      
      
Usually, C has no problem up-converting types to larger bit sizes. MISRA
rule 10.7 requires that you not do this, or that you be very explicit
about it. This patch resolves the following required rule:
      
          bl1/aarch64/bl1_context_mgmt.c:81:[MISRA C-2012 Rule 10.7 (required)]<None>
          The width of the composite expression "0U | ((mode & 3U) << 2U) | 1U |
          0x3c0U" (32 bits) is less that the right hand operand
          "18446744073709547519ULL" (64 bits).
      
      This also resolves MISRA defects such as:
      
          bl2/aarch64/bl2arch_setup.c:18:[MISRA C-2012 Rule 12.2 (required)]
          In the expression "3U << 20", shifting more than 7 bits, the number
          of bits in the essential type of the left expression, "3U", is
          not allowed.
      
Further, MISRA requires that shifts not overflow. The definition of
PAGE_SIZE was (1U << 12), and the essential type of 1U is 8 bits. This
caused about 50 issues. This patch fixes the violation by changing the
definition to 1UL << 12. Since this uses 32 bits, it should not create
any issues for AArch32.
      
This patch also contains a fix for a build failure on the sun50i_a64
platform. Specifically, these MISRA fixes removed a single "and"
instruction,
      
          92407e73        and     x19, x19, #0xffffffff
      
from the cm_setup_context function, which caused a relocation in
psci_cpus_on_start to require a linker-generated stub. This increased the
size of the .text section and caused an alignment later on to go over a
page boundary and round up to the end of RAM before placing the .data
section. This section is of non-zero size and therefore causes a link
error.
      
The fix included in this patch reorders the functions at link time
without changing their ordering with respect to alignment.
      
      Change-Id: I76b4b662c3d262296728a8b9aab7a33b02087f16
Signed-off-by: Jimmy Brisson <jimmy.brisson@arm.com>
  17. 09 Oct, 2020 1 commit
      Don't return error information from console_flush · 831b0e98
      Jimmy Brisson authored
      
      
      And from crash_console_flush.
      
We ignore the error information returned by console_flush in _every_
      place where we call it, and casting the return type to void does not
      work around the MISRA violation that this causes. Instead, we collect
      the error information from the driver (to avoid changing that API), and
      don't return it to the caller.
      
      Change-Id: I1e35afe01764d5c8f0efd04f8949d333ffb688c1
Signed-off-by: Jimmy Brisson <jimmy.brisson@arm.com>
  18. 07 Oct, 2020 1 commit
  19. 05 Oct, 2020 2 commits
  20. 03 Oct, 2020 1 commit
  21. 02 Oct, 2020 1 commit
      libfdt: Upgrade libfdt source files · 3b456661
      Andre Przywara authored
      
      
      Update the libfdt source files, the upstream commit is 73e0f143b73d
      ("libfdt: fdt_strerror(): Fix comparison warning").
      
      This brings us the fixes for the signed/unsigned comparison warnings,
      so platforms can enable -Wsign-compare now.
      
      Change-Id: I303d891c82ffea0acefdde27289339db5ac5a289
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
  22. 28 Sep, 2020 1 commit
  23. 25 Sep, 2020 2 commits
  24. 18 Sep, 2020 1 commit
  25. 15 Sep, 2020 1 commit
  26. 14 Sep, 2020 1 commit
      SPE: Fix feature detection · b8535929
      Andre Przywara authored
      
      
      Currently the feature test for the SPE extension requires the feature
      bits in the ID_AA64DFR0 register to read exactly 0b0001.
However, the architecture guarantees that any value greater than 0
indicates the presence of the feature, which is what we are after in
our spe_supported() function.
      
      Change the comparison to include all values greater than 0.
      
      This fixes SPE support in non-secure world on implementations which
      include the Scalable Vector Extension (SVE), for instance on Zeus cores.
      
      Change-Id: If6cbd1b72d6abb8a303e2c0a7839d508f071cdbe
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
  27. 11 Sep, 2020 1 commit
  28. 10 Sep, 2020 1 commit
  29. 09 Sep, 2020 1 commit
  30. 03 Sep, 2020 1 commit
      libc: memset: improve performance by avoiding single byte writes · 75fab649
      Andre Przywara authored
      
      
      Currently our memset() implementation is safe, but slow. The main reason
      for that seems to be the single byte writes that it issues, which can
      show horrible performance, depending on the implementation of the
      load/store subsystem.
      
      Improve the algorithm by trying to issue 64-bit writes. As this only
      works with aligned pointers, have a head and a tail section which
      covers unaligned pointers, and leave the bulk of the work to the middle
      section that does use 64-bit writes.
      
      Put through some unit tests, which exercise all combinations of nasty
      input parameters (pointers with various alignments, various odd and even
      sizes, corner cases of content to write (-1, 256)).
      
      Change-Id: I28ddd3d388cc4989030f1a70447581985368d5bb
Signed-off-by: Andre Przywara <andre.przywara@arm.com>