1. 13 Jan, 2021 2 commits
  2. 12 Jan, 2021 1 commit
  3. 06 Jan, 2021 1 commit
    • AArch64: Fix assertions in processing dynamic relocations · db9736e3
      Alexei Fedorov authored
      
      
      This patch makes the following changes in the fixup_gdt_reloc()
      function:
      - Fixes assertions in processing dynamic relocations when
      relocation entries that do not match the R_AARCH64_RELATIVE type
      are found. The linker might generate entries of relocation type
      R_AARCH64_NONE (code 0), which must be ignored for the code to
      boot; a similar issue was fixed in OP-TEE (see
      optee_os/ldelf/ta_elf_rel.c commit
      7a4dc765c133125428136a496a7644c6fec9b3c2). See the sketch after
      this list.
      - Fixes a bug where the "b.ge" (signed greater than or equal)
      condition code was used instead of "b.hs" (unsigned higher or
      same) for comparison of absolute addresses.
      - Adds an optimisation which skips fixing Global Offset Table (GOT)
      entries whose offset value is 0.
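      
      A minimal C sketch of the intended handling (the real
      fixup_gdt_reloc() is AArch64 assembly; the entry layout follows the
      standard ELF64 RELA format, and the rela_start/rela_end/delta names
      are illustrative, not taken from the patch):
      
          #include <assert.h>
          #include <stdint.h>
          
          typedef struct {
                  uint64_t r_offset;   /* location to patch (link-time address) */
                  uint64_t r_info;     /* relocation type and symbol index */
                  int64_t  r_addend;
          } elf64_rela_t;
          
          #define R_AARCH64_NONE          0U
          #define R_AARCH64_RELATIVE      1027U
          
          /* delta: difference between run-time and link-time base address. */
          static void fixup_rela(const elf64_rela_t *rela_start,
                                 const elf64_rela_t *rela_end, uint64_t delta)
          {
                  for (const elf64_rela_t *r = rela_start; r < rela_end; r++) {
                          uint32_t type = (uint32_t)(r->r_info & 0xffffffffU);
          
                          if (type == R_AARCH64_NONE) {
                                  continue;       /* ignore instead of asserting */
                          }
                          assert(type == R_AARCH64_RELATIVE);
                          *(uint64_t *)(r->r_offset + delta) =
                                  (uint64_t)r->r_addend + delta;
                  }
          }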
      
      Change-Id: I35e34e055b7476843903859be947b883a1feb1b5
      Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
      db9736e3
  4. 22 Dec, 2020 1 commit
    • PSCI: fix limit of 256 CPUs caused by cast to unsigned char · a86865ac
      Graeme Gregory authored
      
      
      In psci_setup.c, psci_init_pwr_domain_node() takes an unsigned
      char as node_idx, which limits it to initialising only the first
      256 CPUs. As the calling function does not check for a limit of
      256, I think this is a bug, so change the unsigned char to
      uint16_t and update the cast at the calling site in
      populate_power_domain_tree().
      
      Also update the non_cpu_pwr_domain_node structure's lock_index
      field to uint16_t and update the function signature of
      psci_lock_init() accordingly.
      
      Finally, add a PSCI_MAX_CPUS_INDEX define to psci_private.h and a
      CASSERT to psci_setup.c to make sure PLATFORM_CORE_COUNT cannot
      exceed the index value.
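      
      A minimal sketch of the resulting declarations (the exact value of
      PSCI_MAX_CPUS_INDEX and the full parameter list are assumptions
      here, not copied from the patch):
      
          /* psci_private.h (sketch) */
          #define PSCI_MAX_CPUS_INDEX     UINT16_MAX
          
          /* psci_setup.c (sketch): fail the build if the platform has
           * more cores than the index type can address. */
          CASSERT(PLATFORM_CORE_COUNT <= (PSCI_MAX_CPUS_INDEX + 1U),
                  assert_platform_core_count_too_large);
          
          static void psci_init_pwr_domain_node(uint16_t node_idx,
                                                unsigned int parent_idx,
                                                unsigned char level);
      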
      Signed-off-by: Graeme Gregory <graeme@nuviainc.com>
      Change-Id: I9e26842277db7483fd698b46bbac62aa86e71b45
      a86865ac
  5. 18 Dec, 2020 1 commit
  6. 11 Dec, 2020 1 commit
    • Add support for FEAT_MTPMU for Armv8.6 · 0063dd17
      Javier Almansa Sobrino authored
      
      
      If FEAT_PMUv3 is implemented and the PMEVTYPER<n>(_EL0).MT bit is
      implemented as well, it is possible to control whether PMU counters
      take into account events happening on other threads.
      
      If FEAT_MTPMU is implemented, EL3 (or EL2) can override the MT bit,
      leaving it with an effective state of 0 regardless of any write to
      it.
      
      This patch introduces the DISABLE_MTPMU flag, which allows
      multithreaded event counting to be disabled from EL3 (or EL2). The
      flag is disabled by default so the behavior is consistent with
      architectures that do not implement FEAT_MTPMU.
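      
      A minimal C-style sketch of the idea behind the flag (the actual
      change operates on the EL3/EL2 debug configuration registers; the
      MDCR_MTPME_BIT name and bit position are assumptions for
      illustration):
      
          #if DISABLE_MTPMU
          /* Assumed: MDCR_EL3.MTPME at bit 28. Clearing it makes
           * PMEVTYPER<n>_EL0.MT behave as 0 for lower ELs, i.e.
           * multithreaded event counting is disabled. */
          #define MDCR_MTPME_BIT          (1ULL << 28)
          
          static void disable_mtpmu_el3(void)
          {
                  write_mdcr_el3(read_mdcr_el3() & ~MDCR_MTPME_BIT);
          }
          #endif
      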
      Signed-off-by: Javier Almansa Sobrino <javier.almansasobrino@arm.com>
      Change-Id: Iee3a8470ae8ba13316af1bd40c8d4aa86e0cb85e
      0063dd17
  7. 02 Dec, 2020 1 commit
  8. 30 Nov, 2020 1 commit
  9. 12 Nov, 2020 2 commits
  10. 20 Oct, 2020 3 commits
  11. 15 Oct, 2020 1 commit
  12. 12 Oct, 2020 1 commit
    • Increase type widths to satisfy width requirements · d7b5f408
      Jimmy Brisson authored
      
      
      Usually, C has no problem up-converting types to larger bit sizes.
      MISRA rule 10.7 requires that you either not do this or be very
      explicit about it. This patch resolves the following required rule:
      
          bl1/aarch64/bl1_context_mgmt.c:81:[MISRA C-2012 Rule 10.7 (required)]<None>
          The width of the composite expression "0U | ((mode & 3U) << 2U) | 1U |
          0x3c0U" (32 bits) is less that the right hand operand
          "18446744073709547519ULL" (64 bits).
      
      This also resolves MISRA defects such as:
      
          bl2/aarch64/bl2arch_setup.c:18:[MISRA C-2012 Rule 12.2 (required)]
          In the expression "3U << 20", shifting more than 7 bits, the number
          of bits in the essential type of the left expression, "3U", is
          not allowed.
      
      Further, MISRA requires that shifts do not overflow. The definition
      of PAGE_SIZE was (1U << 12), and the essential type of 1U is only
      8 bits wide. This caused about 50 issues. This patch fixes the
      violation by changing the definition to (1UL << 12). Since 1UL is
      still a 32-bit type on AArch32, this should not create any issues
      there.
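      
      For illustration, the PAGE_SIZE change described above amounts to
      (surrounding context in the real header omitted):
      
          /* Before: the essential type of 1U is only 8 bits wide, so a
           * 12-bit shift violates MISRA C-2012 Rule 12.2. */
          #define PAGE_SIZE       (1U << 12)
          
          /* After: 1UL is at least 32 bits wide, so the shift stays in
           * range on both AArch32 and AArch64. */
          #define PAGE_SIZE       (1UL << 12)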
      
      This patch also contains a fix for a build failure on the sun50i_a64
      platform. Specifically, these MISRA fixes removed a single "and"
      instruction,
      
          92407e73        and     x19, x19, #0xffffffff
      
      from the cm_setup_context function, which caused a relocation in
      psci_cpus_on_start to require a linker-generated stub. This
      increased the size of the .text section and caused an alignment
      later on to go over a page boundary and round up to the end of RAM
      before placing the .data section. This section is of non-zero size
      and therefore causes a link error.
      
      The fix included in this patch reorders the functions at link time
      without changing their ordering with respect to alignment.
      
      Change-Id: I76b4b662c3d262296728a8b9aab7a33b02087f16
      Signed-off-by: Jimmy Brisson <jimmy.brisson@arm.com>
      d7b5f408
  13. 09 Oct, 2020 1 commit
    • Don't return error information from console_flush · 831b0e98
      Jimmy Brisson authored
      
      
      And from crash_console_flush.
      
      We ignore the error information returned by console_flush in
      _every_ place where we call it, and casting the return value to
      void does not work around the MISRA violation that this causes.
      Instead, we collect the error information from the driver (to avoid
      changing that API) and don't return it to the caller.
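      
      A sketch of the resulting interface change (prototypes shown for
      illustration only; the headers they live in are not reproduced
      here):
      
          /* Before: callers ignored the returned error code anyway. */
          int console_flush(void);
          int crash_console_flush(void);
          
          /* After: any error reported by the driver is consumed
           * internally and not propagated to the caller. */
          void console_flush(void);
          void crash_console_flush(void);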
      
      Change-Id: I1e35afe01764d5c8f0efd04f8949d333ffb688c1
      Signed-off-by: Jimmy Brisson <jimmy.brisson@arm.com>
      831b0e98
  14. 07 Oct, 2020 1 commit
  15. 05 Oct, 2020 2 commits
  16. 03 Oct, 2020 1 commit
  17. 02 Oct, 2020 1 commit
    • libfdt: Upgrade libfdt source files · 3b456661
      Andre Przywara authored
      
      
      Update the libfdt source files, the upstream commit is 73e0f143b73d
      ("libfdt: fdt_strerror(): Fix comparison warning").
      
      This brings us the fixes for the signed/unsigned comparison warnings,
      so platforms can enable -Wsign-compare now.
      
      Change-Id: I303d891c82ffea0acefdde27289339db5ac5a289
      Signed-off-by: Andre Przywara <andre.przywara@arm.com>
      3b456661
  18. 28 Sep, 2020 1 commit
  19. 25 Sep, 2020 2 commits
  20. 18 Sep, 2020 1 commit
  21. 15 Sep, 2020 1 commit
  22. 14 Sep, 2020 1 commit
    • SPE: Fix feature detection · b8535929
      Andre Przywara authored
      
      
      Currently the feature test for the SPE extension requires the
      feature bits in the ID_AA64DFR0 register to read exactly 0b0001.
      However, the architecture guarantees that any value greater than 0
      indicates the presence of the feature, which is what we are after
      in our spe_supported() function.
      
      Change the comparison to include all values greater than 0.
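      
      A minimal sketch of the adjusted check (the field shift/mask macro
      names are assumptions, following the usual TF-A register-field
      naming):
      
          static bool spe_supported(void)
          {
                  uint64_t features;
          
                  /* ID_AA64DFR0_EL1.PMSVer: any non-zero value means SPE
                   * is implemented, not just the value 0b0001. */
                  features = read_id_aa64dfr0_el1() >> ID_AA64DFR0_PMS_SHIFT;
                  return (features & ID_AA64DFR0_PMS_MASK) > 0ULL;
          }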
      
      This fixes SPE support in non-secure world on implementations which
      include the Scalable Vector Extension (SVE), for instance on Zeus cores.
      
      Change-Id: If6cbd1b72d6abb8a303e2c0a7839d508f071cdbe
      Signed-off-by: Andre Przywara <andre.przywara@arm.com>
      b8535929
  23. 11 Sep, 2020 1 commit
  24. 10 Sep, 2020 1 commit
  25. 09 Sep, 2020 1 commit
  26. 03 Sep, 2020 2 commits
    • libc: memset: improve performance by avoiding single byte writes · 75fab649
      Andre Przywara authored
      
      
      Currently our memset() implementation is safe, but slow. The main
      reason for that seems to be the single-byte writes that it issues,
      which can show horrible performance, depending on the
      implementation of the load/store subsystem.
      
      Improve the algorithm by trying to issue 64-bit writes. As this
      only works with aligned pointers, have a head and a tail section
      which cover unaligned bytes, and leave the bulk of the work to the
      middle section that does use 64-bit writes.
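      
      A minimal C sketch of the head/bulk/tail idea (illustrative only,
      not the actual TF-A implementation):
      
          #include <stddef.h>
          #include <stdint.h>
          
          void *memset_sketch(void *dst, int val, size_t count)
          {
                  uint8_t *p = (uint8_t *)dst;
                  uint64_t pattern = 0x0101010101010101ULL * (uint8_t)val;
          
                  /* Head: byte writes until the pointer is 8-byte aligned. */
                  while ((count > 0U) && (((uintptr_t)p & 7U) != 0U)) {
                          *p++ = (uint8_t)val;
                          count--;
                  }
                  /* Bulk: aligned 64-bit writes do most of the work. */
                  while (count >= 8U) {
                          *(uint64_t *)p = pattern;
                          p += 8;
                          count -= 8;
                  }
                  /* Tail: remaining bytes, written one at a time. */
                  while (count > 0U) {
                          *p++ = (uint8_t)val;
                          count--;
                  }
                  return dst;
          }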
      
      The implementation was put through unit tests which exercise all
      combinations of nasty input parameters: pointers with various
      alignments, various odd and even sizes, and corner cases of the
      value to write (-1, 256).
      
      Change-Id: I28ddd3d388cc4989030f1a70447581985368d5bb
      Signed-off-by: Andre Przywara <andre.przywara@arm.com>
      75fab649
    • psci: utility api to invoke stop for other cores · 22744909
      Sandeep Tripathy authored
      
      
      The API can be used to invoke a 'stop_func' callback on all other
      cores from any initiating core. Optionally, it can also wait for
      the other cores to power down. Platforms may use this API in
      various ways; for example, to power down all other cores from a
      crashed core.
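      
      A hypothetical usage sketch (the helper name follows the commit
      title; its exact signature and the timeout constant are assumptions,
      not copied from the patch):
      
          static void platform_stop_core(u_register_t mpidr)
          {
                  /* Platform-specific power-down of the core identified
                   * by mpidr. */
          }
          
          void platform_crash_handler(void)
          {
                  /* Ask PSCI to run the callback on all other cores and
                   * wait (up to an assumed millisecond timeout) for them
                   * to power down. */
                  psci_stop_other_cores(PLAT_STOP_TIMEOUT_MS,
                                        platform_stop_core);
          }
      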
      Signed-off-by: Sandeep Tripathy <sandeep.tripathy@broadcom.com>
      Change-Id: I4f9dc8a38d419f299c021535d5f1bcc6883106f9
      22744909
  27. 02 Sep, 2020 2 commits
    • plat/arm: Introduce and use libc_asm.mk makefile · e3f2b1a9
      Alexei Fedorov authored
      Trace analysis of the FVP_Base_AEMv8A 0.0/6063 model
      running in AArch32 mode with the build options
      listed below:
      TRUSTED_BOARD_BOOT=1 GENERATE_COT=1
      ARM_ROTPK_LOCATION=devel_ecdsa KEY_ALG=ecdsa
      ROT_KEY=plat/arm/board/common/rotpk/arm_rotprivk_ecdsa.pem
      shows that, when auth_signature() gets called,
      71.99% of CPU execution time is spent in the memset() function
      written in C using single-byte write operations,
      see lib/libc/memset.c.
      This patch introduces a new libc_asm.mk makefile which
      replaces the C memset() implementation with an assembler
      version, giving the following results:
      - for AArch32, the memset() CPU time in the auth_signature()
      call is reduced to 20.56%.
      The number of CPU instructions (Inst) executed during the
      TF-A boot stages before the start of BL33 in RELEASE builds
      for the different versions is presented in the tables below,
      where:
      - C TF-A: existing TF-A C code;
      - C musl: "lightweight code" C "implementation of the
        standard library for Linux-based systems"
      https://git.musl-libc.org/cgit/musl/tree/src/string/memset.c
      - Asm Opt: assembler version from the "Arm Optimized Routines"
        project
      https://github.com/ARM-software/optimized-routines/blob/
      master/string/arm/memset.S
      - Asm Linux: assembler version from the Linux kernel
      https://github.com/torvalds/linux/blob/master/arch/arm/lib/memset.S
      - Asm TF-A: assembler version from this patch
      
      AArch32:
      +-----------+------+------+--------------+----------+
      | Variant   | Set  | Size |    Inst      |  Ratio   |
      +-----------+------+------+--------------+----------+
      | C TF-A    | T32  | 16   | 2122110003   | 1.000000 |
      | C musl    | T32  | 156  | 1643917668   | 0.774662 |
      | Asm Opt   | T32  | 84   | 1604810003   | 0.756233 |
      | Asm Linux | A32  | 168  | 1566255018   | 0.738065 |
      | Asm TF-A  | A32  | 160  | 1525865101   | 0.719032 |
      +-----------+------+------+--------------+----------+
      
      AArch64:
      +-----------+------+------------+----------+
      | Variant   | Size |    Inst    |  Ratio   |
      +-----------+------+------------+----------+
      | C TF-A    | 28   | 2732497518 | 1.000000 |
      | C musl    | 212  | 1802999999 | 0.659836 |
      | Asm TF-A  | 140  | 1680260003 | 0.614917 |
      +-----------+------+------------+----------+
      
      This patch modifies 'plat/arm/common/arm_common.mk'
      by overriding the libc.mk makefile with libc_asm.mk and
      does not affect other platforms.
      
      Change-Id: Ie89dd0b74ba1079420733a0d76b7366ad0157c2e
      Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
      e3f2b1a9
    • lib: cpu: Check SCU presence in DSU before accessing DSU registers · 942013e1
      Pramod Kumar authored
      
      
      The DSU contains system control registers in the SCU and L3 logic
      to control the functionality of the cluster. If the "DIRECT
      CONNECT" L3 memory system variant is used, there is no L3 cache,
      snoop filter, or SCU logic present, and hence no system control
      registers either. Therefore, check for SCU presence before
      accessing DSU registers for the DSU_936184 erratum.
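      
      A C-style sketch of the guard pattern (the real check lives in the
      CPU/DSU helper code; the helper names below are illustrative):
      
          static void dsu_936184_workaround_if_applicable(void)
          {
                  /* Only touch DSU system control registers when the
                   * SCU/L3 logic actually exists in this DSU
                   * configuration. */
                  if (is_scu_present_in_dsu()) {
                          apply_dsu_936184_workaround();
                  }
          }
      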
      Signed-off-by: Pramod Kumar <pramod.kumar@broadcom.com>
      Change-Id: I1ffa8afb0447ae3bd1032c9dd678d68021fe5a63
      942013e1
  28. 31 Aug, 2020 3 commits
  29. 26 Aug, 2020 1 commit
  30. 24 Aug, 2020 1 commit
    • lib: cpus: sanity check pointers before use · 601e3ed2
      Varun Wadekar authored
      
      
      The cpu_ops structure contains a lot of function pointers. It is a
      good idea to verify that a function pointer is not NULL before
      calling it.
      
      This patch sanity-checks each pointer before use to prevent any
      unforeseen crashes. These checks have been enabled for debug builds
      only.
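      
      A C-style sketch of the pattern (the real checks are performed in
      the CPU helper code; the structure layout and member name here are
      hypothetical):
      
          #include <assert.h>
          
          struct cpu_ops {
                  void (*pwr_dwn_handler)(void);   /* hypothetical member */
                  /* ... other handlers ... */
          };
          
          static void call_pwr_dwn(const struct cpu_ops *ops)
          {
                  /* Debug builds catch a missing or corrupted handler
                   * early instead of branching to NULL. */
                  assert(ops != NULL);
                  assert(ops->pwr_dwn_handler != NULL);
                  ops->pwr_dwn_handler();
          }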
      
      Change-Id: Ib208331c20e60f0c7c582a20eb3d8cc40fb99d21
      Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
      601e3ed2