1. 11 Dec, 2020 1 commit
    • Add support for FEAT_MTPMU for Armv8.6 · 0063dd17
      Javier Almansa Sobrino authored
      
      
      If FEAT_PMUv3 is implemented and PMEVTYPER<n>(_EL0).MT bit is implemented
      as well, it is possible to control whether PMU counters take into account
      events happening on other threads.
      
      If FEAT_MTPMU is implemented, EL3 (or EL2) can override the MT bit,
      forcing its effective state to 0 regardless of any write to it.
      
      This patch introduces the DISABLE_MTPMU flag, which allows disabling
      the multithreaded event count from EL3 (or EL2). The flag is disabled
      by default so that the behaviour is consistent with implementations
      that do not provide FEAT_MTPMU.
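      
      As a rough illustration, the EL3-side override could look like the
      sketch below (the MDCR_EL3.MTPME bit position follows the Arm ARM
      description of FEAT_MTPMU; the helper names and the flag plumbing
      are assumptions, not code from this patch):
      
          #include <stdint.h>
          
          /* MDCR_EL3.MTPME (bit 28, per the Arm ARM) gates the MT bit in
           * PMEVTYPER<n>_EL0; clearing it forces the effective MT value
           * to 0 for lower ELs. Helper names are illustrative. */
          #define MDCR_MTPME_BIT (UINT64_C(1) << 28)
          
          static inline uint64_t read_mdcr_el3(void)
          {
              uint64_t v;
              __asm__ volatile("mrs %0, mdcr_el3" : "=r"(v));
              return v;
          }
          
          static inline void write_mdcr_el3(uint64_t v)
          {
              __asm__ volatile("msr mdcr_el3, %0" : : "r"(v));
          }
          
          #if DISABLE_MTPMU
          void disable_mtpmu(void)
          {
              /* Force multithreaded event counting off for lower ELs. */
              write_mdcr_el3(read_mdcr_el3() & ~MDCR_MTPME_BIT);
          }
          #endif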
      Signed-off-by: Javier Almansa Sobrino <javier.almansasobrino@arm.com>
      Change-Id: Iee3a8470ae8ba13316af1bd40c8d4aa86e0cb85e
  2. 02 Dec, 2020 1 commit
  3. 30 Nov, 2020 1 commit
  4. 12 Nov, 2020 2 commits
  5. 20 Oct, 2020 3 commits
  6. 15 Oct, 2020 1 commit
  7. 12 Oct, 2020 1 commit
    • Increase type widths to satisfy width requirements · d7b5f408
      Jimmy Brisson authored
      
      
      Usually, C has no problem up-converting types to larger bit widths.
      MISRA rule 10.7 requires that you either not rely on this implicit
      widening or be very explicit about it. This resolves the following
      required rule:
      
          bl1/aarch64/bl1_context_mgmt.c:81:[MISRA C-2012 Rule 10.7 (required)]<None>
          The width of the composite expression "0U | ((mode & 3U) << 2U) | 1U |
          0x3c0U" (32 bits) is less than the right hand operand
          "18446744073709547519ULL" (64 bits).
      
      This also resolves MISRA defects such as:
      
          bl2/aarch64/bl2arch_setup.c:18:[MISRA C-2012 Rule 12.2 (required)]
          In the expression "3U << 20", shifting more than 7 bits, the number
          of bits in the essential type of the left expression, "3U", is
          not allowed.
      
      Further, MISRA requires that no shift overflows its operand's
      essential type. The definition of PAGE_SIZE was (1U << 12), and the
      essential type of 1U is only 8 bits wide. This caused about 50
      issues. This fixes the violation by changing the definition to
      1UL << 12. Since 1UL is 32 bits wide on AArch32, this should not
      create any issues there.
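      
      The fix pattern, as a minimal sketch (the second macro is
      illustrative, not the exact TF-A definition):
      
          #include <stdint.h>
          
          /* Before: the essential type of 1U is 8 bits, so a 12-bit
           * shift violates Rule 12.2. */
          /* #define PAGE_SIZE (1U << 12) */
          
          /* After: widen the left operand explicitly before shifting. */
          #define PAGE_SIZE (1UL << 12)
          
          /* Likewise, make composite expressions 64 bits wide before
           * combining them with 64-bit operands (Rule 10.7). */
          #define SPSR_MODE_BITS(mode) (((uint64_t)(mode) & 3ULL) << 2ULL)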
      
      This patch also contains a fix for a build failure in the sun50i_a64
      platform. Specifically, these MISRA fixes removed a single "and"
      instruction,
      
          92407e73        and     x19, x19, #0xffffffff
      
      from the cm_setup_context function, which caused a relocation in
      psci_cpus_on_start to require a linker-generated stub. This increased
      the size of the .text section and caused a later alignment to go over
      a page boundary and round up to the end of RAM before placing the
      .data section. This section is of non-zero size and therefore causes
      a link error.
      
      The fix included in this patch reorders the functions at link time
      without changing their ordering with respect to alignment.
      
      Change-Id: I76b4b662c3d262296728a8b9aab7a33b02087f16
      Signed-off-by: Jimmy Brisson <jimmy.brisson@arm.com>
  8. 09 Oct, 2020 1 commit
    • Don't return error information from console_flush · 831b0e98
      Jimmy Brisson authored
      
      
      And from crash_console_flush.
      
      We ignore the error information returned by console_flush in _every_
      place where we call it, and casting the return value to void does not
      work around the MISRA violation that this causes. Instead, we collect
      the error information from the driver (to avoid changing that API)
      and don't return it to the caller.
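      
      The resulting shape, as a minimal sketch (the driver-level hook name
      is illustrative, not the exact TF-A one):
      
          /* The driver keeps reporting errors, so the driver API is
           * unchanged. */
          int uart_hw_flush(void);
          
          /* Public wrapper: collect the driver's error code internally
           * instead of handing callers a value they always ignore. */
          void console_flush(void)
          {
              int err = uart_hw_flush();
          
              /* Nothing useful can be done with err at this level, so
               * it is deliberately dropped after collection. */
              (void)err;
          }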
      
      Change-Id: I1e35afe01764d5c8f0efd04f8949d333ffb688c1
      Signed-off-by: Jimmy Brisson <jimmy.brisson@arm.com>
  9. 07 Oct, 2020 1 commit
  10. 05 Oct, 2020 2 commits
  11. 03 Oct, 2020 1 commit
  12. 02 Oct, 2020 1 commit
    • libfdt: Upgrade libfdt source files · 3b456661
      Andre Przywara authored
      
      
      Update the libfdt source files, the upstream commit is 73e0f143b73d
      ("libfdt: fdt_strerror(): Fix comparison warning").
      
      This brings us the fixes for the signed/unsigned comparison warnings,
      so platforms can enable -Wsign-compare now.
      
      Change-Id: I303d891c82ffea0acefdde27289339db5ac5a289
      Signed-off-by: Andre Przywara <andre.przywara@arm.com>
  13. 28 Sep, 2020 1 commit
  14. 25 Sep, 2020 2 commits
  15. 18 Sep, 2020 1 commit
  16. 15 Sep, 2020 1 commit
  17. 14 Sep, 2020 1 commit
    • SPE: Fix feature detection · b8535929
      Andre Przywara authored
      
      
      Currently the feature test for the SPE extension requires the feature
      bits in the ID_AA64DFR0 register to read exactly 0b0001.
      However, the architecture guarantees that any value greater than 0
      indicates the presence of the feature, which is what we are after in
      our spe_supported() function.
      
      Change the comparison to include all values greater than 0.
      
      This fixes SPE support in non-secure world on implementations which
      include the Scalable Vector Extension (SVE), for instance on Zeus cores.
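      
      A minimal sketch of the changed check (the PMSVer field position
      follows the Arm ARM; spe_supported() is named in the commit, the
      rest is illustrative):
      
          #include <stdbool.h>
          #include <stdint.h>
          
          #define ID_AA64DFR0_PMS_SHIFT 32
          #define ID_AA64DFR0_PMS_MASK  0xfULL
          
          static inline uint64_t read_id_aa64dfr0_el1(void)
          {
              uint64_t v;
              __asm__ volatile("mrs %0, id_aa64dfr0_el1" : "=r"(v));
              return v;
          }
          
          bool spe_supported(void)
          {
              uint64_t pmsver = (read_id_aa64dfr0_el1() >>
                                 ID_AA64DFR0_PMS_SHIFT) &
                                ID_AA64DFR0_PMS_MASK;
          
              /* Any non-zero PMSVer encoding means SPE is present,
               * not just the initial 0b0001 value. */
              return pmsver > 0ULL;
          }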
      
      Change-Id: If6cbd1b72d6abb8a303e2c0a7839d508f071cdbe
      Signed-off-by: Andre Przywara <andre.przywara@arm.com>
  18. 11 Sep, 2020 1 commit
  19. 10 Sep, 2020 1 commit
  20. 09 Sep, 2020 1 commit
  21. 03 Sep, 2020 2 commits
    • libc: memset: improve performance by avoiding single byte writes · 75fab649
      Andre Przywara authored
      
      
      Currently our memset() implementation is safe, but slow. The main reason
      for that seems to be the single byte writes that it issues, which can
      show horrible performance, depending on the implementation of the
      load/store subsystem.
      
      Improve the algorithm by trying to issue 64-bit writes. As this only
      works with aligned pointers, have a head and a tail section which
      covers unaligned pointers, and leave the bulk of the work to the middle
      section that does use 64-bit writes.
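      
      The approach, as a minimal C sketch (not the exact TF-A
      implementation):
      
          #include <stddef.h>
          #include <stdint.h>
          
          void *memset_sketch(void *dst, int val, size_t count)
          {
              uint8_t *d = dst;
              uint64_t pattern = 0x0101010101010101ULL * (uint8_t)val;
          
              /* Head: single-byte writes until d is 8-byte aligned. */
              while ((count > 0U) && (((uintptr_t)d & 7U) != 0U)) {
                  *d++ = (uint8_t)val;
                  count--;
              }
          
              /* Body: aligned 64-bit writes do the bulk of the work. */
              while (count >= 8U) {
                  *(uint64_t *)(void *)d = pattern;
                  d += 8;
                  count -= 8;
              }
          
              /* Tail: mop up any remaining bytes. */
              while (count-- > 0U) {
                  *d++ = (uint8_t)val;
              }
          
              return dst;
          }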
      
      The implementation was put through unit tests which exercise all
      combinations of nasty input parameters: pointers with various
      alignments, various odd and even sizes, and corner cases of the
      content to write (-1, 256).
      
      Change-Id: I28ddd3d388cc4989030f1a70447581985368d5bb
      Signed-off-by: Andre Przywara <andre.przywara@arm.com>
    • psci: utility api to invoke stop for other cores · 22744909
      Sandeep Tripathy authored
      
      
      The API can be used to invoke a 'stop_func' callback for all
      other cores from any initiating core. Optionally it can also
      wait for the other cores to power down. Platforms may use this
      API in various ways; for example, to power down all other cores
      from a crashed core. A usage sketch follows.
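      
      A hedged usage sketch (the prototype below paraphrases the commit
      description; the real one lives in the PSCI headers and may differ):
      
          #include <stdint.h>
          
          typedef uintptr_t u_register_t;
          
          /* Assumed prototype, paraphrased from the description. */
          void psci_stop_other_cores(unsigned int wait_ms,
                                     void (*stop_func)(u_register_t mpidr));
          
          /* Hypothetical per-core callback: platform-specific power-off
           * request for the core identified by mpidr. */
          static void stop_core(u_register_t mpidr)
          {
              (void)mpidr; /* ... platform off sequence here ... */
          }
          
          /* E.g. from a crash handler on the initiating core: stop every
           * other core and wait up to 100 ms for them to power down. */
          void crash_stop_all_others(void)
          {
              psci_stop_other_cores(100U, stop_core);
          }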
      Signed-off-by: Sandeep Tripathy <sandeep.tripathy@broadcom.com>
      Change-Id: I4f9dc8a38d419f299c021535d5f1bcc6883106f9
  22. 02 Sep, 2020 2 commits
    • plat/arm: Introduce and use libc_asm.mk makefile · e3f2b1a9
      Alexei Fedorov authored
      Trace analysis of the FVP_Base_AEMv8A 0.0/6063 model
      running in AArch32 mode with the build options
      listed below:
      TRUSTED_BOARD_BOOT=1 GENERATE_COT=1
      ARM_ROTPK_LOCATION=devel_ecdsa KEY_ALG=ecdsa
      ROT_KEY=plat/arm/board/common/rotpk/arm_rotprivk_ecdsa.pem
      shows that when auth_signature() gets called,
      71.99% of CPU execution time is spent in the memset() function
      written in C using single-byte write operations,
      see lib/libc/memset.c.
      This patch introduces a new libc_asm.mk makefile which
      replaces the C memset() implementation with an assembler
      version, giving the following results:
      - for AArch32, memset() CPU time in the auth_signature() call
        is reduced to 20.56%.
      The number of CPU instructions (Inst) executed during the
      TF-A boot stages before the start of BL33 in RELEASE builds
      for the different versions is presented in the tables below,
      where:
      - C TF-A: existing TF-A C code;
      - C musl: the musl C library, a "lightweight" C implementation
        of the standard library for Linux-based systems
        https://git.musl-libc.org/cgit/musl/tree/src/string/memset.c
      - Asm Opt: assembler version from the "Arm Optimized Routines"
        project
        https://github.com/ARM-software/optimized-routines/blob/master/string/arm/memset.S
      - Asm Linux: assembler version from the Linux kernel
        https://github.com/torvalds/linux/blob/master/arch/arm/lib/memset.S
      - Asm TF-A: assembler version from this patch
      
      AArch32:
      +-----------+------+------+------------+----------+
      | Variant   | Set  | Size |    Inst    |  Ratio   |
      +-----------+------+------+------------+----------+
      | C TF-A    | T32  | 16   | 2122110003 | 1.000000 |
      | C musl    | T32  | 156  | 1643917668 | 0.774662 |
      | Asm Opt   | T32  | 84   | 1604810003 | 0.756233 |
      | Asm Linux | A32  | 168  | 1566255018 | 0.738065 |
      | Asm TF-A  | A32  | 160  | 1525865101 | 0.719032 |
      +-----------+------+------+------------+----------+
      
      AArch64:
      +-----------+------+------------+----------+
      | Variant   | Size |    Inst    |  Ratio   |
      +-----------+------+------------+----------+
      | C TF-A    | 28   | 2732497518 | 1.000000 |
      | C musl    | 212  | 1802999999 | 0.659836 |
      | Asm TF-A  | 140  | 1680260003 | 0.614917 |
      +-----------+------+------------+----------+
      
      This patch modifies 'plat/arm/common/arm_common.mk'
      by overriding the libc.mk makefile with libc_asm.mk and
      does not affect other platforms.
      
      Change-Id: Ie89dd0b74ba1079420733a0d76b7366ad0157c2e
      Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
    • lib: cpu: Check SCU presence in DSU before accessing DSU registers · 942013e1
      Pramod Kumar authored
      
      
      The DSU contains system control registers in the SCU and L3 logic to
      control the functionality of the cluster. If the "DIRECT CONNECT" L3
      memory system variant is used, there is no L3 cache,
      snoop filter, or SCU logic present, and hence no system control
      registers are present either. Therefore, check for SCU presence
      before accessing DSU registers for the DSU_936184 erratum.
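      
      The gating pattern, in a heavily hedged sketch (the register
      accessor, bit position, and function name below are assumptions
      for illustration; the commit does not spell them out here):
      
          #include <stdint.h>
          
          /* Assumed: a DSU configuration register exposes whether SCU
           * logic is present; the bit position is hypothetical. */
          #define SCU_PRESENT_BIT (UINT64_C(1) << 30)
          
          extern uint64_t read_dsu_config_reg(void); /* hypothetical */
          
          void errata_dsu_936184_wa(void) /* name illustrative */
          {
              /* Skip the workaround when no SCU, and hence no system
               * control registers, exist in this cluster. */
              if ((read_dsu_config_reg() & SCU_PRESENT_BIT) == 0ULL) {
                  return;
              }
          
              /* ... apply the DSU_936184 workaround here ... */
          }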
      Signed-off-by: Pramod Kumar <pramod.kumar@broadcom.com>
      Change-Id: I1ffa8afb0447ae3bd1032c9dd678d68021fe5a63
  23. 31 Aug, 2020 3 commits
  24. 26 Aug, 2020 1 commit
  25. 24 Aug, 2020 1 commit
    • lib: cpus: sanity check pointers before use · 601e3ed2
      Varun Wadekar authored
      
      
      The cpu_ops structure contains a lot of function pointers. It
      is a good idea to verify that the function pointer is not NULL
      before executing it.
      
      This patch sanity checks each pointer before use to prevent any
      unforeseen crashes. These checks have been enabled for debug
      builds only.
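      
      In sketch form (the structure fields are a simplified illustration
      of cpu_ops, not its real layout):
      
          #include <assert.h>
          #include <stddef.h>
          
          struct cpu_ops {
              void (*reset_func)(void);
              void (*pwr_dwn)(unsigned int power_level);
          };
          
          void cpu_power_down(const struct cpu_ops *ops, unsigned int level)
          {
              /* assert() compiles away in release builds, so the checks
               * cost nothing where the diagnostics are not wanted. */
              assert(ops != NULL);
              assert(ops->pwr_dwn != NULL);
          
              ops->pwr_dwn(level);
          }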
      
      Change-Id: Ib208331c20e60f0c7c582a20eb3d8cc40fb99d21
      Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
  26. 21 Aug, 2020 1 commit
  27. 19 Aug, 2020 2 commits
    • libc/memset: Implement function in assembler · e7d344de
      Alexei Fedorov authored
      
      
      Trace analysis of the FVP_Base_AEMv8A model running in
      AArch32 mode with the build options listed below:
      TRUSTED_BOARD_BOOT=1 GENERATE_COT=1
      ARM_ROTPK_LOCATION=devel_ecdsa KEY_ALG=ecdsa
      ROT_KEY=plat/arm/board/common/rotpk/arm_rotprivk_ecdsa.pem
      shows that when auth_signature() gets called,
      71.84% of CPU execution time is spent in the memset() function
      written in C using single-byte write operations,
      see lib/libc/memset.c.
      This patch replaces the C memset() implementation with an
      assembler version, giving the following results:
      - for AArch32, memset() CPU time in the auth_signature() call
        is reduced to 24.84%;
      - number of CPU instructions executed during the TF-A
        boot stages before the start of BL33 in RELEASE builds:
      ----------------------------------------------
      |  Arch   |     C      |  assembler |    %   |
      ----------------------------------------------
      | AArch32 | 2073275460 | 1487400003 | -28.25 |
      | AArch64 | 2056807158 | 1244898303 | -39.47 |
      ----------------------------------------------
      The patch also replaces memset.c with aarch64/memset.S
      in plat/nvidia/tegra/platform.mk.
      
      Change-Id: Ifbf085a2f577a25491e2d28446ee95a4ac891597
      Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
    • SPM: Change condition on saving/restoring EL2 registers · 6b704da3
      Ruari Phipps authored
      
      
      Make this more scalable by explicitly checking the internal and
      hardware states at runtime.
      Signed-off-by: Ruari Phipps <ruari.phipps@arm.com>
      Change-Id: I1c6ed1c1badb3538a93bff3ac5b5189b59cccfa1
  28. 18 Aug, 2020 3 commits
    • runtime_exceptions: Update AT speculative workaround · 3b8456bd
      Manish V Badarkhe authored
      As per the latest mailing list discussion [1], we decided to
      update the AT speculative workaround implementation so that the
      page table walk for lower ELs (EL1 or EL0) is disabled immediately
      after context switching to EL3 from lower ELs.
      
      The previous implementation of the AT speculative workaround is
      available here: 45aecff0
      
      AT speculative workaround is updated as below:
      1. Avoid saving and restoring of SCTLR and TCR registers for EL1
         in context save and restore routine respectively.
      2. On EL3 entry, save SCTLR and TCR registers for EL1.
      3. On EL3 entry, update EL1 system registers to disable stage 1
         page table walk for lower ELs (EL1 and EL0) and enable EL1
         MMU.
      4. On EL3 exit, restore SCTLR and TCR registers for EL1 which
         are saved in step 2.
      
      [1]:
      https://lists.trustedfirmware.org/pipermail/tf-a/2020-July/000586.html
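      
      In sketch form, steps 2 and 3 on EL3 entry could look as follows
      (the TCR_EL1.EPD0/EPD1 and SCTLR_EL1.M bit positions follow the
      Arm ARM; the helper names and context type are illustrative, and
      the real code lives in the EL3 entry/exit paths):
      
          #include <stdint.h>
          
          #define TCR_EPD0_BIT  (UINT64_C(1) << 7)   /* disable TTBR0 walks */
          #define TCR_EPD1_BIT  (UINT64_C(1) << 23)  /* disable TTBR1 walks */
          #define SCTLR_M_BIT   (UINT64_C(1) << 0)   /* stage 1 MMU enable */
          
          struct el1_at_ctx {          /* illustrative context slots */
              uint64_t sctlr_el1;
              uint64_t tcr_el1;
          };
          
          void el3_entry_at_workaround(struct el1_at_ctx *ctx)
          {
              uint64_t sctlr, tcr;
          
              __asm__ volatile("mrs %0, sctlr_el1" : "=r"(sctlr));
              __asm__ volatile("mrs %0, tcr_el1" : "=r"(tcr));
          
              /* Step 2: save the live EL1 values for restore on exit. */
              ctx->sctlr_el1 = sctlr;
              ctx->tcr_el1 = tcr;
          
              /* Step 3: disable stage 1 walks for EL1/EL0 and keep the
               * EL1 MMU enabled while executing at EL3. */
              tcr |= TCR_EPD0_BIT | TCR_EPD1_BIT;
              sctlr |= SCTLR_M_BIT;
              __asm__ volatile("msr tcr_el1, %0" : : "r"(tcr));
              __asm__ volatile("msr sctlr_el1, %0" : : "r"(sctlr));
              __asm__ volatile("isb");
          }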
      
      
      
      Change-Id: Iee8de16f81dc970a8f492726f2ddd57e7bd9ffb5
      Signed-off-by: Manish V Badarkhe <Manish.Badarkhe@arm.com>
    • el3_runtime: Rearrange context offset of EL1 sys registers · cb55615c
      Manish V Badarkhe authored
      
      
      The SCTLR and TCR registers of EL1 play a role in enabling/disabling
      the page table walk for lower ELs (EL0 and EL1).
      Hence the EL1 context offsets are re-arranged so that the SCTLR and
      TCR register values sit one after another in the context, allowing
      them to be saved and restored with a single stp and ldp instruction
      respectively, as sketched below.
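      
      A minimal sketch of the idea (offsets and names are illustrative,
      not the actual context layout):
      
          #include <stdint.h>
          
          /* Adjacent 8-byte slots let one stp (save) and one ldp
           * (restore) cover both registers instead of two single
           * str/ldr instructions each. Offsets are illustrative. */
          #define CTX_SCTLR_EL1  0x0
          #define CTX_TCR_EL1    0x8
          
          void save_el1_at_regs(uint64_t *ctx)
          {
              uint64_t sctlr, tcr;
          
              __asm__ volatile("mrs %0, sctlr_el1" : "=r"(sctlr));
              __asm__ volatile("mrs %0, tcr_el1" : "=r"(tcr));
          
              /* One paired store writes both adjacent slots. */
              __asm__ volatile("stp %0, %1, [%2]"
                               : : "r"(sctlr), "r"(tcr), "r"(ctx)
                               : "memory");
          }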
      
      Change-Id: Iaa28fd9eba82a60932b6b6d85ec8857a9acd5f8b
      Signed-off-by: Manish V Badarkhe <Manish.Badarkhe@arm.com>
    • lib/cpus: Report AT speculative erratum workaround · e1c49333
      Manish V Badarkhe authored
      
      
      Report the status (applies, missing) of the AT speculative
      workaround, which is applicable to the CPUs below.
      
       +---------+--------------+
       | Errata  |      CPU     |
       +=========+==============+
       | 1165522 |  Cortex-A76  |
       +---------+--------------+
       | 1319367 |  Cortex-A72  |
       +---------+--------------+
       | 1319537 |  Cortex-A57  |
       +---------+--------------+
       | 1530923 |  Cortex-A55  |
       +---------+--------------+
       | 1530924 |  Cortex-A53  |
       +---------+--------------+
      
      Also, changes are made to enable the common macro
      'ERRATA_SPECULATIVE_AT' if the AT speculative errata workaround is
      enabled for any of the above CPUs via its CPU-specific 'ERRATA_*'
      build macro.
      Signed-off-by: Manish V Badarkhe <Manish.Badarkhe@arm.com>
      Change-Id: I3e6a5316a2564071f3920c3ce9ae9a29adbe435b