1. 11 Jun, 2021 1 commit
    • feat(RME): BL31 changes · 127ccd17
      johpow01 authored
      
      
      This patch introduces the following:
      1. Enables GPT translation tables
      2. Adds a new context for the Realm world and adds Realm-world awareness to
         context management (a sketch of the idea follows this entry)
      3. Initializes CPU registers in context management for the Realm world
      Signed-off-by: John Powell <john.powell@arm.com>
      Change-Id: I5bc8f3413144fb63d9eb4f5885abdcf5a44b4db1
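      A minimal sketch of the context-management idea in item 2, assuming a
      hypothetical per-world context array; the enum, types and accessor below
      are illustrative and are not TF-A's actual cpu_context API:
      
          #include <assert.h>
          #include <stdint.h>
          
          /* Hypothetical per-world indices; the real code uses TF-A's own
           * security-state enumeration. */
          enum world_idx {
                  WORLD_SECURE = 0,
                  WORLD_NON_SECURE,
                  WORLD_REALM,            /* new slot introduced for RME */
                  WORLD_COUNT
          };
          
          /* Opaque register save area, one instance per world on each CPU. */
          typedef struct cpu_context {
                  uint64_t gp_regs[32];
                  uint64_t el3_state[16];
          } cpu_context_t;
          
          static cpu_context_t ctx[WORLD_COUNT];
          
          /* Select the context to save into or restore from, based on the
           * world being exited or entered. */
          static cpu_context_t *get_world_context(enum world_idx world)
          {
                  assert(world < WORLD_COUNT);
                  return &ctx[world];
          }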
  2. 05 Feb, 2021 1 commit
  3. 29 Jan, 2021 1 commit
    • Fix exception handlers in BL31: Use DSB to synchronize pending EA · c2d32a5f
      Madhukar Pappireddy authored
      For SoCs which do not implement RAS, use a DSB as a barrier to
      synchronize pending external aborts at the entry and exit of
      exception handlers. This is needed to isolate SErrors to the
      appropriate context.
      
      However, this introduces an unintended side effect, as discussed in
      https://review.trustedfirmware.org/c/TF-A/trusted-firmware-a/+/3440
      
      
      A summary of the side effect and a quick workaround are provided as
      part of this patch and summarized here:
      
      The explicit DSB at the entry of the various exception vectors in BL31
      for handling exceptions from lower ELs can inadvertently trigger an
      SError exception in EL3 due to pending asynchronous aborts in lower
      ELs. This ends up being handled by serror_sp_elx in EL3, which
      ultimately panics and dies.
      
      The workaround is to update a flag indicating whether the exception
      truly came from EL3. This flag is allocated in the cpu_context
      structure. This is not a bulletproof solution to the problem at hand,
      because we assume that the instructions following the "isb" that
      update the flag (lines 100-102 & 139-141) execute without causing
      further exceptions.
      
      Change-Id: I4d345b07d746a727459435ddd6abb37fda24a9bf
      Signed-off-by: Madhukar Pappireddy <madhukar.pappireddy@arm.com>
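      The barrier-plus-flag idea reads roughly as below when sketched in C
      (the real code lives in the BL31 exception vectors and is written in
      assembly; the helper name and the flag are illustrative only):
      
          #include <stdbool.h>
          
          /* Illustrative flag; the commit allocates it in the cpu_context
           * structure rather than in a global. */
          static volatile bool serror_is_from_el3;
          
          /* Sketch of the entry path of a lower-EL exception vector: drain
           * any pending external abort with a DSB so a stale SError from the
           * lower EL is handled now, in a known window, rather than being
           * attributed to EL3 code later. */
          static inline void synchronize_pending_ea(void)
          {
                  __asm__ volatile("dsb sy" ::: "memory");
                  __asm__ volatile("isb"    ::: "memory");
                  /* Only after the barriers do we mark that subsequent
                   * execution is genuine EL3 code: an SError surfaced by the
                   * DSB above still sees the flag cleared and is therefore
                   * not treated as an EL3-originated error by the EL3 SError
                   * handler. */
                  serror_is_from_el3 = true;
          }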
  4. 11 Dec, 2020 1 commit
    • Add support for FEAT_MTPMU for Armv8.6 · 0063dd17
      Javier Almansa Sobrino authored
      
      
      If FEAT_PMUv3 is implemented and the PMEVTYPER<n>_EL0.MT bit is
      implemented as well, it is possible to control whether PMU counters
      take into account events happening on other threads.
      
      If FEAT_MTPMU is implemented, EL3 (or EL2) can override the MT bit,
      forcing its effective value to 0 regardless of any write to it.
      
      This patch introduces the DISABLE_MTPMU flag, which allows multithreaded
      event counting to be disabled from EL3 (or EL2). The flag is disabled
      by default so the behavior is consistent with architectures
      that do not implement FEAT_MTPMU.
      Signed-off-by: Javier Almansa Sobrino <javier.almansasobrino@arm.com>
      Change-Id: Iee3a8470ae8ba13316af1bd40c8d4aa86e0cb85e
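      A sketch of what the DISABLE_MTPMU build flag might gate, assuming the
      override lives in an MDCR_EL3 control bit; the helper names and the bit
      position used below are assumptions, not taken from the patch:
      
          #include <stdint.h>
          
          /* Assumed position of the MDCR_EL3 bit controlling the MT override
           * (FEAT_MTPMU); verify against the Arm ARM before relying on it. */
          #define MDCR_EL3_MTPME_BIT      (1ULL << 28)
          
          static inline uint64_t read_mdcr_el3(void)
          {
                  uint64_t v;
          
                  __asm__ volatile("mrs %0, mdcr_el3" : "=r"(v));
                  return v;
          }
          
          static inline void write_mdcr_el3(uint64_t v)
          {
                  __asm__ volatile("msr mdcr_el3, %0" :: "r"(v));
                  __asm__ volatile("isb");
          }
          
          /* With DISABLE_MTPMU=1, force the effective value of
           * PMEVTYPER<n>_EL0.MT to zero so counters only count events from
           * the local thread. */
          void bl31_disable_mtpmu(void)
          {
          #if DISABLE_MTPMU
                  write_mdcr_el3(read_mdcr_el3() & ~MDCR_EL3_MTPME_BIT);
          #endif
          }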
  5. 12 Oct, 2020 1 commit
    • Increase type widths to satisfy width requirements · d7b5f408
      Jimmy Brisson authored
      
      
      Usually, C has no problem promoting types to larger bit widths. MISRA
      rule 10.7 requires that you either avoid doing this or be very explicit
      about it. This resolves the following required rule:
      
          bl1/aarch64/bl1_context_mgmt.c:81:[MISRA C-2012 Rule 10.7 (required)]<None>
          The width of the composite expression "0U | ((mode & 3U) << 2U) | 1U |
          0x3c0U" (32 bits) is less that the right hand operand
          "18446744073709547519ULL" (64 bits).
      
      This also resolves MISRA defects such as:
      
          bl2/aarch64/bl2arch_setup.c:18:[MISRA C-2012 Rule 12.2 (required)]
          In the expression "3U << 20", shifting more than 7 bits, the number
          of bits in the essential type of the left expression, "3U", is
          not allowed.
      
      Further, MISRA requires that shifts do not overflow. The definition of
      PAGE_SIZE was (1U << 12), and the essential type of 1U is 8 bits. This
      caused about 50 issues. This patch fixes the violation by changing the
      definition to (1UL << 12). Since this still fits in 32 bits, it should
      not create any issues for AArch32.
      
      This patch also contains a fix for a build failure on the sun50i_a64
      platform. Specifically, these MISRA fixes removed a single "and"
      instruction,
      
          92407e73        and     x19, x19, #0xffffffff
      
      from the cm_setup_context function, which caused a relocation in
      psci_cpus_on_start to require a linker-generated stub. This increased the
      size of the .text section and caused a later alignment to go over a
      page boundary and round up to the end of RAM before placing the .data
      section. This section is of non-zero size and therefore causes a link
      error.
      
      The fix included in this patch reorders the functions at link time
      without changing their ordering with respect to alignment.
      
      Change-Id: I76b4b662c3d262296728a8b9aab7a33b02087f16
      Signed-off-by: Jimmy Brisson <jimmy.brisson@arm.com>
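      The PAGE_SIZE part of the change can be illustrated in isolation; a
      small sketch under the assumption that the macro ends up in 64-bit
      address arithmetic (the names below are illustrative):
      
          #include <stdint.h>
          
          /* Before: the essential type of 1U is only 8 bits wide in MISRA's
           * model, so both the shift and the later 64-bit usage raise
           * violations. */
          #define PAGE_SIZE_OLD   (1U << 12)
          
          /* After: an unsigned long literal keeps the expression at the
           * width it is used at, on AArch64 (64-bit long) as well as
           * AArch32 (32-bit long). */
          #define PAGE_SIZE_NEW   (1UL << 12)
          
          static inline uint64_t page_align_down(uint64_t addr)
          {
                  return addr & ~((uint64_t)PAGE_SIZE_NEW - 1ULL);
          }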
  6. 25 Sep, 2020 1 commit
    • arm_fpga: Add support for unknown MPIDs · 1994e562
      Javier Almansa Sobrino authored
      
      
      This patch allows the system to fall back to a default CPU library
      in case the MPID does not match any of the supported ones.
      
      This feature can be enabled by setting the SUPPORT_UNKNOWN_MPID build
      option to 1 (enabled by default only on the arm_fpga platform).
      
      This feature can be very dangerous on a production image and
      therefore it MUST be disabled for Release images.
      Signed-off-by: Javier Almansa Sobrino <javier.almansasobrino@arm.com>
      Change-Id: I0df7ef2b012d7d60a4fd5de44dea1fbbb46881ba
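      A sketch of the fallback idea, assuming a hypothetical table of CPU
      operations keyed by MPID; the structure, table and default entry below
      are illustrative and not the actual cpu_ops machinery:
      
          #include <stddef.h>
          #include <stdint.h>
          
          struct cpu_lib {
                  uint64_t mpid;                  /* CPU this entry supports */
                  void (*reset_handler)(void);
          };
          
          static void generic_reset(void) { /* conservative defaults */ }
          
          /* Placeholder entries; real MPID values omitted on purpose. */
          static const struct cpu_lib cpu_libs[] = {
                  { .mpid = 0x1111, .reset_handler = generic_reset },
                  { .mpid = 0x2222, .reset_handler = generic_reset },
          };
          
          static const struct cpu_lib generic_cpu_lib = {
                  .mpid = 0, .reset_handler = generic_reset,
          };
          
          const struct cpu_lib *get_cpu_lib(uint64_t mpid)
          {
                  for (size_t i = 0; i < sizeof(cpu_libs) / sizeof(cpu_libs[0]); i++) {
                          if (cpu_libs[i].mpid == mpid)
                                  return &cpu_libs[i];
                  }
          #if SUPPORT_UNKNOWN_MPID
                  /* Unknown CPU: fall back to the default library instead of
                   * treating it as fatal. */
                  return &generic_cpu_lib;
          #else
                  return NULL;
          #endif
          }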
  7. 14 Sep, 2020 1 commit
  8. 18 Aug, 2020 1 commit
    • runtime_exceptions: Update AT speculative workaround · 3b8456bd
      Manish V Badarkhe authored
      As per the latest mailing list discussion [1], we decided to
      update the AT speculative workaround implementation in order to
      disable page table walks for lower ELs (EL1 or EL0) immediately
      after context switching to EL3 from a lower EL.
      
      The previous implementation of the AT speculative workaround is
      available here: 45aecff0
      
      The AT speculative workaround is updated as below:
      1. Avoid saving and restoring the SCTLR and TCR registers for EL1
         in the context save and restore routines respectively.
      2. On EL3 entry, save the SCTLR and TCR registers for EL1.
      3. On EL3 entry, update the EL1 system registers to disable stage 1
         page table walks for lower ELs (EL1 and EL0) and enable the EL1
         MMU (see the sketch after this entry).
      4. On EL3 exit, restore the SCTLR and TCR registers for EL1 which
         were saved in step 2.
      
      [1]:
      https://lists.trustedfirmware.org/pipermail/tf-a/2020-July/000586.html
      
      
      
      Change-Id: Iee8de16f81dc970a8f492726f2ddd57e7bd9ffb5
      Signed-off-by: Manish V Badarkhe <Manish.Badarkhe@arm.com>
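      A C rendering of steps 2 and 3 above (the real implementation is in the
      EL3 entry/exit assembly paths); the EPD0/EPD1 and M bit positions are
      the architectural ones, but the helper names and structure are
      illustrative:
      
          #include <stdint.h>
          
          #define SCTLR_EL1_M_BIT         (1ULL << 0)   /* stage 1 MMU enable */
          #define TCR_EL1_EPD0_BIT        (1ULL << 7)   /* disable TTBR0 walks */
          #define TCR_EL1_EPD1_BIT        (1ULL << 23)  /* disable TTBR1 walks */
          
          struct el1_at_ctx {
                  uint64_t sctlr_el1;
                  uint64_t tcr_el1;
          };
          
          #define READ_SYSREG(reg)                                        \
                  ({ uint64_t _v;                                         \
                     __asm__ volatile("mrs %0, " #reg : "=r"(_v));        \
                     _v; })
          #define WRITE_SYSREG(reg, val)                                  \
                  __asm__ volatile("msr " #reg ", %0" :: "r"((uint64_t)(val)))
          
          /* On EL3 entry: save the live EL1 values, then leave the EL1 MMU
           * enabled but with stage 1 walks for EL1/EL0 disabled. */
          void at_workaround_el3_entry(struct el1_at_ctx *save)
          {
                  save->sctlr_el1 = READ_SYSREG(sctlr_el1);
                  save->tcr_el1   = READ_SYSREG(tcr_el1);
          
                  WRITE_SYSREG(tcr_el1,
                               save->tcr_el1 | TCR_EL1_EPD0_BIT | TCR_EL1_EPD1_BIT);
                  WRITE_SYSREG(sctlr_el1, save->sctlr_el1 | SCTLR_EL1_M_BIT);
                  __asm__ volatile("isb");
          }
          
          /* On EL3 exit: restore exactly what step 2 saved. */
          void at_workaround_el3_exit(const struct el1_at_ctx *save)
          {
                  WRITE_SYSREG(tcr_el1, save->tcr_el1);
                  WRITE_SYSREG(sctlr_el1, save->sctlr_el1);
                  __asm__ volatile("isb");
          }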
  9. 29 Jun, 2020 1 commit
    • linker_script: move .rela.dyn section to bl_common.ld.h · e8ad6168
      Masahiro Yamada authored
      
      
      The .rela.dyn section is the same for BL2-AT-EL3, BL31, TSP.
      
      Move it to the common header file.
      
      I slightly changed the definition so that we can do "RELA_SECTION >RAM".
      It still produces equivalent ELF images.
      
      Please note that I removed '.' from the VMA field. Otherwise, if the end
      of the preceding .data section is not 8-byte aligned, it fails to link:
      
      aarch64-linux-gnu-ld.bfd: warning: changing start of section .rela.dyn by 4 bytes
      aarch64-linux-gnu-ld.bfd: warning: changing start of section .rela.dyn by 4 bytes
      aarch64-linux-gnu-ld.bfd: warning: changing start of section .rela.dyn by 4 bytes
      make: *** [Makefile:1071: build/qemu/release/bl31/bl31.elf] Error 1
      
      Change-Id: Iba7422d99c0374d4d9e97e6fd47bae129dba5cc9
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
  10. 25 Apr, 2020 1 commit
    • linker_script: move .data section to bl_common.ld.h · caa3e7e0
      Masahiro Yamada authored
      Move the data section to the common header.
      
      I slightly tweaked some scripts as follows:
      
      [1] bl1.ld.S has ALIGN(16). I added a DATA_ALIGN macro, which is 1
          by default but overridden by bl1.ld.S. Currently, the ALIGN(16)
          of the .data section is redundant because commit 41286590
          ("Fix boot failures on some builds linked with ld.lld.") padded
          out the previous section to work around an issue in LLD versions
          <= 10.0. This will be fixed in a future release of LLVM, so
          I am keeping the proper way to align the LMA.
      
      [2] bl1.ld.S and bl2_el3.ld.S define __DATA_RAM_{START,END}__ instead
          of __DATA_{START,END}__. I put them out of the .data section.
      
      [3] SORT_BY_ALIGNMENT() is missing from tsp.ld.S, sp_min.ld.S, and
          mediatek/mt6795/bl31.ld.S. This commit adds SORT_BY_ALIGNMENT()
          for all images, so the symbol order in those three will change,
          but I do not think it is a big deal.
      
      Change-Id: I215bb23c319f045cd88e6f4e8ee2518c67f03692
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
  11. 24 Apr, 2020 1 commit
  12. 02 Apr, 2020 3 commits
    • linker_script: move bss section to bl_common.ld.h · a7739bc7
      Masahiro Yamada authored
      
      
      Move the bss section to the common header. This adds BAKERY_LOCK_NORMAL
      and PMF_TIMESTAMP, which previously existed only in BL31. This is not
      a big deal because unused data should not be compiled in the first
      place. I believe this should be controlled by BL*_SOURCES in Makefiles,
      not by linker scripts.
      
      I investigated BL1, BL2, BL2U, BL31 for plat=fvp, and BL2-AT-EL3,
      BL31, BL31 for plat=uniphier. I did not see any other unexpected
      code additions.
      
      The bss section has a bigger alignment requirement, so I added
      BSS_ALIGN for this.
      
      Currently, SORT_BY_ALIGNMENT() is missing from sp_min.ld.S, so with this
      change the BSS symbols in SP_MIN will be sorted by alignment.
      This is not a big deal (or even better, in terms of image size).
      
      Change-Id: I680ee61f84067a559bac0757f9d03e73119beb33
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
    • linker_script: replace common read-only data with RODATA_COMMON · 0a0a7a9a
      Masahiro Yamada authored
      The common read-only data sections are repeated in many linker scripts
      (often twice in each script to support SEPARATE_CODE_AND_RODATA). When
      you add a new read-only data section, you end up touching lots of
      places.
      
      After this commit, you will only need to touch bl_common.ld.h when
      you add a new section to RODATA_COMMON.
      
      Replace the series of RO sections with RODATA_COMMON, which contains
      6 sections, some of which did not exist before.
      
      This is not a big deal because unneeded data should not be compiled
      in the first place. I believe this should be controlled by BL*_SOURCES
      in Makefiles, not by linker scripts.
      
      When I was working on this commit, the BL1 image size increased
      due to the fconf_populator. Commit c452ba15 ("fconf: exclude
      fconf_dyn_cfg_getter.c from BL1_SOURCES") fixed this issue.
      
      I investigated BL1, BL2, BL2U, BL31 for plat=fvp, and BL2-AT-EL3,
      BL31, BL31 for plat=uniphier. I did not see any other unexpected
      code additions.
      
      Change-Id: I5d14d60dbe3c821765bce3ae538968ef266f1460
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
    • linker_script: move more common code to bl_common.ld.h · 9fb288a0
      Masahiro Yamada authored
      
      
      These sections are mostly used to collect data from special structures,
      and they are repeated in many linker scripts.
      
      To differentiate the alignment size between AArch32 and AArch64, I added
      a new macro, STRUCT_ALIGN.
      
      While moving PMF_SVC_DESCS, I dropped the #if ENABLE_PMF conditional.
      As you can see in include/lib/pmf/pmf_helpers.h, the PMF_REGISTER_SERVICE*
      macros are no-ops when ENABLE_PMF=0, so the pmf_svc_descs and
      pmf_timestamp_array data are not populated.
      
      Change-Id: I3f4ab7fa18f76339f1789103407ba76bda7e56d0
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
  13. 11 Mar, 2020 2 commits
    • fconf: necessary modifications to support fconf in BL31 & SP_MIN · 26d1e0c3
      Madhukar Pappireddy authored
      
      
      This adds the necessary infrastructure to integrate the fconf framework
      in BL31 & SP_MIN. It creates a few populator() functions which parse the
      HW_CONFIG device tree and registers them with the fconf framework. Many
      of the changes are only applicable to the fvp platform.
      
      This patch:
      1. Adds the necessary symbols and sections to the BL31 and SP_MIN linker scripts
      2. Adds the necessary memory map entries for translation in BL31 and SP_MIN
      3. Creates an abstraction layer for hardware configuration based on the
         fconf framework
      4. Adds the necessary changes to the build flow (makefiles)
      5. Adds a minimal callback to read the hw_config dtb, capturing properties
         related to the GIC (interrupt-controller node)
      6. Updates the fconf documentation
      
      Change-Id: Ib6292071f674ef093962b9e8ba0d322b7bf919af
      Signed-off-by: Madhukar Pappireddy <madhukar.pappireddy@arm.com>
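      The general shape of a populator and its getter can be sketched as
      below; the function names, the cached property, and the placeholder
      parsing are all hypothetical, and the real code registers the populator
      with fconf and walks the dtb with libfdt:
      
          #include <stdint.h>
          
          /* Illustrative cache for a property read from the HW_CONFIG
           * interrupt-controller node. */
          static uint64_t hw_config_gicd_base;
          
          /* A populator takes the config dtb address, parses what it needs
           * and stashes it; 0 means success. The dtb walk is elided here. */
          int fconf_populate_gic_config(uintptr_t hw_config_dtb)
          {
                  (void)hw_config_dtb;            /* libfdt parsing elided */
                  hw_config_gicd_base = 0;        /* placeholder value */
                  return 0;
          }
          
          /* Getter used by the rest of BL31/SP_MIN instead of a hard-coded
           * platform constant. */
          uint64_t fconf_get_gicd_base(void)
          {
                  return hw_config_gicd_base;
          }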
    • Factor xlat_table sections in linker scripts out into a header file · 665e71b8
      Masahiro Yamada authored
      
      
      TF-A has many linker scripts: at least one for each BL image, and
      some platforms have their own. They duplicate quite similar code
      (and comments).
      
      When we make changes to the linker scripts, we end up touching many
      files. This is not nice from a maintainability perspective.
      
      Looking at the Linux kernel, the common code is macrofied in
      include/asm-generic/vmlinux.lds.h, which is included from each
      architecture's linker script, arch/*/kernel/vmlinux.lds.S.
      
      TF-A can follow this approach. Let's factor out the common code into
      include/common/bl_common.ld.h.
      
      As a starting point, this commit factors out the xlat_table section.
      
      Change-Id: Ifa369e9b48e8e12702535d721cc2a16d12397895
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
  14. 06 Mar, 2020 1 commit
  15. 10 Feb, 2020 1 commit
  16. 07 Feb, 2020 1 commit
    • Make PAC demangling more generic · 68c76088
      Alexei Fedorov authored
      
      
      At the moment, address demangling is only used by the backtrace
      functionality. However, at some point, other parts of the TF-A
      codebase may want to use it.
      The 'demangle_address' function is replaced with a single XPACI
      instruction, which is also added in 'do_crash_reporting()'.
      Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
      Change-Id: I4424dcd54d5bf0a5f9b2a0a84c4e565eec7329ec
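      In C with inline assembly, the demangling step amounts to the sketch
      below (helper name illustrative; building it requires an ARMv8.3-aware
      assembler):
      
          #include <stdint.h>
          
          /* Strip the Pointer Authentication Code, if any, from an
           * instruction address so it can be printed or compared. */
          static inline uintptr_t xpac_instruction_address(uintptr_t addr)
          {
                  __asm__ volatile("xpaci %0" : "+r"(addr));
                  return addr;
          }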
  17. 04 Feb, 2020 1 commit
  18. 28 Jan, 2020 1 commit
  19. 27 Jan, 2020 1 commit
    • Changes necessary to support SEPARATE_NOBITS_REGION feature · c367b75e
      Madhukar Pappireddy authored
      
      
      Since the BL31 PROGBITS and NOBITS sections are going to be
      in non-adjacent memory regions, potentially far from each other,
      some fixes are needed to support this completely.
      
      1. The adr instruction only allows computing the effective address
      of a location within a 1MB range of the PC. However, the adrp
      instruction together with an add permits position-independent
      addressing of any location within a 4GB range of the PC (see the
      sketch after this entry).
      
      2. Since BL31 _RW_END_ marks the end of the BL31 image, care must be
      taken that it is aligned to the page size, since we map this memory
      region in BL31 using the xlat_v2 library utilities, which mandate
      alignment of the image size to page granularity.
      
      Change-Id: I3451cc030d03cb2032db3cc088f0c0e2c84bffda
      Signed-off-by: Madhukar Pappireddy <madhukar.pappireddy@arm.com>
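      The addressing difference in item 1 looks like this in GNU inline
      assembly, using a linker-provided end-of-image symbol purely as an
      example (the symbol name below is an assumption):
      
          #include <stdint.h>
          
          /* adr reaches only +/-1MB from the PC; the adrp/add pair below
           * reaches +/-4GB, which far-apart PROGBITS and NOBITS regions
           * may need. Symbol name assumed for illustration. */
          extern char __RW_END__[];
          
          static inline uintptr_t rw_end_pc_relative(void)
          {
                  uintptr_t addr;
          
                  __asm__ volatile(
                          "adrp %0, __RW_END__\n"
                          "add  %0, %0, :lo12:__RW_END__\n"
                          : "=r"(addr));
                  return addr;
          }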
  20. 24 Jan, 2020 1 commit
    • BL31: discard .dynsym .dynstr .hash sections to make ENABLE_PIE work · 511046ea
      Masahiro Yamada authored
      
      
      When I tried ENABLE_PIE for my PLAT=uniphier platform, BL31 crashed
      at its entry point. When built with ENABLE_PIE=1, some sections are
      inserted before the executable code.
      
      $ make PLAT=uniphier CROSS_COMPILE=aarch64-linux-gnu- ENABLE_PIE=1 bl31
      $ aarch64-linux-gnu-objdump -h build/uniphier/release/bl31/bl31.elf | head -n 13
      
      build/uniphier/release/bl31/bl31.elf:     file format elf64-littleaarch64
      
      Sections:
      Idx Name          Size      VMA               LMA               File off  Algn
        0 .dynsym       000002a0  0000000081000000  0000000081000000  00010000  2**3
                        CONTENTS, ALLOC, LOAD, READONLY, DATA
        1 .dynstr       000002a0  00000000810002a0  00000000810002a0  000102a0  2**0
                        CONTENTS, ALLOC, LOAD, READONLY, DATA
        2 .hash         00000124  0000000081000540  0000000081000540  00010540  2**3
                        CONTENTS, ALLOC, LOAD, READONLY, DATA
        3 ro            0000699c  0000000081000664  0000000081000664  00010664  2**11
                        CONTENTS, ALLOC, LOAD, CODE
      
      The previous-stage loader generally jumps to the base address of
      BL31, where no valid instruction exists.
      
      I checked the linker scripts of Linux (arch/arm64/kernel/vmlinux.lds.S)
      and U-Boot (arch/arm/cpu/armv8/u-boot.lds), both of which support
      relocation. They simply discard those sections.
      
      Do the same in TF-A too.
      
      Change-Id: I6c33e9143856765d4ffa24f3924b0ab51a17cde9
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
  21. 22 Jan, 2020 3 commits
  22. 29 Dec, 2019 1 commit
    • bl31: Split into two separate memory regions · f8578e64
      Samuel Holland authored
      
      
      Some platforms are extremely memory constrained and must split BL31
      between multiple non-contiguous areas in SRAM. Allow the NOBITS
      sections (.bss, stacks, page tables, and coherent memory) to be placed
      in a separate region of RAM from the loaded firmware image.
      
      Because the NOBITS region may be at a lower address than the rest of
      BL31, __RW_{START,END}__ and __BL31_{START,END}__ cannot include this
      region, or el3_entrypoint_common would attempt to invalidate the dcache
      for the entire address space. New symbols __NOBITS_{START,END}__ are
      added when SEPARATE_NOBITS_REGION is enabled, and the dcache for the
      NOBITS region is invalidated separately.
      Signed-off-by: Samuel Holland <samuel@sholland.org>
      Change-Id: Idedfec5e4dbee77e94f2fdd356e6ae6f4dc79d37
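      A sketch of the resulting cache-maintenance flow, assuming TF-A's
      inv_dcache_range() helper and the linker symbols named above; treat the
      function as illustrative of the split, not as the actual entrypoint
      code:
      
          #include <stddef.h>
          #include <stdint.h>
          
          /* Linker-provided boundaries (declarations only). */
          extern char __RW_START__[], __RW_END__[];
          extern char __NOBITS_START__[], __NOBITS_END__[];
          
          /* TF-A cache helper: invalidate [addr, addr + size). */
          void inv_dcache_range(uintptr_t addr, size_t size);
          
          /* With SEPARATE_NOBITS_REGION, the two ranges are invalidated
           * independently instead of as one span covering the gap. */
          static void invalidate_bl31_data_regions(void)
          {
                  inv_dcache_range((uintptr_t)__RW_START__,
                                   (size_t)(__RW_END__ - __RW_START__));
          #if SEPARATE_NOBITS_REGION
                  inv_dcache_range((uintptr_t)__NOBITS_START__,
                                   (size_t)(__NOBITS_END__ - __NOBITS_START__));
          #endif
          }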
  23. 20 Dec, 2019 3 commits
    • spm-mm: Rename component makefile · 442e0928
      Paul Beesley authored
      
      
      Change-Id: Idcd2a35cd2b30d77a7ca031f7e0172814bdb8cab
      Signed-off-by: Paul Beesley <paul.beesley@arm.com>
    • spm: Remove SPM Alpha 1 prototype and support files · 538b0020
      Paul Beesley authored
      
      
      The Secure Partition Manager (SPM) prototype implementation is
      being removed. This is preparatory work for putting in place a
      dispatcher component that, in turn, enables partition managers
      at S-EL2 / S-EL1.
      
      This patch removes:
      
      - The core service files (std_svc/spm)
      - The Resource Descriptor headers (include/services)
      - SPRT protocol support and service definitions
      - SPCI protocol support and service definitions
      
      Change-Id: Iaade6f6422eaf9a71187b1e2a4dffd7fb8766426
      Signed-off-by: Paul Beesley <paul.beesley@arm.com>
      Signed-off-by: Artsem Artsemenka <artsem.artsemenka@arm.com>
    • Remove dependency between SPM_MM and ENABLE_SPM build flags · 3f3c341a
      Paul Beesley authored
      
      
      There are two different implementations of Secure Partition
      management in TF-A. One is based on the "Management Mode" (MM)
      design, the other is based on the Secure Partition Client Interface
      (SPCI) specification. Currently there is a dependency between their
      build flags that shouldn't exist, making further development
      harder than it should be. This patch removes that
      dependency, making the two flags function independently.
      
      Before: ENABLE_SPM=1 is required for using either implementation.
              By default, the SPCI-based implementation is enabled and
              this is overridden if SPM_MM=1.
      
      After: ENABLE_SPM=1 enables the SPCI-based implementation.
             SPM_MM=1 enables the MM-based implementation.
             The two build flags are mutually exclusive.
      
      Note that the name of the ENABLE_SPM flag remains a bit
      ambiguous - this will be improved in a subsequent patch. For this
      patch the intention was to leave the name as-is so that it is
      easier to track the changes that were made.
      
      Change-Id: I8e64ee545d811c7000f27e8dc8ebb977d670608a
      Signed-off-by: Paul Beesley <paul.beesley@arm.com>
  24. 18 Dec, 2019 1 commit
  25. 17 Dec, 2019 2 commits
  26. 12 Dec, 2019 1 commit
    • PIE: make call to GDT relocation fixup generalized · da90359b
      Manish Pandey authored
      When a firmware image is compiled as a Position Independent Executable,
      it needs to request the GDT relocation fixup by passing the size of the
      memory region to the el3_entrypoint_common macro.
      The global descriptor table fixup is done early during the cold boot
      process of the primary core.
      
      Currently only BL31 supports PIE, but in the future, when BL2_AT_EL3 is
      compiled as PIE, it can simply pass the fixup size to the common el3
      entrypoint macro to fix up the GDT.
      
      The reason for this patch was to overcome the bug introduced by SHA
      330ead80, which called the fixup routine for each core, re-initializing
      the global pointers and thus overwriting any changes made by the
      previous core.
      
      Change-Id: I55c792cc3ea9e7eef34c2e4653afd04572c4f055
      Signed-off-by: Manish Pandey <manish.pandey2@arm.com>
  27. 04 Dec, 2019 1 commit
    • Reduce space lost to object alignment · ebd6efae
      Samuel Holland authored
      
      
      Currently, sections within .text/.rodata/.data/.bss are emitted in the
      order they are seen by the linker. This leads to wasted space when a
      section with a larger alignment follows one with a smaller alignment.
      We can avoid this wasted space by sorting the sections.
      
      To take full advantage of this, we must disable generation of common
      symbols, so "common" data can be sorted along with the rest of .bss.
      
      An example of the improvement, from `make DEBUG=1 PLAT=sun50i_a64 bl31`:
        .text   => no change
        .rodata => 16 bytes saved
        .data   => 11 bytes saved
        .bss    => 576 bytes saved
      
      As a side effect, the addition of `-fno-common` in TF_CFLAGS makes it
      easier to spot bugs in header files.
      Signed-off-by: Samuel Holland <samuel@sholland.org>
      Change-Id: I073630a9b0b84e7302a7a500d4bb4b547be01d51
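      The -fno-common side effect mentioned above can be illustrated with a
      hypothetical header (file and symbol names invented for the example):
      
          /* hypothetical_timer.h: a definition that accidentally lives in a
           * header instead of a declaration. */
          int timer_count;        /* should be: extern int timer_count; */
          
          /* With common symbols enabled, two .c files including this header
           * silently merge the duplicate tentative definitions. With
           * -fno-common, the linker reports a multiple-definition error,
           * exposing the bug. */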
  28. 04 Oct, 2019 1 commit
    • Neoverse N1 Errata Workaround 1542419 · 80942622
      laurenw-arm authored
      
      
      The coherent I-cache is causing a prefetch violation: when the core
      executes an instruction that has recently been modified, the core might
      fetch a stale instruction, which violates the ordering of instruction
      fetches.
      
      The workaround includes an instruction sequence targeting implementation
      defined registers to trap all EL0 IC IVAU instructions to EL3, and a
      trap handler that executes a TLB inner-shareable invalidation to an
      arbitrary address followed by a DSB.
      Signed-off-by: Lauren Wehrmeister <lauren.wehrmeister@arm.com>
      Change-Id: Ic3b7cbb11cf2eaf9005523ef5578a372593ae4d6
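      A rough C sketch of the trap handler's body (the real handler is
      assembly in the CPU-specific code, and the implementation defined
      register writes that set up the trap are not shown); the TLBI variant
      chosen here is one possible way to perform an inner-shareable
      invalidation to an arbitrary address:
      
          #include <stdint.h>
          
          /* Executed in EL3 when a trapped EL0 "ic ivau" arrives: a
           * broadcast TLB invalidation to any address, then a DSB, as the
           * workaround above prescribes. The address value is irrelevant. */
          static void errata_1542419_tlbi_dsb(void)
          {
                  uint64_t arbitrary_va = 0;
          
                  __asm__ volatile("tlbi vae3is, %0" :: "r"(arbitrary_va));
                  __asm__ volatile("dsb ish");
                  /* Returning to the trapped context (and completing the
                   * original IC IVAU) is handled elsewhere. */
          }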
  29. 13 Sep, 2019 1 commit
    • Refactor ARMv8.3 Pointer Authentication support code · ed108b56
      Alexei Fedorov authored
      
      
      This patch provides the following features and makes modifications
      listed below:
      - Individual APIAKey key generation for each CPU.
      - New key generation on every BL31 warm boot and TSP CPU On event.
      - Per-CPU storage of APIAKey added in percpu_data[]
        of cpu_data structure.
      - `plat_init_apiakey()` function replaced with `plat_init_apkey()`,
        which returns a 128-bit value and uses the Generic Timer physical
        counter value to increase the randomness of the generated key.
        The new function can be used for generation of all ARMv8.3-PAuth keys.
      - ARMv8.3-PAuth specific code placed in `lib/extensions/pauth`.
      - New `pauth_init_enable_el1()` and `pauth_init_enable_el3()` functions
        generate, program and enable APIAKey_EL1 for EL1 and EL3 respectively;
        `pauth_disable_el1()` and `pauth_disable_el3()` functions disable
        PAuth for EL1 and EL3 respectively;
        `pauth_load_bl31_apiakey()` loads saved per-CPU APIAKey_EL1 from
        cpu-data structure.
      - Combined `save_gp_pauth_registers()` function replaces calls to
        `save_gp_registers()` and `pauth_context_save()`;
        `restore_gp_pauth_registers()` replaces `pauth_context_restore()`
        and `restore_gp_registers()` calls.
      - `restore_gp_registers_eret()` function removed with corresponding
        code placed in `el3_exit()`.
      - Fixed the issue where the `pauth_t pauth_ctx` structure allocated space
        for 12 uint64_t PAuth registers instead of 10, by removing the macro
        CTX_PACGAKEY_END from `include/lib/el3_runtime/aarch64/context.h`
        and assigning its value to CTX_PAUTH_REGS_END.
      - Use of the MODE_SP_ELX and MODE_SP_EL0 macro definitions
        in the `msr spsel` instruction instead of hard-coded values.
      - Changes in documentation related to ARMv8.3-PAuth and ARMv8.5-BTI.
      
      Change-Id: Id18b81cc46f52a783a7e6a09b9f149b6ce803211
      Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
  30. 11 Sep, 2019 1 commit
    • Add UBSAN support and handlers · 1f461979
      Justin Chadwell authored
      
      
      This patch adds support for the Undefined Behaviour sanitizer. There are
      two types of support offered: minimalistic trapping support, which
      essentially crashes immediately on undefined behaviour, and full support
      with full debug messages.
      
      The full support relies on ubsan.c, which has been adapted from code
      used by OPTEE.
      
      Change-Id: I417c810f4fc43dcb56db6a6a555bfd0b38440727
      Signed-off-by: Justin Chadwell <justin.chadwell@arm.com>
  31. 29 Aug, 2019 1 commit
  32. 21 Aug, 2019 1 commit
    • AArch64: Disable Secure Cycle Counter · e290a8fc
      Alexei Fedorov authored
      
      
      This patch fixes an issue where Secure world timing information
      can be leaked because the Secure Cycle Counter is not disabled.
      For ARMv8.5, the counter gets disabled by setting the MDCR_EL3.SCCD
      bit on CPU cold/warm boot.
      For earlier architectures, the PMCR_EL0 register is saved/restored
      on Secure world entry/exit from/to the Non-secure state, and cycle
      counting gets disabled by setting the PMCR_EL0.DP bit.
      The 'include/aarch64/arch.h' header file was tidied up and new
      ARMv8.5-PMU related definitions were added.
      
      Change-Id: I6f56db6bc77504634a352388990ad925a69ebbfa
      Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
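      The two paths described above reduce to a few system-register updates;
      a sketch in C, with the SCCD and DP bit positions taken from the Arm ARM
      but worth re-checking, and helper names that are illustrative only:
      
          #include <stdint.h>
          
          #define MDCR_EL3_SCCD_BIT       (1ULL << 23)  /* ARMv8.5-PMU */
          #define PMCR_EL0_DP_BIT         (1ULL << 5)
          
          #define READ_SYSREG(reg)                                        \
                  ({ uint64_t _v;                                         \
                     __asm__ volatile("mrs %0, " #reg : "=r"(_v));        \
                     _v; })
          #define WRITE_SYSREG(reg, val)                                  \
                  __asm__ volatile("msr " #reg ", %0" :: "r"((uint64_t)(val)))
          
          /* ARMv8.5 path: prohibit cycle counting in Secure state outright,
           * done once on cold/warm boot. */
          void disable_secure_cycle_counter_v85(void)
          {
                  WRITE_SYSREG(mdcr_el3, READ_SYSREG(mdcr_el3) | MDCR_EL3_SCCD_BIT);
          }
          
          /* Pre-v8.5 path: on entry to the Secure world, save PMCR_EL0 and
           * set DP; the caller restores the saved value when returning to
           * the Non-secure state. */
          uint64_t secure_entry_save_pmcr(void)
          {
                  uint64_t saved = READ_SYSREG(pmcr_el0);
          
                  WRITE_SYSREG(pmcr_el0, saved | PMCR_EL0_DP_BIT);
                  return saved;
          }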