1. 21 Apr, 2021 1 commit
    • Avoid the use of linker *_SIZE__ macros · fb4f511f
      Yann Gautier authored
      The use of end addresses is preferred over the size of sections.
      This was already done for some AARCH64 files for PIE with commit [1];
      extra explanation can be found in its commit message.
      Align the remaining AARCH64 files accordingly.
      
      For AARCH32 files, this is required to prepare PIE support introduction.
      
       [1] f1722b69 ("PIE: Use PC relative adrp/adr for symbol reference")
      
      Change-Id: I8f1c06580182b10c680310850f72904e58a54d7d
      Signed-off-by: Yann Gautier <yann.gautier@st.com>
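
      As a rough illustration of the preferred pattern (not code from this
      commit, and the symbol names are only examples), a region's size can be
      derived in C from its start/end linker symbols instead of a generated
      __*_SIZE__ macro:

          /* Minimal sketch: compute a section size from its boundary symbols.
           * __BSS_START__/__BSS_END__ are assumed to be exported by the image's
           * linker script; the names are illustrative. */
          #include <string.h>

          extern char __BSS_START__[];
          extern char __BSS_END__[];

          static void zero_bss(void)
          {
                  size_t size = (size_t)(__BSS_END__ - __BSS_START__);

                  memset(__BSS_START__, 0, size);
          }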
  2. 11 Feb, 2021 1 commit
    • bl32: Enable TRNG service build · 0e14948e
      Andre Przywara authored
      
      
      The Trusted Random Number Generator service uses the standard SMC
      service dispatcher, running in BL31. For that reason we list the files
      implementing the service in bl31.mk.
      However, when building for a 32-bit TF-A runtime, sp_min.mk is the
      Makefile snippet used, so we have to add the files there as well.
      
      This fixes 32-bit builds of platforms that provide the TRNG service.
      
      Change-Id: I8be61522300d36477a9ee0a9ce159a140390b254
      Signed-off-by: Andre Przywara <andre.przywara@arm.com>
  3. 11 Dec, 2020 1 commit
    • Add support for FEAT_MTPMU for Armv8.6 · 0063dd17
      Javier Almansa Sobrino authored
      
      
      If FEAT_PMUv3 is implemented and the PMEVTYPER<n>(_EL0).MT bit is
      implemented as well, it is possible to control whether PMU counters
      take into account events happening on other threads.
      
      If FEAT_MTPMU is implemented, EL3 (or EL2) can override the MT bit,
      forcing its effective state to 0 regardless of any write to it.
      
      This patch introduces the DISABLE_MTPMU flag, which allows multithreaded
      event counting to be disabled from EL3 (or EL2). The flag is disabled
      by default so the behavior is consistent with architectures
      that do not implement FEAT_MTPMU.
      Signed-off-by: Javier Almansa Sobrino <javier.almansasobrino@arm.com>
      Change-Id: Iee3a8470ae8ba13316af1bd40c8d4aa86e0cb85e
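
      A hedged sketch of what DISABLE_MTPMU amounts to at EL3. The field and
      bit positions below are my reading of the architecture, not taken from
      this commit, and should be checked against TF-A's arch.h:

          #include <stdint.h>

          #define ID_AA64DFR0_MTPMU_SHIFT  48U            /* assumed field position */
          #define ID_AA64DFR0_MTPMU_MASK   0xfULL
          #define MDCR_MTPME_BIT           (1ULL << 28)   /* assumed bit position */

          /* System register accessors in the style of TF-A's arch_helpers.h. */
          extern uint64_t read_id_aa64dfr0_el1(void);
          extern uint64_t read_mdcr_el3(void);
          extern void write_mdcr_el3(uint64_t val);

          static void disable_mtpmu_if_present(void)
          {
                  uint64_t dfr0 = read_id_aa64dfr0_el1();

                  if (((dfr0 >> ID_AA64DFR0_MTPMU_SHIFT) & ID_AA64DFR0_MTPMU_MASK) == 1ULL) {
                          /* Force the effective value of PMEVTYPER<n>.MT to 0. */
                          write_mdcr_el3(read_mdcr_el3() & ~MDCR_MTPME_BIT);
                  }
          }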
  4. 13 Nov, 2020 1 commit
    • TSP: Fix GCC 11.0.0 compilation error. · caff3c87
      Alexei Fedorov authored
      
      
      This patch fixes the following compilation error
      reported by aarch64-none-elf-gcc 11.0.0:
      
      bl32/tsp/tsp_main.c: In function 'tsp_smc_handler':
      bl32/tsp/tsp_main.c:393:9: error: 'tsp_get_magic'
       accessing 32 bytes in a region of size 16
       [-Werror=stringop-overflow=]
        393 |         tsp_get_magic(service_args);
            |         ^~~~~~~~~~~~~~~~~~~~~~~~~~~
      bl32/tsp/tsp_main.c:393:9: note: referencing argument 1
       of type 'uint64_t *' {aka 'long long unsigned int *'}
      In file included from bl32/tsp/tsp_main.c:19:
      bl32/tsp/tsp_private.h:64:6: note: in a call to function 'tsp_get_magic'
         64 | void tsp_get_magic(uint64_t args[4]);
            |      ^~~~~~~~~~~~~
      
      by changing the declaration of the tsp_get_magic function from
      void tsp_get_magic(uint64_t args[4]);
      to
      uint128_t tsp_get_magic(void);
      which returns the arguments directly in the x0 and x1 registers.
      
      In bl32/tsp/tsp_main.c the current tsp_smc_handler()
      implementation calls tsp_get_magic(service_args),
      where the service_args array is declared as
      uint64_t service_args[2];
      and tsp_get_magic() in bl32/tsp/aarch64/tsp_request.S
      copies only 2 registers into the output buffer:
      	/* Store returned arguments to the array */
      	stp	x0, x1, [x4, #0]
      
      Change-Id: Ib34759fc5d7bb803e6c734540d91ea278270b330
      Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
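
      With the new signature, the caller unpacks the two magic words itself.
      A minimal sketch of that shape (not the actual TF-A code; uint128_t is
      spelled via the GCC/Clang __uint128_t builtin to keep the example
      self-contained):

          #include <stdint.h>

          typedef __uint128_t uint128_t;   /* stand-in for TF-A's uint128_t */

          uint128_t tsp_get_magic(void);   /* returns magic in x0 (low) and x1 (high) */

          static void get_magic_words(uint64_t service_args[2])
          {
                  uint128_t magic = tsp_get_magic();

                  service_args[0] = (uint64_t)magic;          /* low half, from x0 */
                  service_args[1] = (uint64_t)(magic >> 64);  /* high half, from x1 */
          }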
  5. 12 Oct, 2020 1 commit
    • Increase type widths to satisfy width requirements · d7b5f408
      Jimmy Brisson authored
      
      
      Usually, C has no problem up-converting types to larger bit sizes. MISRA
      rule 10.7 requires that you either not do this or be very explicit about it.
      This patch resolves the following required rule violation:
      
          bl1/aarch64/bl1_context_mgmt.c:81:[MISRA C-2012 Rule 10.7 (required)]<None>
          The width of the composite expression "0U | ((mode & 3U) << 2U) | 1U |
          0x3c0U" (32 bits) is less that the right hand operand
          "18446744073709547519ULL" (64 bits).
      
      This also resolves MISRA defects such as:
      
          bl2/aarch64/bl2arch_setup.c:18:[MISRA C-2012 Rule 12.2 (required)]
          In the expression "3U << 20", shifting more than 7 bits, the number
          of bits in the essential type of the left expression, "3U", is
          not allowed.
      
      Further, MISRA requires that shifts not overflow. The definition of
      PAGE_SIZE was (1U << 12), and the essential type of 1U is only 8 bits
      wide. This caused about 50 issues. This patch fixes the violation by
      changing the definition to 1UL << 12. Since 1UL is at least 32 bits
      wide, this should not create any issues for AArch32.
      
      This patch also contains a fix for a build failure in the sun50i_a64
      platform. Specifically, these MISRA fixes removed a single 'and'
      instruction,
      
          92407e73        and     x19, x19, #0xffffffff
      
      from the cm_setup_context function, which caused a relocation in
      psci_cpus_on_start to require a linker-generated stub. This increased the
      size of the .text section and caused a later alignment to go over a
      page boundary and round up to the end of RAM before placing the .data
      section. That section is of non-zero size and therefore causes a link
      error.
      
      The fix included in this patch reorders the functions at link time
      without changing their ordering with respect to alignment.
      
      Change-Id: I76b4b662c3d262296728a8b9aab7a33b02087f16
      Signed-off-by: Jimmy Brisson <jimmy.brisson@arm.com>
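
      Two small illustrations of the kinds of changes described above (the
      constants and names are illustrative, not the exact expressions from the
      TF-A sources):

          #include <stdint.h>

          /* The shift now fits the type: 1UL is at least 32 bits wide. */
          #define PAGE_SIZE       (1UL << 12)

          static uint64_t build_mode_bits(uint64_t mode)
          {
                  /* Use 64-bit constants so the composite expression already has
                   * the width of the 64-bit mask it is combined with below. */
                  uint64_t bits = ((mode & 3ULL) << 2U) | 1ULL;

                  return bits & ~0x3c0ULL;
          }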
  6. 05 Oct, 2020 2 commits
  7. 29 Jun, 2020 1 commit
    • linker_script: move .rela.dyn section to bl_common.ld.h · e8ad6168
      Masahiro Yamada authored
      
      
      The .rela.dyn section is the same for BL2-AT-EL3, BL31, TSP.
      
      Move it to the common header file.
      
      I slightly changed the definition so that we can do "RELA_SECTION >RAM".
      It still produces equivalent ELF images.
      
      Please note I got rid of '.' from the VMA field. Otherwise, if the end
      of the previous .data section is not 8-byte aligned, it fails to link:
      
      aarch64-linux-gnu-ld.bfd: warning: changing start of section .rela.dyn by 4 bytes
      aarch64-linux-gnu-ld.bfd: warning: changing start of section .rela.dyn by 4 bytes
      aarch64-linux-gnu-ld.bfd: warning: changing start of section .rela.dyn by 4 bytes
      make: *** [Makefile:1071: build/qemu/release/bl31/bl31.elf] Error 1
      
      Change-Id: Iba7422d99c0374d4d9e97e6fd47bae129dba5cc9
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
  8. 25 Apr, 2020 1 commit
    • linker_script: move .data section to bl_common.ld.h · caa3e7e0
      Masahiro Yamada authored
      Move the data section to the common header.
      
      I slightly tweaked some scripts as follows:
      
      [1] bl1.ld.S has ALIGN(16). I added the DATA_ALIGN macro, which is 1
          by default, but overridden by bl1.ld.S. Currently, the ALIGN(16)
          of the .data section is redundant because commit 41286590
          ("Fix boot failures on some builds linked with ld.lld.") padded
          out the previous section to work around an issue in LLD versions
          <= 10.0. This will be fixed in a future release of LLVM, so
          I am keeping the proper way to align the LMA.
      
      [2] bl1.ld.S and bl2_el3.ld.S define __DATA_RAM_{START,END}__ instead
          of __DATA_{START,END}__. I put them out of the .data section.
      
      [3] SORT_BY_ALIGNMENT() is missing from tsp.ld.S, sp_min.ld.S, and
          mediatek/mt6795/bl31.ld.S. This commit adds SORT_BY_ALIGNMENT()
          for all images, so the symbol order in those three will change,
          but I do not think it is a big deal.
      
      Change-Id: I215bb23c319f045cd88e6f4e8ee2518c67f03692
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
  9. 24 Apr, 2020 1 commit
  10. 02 Apr, 2020 3 commits
    • linker_script: move bss section to bl_common.ld.h · a7739bc7
      Masahiro Yamada authored
      
      
      Move the bss section to the common header. This adds BAKERY_LOCK_NORMAL
      and PMF_TIMESTAMP, which previously existed only in BL31. This is not
      a big deal because unused data should not be compiled in the first
      place. I believe this should be controlled by BL*_SOURCES in Makefiles,
      not by linker scripts.
      
      I investigated BL1, BL2, BL2U, BL31 for plat=fvp, and BL2-AT-EL3,
      BL31, BL31 for plat=uniphier. I did not see any more unexpected
      code additions.
      
      The bss section has a bigger alignment requirement. I added BSS_ALIGN
      for this.
      
      Currently, SORT_BY_ALIGNMENT() is missing in sp_min.ld.S, and with this
      change, the BSS symbols in SP_MIN will be sorted by alignment.
      This is not a big deal (or is even better in terms of image size).
      
      Change-Id: I680ee61f84067a559bac0757f9d03e73119beb33
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
    • linker_script: replace common read-only data with RODATA_COMMON · 0a0a7a9a
      Masahiro Yamada authored
      The common section data are repeated in many linker scripts (often
      twice in each script to support SEPARATE_CODE_AND_RODATA). When you
      add a new read-only data section, you end up touching lots of
      places.
      
      After this commit, you will only need to touch bl_common.ld.h when
      you add a new section to RODATA_COMMON.
      
      Replace a series of RO sections with RODATA_COMMON, which contains
      6 sections, some of which did not exist before.
      
      This is not a big deal because unneeded data should not be compiled
      in the first place. I believe this should be controlled by BL*_SOURCES
      in Makefiles, not by linker scripts.
      
      When I was working on this commit, the BL1 image size increased
      due to the fconf_populator. Commit c452ba15 ("fconf: exclude
      fconf_dyn_cfg_getter.c from BL1_SOURCES") fixed this issue.
      
      I investigated BL1, BL2, BL2U, BL31 for plat=fvp, and BL2-AT-EL3,
      BL31, BL31 for plat=uniphier. I did not see any more unexpected
      code additions.
      
      Change-Id: I5d14d60dbe3c821765bce3ae538968ef266f1460
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
    • linker_script: move more common code to bl_common.ld.h · 9fb288a0
      Masahiro Yamada authored
      
      
      These are mostly used to collect data from special structures,
      and they are repeated in many linker scripts.
      
      To differentiate the alignment size between aarch32/aarch64, I added
      a new macro, STRUCT_ALIGN.
      
      While I moved PMF_SVC_DESCS, I dropped the #if ENABLE_PMF conditional.
      As you can see in include/lib/pmf/pmf_helpers.h, PMF_REGISTER_SERVICE*
      are no-ops when ENABLE_PMF=0. So, the pmf_svc_descs and
      pmf_timestamp_array data are not populated.
      
      Change-Id: I3f4ab7fa18f76339f1789103407ba76bda7e56d0
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
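
      For context, a generic sketch of the "collect data from special
      structures" pattern these linker-script macros support: objects register
      themselves into a named section and the code walks it between
      linker-provided bounds. All names here (svc_desc_t, .svc_descs,
      __SVC_DESCS_START__/__SVC_DESCS_END__) are illustrative, not TF-A's:

          typedef struct {
                  const char *name;
                  int (*init)(void);
          } svc_desc_t;

          /* Drop a descriptor into the collected section. */
          #define REGISTER_SVC_DESC(_name, _init)                                \
                  static const svc_desc_t _name##_desc                           \
                  __attribute__((section(".svc_descs"), used)) = { #_name, _init }

          /* Bounds placed by the linker script around the .svc_descs output section. */
          extern const svc_desc_t __SVC_DESCS_START__[];
          extern const svc_desc_t __SVC_DESCS_END__[];

          static void init_all_services(void)
          {
                  for (const svc_desc_t *d = __SVC_DESCS_START__; d < __SVC_DESCS_END__; d++)
                          (void)d->init();
          }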
  11. 31 Mar, 2020 1 commit
  12. 20 Mar, 2020 1 commit
  13. 11 Mar, 2020 2 commits
    • fconf: necessary modifications to support fconf in BL31 & SP_MIN · 26d1e0c3
      Madhukar Pappireddy authored
      
      
      This patch adds the infrastructure necessary to integrate the fconf
      framework in BL31 & SP_MIN. It creates a few populator() functions that
      parse the HW_CONFIG device tree and registers them with the fconf
      framework. Many of the changes are only applicable to the fvp platform.
      
      This patch:
      1. Adds the necessary symbols and sections to the BL31 and SP_MIN linker scripts
      2. Adds the necessary memory map entry for translation in BL31 and SP_MIN
      3. Creates an abstraction layer for hardware configuration based on
         the fconf framework
      4. Adds the necessary changes to the build flow (makefiles)
      5. Adds a minimal callback to read the hw_config dtb, capturing properties
         related to the GIC (interrupt-controller node)
      6. Updates the fconf documentation
      
      Change-Id: Ib6292071f674ef093962b9e8ba0d322b7bf919af
      Signed-off-by: Madhukar Pappireddy <madhukar.pappireddy@arm.com>
    • Factor xlat_table sections in linker scripts out into a header file · 665e71b8
      Masahiro Yamada authored
      
      
      TF-A has many linker scripts, at least one for each BL
      image, and some platforms have their own. They duplicate quite
      similar code (and comments).
      
      When we change the linker scripts, we end up touching
      many files. This is not good from a maintainability perspective.
      
      In the Linux kernel, the common code is macrofied in
      include/asm-generic/vmlinux.lds.h, which is included from each arch
      linker script, arch/*/kernel/vmlinux.lds.S.
      
      TF-A can follow this approach. Let's factor out the common code into
      include/common/bl_common.ld.h.
      
      As a start point, this commit factors out the xlat_table section.
      
      Change-Id: Ifa369e9b48e8e12702535d721cc2a16d12397895
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
  14. 06 Mar, 2020 1 commit
  15. 24 Jan, 2020 1 commit
  16. 22 Jan, 2020 1 commit
  17. 17 Dec, 2019 1 commit
  18. 26 Sep, 2019 1 commit
    • AArch32: Disable Secure Cycle Counter · c3e8b0be
      Alexei Fedorov authored
      
      
      This patch changes the implementation for disabling the Secure Cycle
      Counter. For ARMv8.5 the counter gets disabled by setting the
      SDCR.SCCD bit on CPU cold/warm boot. For earlier
      architectures, the PMCR register is saved/restored on Secure
      world entry/exit from/to Non-secure state, and cycle counting
      gets disabled by setting the PMCR.DP bit.
      New ARMv8.5-PMU related definitions were added to the
      'include/aarch32/arch.h' header file.
      
      Change-Id: Ia8845db2ebe8de940d66dff479225a5b879316f8
      Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
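
      A hedged C sketch of the two paths described above. The bit positions
      are my reading of the architecture and should be checked against arch.h;
      the accessor names follow the style of TF-A's AArch32 arch helpers:

          #include <stdint.h>

          #define SDCR_SCCD_BIT   (1U << 23)  /* assumed: Secure cycle count disable (ARMv8.5) */
          #define PMCR_DP_BIT     (1U << 5)   /* assumed: disable cycle counter when prohibited */

          extern uint32_t read_sdcr(void);
          extern void write_sdcr(uint32_t val);
          extern uint32_t read_pmcr(void);
          extern void write_pmcr(uint32_t val);

          static void disable_secure_cycle_counter(int have_armv8_5_pmu)
          {
                  if (have_armv8_5_pmu) {
                          /* Disable once, on cold/warm boot. */
                          write_sdcr(read_sdcr() | SDCR_SCCD_BIT);
                  } else {
                          /* Done on each Secure world entry, with PMCR saved/restored. */
                          write_pmcr(read_pmcr() | PMCR_DP_BIT);
                  }
          }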
  19. 13 Sep, 2019 1 commit
    • Refactor ARMv8.3 Pointer Authentication support code · ed108b56
      Alexei Fedorov authored
      
      
      This patch provides the following features and makes modifications
      listed below:
      - Individual APIAKey key generation for each CPU.
      - New key generation on every BL31 warm boot and TSP CPU On event.
      - Per-CPU storage of APIAKey added in percpu_data[]
        of cpu_data structure.
      - `plat_init_apiakey()` function replaced with `plat_init_apkey()`,
        which returns a 128-bit value and uses the Generic Timer physical
        counter value to increase the randomness of the generated key.
        The new function can be used for generation of all ARMv8.3-PAuth keys.
      - ARMv8.3-PAuth specific code placed in `lib/extensions/pauth`.
      - New `pauth_init_enable_el1()` and `pauth_init_enable_el3()` functions
        generate, program and enable APIAKey_EL1 for EL1 and EL3 respectively;
        `pauth_disable_el1()` and `pauth_disable_el3()` functions disable
        PAuth for EL1 and EL3 respectively;
        `pauth_load_bl31_apiakey()` loads the saved per-CPU APIAKey_EL1 from
        the cpu-data structure.
      - Combined `save_gp_pauth_registers()` function replaces calls to
        `save_gp_registers()` and `pauth_context_save()`;
        `restore_gp_pauth_registers()` replaces `pauth_context_restore()`
        and `restore_gp_registers()` calls.
      - `restore_gp_registers_eret()` function removed, with the corresponding
        code placed in `el3_exit()`.
      - Fixed the issue where the `pauth_t pauth_ctx` structure allocated space
        for 12 uint64_t PAuth registers instead of 10, by removing the macro
        CTX_PACGAKEY_END from `include/lib/el3_runtime/aarch64/context.h`
        and assigning its value to CTX_PAUTH_REGS_END.
      - Use of the MODE_SP_ELX and MODE_SP_EL0 macro definitions
        in the `msr spsel` instruction instead of hard-coded values.
      - Changes in documentation related to ARMv8.3-PAuth and ARMv8.5-BTI.
      
      Change-Id: Id18b81cc46f52a783a7e6a09b9f149b6ce803211
      Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
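
      One plausible shape for the new platform hook, shown purely as a sketch:
      the mixing scheme and seed constant are illustrative, a real port would
      use a proper entropy source and merely stir in the counter value, and
      read_cntpct_el0() is assumed to be available as in TF-A's arch helpers:

          #include <stdint.h>

          typedef __uint128_t uint128_t;            /* stand-in for TF-A's uint128_t */

          extern uint64_t read_cntpct_el0(void);    /* Generic Timer physical count */

          uint128_t plat_init_apkey(void)
          {
                  uint64_t seed = 0xc0ffee11deadbeefULL;   /* illustrative only */
                  uint64_t cnt = read_cntpct_el0();

                  /* Fold the counter into both halves of the 128-bit key. */
                  return ((uint128_t)(seed ^ cnt) << 64) |
                         (cnt * 6364136223846793005ULL);
          }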
  20. 09 Sep, 2019 1 commit
    • Enable MTE support in both secure and non-secure worlds · 9dd94382
      Justin Chadwell authored
      
      
      This patch adds support for the new Memory Tagging Extension arriving in
      ARMv8.5. MTE support is now enabled by default on systems that support
      it at EL0. To enable it at higher ELs for both the non-secure and the
      secure world, the CTX_INCLUDE_MTE_REGS build flag includes register
      saving and restoring when necessary in order to prevent register leakage
      between the worlds.
      
      Change-Id: I2d4ea993d6b11654ea0d4757d00ca20d23acf36c
      Signed-off-by: Justin Chadwell <justin.chadwell@arm.com>
  21. 01 Aug, 2019 1 commit
    • Replace __ASSEMBLY__ with compiler-builtin __ASSEMBLER__ · d5dfdeb6
      Julius Werner authored
      
      
      NOTE: __ASSEMBLY__ macro is now deprecated in favor of __ASSEMBLER__.
      
      All common C compilers predefine a macro called __ASSEMBLER__ when
      preprocessing a .S file. There is no reason for TF-A to define its own
      __ASSEMBLY__ macro for this purpose instead. To unify code with the
      export headers (which use __ASSEMBLER__ to avoid one extra dependency),
      let's deprecate __ASSEMBLY__ and switch the code base over to the
      predefined standard.
      
      Change-Id: Id7d0ec8cf330195da80499c68562b65cb5ab7417
      Signed-off-by: Julius Werner <jwerner@chromium.org>
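
      Typical usage after the switch, as a small illustrative header (names
      are examples): a file shared between C and assembly sources guards its
      C-only declarations with the compiler-provided macro.

          #ifndef EXAMPLE_SHARED_H
          #define EXAMPLE_SHARED_H

          #define WIDGET_MAGIC    0x57494447   /* usable from both C and .S files */

          #ifndef __ASSEMBLER__
          #include <stdint.h>

          uint32_t widget_get_magic(void);     /* C-only prototype */
          #endif /* __ASSEMBLER__ */

          #endif /* EXAMPLE_SHARED_H */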
  22. 10 Jul, 2019 1 commit
  23. 24 May, 2019 1 commit
    • Add support for Branch Target Identification · 9fc59639
      Alexei Fedorov authored
      
      
      This patch adds the functionality needed for platforms to provide the
      Branch Target Identification (BTI) extension, introduced to AArch64
      in Armv8.5-A, which adds the BTI instruction used to mark valid targets
      for indirect branches. The patch sets the new GP bit [50] in the stage 1
      Translation Table Block and Page entries to denote guarded EL3 code
      pages, which causes the processor to trap instructions in protected
      pages that try to perform an indirect branch to any instruction other
      than BTI.
      The BTI feature is selected by the BRANCH_PROTECTION option, which
      supersedes the previous ENABLE_PAUTH option used for Armv8.3-A Pointer
      Authentication and is disabled by default. Enabling BTI requires
      compiler support and was tested with GCC versions 9.0.0, 9.0.1 and 10.0.0.
      The assembly macros and helpers are modified to accommodate the BTI
      instruction.
      This is an experimental feature.
      Note: the previous ENABLE_PAUTH build option to enable PAuth in EL3
      is now an internal flag, and the BRANCH_PROTECTION flag should be
      used instead to enable Pointer Authentication.
      Note: the USE_LIBROM=1 option is currently not supported.
      
      Change-Id: Ifaf4438609b16647dc79468b70cd1f47a623362e
      Signed-off-by: Alexei Fedorov <Alexei.Fedorov@arm.com>
  24. 25 Apr, 2019 2 commits
  25. 12 Mar, 2019 1 commit
    • Apply stricter speculative load restriction · 02b57943
      John Tsichritzis authored
      
      
      The SCTLR.DSSBS bit is zero by default thus disabling speculative loads.
      However, we also explicitly set it to zero for BL2 and TSP images when
      each image initialises its context. This is done to ensure that the
      image environment is initialised in a safe state, regardless of the
      reset value of the bit.
      
      Change-Id: If25a8396641edb640f7f298b8d3309d5cba3cd79
      Signed-off-by: John Tsichritzis <john.tsichritzis@arm.com>
  26. 27 Feb, 2019 1 commit
    • TSP: Enable pointer authentication support · 67b6ff9f
      Antonio Nino Diaz authored
      
      
      The size increase after enabling options related to ARMv8.3-PAuth is:
      
      +----------------------------+-------+-------+-------+--------+
      |                            |  text |  bss  |  data | rodata |
      +----------------------------+-------+-------+-------+--------+
      | CTX_INCLUDE_PAUTH_REGS = 1 |   +40 |   +0  |   +0  |   +0   |
      |                            |  0.4% |       |       |        |
      +----------------------------+-------+-------+-------+--------+
      | ENABLE_PAUTH = 1           |  +352 |    +0 |  +16  |   +0   |
      |                            |  3.1% |       | 15.8% |        |
      +----------------------------+-------+-------+-------+--------+
      
      Results calculated with the following build configuration:
      
          make PLAT=fvp SPD=tspd DEBUG=1 \
          SDEI_SUPPORT=1                 \
          EL3_EXCEPTION_HANDLING=1       \
          TSP_NS_INTR_ASYNC_PREEMPT=1    \
          CTX_INCLUDE_PAUTH_REGS=1       \
          ENABLE_PAUTH=1
      
      Change-Id: I6cc1fe0b2345c547dcef66f98758c4eb55fe5ee4
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
  27. 07 Feb, 2019 1 commit
    • locks: linker variables to calculate per-cpu bakery lock size · 596929b9
      Varun Wadekar authored
      
      
      This patch introduces explicit linker variables to mark the start and
      end of the per-cpu bakery lock section to help bakery_lock_normal.c
      calculate the size of the section. This patch removes the previously
      used '__PERCPU_BAKERY_LOCK_SIZE__' linker variable to make the code
      uniform across GNU linker and ARM linker.
      
      Change-Id: Ie0c51702cbc0fe8a2076005344a1fcebb48e7cca
      Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
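
      A sketch of the calculation those markers enable (symbol names follow
      the commit's description but are written here from memory, so treat
      them as assumptions):

          #include <stddef.h>

          /* Start/end of the per-cpu bakery lock section, exported by the linker script. */
          extern char __PERCPU_BAKERY_LOCK_START__[];
          extern char __PERCPU_BAKERY_LOCK_END__[];

          static inline size_t percpu_bakery_lock_size(void)
          {
                  /* End minus start replaces the old __PERCPU_BAKERY_LOCK_SIZE__ value. */
                  return (size_t)(__PERCPU_BAKERY_LOCK_END__ - __PERCPU_BAKERY_LOCK_START__);
          }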
  28. 01 Feb, 2019 1 commit
  29. 15 Jan, 2019 1 commit
    • Correct typographical errors · 8aabea33
      Paul Beesley authored
      
      
      Corrects typos in core code, documentation files, drivers, Arm
      platforms and services.
      
      None of the corrections affect code; changes are limited to comments
      and other documentation.
      
      Change-Id: I5c1027b06ef149864f315ccc0ea473e2a16bfd1d
      Signed-off-by: Paul Beesley <paul.beesley@arm.com>
  30. 04 Jan, 2019 1 commit
    • Sanitise includes across codebase · 09d40e0e
      Antonio Nino Diaz authored
      Enforce full include path for includes. Deprecate old paths.
      
      The following folders inside include/lib have been left unchanged:
      
      - include/lib/cpus/${ARCH}
      - include/lib/el3_runtime/${ARCH}
      
      The reason for this change is that having a global namespace for
      includes isn't a good idea. It defeats one of the advantages of having
      folders and it introduces problems that are sometimes subtle (because
      you may not know the header you are actually including if there are two
      of them).
      
      For example, this patch had to be created because two headers had
      the same name: e0ea0928 ("Fix gpio includes of mt8173 platform
      to avoid collision."). More recently, this patch hit similar
      problems: 46f9b2c3 ("drivers: add tzc380 support").
      
      This problem was introduced in commit 4ecca339 ("Move include and
      source files to logical locations"). At that time, there weren't too
      many headers so it wasn't a real issue. However, time has shown that
      this creates problems.
      
      Platforms that want to preserve the way they include headers may add the
      removed paths to PLAT_INCLUDES, but this is discouraged.
      
      Change-Id: I39dc53ed98f9e297a5966e723d1936d6ccf2fc8f
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
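
      As a before/after illustration of the enforced style (the header name is
      an example, and the snippet is not meant to compile outside the TF-A
      tree):

          /* Deprecated: relies on a flat, global include namespace. */
          #include <xlat_tables_v2.h>

          /* Preferred: path rooted at include/, so the header is unambiguous. */
          #include <lib/xlat_tables/xlat_tables_v2.h>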
  31. 08 Nov, 2018 1 commit
    • Standardise header guards across codebase · c3cf06f1
      Antonio Nino Diaz authored
      
      
      All identifiers, regardless of use, that start with two underscores are
      reserved. This means they can't be used in header guards.
      
      The style that this project now uses is the full name of the file in
      capital letters, followed by 'H'. For example, for a file called
      "uart_example.h", the header guard is UART_EXAMPLE_H.
      
      The exceptions are files that are imported from other projects:
      
      - CryptoCell driver
      - dt-bindings folders
      - zlib headers
      
      Change-Id: I50561bf6c88b491ec440d0c8385c74650f3c106e
      Signed-off-by: Antonio Nino Diaz <antonio.ninodiaz@arm.com>
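
      The guard style described above, for a file named uart_example.h (the
      declaration inside is illustrative):

          #ifndef UART_EXAMPLE_H
          #define UART_EXAMPLE_H

          int uart_example_init(void);   /* illustrative declaration */

          #endif /* UART_EXAMPLE_H */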
  32. 05 Nov, 2018 1 commit
  33. 22 Aug, 2018 1 commit
  34. 11 Jul, 2018 2 commits
    • Add end_vector_entry assembler macro · a9203eda
      Roberto Vargas authored
      
      
      check_vector_size checks whether the size of the vector fits
      in the space reserved for it. This check creates problems with
      the Clang assembler. A new macro, end_vector_entry, is added
      and check_vector_size is deprecated.
      
      The new macro pads the current exception vector up to the next
      exception vector. If the current vector is bigger
      than 32 instructions, it gives an error.
      
      Change-Id: Ie8545cf1003a1e31656a1018dd6b4c28a4eaf671
      Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
    • Add .extab and .exidx sections · ad925094
      Roberto Vargas authored
      
      
      These sections are required by clang when the code is compiled for
      aarch32. They are related to stack unwinding in exceptions, but,
      given the way clang defines and uses them, the linker's
      garbage collector cannot get rid of them.
      
      Change-Id: I085efc0cf77eae961d522472f72c4b5bad2237ab
      Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>