adam.huang / Arm Trusted Firmware / Commits / 0f49d496

Commit 0f49d496, authored Oct 09, 2017 by davidcunado-arm, committed by GitHub on Oct 09, 2017

Merge pull request #1117 from antonio-nino-diaz-arm/an/xlat-improvements

Improvements to the translation tables library v2

Parents: 4d415c11 609c9191
Changes: 11
docs/xlat-tables-lib-v2-design.rst
@@ -66,7 +66,8 @@ map. It is one of the key interfaces to the library. It is identified by:
 - its physical base address;
 - its virtual base address;
 - its size;
-- its attributes.
+- its attributes;
+- its mapping granularity (optional).

 See the ``struct mmap_region`` type in `xlat\_tables\_v2.h`_.
@@ -76,9 +77,37 @@ might create new translation tables, update or split existing ones.
 The region attributes specify the type of memory (for example device or cached
 normal memory) as well as the memory access permissions (read-only or
-read-write, executable or not, secure or non-secure, and so on). See the
-``mmap_attr_t`` enumeration type in `xlat\_tables\_v2.h`_.
+read-write, executable or not, secure or non-secure, and so on). In the case of
+the EL1&0 translation regime, the attributes also specify whether the region is
+a User region (EL0) or Privileged region (EL1). See the ``mmap_attr_t``
+enumeration type in `xlat\_tables\_v2.h`_. Note that for the EL1&0 translation
+regime the Execute Never attribute is set simultaneously for both EL1 and EL0.
+
+The granularity controls the translation table level to go down to when mapping
+the region. For example, assuming the MMU has been configured to use a 4KB
+granule size, the library might map a 2MB memory region using either of the two
+following options:
+
+- using a single level-2 translation table entry;
+- using a level-2 intermediate entry to a level-3 translation table (which
+  contains 512 entries, each mapping 4KB).
+
+The first solution potentially requires fewer translation tables, hence
+potentially less memory. However, if part of this 2MB region is later remapped
+with different memory attributes, the library might need to split the existing
+page tables to refine the mappings. If a single level-2 entry has been used
+here, a level-3 table will need to be allocated on the fly and the level-2
+entry modified to point to this new level-3 table. This has a performance cost
+at run-time.
+
+If the user knows upfront that such a remapping operation is likely to happen
+then they might enforce a 4KB mapping granularity for this 2MB region from the
+beginning; remapping some of these 4KB pages on the fly then becomes a
+lightweight operation.
+
+The region's granularity is an optional field; if it is not specified the
+library will choose the mapping granularity for this region as it sees fit
+(more details can be found in `The memory mapping algorithm`_ section below).
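
As an illustration (not part of the patch itself), a platform that knows part of
a 2MB device window will later be remapped page by page could enforce a 4KB
granularity up front with the ``MAP_REGION2()`` helper introduced by this patch.
The base address, size and region name below are hypothetical:

	/* Hypothetical region: force 4KB granularity so later remaps are cheap. */
	static const mmap_region_t soc_periph =
		MAP_REGION2(0x40000000,		/* physical base address */
			    0x40000000,		/* virtual base address */
			    0x200000,		/* size: 2MB */
			    MT_DEVICE | MT_RW | MT_SECURE,
			    0x1000 /* granularity: 4KB */);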
 Translation Context
 ~~~~~~~~~~~~~~~~~~~

@@ -190,6 +219,11 @@ the ``MAP_REGION*()`` family of helper macros. This is to limit the risk of
 compatibility breaks, should the ``mmap_region`` structure type evolve in the
 future.

+The ``MAP_REGION()`` and ``MAP_REGION_FLAT()`` macros do not allow specifying a
+mapping granularity, which leaves the library implementation free to choose
+it. However, in cases where a specific granularity is required, the
+``MAP_REGION2()`` macro might be used instead.
+
 As explained earlier in this document, when the dynamic mapping feature is
 disabled, there is no notion of dynamic regions. Conceptually, there are only
 static regions. For this reason (and to retain backward compatibility with the

@@ -265,6 +299,9 @@ The architectural module
 Core module
 ~~~~~~~~~~~

+From mmap regions to translation tables
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
 All the APIs in this module work on a translation context. The translation
 context contains the list of ``mmap_region``, which holds the information of all
 the regions that are mapped at any given time. Whenever there is a request to

@@ -288,14 +325,18 @@ After the ``init_xlat_tables()`` API has been called, only dynamic regions can
 be added. Changes to the translation tables (as well as the mmap regions list)
 will take effect immediately.

+The memory mapping algorithm
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
 The mapping function is implemented as a recursive algorithm. It is however
 bound by the level of depth of the translation tables (the ARMv8-A architecture
 allows up to 4 lookup levels).

-By default, the algorithm will attempt to minimize the number of translation
-tables created to satisfy the user's request. It will favour mapping a region
-using the biggest possible blocks, only creating a sub-table if it is strictly
-necessary. This is to reduce the memory footprint of the firmware.
+By default [#granularity-ref]_, the algorithm will attempt to minimize the
+number of translation tables created to satisfy the user's request. It will
+favour mapping a region using the biggest possible blocks, only creating a
+sub-table if it is strictly necessary. This is to reduce the memory footprint of
+the firmware.

 The most common reason for needing a sub-table is when a specific mapping
 requires a finer granularity. Misaligned regions also require a finer

@@ -322,6 +363,12 @@ entries in the translation tables are checked to ensure consistency. Please
 refer to the comments in the source code of the core module for more details
 about the sorting algorithm in use.

+.. [#granularity-ref] That is, when mmap regions do not enforce their mapping
+   granularity.
+
+TLB maintenance operations
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
 The library takes care of performing TLB maintenance operations when required.
 For example, when the user requests removing a dynamic region, the library
 invalidates all TLB entries associated to that region to ensure that these
include/lib/xlat_tables/xlat_tables_defs.h
@@ -89,9 +89,22 @@
  * AP[1] bit is ignored by hardware and is
  * treated as if it is One in EL2/EL3
  */
-#define AP_RO				(U(0x1) << 5)
-#define AP_RW				(U(0x0) << 5)
+#define AP2_SHIFT			U(0x7)
+#define AP2_RO				U(0x1)
+#define AP2_RW				U(0x0)
+
+#define AP1_SHIFT			U(0x6)
+#define AP1_ACCESS_UNPRIVILEGED		U(0x1)
+#define AP1_NO_ACCESS_UNPRIVILEGED	U(0x0)
+
+/*
+ * The following definitions must all be passed to the LOWER_ATTRS() macro to
+ * get the right bitmask.
+ */
+#define AP_RO				(AP2_RO << 5)
+#define AP_RW				(AP2_RW << 5)
+#define AP_ACCESS_UNPRIVILEGED		(AP1_ACCESS_UNPRIVILEGED << 4)
+#define AP_NO_ACCESS_UNPRIVILEGED	(AP1_NO_ACCESS_UNPRIVILEGED << 4)
 #define NS				(U(0x1) << 3)
 #define ATTR_NON_CACHEABLE_INDEX	U(0x2)
 #define ATTR_DEVICE_INDEX		U(0x1)
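
A rough sketch of how these definitions are meant to be consumed (this mirrors
the way xlat_desc() uses them later in this patch; the variable below is
illustrative only, not part of the commit):

	/* Illustration: lower attributes of a read-only, non-secure page that
	 * EL0 is also allowed to access, in the EL1&0 regime. */
	uint64_t desc = 0;
	desc |= LOWER_ATTRS(ACCESS_FLAG);
	desc |= LOWER_ATTRS(NS);
	desc |= LOWER_ATTRS(AP_RO);			/* AP[2]: read-only */
	desc |= LOWER_ATTRS(AP_ACCESS_UNPRIVILEGED);	/* AP[1]: EL0 access */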
include/lib/xlat_tables/xlat_tables_v2.h
@@ -15,20 +15,36 @@
 #include <xlat_mmu_helpers.h>
 #include <xlat_tables_v2_helpers.h>

-/* Helper macro to define entries for mmap_region_t. It creates
- * identity mappings for each region.
+/*
+ * Default granularity size for an mmap_region_t.
+ * Useful when no specific granularity is required.
+ *
+ * By default, choose the biggest possible block size allowed by the
+ * architectural state and granule size in order to minimize the number of page
+ * tables required for the mapping.
  */
-#define MAP_REGION_FLAT(adr, sz, attr) MAP_REGION(adr, adr, sz, attr)
+#define REGION_DEFAULT_GRANULARITY	XLAT_BLOCK_SIZE(MIN_LVL_BLOCK_DESC)

-/* Helper macro to define entries for mmap_region_t. It allows to
- * re-map address mappings from 'pa' to 'va' for each region.
+/* Helper macro to define an mmap_region_t. */
+#define MAP_REGION(_pa, _va, _sz, _attr)	\
+	_MAP_REGION_FULL_SPEC(_pa, _va, _sz, _attr, REGION_DEFAULT_GRANULARITY)
+
+/* Helper macro to define an mmap_region_t with an identity mapping. */
+#define MAP_REGION_FLAT(_adr, _sz, _attr)	\
+	MAP_REGION(_adr, _adr, _sz, _attr)
+
+/*
+ * Helper macro to define an mmap_region_t to map with the desired granularity
+ * of translation tables.
+ *
+ * The granularity value passed to this macro must be a valid block or page
+ * size. When using a 4KB translation granule, this might be 4KB, 2MB or 1GB.
+ * Passing REGION_DEFAULT_GRANULARITY is also allowed and means that the library
+ * is free to choose the granularity for this region. In this case, it is
+ * equivalent to the MAP_REGION() macro.
  */
-#define MAP_REGION(_pa, _va, _sz, _attr) {	\
-	.base_pa = (_pa),			\
-	.base_va = (_va),			\
-	.size = (_sz),				\
-	.attr = (_attr),			\
-}
+#define MAP_REGION2(_pa, _va, _sz, _attr, _gr)	\
+	_MAP_REGION_FULL_SPEC(_pa, _va, _sz, _attr, _gr)

 /*
  * Shifts and masks to access fields of an mmap_attr_t

@@ -41,6 +57,11 @@
 #define MT_SEC_SHIFT		U(4)
 /* Access permissions for instruction execution (EXECUTE/EXECUTE_NEVER) */
 #define MT_EXECUTE_SHIFT	U(5)
+/*
+ * In the EL1&0 translation regime, mark the region as User (EL0) or
+ * Privileged (EL1). In the EL3 translation regime this has no effect.
+ */
+#define MT_USER_SHIFT		U(6)
 /* All other bits are reserved */

 /*

@@ -73,10 +94,20 @@ typedef enum {
	 */
	MT_EXECUTE		= U(0) << MT_EXECUTE_SHIFT,
	MT_EXECUTE_NEVER	= U(1) << MT_EXECUTE_SHIFT,
+
+	/*
+	 * When mapping a region at EL0 or EL1, this attribute will be used to
+	 * determine if a User mapping (EL0) will be created or a Privileged
+	 * mapping (EL1).
+	 */
+	MT_USER			= U(1) << MT_USER_SHIFT,
+	MT_PRIVILEGED		= U(0) << MT_USER_SHIFT,
 } mmap_attr_t;

 /* Compound attributes for most common usages */
 #define MT_CODE			(MT_MEMORY | MT_RO | MT_EXECUTE)
 #define MT_RO_DATA		(MT_MEMORY | MT_RO | MT_EXECUTE_NEVER)
+#define MT_RW_DATA		(MT_MEMORY | MT_RW | MT_EXECUTE_NEVER)
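
For illustration only (not part of the patch), an EL1&0 context could describe
an EL0-accessible, non-executable buffer with the new MT_USER attribute; the
address, size and name below are hypothetical:

	/* Hypothetical EL0-visible shared buffer, identity-mapped. */
	static const mmap_region_t el0_shared_buf =
		MAP_REGION_FLAT(0x60000000, 0x4000,
				MT_MEMORY | MT_RW | MT_EXECUTE_NEVER |
				MT_USER | MT_NS);

	/* The privileged (EL1-only) equivalent would use MT_PRIVILEGED, which
	 * is the default since its value is 0. */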
 /*
  * Structure for specifying a single region of memory.

@@ -86,8 +117,18 @@ typedef struct mmap_region {
	uintptr_t		base_va;
	size_t			size;
	mmap_attr_t		attr;
+	/* Desired granularity. See the MAP_REGION2() macro for more details. */
+	size_t			granularity;
 } mmap_region_t;

+/*
+ * Translation regimes supported by this library.
+ */
+typedef enum xlat_regime {
+	EL1_EL0_REGIME,
+	EL3_REGIME,
+} xlat_regime_t;
+
 /*
  * Declare the translation context type.
  * Its definition is private.

@@ -123,8 +164,25 @@ typedef struct xlat_ctx xlat_ctx_t;
  */
 #define REGISTER_XLAT_CONTEXT(_ctx_name, _mmap_count, _xlat_tables_count, \
			_virt_addr_space_size, _phy_addr_space_size)	\
-	_REGISTER_XLAT_CONTEXT(_ctx_name, _mmap_count, _xlat_tables_count, \
-			_virt_addr_space_size, _phy_addr_space_size)
+	_REGISTER_XLAT_CONTEXT_FULL_SPEC(_ctx_name, _mmap_count,	\
+					 _xlat_tables_count,		\
+					 _virt_addr_space_size,		\
+					 _phy_addr_space_size,		\
+					 IMAGE_XLAT_DEFAULT_REGIME)
+
+/*
+ * Same as REGISTER_XLAT_CONTEXT plus the additional parameter _xlat_regime to
+ * specify the translation regime managed by this xlat_ctx_t instance. The
+ * values are the one from xlat_regime_t enumeration.
+ */
+#define REGISTER_XLAT_CONTEXT2(_ctx_name, _mmap_count, _xlat_tables_count, \
+			_virt_addr_space_size, _phy_addr_space_size,	\
+			_xlat_regime)					\
+	_REGISTER_XLAT_CONTEXT_FULL_SPEC(_ctx_name, _mmap_count,	\
+					 _xlat_tables_count,		\
+					 _virt_addr_space_size,		\
+					 _phy_addr_space_size,		\
+					 _xlat_regime)
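
A hypothetical usage sketch (the context name, table counts and address space
sizes below are made up, not part of the commit): a BL31 component that manages
translation tables on behalf of a secure EL1&0 payload could register a second
context as follows:

	/* Hypothetical EL1&0 context, in addition to the image's own context. */
	REGISTER_XLAT_CONTEXT2(sp_mem,		/* context name */
			       8,		/* mmap regions */
			       4,		/* translation tables */
			       (1ULL << 32),	/* virtual address space size */
			       (1ULL << 32),	/* physical address space size */
			       EL1_EL0_REGIME);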
 /******************************************************************************
  * Generic translation table APIs.
include/lib/xlat_tables/xlat_tables_v2_helpers.h
@@ -27,6 +27,20 @@
 /* Forward declaration */
 struct mmap_region;

+/*
+ * Helper macro to define an mmap_region_t. This macro allows to specify all
+ * the fields of the structure but its parameter list is not guaranteed to
+ * remain stable as we add members to mmap_region_t.
+ */
+#define _MAP_REGION_FULL_SPEC(_pa, _va, _sz, _attr, _gr)	\
+	{							\
+		.base_pa = (_pa),				\
+		.base_va = (_va),				\
+		.size = (_sz),					\
+		.attr = (_attr),				\
+		.granularity = (_gr),				\
+	}
+
 /* Struct that holds all information about the translation tables. */
 struct xlat_ctx {
	/*

@@ -85,11 +99,12 @@ struct xlat_ctx {
	unsigned int initialized;

	/*
-	 * Bit mask that has to be ORed to the rest of a translation table
-	 * descriptor in order to prohibit execution of code at the exception
-	 * level of this translation context.
+	 * Translation regime managed by this xlat_ctx_t. It takes the values of
+	 * the enumeration xlat_regime_t. The type is "int" to avoid a circular
+	 * dependency on xlat_tables_v2.h, but this member must be treated as
+	 * xlat_regime_t.
	 */
-	uint64_t execute_never_mask;
+	int xlat_regime;
 };

 #if PLAT_XLAT_TABLES_DYNAMIC

@@ -106,9 +121,9 @@ struct xlat_ctx {
	/* do nothing */
 #endif /* PLAT_XLAT_TABLES_DYNAMIC */

-#define _REGISTER_XLAT_CONTEXT(_ctx_name, _mmap_count, _xlat_tables_count,	\
-			_virt_addr_space_size, _phy_addr_space_size)		\
+#define _REGISTER_XLAT_CONTEXT_FULL_SPEC(_ctx_name, _mmap_count, _xlat_tables_count, \
+			_virt_addr_space_size, _phy_addr_space_size,		\
+			_xlat_regime)						\
	CASSERT(CHECK_VIRT_ADDR_SPACE_SIZE(_virt_addr_space_size),		\
		assert_invalid_virtual_addr_space_size_for_##_ctx_name);	\
										\

@@ -140,12 +155,23 @@ struct xlat_ctx {
		.tables = _ctx_name##_xlat_tables,				\
		.tables_num = _xlat_tables_count,				\
		_REGISTER_DYNMAP_STRUCT(_ctx_name)				\
+		.xlat_regime = (_xlat_regime),					\
		.max_pa = 0,							\
		.max_va = 0,							\
		.next_table = 0,						\
		.initialized = 0,						\
	}

+/* This IMAGE_EL macro must not be used outside the library */
+#if IMAGE_BL1 || IMAGE_BL31
+# define IMAGE_EL			3
+# define IMAGE_XLAT_DEFAULT_REGIME	EL3_REGIME
+#else
+# define IMAGE_EL			1
+# define IMAGE_XLAT_DEFAULT_REGIME	EL1_EL0_REGIME
+#endif
+
 #endif /*__ASSEMBLY__*/
 #endif /* __XLAT_TABLES_V2_HELPERS_H__ */
lib/xlat_tables_v2/aarch32/xlat_tables_arch.c
@@ -22,7 +22,7 @@ unsigned long long xlat_arch_get_max_supported_pa(void)
 }
 #endif /* ENABLE_ASSERTIONS*/

-int is_mmu_enabled(void)
+int is_mmu_enabled_ctx(const xlat_ctx_t *ctx __unused)
 {
	return (read_sctlr() & SCTLR_M_BIT) != 0;
 }

@@ -40,6 +40,17 @@ void xlat_arch_tlbi_va(uintptr_t va)
	tlbimvaais(TLBI_ADDR(va));
 }

+void xlat_arch_tlbi_va_regime(uintptr_t va, xlat_regime_t xlat_regime __unused)
+{
+	/*
+	 * Ensure the translation table write has drained into memory before
+	 * invalidating the TLB entry.
+	 */
+	dsbishst();
+
+	tlbimvaais(TLBI_ADDR(va));
+}
+
 void xlat_arch_tlbi_va_sync(void)
 {
	/* Invalidate all entries from branch predictors. */

@@ -77,11 +88,6 @@ int xlat_arch_current_el(void)
	return 3;
 }

-uint64_t xlat_arch_get_xn_desc(int el __unused)
-{
-	return UPPER_ATTRS(XN);
-}
-
 /*******************************************************************************
  * Function for enabling the MMU in Secure PL1, assuming that the page tables
  * have already been created.
lib/xlat_tables_v2/aarch32/xlat_tables_arch_private.h
(new file, mode 100644)

/*
 * Copyright (c) 2017, ARM Limited and Contributors. All rights reserved.
 *
 * SPDX-License-Identifier: BSD-3-Clause
 */

#ifndef __XLAT_TABLES_ARCH_PRIVATE_H__
#define __XLAT_TABLES_ARCH_PRIVATE_H__

#include <xlat_tables_defs.h>
#include <xlat_tables_v2.h>

/*
 * Return the execute-never mask that will prevent instruction fetch at the
 * given translation regime.
 */
static inline uint64_t xlat_arch_regime_get_xn_desc(xlat_regime_t regime __unused)
{
	return UPPER_ATTRS(XN);
}

#endif /* __XLAT_TABLES_ARCH_PRIVATE_H__ */
lib/xlat_tables_v2/aarch64/xlat_tables_arch.c
@@ -10,19 +10,12 @@
 #include <bl_common.h>
 #include <cassert.h>
 #include <common_def.h>
-#include <platform_def.h>
 #include <sys/types.h>
 #include <utils.h>
 #include <utils_def.h>
 #include <xlat_tables_v2.h>
 #include "../xlat_tables_private.h"

-#if defined(IMAGE_BL1) || defined(IMAGE_BL31)
-# define IMAGE_EL	3
-#else
-# define IMAGE_EL	1
-#endif
-
 static unsigned long long calc_physical_addr_size_bits(
					unsigned long long max_addr)
 {

@@ -71,20 +64,31 @@ unsigned long long xlat_arch_get_max_supported_pa(void)
 }
 #endif /* ENABLE_ASSERTIONS*/

-int is_mmu_enabled(void)
+int is_mmu_enabled_ctx(const xlat_ctx_t *ctx)
 {
-#if IMAGE_EL == 1
-	assert(IS_IN_EL(1));
-	return (read_sctlr_el1() & SCTLR_M_BIT) != 0;
-#elif IMAGE_EL == 3
-	assert(IS_IN_EL(3));
-	return (read_sctlr_el3() & SCTLR_M_BIT) != 0;
-#endif
+	if (ctx->xlat_regime == EL1_EL0_REGIME) {
+		assert(xlat_arch_current_el() >= 1);
+		return (read_sctlr_el1() & SCTLR_M_BIT) != 0;
+	} else {
+		assert(ctx->xlat_regime == EL3_REGIME);
+		assert(xlat_arch_current_el() >= 3);
+		return (read_sctlr_el3() & SCTLR_M_BIT) != 0;
+	}
 }

-#if PLAT_XLAT_TABLES_DYNAMIC
-
 void xlat_arch_tlbi_va(uintptr_t va)
+{
+#if IMAGE_EL == 1
+	assert(IS_IN_EL(1));
+	xlat_arch_tlbi_va_regime(va, EL1_EL0_REGIME);
+#elif IMAGE_EL == 3
+	assert(IS_IN_EL(3));
+	xlat_arch_tlbi_va_regime(va, EL3_REGIME);
+#endif
+}
+
+void xlat_arch_tlbi_va_regime(uintptr_t va, xlat_regime_t xlat_regime)
 {
	/*
	 * Ensure the translation table write has drained into memory before

@@ -92,13 +96,21 @@ void xlat_arch_tlbi_va(uintptr_t va)
	 */
	dsbishst();

-#if IMAGE_EL == 1
-	assert(IS_IN_EL(1));
-	tlbivaae1is(TLBI_ADDR(va));
-#elif IMAGE_EL == 3
-	assert(IS_IN_EL(3));
-	tlbivae3is(TLBI_ADDR(va));
-#endif
+	/*
+	 * This function only supports invalidation of TLB entries for the EL3
+	 * and EL1&0 translation regimes.
+	 *
+	 * Also, it is architecturally UNDEFINED to invalidate TLBs of a higher
+	 * exception level (see section D4.9.2 of the ARM ARM rev B.a).
+	 */
+	if (xlat_regime == EL1_EL0_REGIME) {
+		assert(xlat_arch_current_el() >= 1);
+		tlbivaae1is(TLBI_ADDR(va));
+	} else {
+		assert(xlat_regime == EL3_REGIME);
+		assert(xlat_arch_current_el() >= 3);
+		tlbivae3is(TLBI_ADDR(va));
+	}
 }

 void xlat_arch_tlbi_va_sync(void)

@@ -124,8 +136,6 @@ void xlat_arch_tlbi_va_sync(void)
	isb();
 }

-#endif /* PLAT_XLAT_TABLES_DYNAMIC */
-
 int xlat_arch_current_el(void)
 {
	int el = GET_EL(read_CurrentEl());

@@ -135,16 +145,6 @@ int xlat_arch_current_el(void)
	return el;
 }

-uint64_t xlat_arch_get_xn_desc(int el)
-{
-	if (el == 3) {
-		return UPPER_ATTRS(XN);
-	} else {
-		assert(el == 1);
-		return UPPER_ATTRS(PXN);
-	}
-}
-
 /*******************************************************************************
  * Macro generating the code for the function enabling the MMU in the given
  * exception level, assuming that the pagetables have already been created.
lib/xlat_tables_v2/aarch64/xlat_tables_arch_private.h
(new file, mode 100644)

/*
 * Copyright (c) 2017, ARM Limited and Contributors. All rights reserved.
 *
 * SPDX-License-Identifier: BSD-3-Clause
 */

#ifndef __XLAT_TABLES_ARCH_PRIVATE_H__
#define __XLAT_TABLES_ARCH_PRIVATE_H__

#include <assert.h>
#include <xlat_tables_defs.h>
#include <xlat_tables_v2.h>

/*
 * Return the execute-never mask that will prevent instruction fetch at all ELs
 * that are part of the given translation regime.
 */
static inline uint64_t xlat_arch_regime_get_xn_desc(xlat_regime_t regime)
{
	if (regime == EL1_EL0_REGIME) {
		return UPPER_ATTRS(UXN) | UPPER_ATTRS(PXN);
	} else {
		assert(regime == EL3_REGIME);
		return UPPER_ATTRS(XN);
	}
}

#endif /* __XLAT_TABLES_ARCH_PRIVATE_H__ */
lib/xlat_tables_v2/xlat_tables.mk
@@ -7,3 +7,5 @@
 XLAT_TABLES_LIB_SRCS	:=	$(addprefix lib/xlat_tables_v2/,	\
				${ARCH}/xlat_tables_arch.c		\
				xlat_tables_internal.c)
+
+INCLUDES		+=	-Ilib/xlat_tables_v2/${ARCH}
lib/xlat_tables_v2/xlat_tables_internal.c
@@ -14,7 +14,7 @@
 #include <string.h>
 #include <types.h>
 #include <utils.h>
-#include <xlat_tables_arch.h>
+#include <xlat_tables_arch_private.h>
 #include <xlat_tables_defs.h>
 #include <xlat_tables_v2.h>

@@ -112,9 +112,11 @@ static uint64_t *xlat_table_get_empty(xlat_ctx_t *ctx)
 #endif /* PLAT_XLAT_TABLES_DYNAMIC */
-/* Returns a block/page table descriptor for the given level and attributes. */
-static uint64_t xlat_desc(mmap_attr_t attr, unsigned long long addr_pa,
-			  int level, uint64_t execute_never_mask)
+/*
+ * Returns a block/page table descriptor for the given level and attributes.
+ */
+uint64_t xlat_desc(const xlat_ctx_t *ctx, mmap_attr_t attr,
+		   unsigned long long addr_pa, int level)
 {
	uint64_t desc;
	int mem_type;

@@ -133,9 +135,28 @@ static uint64_t xlat_desc(mmap_attr_t attr, unsigned long long addr_pa,
	 * Deduce other fields of the descriptor based on the MT_NS and MT_RW
	 * memory region attributes.
	 */
+	desc |= LOWER_ATTRS(ACCESS_FLAG);
	desc |= (attr & MT_NS) ? LOWER_ATTRS(NS) : 0;
	desc |= (attr & MT_RW) ? LOWER_ATTRS(AP_RW) : LOWER_ATTRS(AP_RO);
-	desc |= LOWER_ATTRS(ACCESS_FLAG);
+
+	/*
+	 * Do not allow unprivileged access when the mapping is for a privileged
+	 * EL. For translation regimes that do not have mappings for access for
+	 * lower exception levels, set AP[2] to AP_NO_ACCESS_UNPRIVILEGED.
+	 */
+	if (ctx->xlat_regime == EL1_EL0_REGIME) {
+		if (attr & MT_USER) {
+			/* EL0 mapping requested, so we give User access */
+			desc |= LOWER_ATTRS(AP_ACCESS_UNPRIVILEGED);
+		} else {
+			/* EL1 mapping requested, no User access granted */
+			desc |= LOWER_ATTRS(AP_NO_ACCESS_UNPRIVILEGED);
+		}
+	} else {
+		assert(ctx->xlat_regime == EL3_REGIME);
+		desc |= LOWER_ATTRS(AP_NO_ACCESS_UNPRIVILEGED);
+	}

	/*
	 * Deduce shareability domain and executability of the memory region

@@ -156,7 +177,7 @@ static uint64_t xlat_desc(mmap_attr_t attr, unsigned long long addr_pa,
		 * fetch, which could be an issue if this memory region
		 * corresponds to a read-sensitive peripheral.
		 */
-		desc |= execute_never_mask;
+		desc |= xlat_arch_regime_get_xn_desc(ctx->xlat_regime);

	} else { /* Normal memory */
		/*

@@ -171,10 +192,13 @@ static uint64_t xlat_desc(mmap_attr_t attr, unsigned long long addr_pa,
		 * translation table.
		 *
		 * For read-only memory, rely on the MT_EXECUTE/MT_EXECUTE_NEVER
-		 * attribute to figure out the value of the XN bit.
+		 * attribute to figure out the value of the XN bit. The actual
+		 * XN bit(s) to set in the descriptor depends on the context's
+		 * translation regime and the policy applied in
+		 * xlat_arch_regime_get_xn_desc().
		 */
		if ((attr & MT_RW) || (attr & MT_EXECUTE_NEVER)) {
-			desc |= execute_never_mask;
+			desc |= xlat_arch_regime_get_xn_desc(ctx->xlat_regime);
		}

		if (mem_type == MT_MEMORY) {

@@ -314,7 +338,7 @@ static void xlat_tables_unmap_region(xlat_ctx_t *ctx, mmap_region_t *mm,
		if (action == ACTION_WRITE_BLOCK_ENTRY) {

			table_base[table_idx] = INVALID_DESC;
-			xlat_arch_tlbi_va(table_idx_va);
+			xlat_arch_tlbi_va_regime(table_idx_va, ctx->xlat_regime);

		} else if (action == ACTION_RECURSE_INTO_TABLE) {

@@ -330,7 +354,8 @@ static void xlat_tables_unmap_region(xlat_ctx_t *ctx, mmap_region_t *mm,
			 */
			if (xlat_table_is_empty(ctx, subtable)) {
				table_base[table_idx] = INVALID_DESC;
-				xlat_arch_tlbi_va(table_idx_va);
+				xlat_arch_tlbi_va_regime(table_idx_va,
+							 ctx->xlat_regime);
			}

		} else {

@@ -417,7 +442,8 @@ static action_t xlat_tables_map_region_action(const mmap_region_t *mm,
		 * descriptors. If not, create a table instead.
		 */
		if ((dest_pa & XLAT_BLOCK_MASK(level)) ||
-		    (level < MIN_LVL_BLOCK_DESC))
+		    (level < MIN_LVL_BLOCK_DESC) ||
+		    (mm->granularity < XLAT_BLOCK_SIZE(level)))
			return ACTION_CREATE_NEW_TABLE;
		else
			return ACTION_WRITE_BLOCK_ENTRY;

@@ -535,8 +561,7 @@ static uintptr_t xlat_tables_map_region(xlat_ctx_t *ctx, mmap_region_t *mm,
		if (action == ACTION_WRITE_BLOCK_ENTRY) {

			table_base[table_idx] =
-				xlat_desc(mm->attr, table_idx_pa, level,
-					  ctx->execute_never_mask);
+				xlat_desc(ctx, mm->attr, table_idx_pa, level);

		} else if (action == ACTION_CREATE_NEW_TABLE) {

@@ -590,9 +615,10 @@ void print_mmap(mmap_region_t *const mmap)
	mmap_region_t *mm = mmap;

	while (mm->size) {
-		tf_printf(" VA:%p PA:0x%llx size:0x%zx attr:0x%x\n",
+		tf_printf(" VA:%p PA:0x%llx size:0x%zx attr:0x%x",
			  (void *)mm->base_va, mm->base_pa,
			  mm->size, mm->attr);
+		tf_printf(" granularity:0x%zx\n", mm->granularity);
		++mm;
	};
	tf_printf("\n");

@@ -613,7 +639,7 @@ static int mmap_add_region_check(xlat_ctx_t *ctx, const mmap_region_t *mm)
	unsigned long long base_pa = mm->base_pa;
	uintptr_t base_va = mm->base_va;
	size_t size = mm->size;
-	mmap_attr_t attr = mm->attr;
+	size_t granularity = mm->granularity;

	unsigned long long end_pa = base_pa + size - 1;
	uintptr_t end_va = base_va + size - 1;

@@ -622,6 +648,12 @@ static int mmap_add_region_check(xlat_ctx_t *ctx, const mmap_region_t *mm)
	    !IS_PAGE_ALIGNED(size))
		return -EINVAL;

+	if ((granularity != XLAT_BLOCK_SIZE(1)) &&
+	    (granularity != XLAT_BLOCK_SIZE(2)) &&
+	    (granularity != XLAT_BLOCK_SIZE(3))) {
+		return -EINVAL;
+	}
+
	/* Check for overflows */
	if ((base_pa > end_pa) || (base_va > end_va))
		return -ERANGE;

@@ -663,11 +695,9 @@ static int mmap_add_region_check(xlat_ctx_t *ctx, const mmap_region_t *mm)
		if (fully_overlapped_va) {
 #if PLAT_XLAT_TABLES_DYNAMIC
-			if ((attr & MT_DYNAMIC) ||
+			if ((mm->attr & MT_DYNAMIC) ||
			    (mm_cursor->attr & MT_DYNAMIC))
				return -EPERM;
-#else
-			(void)attr;
 #endif /* PLAT_XLAT_TABLES_DYNAMIC */
			if ((mm_cursor->base_va - mm_cursor->base_pa) !=
			    (base_va - base_pa))

@@ -876,9 +906,8 @@ int mmap_add_dynamic_region_ctx(xlat_ctx_t *ctx, mmap_region_t *mm)
			.size = end_va - mm->base_va,
			.attr = 0
		};
		xlat_tables_unmap_region(ctx, &unmap_mm, 0, ctx->base_table,
					 ctx->base_table_entries, ctx->base_level);
		return -ENOMEM;
	}

@@ -993,9 +1022,10 @@ int mmap_remove_dynamic_region(uintptr_t base_va, size_t size)
 #if LOG_LEVEL >= LOG_LEVEL_VERBOSE

 /* Print the attributes of the specified block descriptor. */
-static void xlat_desc_print(uint64_t desc, uint64_t execute_never_mask)
+static void xlat_desc_print(xlat_ctx_t *ctx, uint64_t desc)
 {
	int mem_type_index = ATTR_INDEX_GET(desc);
+	xlat_regime_t xlat_regime = ctx->xlat_regime;

	if (mem_type_index == ATTR_IWBWA_OWBWA_NTR_INDEX) {
		tf_printf("MEM");

@@ -1006,9 +1036,49 @@ static void xlat_desc_print(uint64_t desc, uint64_t execute_never_mask)
		tf_printf("DEV");
	}

-	tf_printf(LOWER_ATTRS(AP_RO) & desc ? "-RO" : "-RW");
+	const char *priv_str = "(PRIV)";
+	const char *user_str = "(USER)";
+
+	/*
+	 * Showing Privileged vs Unprivileged only makes sense for EL1&0
+	 * mappings
+	 */
+	const char *ro_str = "-RO";
+	const char *rw_str = "-RW";
+	const char *no_access_str = "-NOACCESS";
+
+	if (xlat_regime == EL3_REGIME) {
+		/* For EL3, the AP[2] bit is all what matters */
+		tf_printf((desc & LOWER_ATTRS(AP_RO)) ? ro_str : rw_str);
+	} else {
+		const char *ap_str = (desc & LOWER_ATTRS(AP_RO)) ? ro_str : rw_str;
+
+		tf_printf(ap_str);
+		tf_printf(priv_str);
+
+		/*
+		 * EL0 can only have the same permissions as EL1 or no
+		 * permissions at all.
+		 */
+		tf_printf((desc & LOWER_ATTRS(AP_ACCESS_UNPRIVILEGED))
+			  ? ap_str : no_access_str);
+		tf_printf(user_str);
+	}
+
+	const char *xn_str = "-XN";
+	const char *exec_str = "-EXEC";
+
+	if (xlat_regime == EL3_REGIME) {
+		/* For EL3, the XN bit is all what matters */
+		tf_printf(LOWER_ATTRS(XN) & desc ? xn_str : exec_str);
+	} else {
+		/* For EL0 and EL1, we need to know who has which rights */
+		tf_printf(LOWER_ATTRS(PXN) & desc ? xn_str : exec_str);
+		tf_printf(priv_str);
+		tf_printf(LOWER_ATTRS(UXN) & desc ? xn_str : exec_str);
+		tf_printf(user_str);
+	}
+
	tf_printf(LOWER_ATTRS(NS) & desc ? "-NS" : "-S");
-	tf_printf(execute_never_mask & desc ? "-XN" : "-EXEC");
 }

 static const char * const level_spacers[] = {

@@ -1025,9 +1095,10 @@ static const char *invalid_descriptors_ommited =
  * Recursive function that reads the translation tables passed as an argument
  * and prints their status.
  */
-static void xlat_tables_print_internal(const uintptr_t table_base_va,
+static void xlat_tables_print_internal(xlat_ctx_t *ctx,
+	const uintptr_t table_base_va,
	uint64_t *const table_base, const int table_entries,
-	const unsigned int level, const uint64_t execute_never_mask)
+	const unsigned int level)
 {
	assert(level <= XLAT_TABLE_LEVEL_MAX);

@@ -1086,17 +1157,16 @@ static void xlat_tables_print_internal(const uintptr_t table_base_va,
			uintptr_t addr_inner = desc & TABLE_ADDR_MASK;

-			xlat_tables_print_internal(table_idx_va,
+			xlat_tables_print_internal(ctx, table_idx_va,
				(uint64_t *)addr_inner,
-				XLAT_TABLE_ENTRIES, level + 1,
-				execute_never_mask);
+				XLAT_TABLE_ENTRIES, level + 1);
		} else {
			tf_printf("%sVA:%p PA:0x%llx size:0x%zx ",
				  level_spacers[level],
				  (void *)table_idx_va,
				  (unsigned long long)(desc & TABLE_ADDR_MASK),
				  level_size);
-			xlat_desc_print(desc, execute_never_mask);
+			xlat_desc_print(ctx, desc);
			tf_printf("\n");
		}
	}

@@ -1116,7 +1186,15 @@ static void xlat_tables_print_internal(const uintptr_t table_base_va,
 void xlat_tables_print(xlat_ctx_t *ctx)
 {
 #if LOG_LEVEL >= LOG_LEVEL_VERBOSE
+	const char *xlat_regime_str;
+	if (ctx->xlat_regime == EL1_EL0_REGIME) {
+		xlat_regime_str = "1&0";
+	} else {
+		assert(ctx->xlat_regime == EL3_REGIME);
+		xlat_regime_str = "3";
+	}
	VERBOSE("Translation tables state:\n");
+	VERBOSE(" Xlat regime: EL%s\n", xlat_regime_str);
	VERBOSE(" Max allowed PA: 0x%llx\n", ctx->pa_max_address);
	VERBOSE(" Max allowed VA: %p\n", (void *)ctx->va_max_address);
	VERBOSE(" Max mapped PA: 0x%llx\n", ctx->max_pa);

@@ -1140,22 +1218,21 @@ void xlat_tables_print(xlat_ctx_t *ctx)
		used_page_tables, ctx->tables_num,
		ctx->tables_num - used_page_tables);

-	xlat_tables_print_internal(0, ctx->base_table, ctx->base_table_entries,
-				   ctx->base_level, ctx->execute_never_mask);
+	xlat_tables_print_internal(ctx, 0, ctx->base_table,
+				   ctx->base_table_entries, ctx->base_level);
 #endif /* LOG_LEVEL >= LOG_LEVEL_VERBOSE */
 }

 void init_xlat_tables_ctx(xlat_ctx_t *ctx)
 {
-	mmap_region_t *mm = ctx->mmap;
+	assert(ctx != NULL);
+	assert(!ctx->initialized);
+	assert(ctx->xlat_regime == EL3_REGIME ||
+	       ctx->xlat_regime == EL1_EL0_REGIME);
+	assert(!is_mmu_enabled_ctx(ctx));

-	assert(!is_mmu_enabled());
-	assert(!ctx->initialized);
+	mmap_region_t *mm = ctx->mmap;

	print_mmap(mm);

-	ctx->execute_never_mask =
-			xlat_arch_get_xn_desc(xlat_arch_current_el());
-
	/* All tables must be zeroed before mapping any region. */
lib/xlat_tables_v2/xlat_tables_private.h
@@ -34,12 +34,24 @@ typedef enum {
	MT_DYNAMIC	= 1 << MT_DYN_SHIFT
 } mmap_priv_attr_t;

+#endif /* PLAT_XLAT_TABLES_DYNAMIC */
+
 /*
- * Function used to invalidate all levels of the translation walk for a given
- * virtual address. It must be called for every translation table entry that is
- * modified.
+ * Invalidate all TLB entries that match the given virtual address. This
+ * operation applies to all PEs in the same Inner Shareable domain as the PE
+ * that executes this function. This function must be called for every
+ * translation table entry that is modified.
+ *
+ * xlat_arch_tlbi_va() applies the invalidation to the exception level of the
+ * current translation regime, whereas xlat_arch_tlbi_va_regime() applies it to
+ * the given translation regime.
+ *
+ * Note, however, that it is architecturally UNDEFINED to invalidate TLB entries
+ * pertaining to a higher exception level, e.g. invalidating EL3 entries from
+ * S-EL1.
 */
 void xlat_arch_tlbi_va(uintptr_t va);
+void xlat_arch_tlbi_va_regime(uintptr_t va, xlat_regime_t xlat_regime);

 /*
  * This function has to be called at the end of any code that uses the function

@@ -47,8 +59,6 @@ void xlat_arch_tlbi_va(uintptr_t va);
  */
 void xlat_arch_tlbi_va_sync(void);
-
-#endif /* PLAT_XLAT_TABLES_DYNAMIC */
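
A sketch of the intended calling pattern (this mirrors what
xlat_tables_unmap_region() does in xlat_tables_internal.c in this same patch;
the table, index and context names are illustrative only):

	/* Illustration: clear an entry, then invalidate its TLB entry for the
	 * translation regime owned by this context. */
	table_base[table_idx] = INVALID_DESC;
	xlat_arch_tlbi_va_regime(table_idx_va, ctx->xlat_regime);

	/* ... once all table modifications are done ... */
	xlat_arch_tlbi_va_sync();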
 /* Print VA, PA, size and attributes of all regions in the mmap array. */
 void print_mmap(mmap_region_t *const mmap);

@@ -65,13 +75,6 @@ void xlat_tables_print(xlat_ctx_t *ctx);
 /* Returns the current Exception Level. The returned EL must be 1 or higher. */
 int xlat_arch_current_el(void);

-/*
- * Returns the bit mask that has to be ORed to the rest of a translation table
- * descriptor so that execution of code is prohibited at the given Exception
- * Level.
- */
-uint64_t xlat_arch_get_xn_desc(int el);
-
 /*
  * Return the maximum physical address supported by the hardware.
  * This value depends on the execution state (AArch32/AArch64).

@@ -82,7 +85,10 @@ unsigned long long xlat_arch_get_max_supported_pa(void);
 void enable_mmu_arch(unsigned int flags, uint64_t *base_table,
		unsigned long long pa, uintptr_t max_va);

-/* Return 1 if the MMU of this Exception Level is enabled, 0 otherwise. */
-int is_mmu_enabled(void);
+/*
+ * Return 1 if the MMU of the translation regime managed by the given xlat_ctx_t
+ * is enabled, 0 otherwise.
+ */
+int is_mmu_enabled_ctx(const xlat_ctx_t *ctx);

 #endif /* __XLAT_TABLES_PRIVATE_H__ */