On Wed, Nov 9, 2016 at 3:19 PM, Lorenzo Pieralisi
wrote:
> DT based systems have a generic kernel API to configure IOMMUs
> for devices (ie of_iommu_configure()).
>
> On ARM based ACPI systems, the of_iommu_configure() equivalent can
> be implemented atop ACPI IORT kernel API, with the correspondi
On Wed, Nov 9, 2016 at 3:19 PM, Lorenzo Pieralisi
wrote:
> On DT based systems, the of_dma_configure() API implements DMA
> configuration for a given device. On ACPI systems an API equivalent to
> of_dma_configure() is missing which implies that it is currently not
> possible to set-up DMA operati
On 11/15/2016 3:33 PM, Borislav Petkov wrote:
> On Tue, Nov 15, 2016 at 03:22:45PM -0600, Tom Lendacky wrote:
>> Hmmm... I still need the ebx value from the CPUID instruction to
>> calculate the proper reduction in physical bits, so I'll still need
>> to make the CPUID call.
>
> if (c->ext
On Tue, Nov 15, 2016 at 03:22:45PM -0600, Tom Lendacky wrote:
> Hmmm... I still need the ebx value from the CPUID instruction to
> calculate the proper reduction in physical bits, so I'll still need
> to make the CPUID call.
if (c->extended_cpuid_level >= 0x8000001f) {
cpui
On 11/15/2016 6:14 AM, Borislav Petkov wrote:
> On Tue, Nov 15, 2016 at 01:10:35PM +0100, Joerg Roedel wrote:
>> Maybe add a comment here why you can't use cpu_has (yet).
>
> So that could be alleviated by moving this function *after*
> init_scattered_cpuid_features(). Then you can simply do *cpu_
On 11/15/2016 12:17 PM, Radim Krčmář wrote:
> 2016-11-15 11:02-0600, Tom Lendacky:
>> On 11/15/2016 8:39 AM, Radim Krčmář wrote:
>>> 2016-11-09 18:37-0600, Tom Lendacky:
Since DMA addresses will effectively look like 48-bit addresses when the
memory encryption mask is set, SWIOTLB is need
On Tue, Nov 15, 2016 at 12:29:35PM -0600, Tom Lendacky wrote:
> On 11/15/2016 9:16 AM, Michael S. Tsirkin wrote:
> > On Wed, Nov 09, 2016 at 06:37:23PM -0600, Tom Lendacky wrote:
> >> Since DMA addresses will effectively look like 48-bit addresses when the
> >> memory encryption mask is set, SWIOTL
On 11/15/2016 9:16 AM, Michael S. Tsirkin wrote:
> On Wed, Nov 09, 2016 at 06:37:23PM -0600, Tom Lendacky wrote:
>> Since DMA addresses will effectively look like 48-bit addresses when the
>> memory encryption mask is set, SWIOTLB is needed if the DMA mask of the
>> device performing the DMA does n
On 11/15/2016 01:26 AM, Marc Zyngier wrote:
> On 15/11/16 07:00, Geetha sowjanya wrote:
>> From: Tirumalesh Chalamarla
>>
>> This patch implements a workaround for Cavium ThunderX erratum 28168.
>>
>> PCI requires that stores complete in order. Due to erratum #28168,
>> PCI-inbound MSI-X stores to the interrupt controller are
2016-11-15 11:02-0600, Tom Lendacky:
> On 11/15/2016 8:39 AM, Radim Krčmář wrote:
>> 2016-11-09 18:37-0600, Tom Lendacky:
>>> Since DMA addresses will effectively look like 48-bit addresses when the
>>> memory encryption mask is set, SWIOTLB is needed if the DMA mask of the
>>> device performing th
On 11/15/2016 10:33 AM, Borislav Petkov wrote:
> On Tue, Nov 15, 2016 at 10:06:16AM -0600, Tom Lendacky wrote:
>> Yes, but that doesn't relate to the physical address space reduction.
>>
>> Once the SYS_CFG MSR bit for SME is set, even if the encryption bit is
>> never used, there is a physical red
On 11/15/2016 8:39 AM, Radim Krčmář wrote:
> 2016-11-09 18:37-0600, Tom Lendacky:
>> Since DMA addresses will effectively look like 48-bit addresses when the
>> memory encryption mask is set, SWIOTLB is needed if the DMA mask of the
>> device performing the DMA does not support 48-bits. SWIOTLB wil
On Tue, Nov 15, 2016 at 10:06:16AM -0600, Tom Lendacky wrote:
> Yes, but that doesn't relate to the physical address space reduction.
>
> Once the SYS_CFG MSR bit for SME is set, even if the encryption bit is
> never used, there is a physical reduction of the address space. So when
> checking whet
On 15/11/16 11:49, Joerg Roedel wrote:
> On Fri, Nov 11, 2016 at 06:30:45PM +0000, Robin Murphy wrote:
>> iommu_dma_init_domain() was originally written under the misconception
>> that dma_32bit_pfn represented some sort of size limit for IOVA domains.
>> Since the truth is almost the exact opposit
On 11/15/2016 9:33 AM, Borislav Petkov wrote:
> On Tue, Nov 15, 2016 at 08:40:05AM -0600, Tom Lendacky wrote:
>> The feature may be present and enabled even if it is not currently
>> active. In other words, the SYS_CFG MSR bit could be set but we aren't
>> actually using encryption (sme_me_mask is
On Tue, Nov 15, 2016 at 08:40:05AM -0600, Tom Lendacky wrote:
> The feature may be present and enabled even if it is not currently
> active. In other words, the SYS_CFG MSR bit could be set but we aren't
> actually using encryption (sme_me_mask is 0). As long as the SYS_CFG
> MSR bit is set we ne
On Wed, Nov 09, 2016 at 06:37:23PM -0600, Tom Lendacky wrote:
> Since DMA addresses will effectively look like 48-bit addresses when the
> memory encryption mask is set, SWIOTLB is needed if the DMA mask of the
> device performing the DMA does not support 48-bits. SWIOTLB will be
> initialized to c
On 14/11/16 23:23, Auger Eric wrote:
> Hi Robin,
>
> On 14/11/2016 13:36, Robin Murphy wrote:
>> On 04/11/16 11:24, Eric Auger wrote:
>>> From: Robin Murphy
>>>
>>> IOMMU domain users such as VFIO face a similar problem to DMA API ops
>>> with regard to mapping MSI messages in systems where the M
2016-11-09 18:37-0600, Tom Lendacky:
> Since DMA addresses will effectively look like 48-bit addresses when the
> memory encryption mask is set, SWIOTLB is needed if the DMA mask of the
> device performing the DMA does not support 48-bits. SWIOTLB will be
> initialized to create un-encrypted bounce
On 11/15/2016 6:10 AM, Joerg Roedel wrote:
> On Wed, Nov 09, 2016 at 06:35:13PM -0600, Tom Lendacky wrote:
>> +/*
>> + * AMD Secure Memory Encryption (SME) can reduce the size of the physical
>> + * address space if it is enabled, even if memory encryption is not active.
>> + * Adjust x86_phys_bits
On Tue, Nov 15, 2016 at 02:04:09PM +0100, Rafael J. Wysocki wrote:
> On Tue, Nov 15, 2016 at 11:12 AM, Lorenzo Pieralisi
> wrote:
> > Hi Rafael,
> >
> > On Thu, Nov 10, 2016 at 12:36:12AM +0100, Rafael J. Wysocki wrote:
> >> Hi Lorenzo,
> >>
> >> On Wed, Nov 9, 2016 at 3:19 PM, Lorenzo Pieralisi
>
Following the LPC discussions, we now report reserved regions through
the iommu-group sysfs reserved_regions attribute file.
Reserved regions are populated through the IOMMU get_resv_region callback
(formerly get_dm_regions), now implemented by amd-iommu, intel-iommu and
arm-smmu.
The intel-iommu reports t
When attaching a group to the container, handle the group's
reserved regions and particularly the IOMMU_RESV_MSI region
which requires an IOVA allocator to be initialized through
the iommu_get_msi_cookie API. This will allow the MSI IOVAs
to be transparently allocated on the MSI controller's compose().
IOMMU domain users such as VFIO face a similar problem to DMA API ops
with regard to mapping MSI messages in systems where the MSI write is
subject to IOMMU translation. With the relevant infrastructure now in
place for managed DMA domains, it's actually really simple for other
users to piggyback o
The get() populates the list with the PCI host bridge windows
and the MSI IOVA range.
At the moment an arbitrary MSI IOVA window is set at 0x800
of size 1MB. This will allow those regions to be reported in the
iommu-group sysfs.
Signed-off-by: Eric Auger
---
RFC v2 -> v3:
- use existing get/put_resv_re
This patch registers the [FEE0_0000h - FEF0_0000h] 1MB MSI range
as a reserved region. This will allow that range to be reported
in the iommu-group sysfs.
Signed-off-by: Eric Auger
---
RFCv2 -> RFCv3:
- use get/put_resv_region callbacks.
RFC v1 -> RFC v2:
- fix intel_iommu_add_reserved_regions name
We want to extend the callbacks used for dm regions and
use them for reserved regions. Reserved regions can be
- directly mapped regions
- regions that cannot be iommu mapped (PCI host bridge windows, ...)
- MSI regions (because they belong to another address space or because
they are not transla
A new iommu-group sysfs attribute file is introduced. It contains
the list of reserved regions for the iommu-group. Each reserved
region is described on a separate line:
- the first field is the start IOVA address,
- the second is the end IOVA address,
Signed-off-by: Eric Auger
---
The file layout is i
Introduce a new helper for allocating a reserved region. This
will be used by iommu drivers implementing the reserved region
callbacks.
Signed-off-by: Eric Auger
---
drivers/iommu/iommu.c | 16
include/linux/iommu.h | 8
2 files changed, 24 insertions(+)
IOMMU_RESV_NOMAP is used to tag reserved IOVAs that are not
supposed to be IOMMU mapped. IOMMU_RESV_MSI tags IOVAs
corresponding to MSIs that need to be IOMMU mapped.
IOMMU_RESV_MASK allows checking whether an IOVA is reserved.
Signed-off-by: Eric Auger
---
include/linux/iommu.h | 4
1 file ch
Introduce iommu_get_group_resv_regions, which enumerates all devices
of the group and collects their reserved regions, checking for
duplicates.
Signed-off-by: Eric Auger
---
- we do not move list elements from device to group list since
the iommu_put_resv_regions() could not
As we introduced IOMMU_RESV_NOMAP and IOMMU_RESV_MSI regions,
let's prevent those new regions from being mapped.
Signed-off-by: Eric Auger
---
drivers/iommu/iommu.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 6ee529f..a4530ad 100644
On Tue, Nov 15, 2016 at 11:12 AM, Lorenzo Pieralisi
wrote:
> Hi Rafael,
>
> On Thu, Nov 10, 2016 at 12:36:12AM +0100, Rafael J. Wysocki wrote:
>> Hi Lorenzo,
>>
>> On Wed, Nov 9, 2016 at 3:19 PM, Lorenzo Pieralisi
>> wrote:
>> > This patch series is v7 of a previous posting:
>> >
>> > https://lkm
On 15/11/16 09:26, Marc Zyngier wrote:
> On 15/11/16 07:00, Geetha sowjanya wrote:
>> From: Tirumalesh Chalamarla
>>
>> This patch implements a workaround for Cavium ThunderX erratum 28168.
>>
>> PCI requires that stores complete in order. Due to erratum #28168,
>> PCI-inbound MSI-X stores to the interrupt controll
On Tue, Nov 15, 2016 at 01:10:35PM +0100, Joerg Roedel wrote:
> Maybe add a comment here why you can't use cpu_has (yet).
So that could be alleviated by moving this function *after*
init_scattered_cpuid_features(). Then you can simply do *cpu_has().
Also, I'm not sure why we're checking CPUID for
On Wed, Nov 09, 2016 at 06:35:13PM -0600, Tom Lendacky wrote:
> +/*
> + * AMD Secure Memory Encryption (SME) can reduce the size of the physical
> + * address space if it is enabled, even if memory encryption is not active.
> + * Adjust x86_phys_bits if SME is enabled.
> + */
> +static void phys_bi
On Fri, Nov 11, 2016 at 06:30:45PM +0000, Robin Murphy wrote:
> iommu_dma_init_domain() was originally written under the misconception
> that dma_32bit_pfn represented some sort of size limit for IOVA domains.
> Since the truth is almost the exact opposite of that, rework the logic
> and comments t
Hi Dan,
On 15/11/16 09:44, Dan Carpenter wrote:
> Hello Robin Murphy,
>
> The patch 0db2e5d18f76: "iommu: Implement common IOMMU ops for DMA
> mapping" from Oct 1, 2015, leads to the following static checker
> warning:
>
> drivers/iommu/dma-iommu.c:377 iommu_dma_alloc()
> warn: use '
On Fri, Nov 11, 2016 at 06:35:46PM +0000, Robin Murphy wrote:
> When searching for a free IOVA range, we optimise the tree traversal
> by starting from the cached32_node, instead of the last node, when
> limit_pfn is equal to dma_32bit_pfn. However, if limit_pfn happens to
> be smaller, then we'll
On Fri, Nov 11, 2016 at 05:59:21PM +0000, Robin Murphy wrote:
> iommu_group_get_for_dev() expects that the IOMMU driver's device_group
> callback return a group with a reference held for the given device.
> Whilst allocating a new group is fine, and pci_device_group() correctly
> handles reusing an
commit 8fd524b355da ("x86: Kill bad_dma_address variable") killed the
bad_dma_address variable and replaced it with the macro DMA_ERROR_CODE,
which is always zero. Since dma_addr is unsigned, the statement
dma_addr >= DMA_ERROR_CODE
is always true, and not needed.
arch/x86/kernel/pci-calgary_64.c: In f
Hi Rafael,
On Thu, Nov 10, 2016 at 12:36:12AM +0100, Rafael J. Wysocki wrote:
> Hi Lorenzo,
>
> On Wed, Nov 9, 2016 at 3:19 PM, Lorenzo Pieralisi
> wrote:
> > This patch series is v7 of a previous posting:
> >
> > https://lkml.org/lkml/2016/10/18/506
>
> I don't see anything objectionable in th
On Mon, Nov 14, 2016 at 06:25:16PM +0000, Robin Murphy wrote:
> On 14/11/16 15:52, Joerg Roedel wrote:
> > On Mon, Nov 14, 2016 at 12:00:47PM +, Robin Murphy wrote:
> >> If we've already made the decision to move away from bus ops, I don't
> >> see that it makes sense to deliberately introduce
Add a simple check for the dma_map_single() return value to make the
DMA-debug checker happy. Exynos IOMMU on Samsung Exynos SoCs always
uses a device which has linear DMA mapping ops (the DMA address is
equal to the physical memory address), so no failures are returned
from dma_map_single().
Signed-off-by: Marek
Hello Robin Murphy,
The patch 0db2e5d18f76: "iommu: Implement common IOMMU ops for DMA
mapping" from Oct 1, 2015, leads to the following static checker
warning:
drivers/iommu/dma-iommu.c:377 iommu_dma_alloc()
warn: use 'gfp' here instead of GFP_XXX?
drivers/iommu/dma-iommu.c
3
On 15/11/16 07:00, Geetha sowjanya wrote:
> From: Tirumalesh Chalamarla
>
> This patch implements a workaround for Cavium ThunderX erratum 28168.
>
> PCI requires that stores complete in order. Due to erratum #28168,
> PCI-inbound MSI-X stores to the interrupt controller are delivered
> to the interrupt control
47 matches