On 2021-08-02 21:42, Will Deacon wrote:
On Tue, Jul 27, 2021 at 03:03:22PM +0530, Sai Prakash Ranjan wrote:
Some clocks for SMMU can have parent as XO such as gpu_cc_hub_cx_int_clk
of GPU SMMU in QTI SC7280 SoC and in order to enter deep sleep states in
such cases, we would need to drop the XO clock vote in unprepare call and
Hi Chris,
I hit a kmemleak with your following patch; can you help fix it?
According to the info in this thread, it seems the patch was not merged
into the iommu mainline branch, but I can find your patch in my kernel: 5.11.0,
commit 48a64dd561a53fb809e3f2210faf5dd442cfc56d
On 2021-08-02 21:13, Will Deacon wrote:
On Wed, Jun 23, 2021 at 07:12:01PM +0530, Sai Prakash Ranjan wrote:
Currently for iommu_unmap() of large scatter-gather list with page size
elements, the majority of time is spent in flushing of partial walks in
__arm_lpae_unmap() which is a VA based TLB
Hi,
On Thu, Jul 29, 2021 at 2:41 PM Yong Wu wrote:
>
> Hi Ikjoon,
>
> Just a ping.
>
> On Thu, 2021-07-22 at 14:38 +0800, Yong Wu wrote:
> > On Wed, 2021-07-21 at 21:40 +0800, Ikjoon Jang wrote:
> > > On Thu, Jul 15, 2021 at 8:23 PM Yong Wu wrote:
> > > >
> > > > To improve the performance, we a
> From: David Gibson
> Sent: Tuesday, August 3, 2021 9:51 AM
>
> On Wed, Jul 28, 2021 at 04:04:24AM +0000, Tian, Kevin wrote:
> > Hi, David,
> >
> > > From: David Gibson
> > > Sent: Monday, July 26, 2021 12:51 PM
> > >
> > > On Fri, Jul 09, 2021 at 07:48:44AM +0000, Tian, Kevin wrote:
> > > > /dev/iommu provides a unified interface for managing I/O page tables for
On Fri, Jul 30, 2021 at 11:51:23AM -0300, Jason Gunthorpe wrote:
> On Mon, Jul 26, 2021 at 02:50:48PM +1000, David Gibson wrote:
>
> > That said, I'm still finding the various ways a device can attach to
> > an ioasid pretty confusing. Here are some thoughts on some extra
> > concepts that might
On Wed, Jul 28, 2021 at 04:04:24AM +0000, Tian, Kevin wrote:
> Hi, David,
>
> > From: David Gibson
> > Sent: Monday, July 26, 2021 12:51 PM
> >
> > On Fri, Jul 09, 2021 at 07:48:44AM +0000, Tian, Kevin wrote:
> > > /dev/iommu provides a unified interface for managing I/O page tables for
> > > d
On Mon, Aug 2, 2021 at 8:14 AM Will Deacon wrote:
>
> On Mon, Aug 02, 2021 at 08:08:07AM -0700, Rob Clark wrote:
> > On Mon, Aug 2, 2021 at 3:55 AM Will Deacon wrote:
> > >
> > > On Thu, Jul 29, 2021 at 10:08:22AM +0530, Sai Prakash Ranjan wrote:
> > > > On 2021-07-28 19:30, Georgi Djakov wrote:
On Mon, Aug 2, 2021 at 9:12 AM Will Deacon wrote:
>
> On Tue, Jul 27, 2021 at 03:03:22PM +0530, Sai Prakash Ranjan wrote:
> > Some clocks for SMMU can have parent as XO such as gpu_cc_hub_cx_int_clk
> > of GPU SMMU in QTI SC7280 SoC and in order to enter deep sleep states in
> > such cases, we would need to drop the XO clock vote in unprepare call and
Hi Rob,
On Mon, Aug 2, 2021 at 5:09 PM Rajat Jain wrote:
>
> Hi Robin, Doug,
>
> On Wed, Jul 14, 2021 at 8:14 AM Doug Anderson wrote:
> >
> > Hi,
> >
> > On Tue, Jul 13, 2021 at 11:07 AM Robin Murphy wrote:
> > >
> > > On 2021-07-08 15:36, Doug Anderson wrote:
> > > [...]
> > > >> Or document for the users that want performance how to
Hi Robin, Doug,
On Wed, Jul 14, 2021 at 8:14 AM Doug Anderson wrote:
>
> Hi,
>
> On Tue, Jul 13, 2021 at 11:07 AM Robin Murphy wrote:
> >
> > On 2021-07-08 15:36, Doug Anderson wrote:
> > [...]
> > >> Or document for the users that want performance how to
> > >> change the setting, so that they
On Tue, Jul 27, 2021 at 1:52 AM Christoph Hellwig wrote:
>
> On Mon, Jul 26, 2021 at 03:47:54PM -0700, Atish Patra wrote:
> > arch_dma_set_uncached works as well in this case. However, mips,
> > nios2 & xtensa use a fixed (via config) value for the offset. A similar
> > approach can't be used
On 02/08/2021 17:40, John Garry wrote:
On 02/08/2021 17:16, Robin Murphy wrote:
On 2021-08-02 17:06, John Garry wrote:
On 02/08/2021 16:06, Will Deacon wrote:
On Wed, Jul 14, 2021 at 06:36:42PM +0800, John Garry wrote:
Add max opt argument to init_iova_domain(), and use it to set the rcaches
range.
On 02/08/2021 17:16, Robin Murphy wrote:
On 2021-08-02 17:06, John Garry wrote:
On 02/08/2021 16:06, Will Deacon wrote:
On Wed, Jul 14, 2021 at 06:36:42PM +0800, John Garry wrote:
Add max opt argument to init_iova_domain(), and use it to set the rcaches
range.
Also fix up all users to set this value (at 0, meaning use default).
On 2021-08-02 17:06, John Garry wrote:
On 02/08/2021 16:06, Will Deacon wrote:
On Wed, Jul 14, 2021 at 06:36:42PM +0800, John Garry wrote:
Add max opt argument to init_iova_domain(), and use it to set the rcaches
range.
Also fix up all users to set this value (at 0, meaning use default).
Wrap that in init_iova_domain_defaults() to avoid the mysterious 0?
On Tue, Jul 27, 2021 at 03:03:22PM +0530, Sai Prakash Ranjan wrote:
> Some clocks for SMMU can have parent as XO such as gpu_cc_hub_cx_int_clk
> of GPU SMMU in QTI SC7280 SoC and in order to enter deep sleep states in
> such cases, we would need to drop the XO clock vote in unprepare call and
> thi
On 2021-08-02 16:23, John Garry wrote:
On 02/08/2021 16:01, Will Deacon wrote:
On Wed, Jul 14, 2021 at 06:36:39PM +0800, John Garry wrote:
Some LLDs may request DMA mappings whose IOVA length exceeds that of the
current rcache upper limit.
What's an LLD?
low-level driver
maybe I'll stick with simply "drivers"
On 02/08/2021 16:06, Will Deacon wrote:
On Wed, Jul 14, 2021 at 06:36:42PM +0800, John Garry wrote:
Add max opt argument to init_iova_domain(), and use it to set the rcaches
range.
Also fix up all users to set this value (at 0, meaning use default).
Wrap that in init_iova_domain_defaults() to avoid the mysterious 0?
On 2021-08-02 16:16, Will Deacon wrote:
On Fri, Jun 18, 2021 at 02:00:35AM +0530, Ashish Mhetre wrote:
Multiple iommu domains and iommu groups are getting created for the devices
sharing the same SID. It is expected for devices sharing the same SID to be
in the same iommu group and same iommu domain.
This is leading to context faults when on
On Wed, Jun 23, 2021 at 07:12:01PM +0530, Sai Prakash Ranjan wrote:
> Currently for iommu_unmap() of large scatter-gather list with page size
> elements, the majority of time is spent in flushing of partial walks in
> __arm_lpae_unmap() which is a VA based TLB invalidation invalidating
> page-by-page
On 02/08/2021 16:01, Will Deacon wrote:
On Wed, Jul 14, 2021 at 06:36:39PM +0800, John Garry wrote:
Some LLDs may request DMA mappings whose IOVA length exceeds that of the
current rcache upper limit.
What's an LLD?
low-level driver
maybe I'll stick with simply "drivers"
This means that allocations for those IOVAs will never be cached, and
always must be allocated and freed from the
On Fri, Jun 18, 2021 at 02:00:35AM +0530, Ashish Mhetre wrote:
> Multiple iommu domains and iommu groups are getting created for the devices
> sharing the same SID. It is expected for devices sharing the same SID to be
> in the same iommu group and same iommu domain.
> This is leading to context faults when on
On Mon, Aug 02, 2021 at 08:08:07AM -0700, Rob Clark wrote:
> On Mon, Aug 2, 2021 at 3:55 AM Will Deacon wrote:
> >
> > On Thu, Jul 29, 2021 at 10:08:22AM +0530, Sai Prakash Ranjan wrote:
> > > On 2021-07-28 19:30, Georgi Djakov wrote:
> > > > On Mon, Jan 11, 2021 at 07:45:02PM +0530, Sai Prakash R
On Wed, Jul 14, 2021 at 06:36:42PM +0800, John Garry wrote:
> Add max opt argument to init_iova_domain(), and use it to set the rcaches
> range.
>
> Also fix up all users to set this value (at 0, meaning use default).
Wrap that in init_iova_domain_defaults() to avoid the mysterious 0?
Will
From: Joerg Roedel
Remove the new use of the variable introduced in the AMD driver branch.
The variable was removed already in the iommu core branch, causing build
errors when the branches are merged.
Cc: Nadav Amit
Cc: Zhen Lei
Signed-off-by: Joerg Roedel
---
drivers/iommu/amd/init.c | 6 ++-
On Mon, Aug 2, 2021 at 3:55 AM Will Deacon wrote:
>
> On Thu, Jul 29, 2021 at 10:08:22AM +0530, Sai Prakash Ranjan wrote:
> > On 2021-07-28 19:30, Georgi Djakov wrote:
> > > On Mon, Jan 11, 2021 at 07:45:02PM +0530, Sai Prakash Ranjan wrote:
> > > > commit ecd7274fb4cd ("iommu: Remove unused IOMMU
On Wed, Jul 14, 2021 at 06:36:39PM +0800, John Garry wrote:
> Some LLDs may request DMA mappings whose IOVA length exceeds that of the
> current rcache upper limit.
What's an LLD?
> This means that allocations for those IOVAs will never be cached, and
> always must be allocated and freed from the
On Mon, Aug 02, 2021 at 03:43:20PM +0100, Will Deacon wrote:
> For both patches:
>
> Acked-by: Will Deacon
>
> Joerg -- please can you take these directly? They build on top of the
> changes from Isaac which you have queued on your 'core' branch.
Sure, applied to core branch now.
Thanks,
On Wed, Jul 14, 2021 at 06:36:38PM +0800, John Garry wrote:
> Function iommu_group_store_type() supports changing the default domain
> of an IOMMU group.
>
> Many conditions need to be satisfied and steps taken for this action to be
> successful.
>
> Satisfying these conditions and steps will be
On Sat, Jul 31, 2021 at 10:17:09AM +0800, chenxiang wrote:
> From: Xiang Chen
>
> The series ("Optimizing iommu_[map/unmap] performance") improves the
> iommu_[map/unmap] performance. Based on the series, implement
> [map/unmap]_pages callbacks for ARM SMMUv3.
> Use the tool dma_map_benchmark to te
On Sat, Jul 31, 2021 at 09:47:37AM +0200, Frank Wunderlich wrote:
> Fixes: d72e31c93746 ("iommu: IOMMU Groups")
> Signed-off-by: Frank Wunderlich
> ---
> v2:
> - commit-message with capital letters at beginning of sentence
> - added more information, many thanks to Yong Wu
Applied, thanks.
On 2021-08-02 14:04, Will Deacon wrote:
On Wed, Jul 28, 2021 at 04:58:44PM +0100, Robin Murphy wrote:
To make io-pgtable aware of a flush queue being dynamically enabled,
allow IO_PGTABLE_QUIRK_NON_STRICT to be set even after a domain has been
attached to, and hook up the final piece of the puzzle in iommu-dma.
On 8/2/2021 9:20 PM, Joerg Roedel wrote:
On Wed, Jul 28, 2021 at 10:52:28AM -0400, Tianyu Lan wrote:
In Isolation VM, all shared memory with the host needs to be marked visible
to the host via hvcall. vmbus_establish_gpadl() has already done it for the
storvsc rx/tx ring buffer. The page buffer used by vmbus_sendpacket_mpb_desc()
On Fri, Jul 09, 2021 at 12:35:01PM +0900, David Stevens wrote:
> From: David Stevens
>
> If SKIP_CPU_SYNC isn't already set, then iommu_dma_unmap_(page|sg) has
> already called iommu_dma_sync_(single|sg)_for_cpu, so there is no need
> to copy from the bounce buffer again.
>
> Signed-off-by: David Stevens
On Mon, Aug 02, 2021 at 02:40:59PM +0100, Will Deacon wrote:
> On Fri, Jul 09, 2021 at 12:35:00PM +0900, David Stevens wrote:
> > From: David Stevens
> >
> > When calling arch_sync_dma, we need to pass it the memory that's
> > actually being used for dma. When using swiotlb bounce buffers, this i
On Fri, Jul 09, 2021 at 12:35:00PM +0900, David Stevens wrote:
> From: David Stevens
>
> When calling arch_sync_dma, we need to pass it the memory that's
> actually being used for dma. When using swiotlb bounce buffers, this is
> the bounce buffer. Move arch_sync_dma into the __iommu_dma_map_swio
On 8/2/2021 8:39 PM, Joerg Roedel wrote:
On Wed, Jul 28, 2021 at 10:52:21AM -0400, Tianyu Lan wrote:
+ hv_ghcb->ghcb.protocol_version = 1;
+ hv_ghcb->ghcb.ghcb_usage = 1;
The values set to ghcb_usage deserve some defines (here and below).
OK. Will update in the next version.
On Fri, Jul 09, 2021 at 12:34:59PM +0900, David Stevens wrote:
> From: David Stevens
>
> The is_swiotlb_buffer function takes the physical address of the swiotlb
> buffer, not the physical address of the original buffer. The sglist
> contains the physical addresses of the original buffer, so for
On Mon, Aug 02, 2021 at 03:11:40PM +0200, Juergen Gross wrote:
> As those cases are all mutually exclusive, wouldn't a static_call() be
> the appropriate solution?
Right, static_call() is even better, thanks.
On Wed, Jul 28, 2021 at 10:52:28AM -0400, Tianyu Lan wrote:
> In Isolation VM, all shared memory with the host needs to be marked visible
> to the host via hvcall. vmbus_establish_gpadl() has already done it for the
> storvsc rx/tx ring buffer. The page buffer used by
> vmbus_sendpacket_mpb_desc() still needs to ha
On 8/2/2021 8:28 PM, Joerg Roedel wrote:
On Wed, Jul 28, 2021 at 10:52:20AM -0400, Tianyu Lan wrote:
+void hv_ghcb_msr_write(u64 msr, u64 value)
+{
+ union hv_ghcb *hv_ghcb;
+ void **ghcb_base;
+ unsigned long flags;
+
+ if (!ms_hyperv.ghcb_base)
+ return;
+
On 02.08.21 14:01, Joerg Roedel wrote:
On Wed, Jul 28, 2021 at 08:29:41AM -0700, Dave Hansen wrote:
__set_memory_enc_dec() is turning into a real mess. SEV, TDX and now
Hyper-V are messing around in here.
I was going to suggest a PV_OPS call where the fitting implementation
for the guest environment can be plugged in at boot. Ther
On 8/2/2021 8:59 PM, Joerg Roedel wrote:
On Mon, Aug 02, 2021 at 08:56:29PM +0800, Tianyu Lan wrote:
Both second and third are HV_GPADL_RING type. One is send ring and the
other is receive ring. The driver keeps the order to allocate rx and
tx buffer. You are right this is not robust and will add a mutex to keep
the order.
On Wed, Jul 28, 2021 at 04:58:44PM +0100, Robin Murphy wrote:
> To make io-pgtable aware of a flush queue being dynamically enabled,
> allow IO_PGTABLE_QUIRK_NON_STRICT to be set even after a domain has been
> attached to, and hook up the final piece of the puzzle in iommu-dma.
>
> Signed-off-by: Robin Murphy
On 8/2/2021 8:01 PM, Joerg Roedel wrote:
On Wed, Jul 28, 2021 at 08:29:41AM -0700, Dave Hansen wrote:
__set_memory_enc_dec() is turning into a real mess. SEV, TDX and now
Hyper-V are messing around in here.
I was going to suggest a PV_OPS call where the fitting implementation
for the guest environment can be plugged in at boot. Ther
On Mon, Aug 02, 2021 at 08:56:29PM +0800, Tianyu Lan wrote:
> Both second and third are HV_GPADL_RING type. One is send ring and the
> other is receive ring. The driver keeps the order to allocate rx and
> tx buffer. You are right this is not robust and will add a mutex to keep
> the order.
Or you
On Wed, Jul 28, 2021 at 10:52:22AM -0400, Tianyu Lan wrote:
> + if (hv_is_isolation_supported()) {
> + vmbus_connection.monitor_pages_va[0]
> + = vmbus_connection.monitor_pages[0];
> + vmbus_connection.monitor_pages[0]
> + = memrem
On 28/07/2021 at 00:26, Tom Lendacky wrote:
Replace occurrences of mem_encrypt_active() with calls to prot_guest_has()
with the PATTR_MEM_ENCRYPT attribute.
What about
https://patchwork.ozlabs.org/project/linuxppc-dev/patch/20210730114231.23445-1-w...@kernel.org/ ?
Christophe
Cc: Th
On 8/2/2021 8:07 PM, Joerg Roedel wrote:
On Wed, Jul 28, 2021 at 10:52:19AM -0400, Tianyu Lan wrote:
+ if (type == HV_GPADL_BUFFER)
+ index = 0;
+ else
+ index = channel->gpadl_range[1].gpadlhandle ? 2 : 1;
Hmm... This doesn't look very robust. Can you set fixed indexes for
different buffer types?
On Wed, Jul 28, 2021 at 10:52:21AM -0400, Tianyu Lan wrote:
> + hv_ghcb->ghcb.protocol_version = 1;
> + hv_ghcb->ghcb.ghcb_usage = 1;
The values set to ghcb_usage deserve some defines (here and below).
> +
> + hv_ghcb->hypercall.outputgpa = (u64)output;
> + hv_ghcb->hypercall.hype
Hi Joerg:
Thanks for your review.
On 8/2/2021 7:53 PM, Joerg Roedel wrote:
On Wed, Jul 28, 2021 at 10:52:16AM -0400, Tianyu Lan wrote:
+static int hyperv_init_ghcb(void)
+{
+ u64 ghcb_gpa;
+ void *ghcb_va;
+ void **ghcb_base;
+
+ if (!ms_hyperv.ghcb_base)
+
On Wed, Jul 28, 2021 at 10:52:20AM -0400, Tianyu Lan wrote:
> +void hv_ghcb_msr_write(u64 msr, u64 value)
> +{
> + union hv_ghcb *hv_ghcb;
> + void **ghcb_base;
> + unsigned long flags;
> +
> + if (!ms_hyperv.ghcb_base)
> + return;
> +
> + WARN_ON(in_nmi());
> +
> +
On Wed, Jul 28, 2021 at 10:52:19AM -0400, Tianyu Lan wrote:
> + if (type == HV_GPADL_BUFFER)
> + index = 0;
> + else
> + index = channel->gpadl_range[1].gpadlhandle ? 2 : 1;
Hmm... This doesn't look very robust. Can you set fixed indexes for
different buffer types?
On Wed, Jul 28, 2021 at 08:29:41AM -0700, Dave Hansen wrote:
> __set_memory_enc_dec() is turning into a real mess. SEV, TDX and now
> Hyper-V are messing around in here.
I was going to suggest a PV_OPS call where the fitting implementation
for the guest environment can be plugged in at boot. Ther
On Wed, Jul 28, 2021 at 10:52:16AM -0400, Tianyu Lan wrote:
> +static int hyperv_init_ghcb(void)
> +{
> + u64 ghcb_gpa;
> + void *ghcb_va;
> + void **ghcb_base;
> +
> + if (!ms_hyperv.ghcb_base)
> + return -EINVAL;
> +
> + rdmsrl(MSR_AMD64_SEV_ES_GHCB, ghcb_gpa);
> +
Will Deacon writes:
> Commit ad6c00283163 ("swiotlb: Free tbl memory in swiotlb_exit()")
> introduced a set_memory_encrypted() call to swiotlb_exit() so that the
> buffer pages are returned to an encrypted state prior to being freed.
>
> Sachin reports that this leads to the following crash on a P
On Thu, Jul 29, 2021 at 10:08:22AM +0530, Sai Prakash Ranjan wrote:
> On 2021-07-28 19:30, Georgi Djakov wrote:
> > On Mon, Jan 11, 2021 at 07:45:02PM +0530, Sai Prakash Ranjan wrote:
> > > commit ecd7274fb4cd ("iommu: Remove unused IOMMU_SYS_CACHE_ONLY flag")
> > > removed unused IOMMU_SYS_CACHE_O
On Tue, Jul 27, 2021 at 05:26:11PM -0500, Tom Lendacky wrote:
> The mem_encrypt_active() function has been replaced by prot_guest_has(),
> so remove the implementation.
>
> Signed-off-by: Tom Lendacky
Reviewed-by: Joerg Roedel
On Tue, Jul 27, 2021 at 05:26:12PM -0500, Tom Lendacky wrote:
> The mem_encrypt_active() function has been replaced by prot_guest_has(),
> so remove the implementation.
>
> Cc: Thomas Gleixner
> Cc: Ingo Molnar
> Cc: Borislav Petkov
> Signed-off-by: Tom Lendacky
Reviewed-by: Joerg Roedel
On Tue, Jul 27, 2021 at 05:26:09PM -0500, Tom Lendacky wrote:
> @@ -48,7 +47,7 @@ static void sme_sev_setup_real_mode(struct
> trampoline_header *th)
> if (prot_guest_has(PATTR_HOST_MEM_ENCRYPT))
> th->flags |= TH_FLAGS_SME_ACTIVE;
>
> - if (sev_es_active()) {
> + if
On Tue, Jul 27, 2021 at 05:26:08PM -0500, Tom Lendacky wrote:
> Replace occurrences of sev_active() with the more generic prot_guest_has()
> using PATTR_GUEST_MEM_ENCRYPT, except for in arch/x86/mm/mem_encrypt*.c
> where PATTR_SEV will be used. If future support is added for other memory
> encrypti
On Tue, Jul 27, 2021 at 05:26:07PM -0500, Tom Lendacky wrote:
> Replace occurrences of sme_active() with the more generic prot_guest_has()
> using PATTR_HOST_MEM_ENCRYPT, except for in arch/x86/mm/mem_encrypt*.c
> where PATTR_SME will be used. If future support is added for other memory
> encryptio
On Tue, Jul 27, 2021 at 05:26:05PM -0500, Tom Lendacky wrote:
> Introduce an x86 version of the prot_guest_has() function. This will be
> used in the more generic x86 code to replace vendor specific calls like
> sev_active(), etc.
>
> While the name suggests this is intended mainly for guests, it
On Tue, Jul 27, 2021 at 05:26:04PM -0500, Tom Lendacky wrote:
> In prep for other protected virtualization technologies, introduce a
> generic helper function, prot_guest_has(), that can be used to check
> for specific protection attributes, like memory encryption. This is
> intended to eliminate h
On Fri, 2021-07-23 at 11:50 -0600, Logan Gunthorpe wrote:
> Setting the ->dma_address to DMA_MAPPING_ERROR is not part of
> the ->map_sg calling convention, so remove it.
>
> Link: https://lore.kernel.org/linux-mips/20210716063241.gc13...@lst.de/
> Suggested-by: Christoph Hellwig
> Signed-off-by:
On Fri, Jul 30, 2021 at 10:52:26AM +0800, Yong Wu wrote:
> .../display/mediatek/mediatek,disp.txt| 9
> .../bindings/media/mediatek-jpeg-decoder.yaml | 9
> .../bindings/media/mediatek-jpeg-encoder.yaml | 9
> .../bindings/media/mediatek-mdp.txt | 8
> ...
On Fri, Jul 23, 2021 at 02:32:02AM -0700, Nadav Amit wrote:
> Nadav Amit (6):
> iommu/amd: Selective flush on unmap
> iommu/amd: Do not use flush-queue when NpCache is on
> iommu: Factor iommu_iotlb_gather_is_disjoint() out
> iommu/amd: Tailored gather logic for AMD
> iommu/amd: Sync once
On Tue, Jul 27, 2021 at 06:51:56AM +, Shameerali Kolothum Thodi wrote:
> A gentle ping on this...
This needs more reviews, and please add
Will Deacon
when you post the next version of this patch-set.
Regards,
Joerg