On Tue, Jun 28, 2022 at 03:01:34PM +0800, Chao Gao wrote:
>From: Andi Kleen
>
>Traditionally swiotlb was not performance critical because it was only
>used for slow devices. But in some setups, like TDX confidential
>guests, all IO has to go through swiotlb. Currently swiotlb only [...]
>[...] custom hook
>called from the early ACPI code.
Signed-off-by: Andi Kleen
[ rebase and fix warnings of checkpatch.pl ]
Signed-off-by: Chao Gao
---
.../admin-guide/kernel-parameters.txt | 4 +-
arch/x86/kernel/acpi/boot.c | 4 +
include/linux/swiotlb.h
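To make the lock-split idea above concrete, here is a minimal sketch, assuming the bounce-buffer pool is divided into areas that each carry their own spinlock; the structure and helper names (io_tlb_area, tlb_area_for_cpu) are illustrative, not taken from the patch itself:

#include <linux/smp.h>
#include <linux/spinlock.h>

/*
 * Illustrative sketch only: carve the single swiotlb pool into areas,
 * each protected by its own lock, so concurrent allocations from
 * different CPUs usually contend on different locks.
 */
struct io_tlb_area {
	spinlock_t lock;	/* protects only this area's slot state */
	unsigned long used;	/* slots currently allocated in this area */
	unsigned int index;	/* next slot to try within this area */
};

/* Pick an area by CPU so most allocations stay lock-local. */
static struct io_tlb_area *tlb_area_for_cpu(struct io_tlb_area *areas,
					    unsigned int nareas)
{
	return &areas[raw_smp_processor_id() % nareas];
}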
[...] cache as well and then the overhead is almost negligible.
Suggested-by: Andi Kleen
Signed-off-by: Chao Gao
---
include/linux/swiotlb.h | 13 +
kernel/dma/swiotlb.c | 63 -
2 files changed, 32 insertions(+), 44 deletions(-)
diff --git a/include/
[...] pair of allocation and freeing.
Signed-off-by: Chao Gao
---
include/linux/swiotlb.h | 6 ++--
kernel/dma/swiotlb.c | 64 -
2 files changed, 34 insertions(+), 36 deletions(-)
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index
Andi Kleen (1):
  swiotlb: Split up single swiotlb lock

Chao Gao (2):
  swiotlb: Use bitmap to track free slots
  swiotlb: Allocate memory in a cache-friendly way
.../admin-guide/kernel-parameters.txt | 4 +-
arch/x86/kernel/acpi/boot.c | 4 +
include/linux/swiotlb.h
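For the "swiotlb: Use bitmap to track free slots" title above, a hedged sketch of the general technique, assuming one bit per IO TLB slot with a set bit meaning "in use"; the variable and helper names (io_tlb_bitmap, tlb_alloc_slots, tlb_free_slots) are hypothetical, and the real allocator also has alignment and locking requirements not shown here:

#include <linux/bitmap.h>

/* Illustrative only: one bit per slot, set = in use. */
static unsigned long *io_tlb_bitmap;
static unsigned long io_tlb_nslabs;

/* Claim nslots contiguous free slots; returns the first slot index or -1. */
static int tlb_alloc_slots(unsigned int nslots)
{
	unsigned long idx;

	idx = bitmap_find_next_zero_area(io_tlb_bitmap, io_tlb_nslabs,
					 0, nslots, 0);
	if (idx >= io_tlb_nslabs)
		return -1;
	bitmap_set(io_tlb_bitmap, idx, nslots);
	return idx;
}

/* Release slots by clearing their bits. */
static void tlb_free_slots(unsigned int idx, unsigned int nslots)
{
	bitmap_clear(io_tlb_bitmap, idx, nslots);
}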
On Wed, Apr 13, 2022 at 06:59:58AM +0200, Christoph Hellwig wrote:
>So for now I'd be happy with the one liner presented here, but
>eventually the whole area could use an overhaul.
Thanks. Do you want me to post a new version with the Fixes tag, or will you
take care of it?
Fixes: 55897af63091 ("dma-direct: merge swiotlb_dma_ops into the dma_direct code")
On Tue, Apr 12, 2022 at 02:33:05PM +0100, Robin Murphy wrote:
>On 12/04/2022 12:38 pm, Chao Gao wrote:
>> When we looked into FIO performance with swiotlb enabled in VM, we found
>> swiotlb_bounce() is always called one more time than expected for each DMA
>> read request.
>
On Tue, Apr 12, 2022 at 07:38:05PM +0800, Chao Gao wrote:
>When we looked into FIO performance with swiotlb enabled in VM, we found
>swiotlb_bounce() is always called one more time than expected for each DMA
>read request.
>
>It turns out that the bounce buffer is copied to the original [...]
This fix increases FIO 64KB sequential read throughput in a guest with
swiotlb=force by 5.6%.
Reported-by: Wang Zhaoyang1
Reported-by: Gao Liang
Signed-off-by: Chao Gao
Reviewed-by: Kevin Tian
---
kernel/dma/direct.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
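The redundant copy described in this thread comes from the unmap path doing a CPU sync and then letting swiotlb_tbl_unmap_single() bounce the same data again. Below is a sketch of the kind of change the kernel/dma/direct.h diffstat suggests, modelled on the unmap helper in that header; it is hedged because the exact upstream hunk may differ slightly:

#include <linux/dma-direct.h>
#include <linux/dma-mapping.h>
#include <linux/swiotlb.h>

/*
 * Sketch of the idea, not necessarily the exact upstream hunk:
 * dma_direct_sync_single_for_cpu() has already copied the bounce buffer
 * back to the original buffer, so pass DMA_ATTR_SKIP_CPU_SYNC to
 * swiotlb_tbl_unmap_single() instead of bouncing the data a second time.
 */
static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr,
		size_t size, enum dma_data_direction dir, unsigned long attrs)
{
	phys_addr_t phys = dma_to_phys(dev, addr);

	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
		dma_direct_sync_single_for_cpu(dev, addr, size, dir);

	if (unlikely(is_swiotlb_buffer(dev, phys)))
		swiotlb_tbl_unmap_single(dev, phys, size, dir,
					 attrs | DMA_ATTR_SKIP_CPU_SYNC);
}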
On Thu, Sep 16, 2021 at 11:49:39AM -0400, Konrad Rzeszutek Wilk wrote:
>On Wed, Sep 01, 2021 at 12:21:35PM +0800, Chao Gao wrote:
>> Currently, swiotlb uses a global index to indicate the starting point
>> of next search. The index increases from 0 to the number of slots - 1
>> [...]
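The quoted text describes the pre-existing allocator: a single global cursor that every CPU advances and that wraps back to 0 after the last slot. A minimal sketch of that search pattern, illustrative only, with a hypothetical slot_in_use array; the real code tracks per-slot state differently and holds the swiotlb lock around the search:

static unsigned int io_tlb_index;	/* starting point of the next search */

/* Walk all slots from the global cursor, wrapping at the end. */
static int tlb_find_free_slot(bool *slot_in_use, unsigned int nslots)
{
	unsigned int i = io_tlb_index;
	unsigned int tried;

	for (tried = 0; tried < nslots; tried++) {
		if (!slot_in_use[i]) {
			slot_in_use[i] = true;
			io_tlb_index = (i + 1 < nslots) ? i + 1 : 0;
			return i;
		}
		i = (i + 1 < nslots) ? i + 1 : 0;
	}
	return -1;	/* pool exhausted */
}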
[...] cache as well and then the overhead is almost negligible.
Suggested-by: Andi Kleen
Signed-off-by: Chao Gao
---
include/linux/swiotlb.h | 15 --
kernel/dma/swiotlb.c | 43 +++--
2 files changed, 20 insertions(+), 38 deletions(-)
diff --git a/