On Tue, Sep 28, 2021 at 05:23:31PM +0800, Tianyu Lan wrote:
>>
>> - the bare memremap usage in swiotlb looks strange and I'd
>> definitely expect a well documented wrapper.
>
> OK. Should the wrapper be in the DMA code? How about dma_map_decrypted()
> introduced in V4?
As mentioned then, t[...]
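For reference, a minimal sketch of what such a well documented wrapper could look like, assuming the swiotlb pool is remapped above the shared GPA boundary (vTOM) in an Isolation VM; the name swiotlb_mem_remap() and the explicit boundary parameter are assumptions for illustration, not the actual patch:

#include <linux/io.h>

/*
 * Remap the swiotlb pool as host-visible (decrypted) memory in an
 * Isolation VM: physical addresses above the shared GPA boundary are
 * treated as unencrypted by the hypervisor, so access the pool through
 * this remapping rather than through the direct map.
 */
static void *swiotlb_mem_remap(phys_addr_t start, unsigned long bytes,
			       u64 shared_gpa_boundary)
{
	return memremap(start + shared_gpa_boundary, bytes, MEMREMAP_WB);
}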
On Mon, Sep 27, 2021 at 10:26:43PM +0800, Tianyu Lan wrote:
> Hi Christoph:
> Gentle ping. The swiotlb and shared memory mapping changes in this
> patchset need your review. Could you have a look?
I'm a little too busy for a review of such a huge patchset right now.
That being said, here are [...]
Hi Christoph:
This patch follows your proposal in the previous discussion.
Could you have a look?
"use vmap_pfn as in the current series. But in that case I think
we should get rid of the other mapping created by vmalloc. I
thought a bit about finding a way to apply the offset in [...]"
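For context, a minimal sketch of the vmap_pfn() approach quoted above, assuming the offset is applied per PFN when remapping a vmalloc'ed buffer; the helper name hv_remap_shared() is an assumption for illustration:

#include <linux/slab.h>
#include <linux/vmalloc.h>

/*
 * Remap an existing vmalloc buffer above the shared GPA boundary with
 * vmap_pfn(). Only the returned mapping would then be used for data
 * shared with the host; the original vmalloc mapping should be
 * dropped, per the comment quoted above.
 */
static void *hv_remap_shared(void *vaddr, unsigned long size,
			     u64 shared_gpa_boundary)
{
	unsigned int npages = size / PAGE_SIZE;
	unsigned long *pfns;
	void *shared_va;
	unsigned int i;

	pfns = kcalloc(npages, sizeof(*pfns), GFP_KERNEL);
	if (!pfns)
		return NULL;

	for (i = 0; i < npages; i++)
		pfns[i] = vmalloc_to_pfn(vaddr + i * PAGE_SIZE) +
			  (shared_gpa_boundary >> PAGE_SHIFT);

	shared_va = vmap_pfn(pfns, npages, PAGE_KERNEL);
	kfree(pfns);
	return shared_va;
}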
On 9/16/2021 12:21 AM, Michael Kelley wrote:
I think you are proposing this approach to allocating memory for the send
and receive buffers so that you can avoid having two virtual mappings for
the memory, per comments from Christoph Hellwig. But overall, the approach
seems a bit complex and I wonder [...]
On 9/16/2021 12:46 AM, Haiyang Zhang wrote:
+		memset(vmap_pages, 0,
+		       sizeof(*vmap_pages) * vmap_page_index);
+		vmap_page_index = 0;
+
+		for (j = 0; j < i; j++)
+			[...]
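For context, the pattern this fragment belongs to builds a page array and maps it with a single vmap() call, so the buffer ends up with exactly one extra kernel virtual mapping. A self-contained sketch of that pattern (all names here are illustrative, not the actual patch):

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

/*
 * Allocate npages individual pages, record them in a page array, and
 * map them once with vmap(). On any failure, free what was allocated
 * so the caller can retry (e.g. with a smaller allocation unit).
 */
static void *alloc_and_vmap(unsigned int npages, struct page ***pages_out)
{
	struct page **pages;
	void *vaddr;
	unsigned int i, j;

	pages = kcalloc(npages, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return NULL;

	for (i = 0; i < npages; i++) {
		pages[i] = alloc_page(GFP_KERNEL | __GFP_ZERO);
		if (!pages[i])
			goto err_free;
	}

	vaddr = vmap(pages, npages, VM_MAP, PAGE_KERNEL);
	if (!vaddr)
		goto err_free;

	*pages_out = pages;
	return vaddr;

err_free:
	for (j = 0; j < i; j++)
		__free_page(pages[j]);
	kfree(pages);
	return NULL;
}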
From: Tianyu Lan Sent: Tuesday, September 14, 2021 6:39 AM
> Subject: [PATCH V5 12/12] net: netvsc: Add Isolation VM support for
> netvsc driver
From: Tianyu Lan

In Isolation VM, all shared memory with the host needs to be marked
visible to the host via hvcall. vmbus_establish_gpadl() has already done
it for the netvsc rx/tx ring buffer. The page buffer used by
vmbus_sendpacket_pagebuffer() still needs to be handled. Use the DMA API
to map/unmap this memory [...]
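For context, a minimal sketch of what mapping the page buffers through the DMA API could look like; netvsc_dma_map(), the dma_list argument, and the use of PAGE_SHIFT for Hyper-V page frames are assumptions for illustration, not necessarily the actual patch:

#include <linux/dma-mapping.h>
#include <linux/hyperv.h>
#include <linux/io.h>
#include <linux/mm.h>

/*
 * Map each page buffer of an outgoing packet with the DMA API, then
 * point the buffer at the swiotlb bounce-buffer address the host is
 * allowed to see. On error, the caller is assumed to unmap the
 * entries mapped so far.
 */
static int netvsc_dma_map(struct device *dev, struct hv_page_buffer *pb,
			  u32 page_count, dma_addr_t *dma_list)
{
	u32 i;

	for (i = 0; i < page_count; i++) {
		void *src = phys_to_virt((pb[i].pfn << PAGE_SHIFT) +
					 pb[i].offset);

		dma_list[i] = dma_map_single(dev, src, pb[i].len,
					     DMA_TO_DEVICE);
		if (dma_mapping_error(dev, dma_list[i]))
			return -ENOMEM;

		pb[i].pfn = dma_list[i] >> PAGE_SHIFT;
		pb[i].offset = offset_in_page(dma_list[i]);
	}

	return 0;
}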