RE: [RFC v1 2/4] virtio-gpu uapi: Add VIRTIO_GPU_F_OUT_FENCE feature

2021-07-29 Thread Kasireddy, Vivek
Hi Gerd, > > On Thu, Jul 29, 2021 at 01:16:57AM -0700, Vivek Kasireddy wrote: > > This feature enables the Guest to wait to know when a resource > > is completely consumed by the Host. > > virtio spec update? > > What are the exact semantics? [Kasireddy, Vivek]
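
For context, virtio-gpu feature bits live in include/uapi/linux/virtio_gpu.h. A minimal sketch of how the RFC's proposed flag (never merged; bit number illustrative) would sit next to the existing ones:

    /* Existing bits in include/uapi/linux/virtio_gpu.h */
    #define VIRTIO_GPU_F_VIRGL         0
    #define VIRTIO_GPU_F_EDID          1
    #define VIRTIO_GPU_F_RESOURCE_UUID 2
    #define VIRTIO_GPU_F_RESOURCE_BLOB 3
    /* RFC proposal: the host signals an out-fence once it has completely
     * consumed (finished reading from) a guest resource. */
    #define VIRTIO_GPU_F_OUT_FENCE     4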

RE: [RFC v1 4/4] drm/virtio: Probe and implement VIRTIO_GPU_F_OUT_FENCE feature

2021-07-29 Thread Kasireddy, Vivek
Looks like you don't need has_out_fence, you can just use > vgdev->ddev->mode_config.deferred_out_fence instead. [Kasireddy, Vivek] Right, I don't need has_out_fence; will fix it. Thanks, Vivek > > take care, > Gerd
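
A hedged sketch of Gerd's simplification; deferred_out_fence is the field proposed by this RFC series, not an upstream mode_config member:

    /* Probe path: no driver-private has_out_fence flag, just set the
     * RFC-proposed mode_config field and test it wherever needed. */
    if (virtio_has_feature(vgdev->vdev, VIRTIO_GPU_F_OUT_FENCE))
            vgdev->ddev->mode_config.deferred_out_fence = true;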

RE: [RFC v1 0/4] drm: Add support for DRM_CAP_DEFERRED_OUT_FENCE capability

2021-08-01 Thread Kasireddy, Vivek
https://gitlab.freedesktop.org/wayland/weston/-/merge_requests/668 > > Uh I kinda wanted to discuss this a bit more before we jump into typing > code, but well I guess not that much work yet. [Kasireddy, Vivek] Right, it wasn't a lot of work :) > > So maybe I'm not understanding th

RE: [RFC v1 0/4] drm: Add support for DRM_CAP_DEFERRED_OUT_FENCE capability

2021-08-02 Thread Kasireddy, Vivek
> > > > https://gitlab.freedesktop.org/wayland/weston/-/merge_requests/668 > > > > > > Uh I kinda wanted to discuss this a bit more before we jump into typing > > > code, but well I guess not that much work yet. > > [Kasireddy, Vivek] Right, it wasn'

RE: [RFC v1 0/4] drm: Add support for DRM_CAP_DEFERRED_OUT_FENCE capability

2021-08-02 Thread Kasireddy, Vivek
ase ... > > virtio_gpu_primary_plane_update() will send RESOURCE_FLUSH only for > DIRTYFB and both SET_SCANOUT + RESOURCE_FLUSH for page-flip, and I > think for the page-flip case the host (aka qemu) doesn't get the > "wait until old framebuffer is not in use any more"

RE: [RFC v1 0/4] drm: Add support for DRM_CAP_DEFERRED_OUT_FENCE capability

2021-08-04 Thread Kasireddy, Vivek
ts. > >>> In order for this to happen, the dma-fence that the Guest KMS waits on -- > >>> before > sending > >>> pageflip completion -- cannot be tied to a wl_buffer.release event. This > >>> means that, > the > >>> Guest compositor has t

RE: [RFC v1 0/4] drm: Add support for DRM_CAP_DEFERRED_OUT_FENCE capability

2021-08-04 Thread Kasireddy, Vivek
old framebuffer is not in use any more" right yet. > > [Kasireddy, Vivek] As you know, with the GTK UI backend and this patch > > series: > > https://lists.nongnu.org/archive/html/qemu-devel/2021-06/msg06745.html > > we do create a sync file fd -- after the Blit -- and

RE: [RFC v1 0/4] drm: Add support for DRM_CAP_DEFERRED_OUT_FENCE capability

2021-08-04 Thread Kasireddy, Vivek
means that, > the > > >>> Guest compositor has to be forced to use a new buffer for its next > > >>> repaint cycle > when it > > >>> gets a pageflip completion. > > >> > > >> Is that really the only solution? > > >

RE: [RFC v1 0/4] drm: Add support for DRM_CAP_DEFERRED_OUT_FENCE capability

2021-08-06 Thread Kasireddy, Vivek
> >>> pageflip completion -- cannot be tied to a wl_buffer.release event. > > > > >>> This means > that, > > > the > > > > >>> Guest compositor has to be forced to use a new buffer for its next > > > > >>>

RE: [RFC v1 0/4] drm: Add support for DRM_CAP_DEFERRED_OUT_FENCE capability

2021-08-10 Thread Kasireddy, Vivek
Hi Daniel, > On Fri, Aug 06, 2021 at 07:27:13AM +0000, Kasireddy, Vivek wrote: > > Hi Daniel, > > > > > > > > >>> The solution: > > > > > > >>> - To ensure full framerate, the Guest compositor has to start > > > >

RE: [RFC v1 0/4] drm: Add support for DRM_CAP_DEFERRED_OUT_FENCE capability

2021-08-11 Thread Kasireddy, Vivek
Hi Daniel, > On Tue, Aug 10, 2021 at 08:21:09AM +0000, Kasireddy, Vivek wrote: > > Hi Daniel, > > > > > On Fri, Aug 06, 2021 at 07:27:13AM +0000, Kasireddy, Vivek wrote: > > > > Hi Daniel, > > > > > > > > > > > > >

RE: [RFC v1 0/4] drm: Add support for DRM_CAP_DEFERRED_OUT_FENCE capability

2021-08-11 Thread Kasireddy, Vivek
Hi Michel, > On 2021-08-10 10:30 a.m., Daniel Vetter wrote: > > On Tue, Aug 10, 2021 at 08:21:09AM +0000, Kasireddy, Vivek wrote: > >>> On Fri, Aug 06, 2021 at 07:27:13AM +0000, Kasireddy, Vivek wrote: > >>>>>>> > >>>>>>

RE: [PATCH 1/2] drm/virtio: Create Dumb BOs as guest Blobs

2021-03-31 Thread Kasireddy, Vivek
in virtio > terms) *can* create a shared mapping. So, the guest sends still needs to > send transfer > commands, and then the device can shortcut the transfer commands on the host > side in > case a shared mapping exists. [Kasireddy, Vivek] Ok. IIUC, are you saying that the
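
The shape of what was eventually merged for this, roughly as in virtio_gpu_mode_dumb_create() in drivers/gpu/drm/virtio/virtgpu_gem.c (simplified sketch):

    /* Create dumb BOs as guest blobs when the host supports them, so the
     * host can set up a shared mapping and shortcut transfer commands. */
    if (vgdev->has_resource_blob) {
            params.blob_mem = VIRTIO_GPU_BLOB_MEM_GUEST;
            params.blob_flags = VIRTIO_GPU_BLOB_FLAG_USE_SHAREABLE;
            params.blob = true;
    }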

RE: [RFC v3 2/3] virtio: Introduce Vdmabuf driver

2021-02-08 Thread Kasireddy, Vivek
Hi Gerd, Daniel, > -Original Message- > From: Daniel Vetter > Sent: Monday, February 08, 2021 1:39 AM > To: Gerd Hoffmann > Cc: Daniel Vetter ; Kasireddy, Vivek > ; > virtualizat...@lists.linux-foundation.org; dri-devel@lists.freedesktop.org; > Vette

RE: [RFC v3 2/3] virtio: Introduce Vdmabuf driver

2021-02-09 Thread Kasireddy, Vivek
Hi Gerd, > -Original Message- > From: Gerd Hoffmann > Sent: Tuesday, February 09, 2021 12:45 AM > To: Kasireddy, Vivek > Cc: Daniel Vetter ; > virtualizat...@lists.linux-foundation.org; dri- > de...@lists.freedesktop.org; Vetter, Daniel ; > daniel.vet..

RE: [RFC v3 2/3] virtio: Introduce Vdmabuf driver

2021-02-12 Thread Kasireddy, Vivek
Hi Gerd, > > > You don't have to use the rendering pipeline. You can let the i915 > > > gpu render into a dma-buf shared with virtio-gpu, then use > > > virtio-gpu only for buffer sharing with the host. [Kasireddy, Vivek] Just to confirm my understanding of w

RE: [RFC v3 2/3] virtio: Introduce Vdmabuf driver

2021-02-12 Thread Kasireddy, Vivek
Hi Christian, > > Hi Vivek, > > > [Kasireddy, Vivek] What if I do mmap() on the fd followed by mlock() > > or mmap() followed by get_user_pages()? If it still fails, would > > ioremapping the device memory and poking at the backing storage be an > > option? Or

RE: [RFC v3 2/3] virtio: Introduce Vdmabuf driver

2021-02-22 Thread Kasireddy, Vivek
Hi Gerd, > > On Fri, Feb 12, 2021 at 08:15:12AM +0000, Kasireddy, Vivek wrote: > > Hi Gerd, > > [Kasireddy, Vivek] Just to confirm my understanding of what you are > > suggesting, are you saying that we need to either have Weston allocate > > scanout buffers (GBM su

RE: [PATCH 1/3] virtio-gpu uapi: Add VIRTIO_GPU_F_EXPLICIT_FLUSH feature

2021-05-11 Thread Kasireddy, Vivek
Hi Gerd, > On Tue, May 11, 2021 at 01:36:08AM -0700, Vivek Kasireddy wrote: > > This feature enables the Guest to wait until a flush has been > > performed on a buffer it has submitted to the Host. > > This needs a virtio-spec update documenting the new feature. [Kasired

RE: [PATCH 1/3] virtio-gpu uapi: Add VIRTIO_GPU_F_EXPLICIT_FLUSH feature

2021-05-12 Thread Kasireddy, Vivek
't need the previous one? That is likewise > linked to a command, although it is set_scanout this time. [Kasireddy, Vivek] Mainly for page-flipping but I'd also like to have fbcon, Xorg that do frontbuffer rendering/updates to work seamlessly as well. > > So, right now qemu simply que

RE: [PATCH 1/3] virtio-gpu uapi: Add VIRTIO_GPU_F_EXPLICIT_FLUSH feature

2021-05-17 Thread Kasireddy, Vivek
Hi Gerd, > > [Kasireddy, Vivek] Correct, that is exactly what I want -- make the Guest > > wait > > until it gets notified that the Host is completely done processing/using > > the fb. > > However, there can be two resources the guest can be made to wait on: wait

RE: [Intel-gfx] [PATCH v6 2/2] drm/i915/gem: Don't try to map and fence large scanout buffers (v9)

2022-03-15 Thread Kasireddy, Vivek
thing) > > 3. Pin the buffer with the most lenient approach > > > > Even the non-blocking interim stage is dangerous, since it'll just > > result in other buffers (e.g. when triple-buffering) getting unbound > > and we're back to the same stall. Note that t

RE: [Intel-gfx] [PATCH v6 2/2] drm/i915/gem: Don't try to map and fence large scanout buffers (v9)

2022-03-16 Thread Kasireddy, Vivek
Hi Tvrtko, > > On 15/03/2022 07:28, Kasireddy, Vivek wrote: > > Hi Tvrtko, Daniel, > > > >> > >> On 11/03/2022 09:39, Daniel Vetter wrote: > >>> On Mon, 7 Mar 2022 at 21:38, Vivek Kasireddy > >>> wrote: > >>>> &

RE: [Intel-gfx] [PATCH v6 2/2] drm/i915/gem: Don't try to map and fence large scanout buffers (v9)

2022-03-17 Thread Kasireddy, Vivek
Hi Tvrtko, > > On 16/03/2022 07:37, Kasireddy, Vivek wrote: > > Hi Tvrtko, > > > >> > >> On 15/03/2022 07:28, Kasireddy, Vivek wrote: > >>> Hi Tvrtko, Daniel, > >>> > >>>> > >>>> On 11/03/2022

RE: [PATCH v2 1/3] drm/mm: Ensure that the entry is not NULL before extracting rb_node

2022-02-22 Thread Kasireddy, Vivek
Hi Tvrtko, > > On 18/02/2022 03:47, Kasireddy, Vivek wrote: > > Hi Tvrtko, > > > >> > >> On 17/02/2022 07:50, Vivek Kasireddy wrote: > >>> While looking for next holes suitable for an allocation, although, > >>> it is highly unlike

RE: [Intel-gfx] [CI 1/2] drm/mm: Add an iterator to optimally walk over holes for an allocation (v4)

2022-02-28 Thread Kasireddy, Vivek
> alignment = 0; > > - once = mode & DRM_MM_INSERT_ONCE; > - mode &= ~DRM_MM_INSERT_ONCE; > - > remainder_mask = is_power_of_2(alignment) ? alignment - 1 : 0; > - for (hole = first_hole(mm, range_start, range_end, si

RE: [RFC v1 4/6] drm/virtio: Probe and implement VIRTIO_GPU_F_RELEASE_FENCE feature

2021-09-15 Thread Kasireddy, Vivek
> enum virtio_gpu_ctrl_type { > > VIRTIO_GPU_UNDEFINED = 0, > > Where is the virtio-spec update for that? [Kasireddy, Vivek] I was going to do that if there'd a consensus over DRM_CAP_RELEASE_FENCE. Otherwise, I don't think VIRTIO_GPU_F_RELEASE_FENCE is needed. Thanks, Vivek > > thanks, > Gerd

RE: [RFC v1 3/6] drm: Add a capability flag to support additional flip completion signalling

2021-10-14 Thread Kasireddy, Vivek
Hi Pekka, Thank you for reviewing this patch. > On Mon, 13 Sep 2021 16:35:26 -0700 > Vivek Kasireddy wrote: > > > If a driver supports this capability, it means that there would be an > > additional signalling mechanism for a page flip completion in addition > > to out_fence or DRM_MODE_PAGE_FL

RE: [RFC v1 3/6] drm: Add a capability flag to support additional flip completion signalling

2021-10-17 Thread Kasireddy, Vivek
does not know that userspace can handle pageflip > > > completing "too early", then it has no choice but to wait until the old > > > buffer is really free before signalling pageflip completion. > > > > > > Wouldn't that make sense? > > [V

RE: [PATCH v2 11/12] drm/virtio: implement context init: add virtio_gpu_fence_event

2021-09-17 Thread Kasireddy, Vivek
memory buffer of type > BLOB_MEM_GUEST [this is the common way to receive responses with > virtgpu]. As such, there is no context specific read(..) > implementation either -- just a poll(..) implementation. [Kasireddy, Vivek] Given my limited understanding of virtio_gpu 3D/Virgl, I am

RE: [PATCH v3 11/12] drm/virtio: implement context init: add virtio_gpu_fence_event

2021-11-15 Thread Kasireddy, Vivek
Hi Daniel, Greg, If it is the same or a similar crash reported here: https://lists.freedesktop.org/archives/dri-devel/2021-November/330018.html and here: https://lists.freedesktop.org/archives/dri-devel/2021-November/330212.html then the fix is already merged: https://git.kernel.org/pub/scm/linux

RE: [PATCH v2 1/2] drm/virtio: .release ops for virtgpu fence release

2022-06-06 Thread Kasireddy, Vivek
> virtio_gpu_fence_release is added to free virtio-gpu-fence > upon release of dma_fence. > > Cc: Gurchetan Singh > Cc: Gerd Hoffmann > Cc: Vivek Kasireddy > Signed-off-by: Dongwon Kim > --- > drivers/gpu/drm/virtio/virtgpu_fence.c | 8 > 1 file changed, 8 insertions(+) > > diff --g

RE: [PATCH v2 2/2] drm/virtio: fence created per cursor/plane update

2022-06-06 Thread Kasireddy, Vivek
pu_framebuffer(plane->state->fb); > bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]); > - if (vgfb->fence) { > - struct virtio_gpu_object_array *objs; > > + if (!bo) > + return; [Kasireddy, Vivek] I think you can drop the above check as

RE: [PATCH v2 2/2] drm/virtio: fence created per cursor/plane update

2022-06-14 Thread Kasireddy, Vivek
ails, it falls back to non-fence path so it > won't fail for primary-plane-update. > > For cursor plane update, it returns if fence is NULL but we could change > it to just proceed and just make it skip waiting like, [Kasireddy, Vivek] But cursor plane update is always tied to a

RE: [PATCH 1/2] drm/mm: Add an iterator to optimally walk over holes for an allocation

2022-02-03 Thread Kasireddy, Vivek
Hi Tvrtko, > -Original Message- > From: Tvrtko Ursulin > Sent: Wednesday, February 02, 2022 5:04 AM > To: Kasireddy, Vivek ; > dri-devel@lists.freedesktop.org > Subject: Re: [PATCH 1/2] drm/mm: Add an iterator to optimally walk over holes > for an > allocation

RE: [PATCH v2 1/3] drm/mm: Ensure that the entry is not NULL before extracting rb_node

2022-02-17 Thread Kasireddy, Vivek
t. > > Was the need for this just a consequence of insufficient locking in the > i915 patch? [Kasireddy, Vivek] Partly, yes; but I figured since we are anyway doing if (!entry || ..), it makes sense to dereference entry and extract the rb_node after this check. Thanks, Vivek > > Reg
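
A hedged reconstruction of the shape of the fix being discussed (not the verbatim patch):

    /* Extract the rb_node only after the NULL check, instead of
     * computing &entry->rb_hole_addr before knowing entry is valid. */
    if (!entry || RB_EMPTY_NODE(&entry->rb_hole_addr))
            return NULL;
    rb_node = &entry->rb_hole_addr;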

RE: [PATCH 1/3] virtio-gpu uapi: Add VIRTIO_GPU_F_EXPLICIT_FLUSH feature

2021-05-24 Thread Kasireddy, Vivek
Hi Gerd, Any further comments on this? Thanks, Vivek > > Hi Gerd, > > > [Kasireddy, Vivek] Correct, that is exactly what I want -- make the > > > Guest wait until it gets notified that the Host is completely done > > > processing/using the > fb. >

RE: [PATCH v1 2/2] drm/virtio: Add the hotplug_mode_update property for rescanning of modes

2023-01-09 Thread Kasireddy, Vivek
is this need to have userspace also check for position > info updates added by patch #1)? [Kasireddy, Vivek] Yes, that is exactly the reason why this property is needed. In other words, Mutter does not seem to look at suggested_x/y values (or position info) if hotplug_mode_property is not the

RE: [PATCH v1 2/2] drm/virtio: Add the hotplug_mode_update property for rescanning of modes

2023-01-09 Thread Kasireddy, Vivek
's not documented anywhere, and it's also not done with any > piece of common code. Which all looks really fishy. [Kasireddy, Vivek] AFAIU, this property appears to be useful only for virtual GPU drivers to share the Host output(s) layout with the Guest compositor. The sugg

RE: [PATCH] udmabuf: revert 'Add support for mapping hugepages (v4)'

2023-06-14 Thread Kasireddy, Vivek
Hi David, > > On 13.06.23 10:26, Kasireddy, Vivek wrote: > > Hi David, > > > >> > >> On 12.06.23 09:10, Kasireddy, Vivek wrote: > >>> Hi Mike, > >> > >> Hi Vivek, > >> > >>> > >>> Sorry for the

RE: [PATCH] mm: fix hugetlb page unmap count balance issue

2023-06-19 Thread Kasireddy, Vivek
Hi Gerd, > > On Mon, May 15, 2023 at 10:04:42AM -0700, Mike Kravetz wrote: > > On 05/12/23 16:29, Mike Kravetz wrote: > > > On 05/12/23 14:26, James Houghton wrote: > > > > On Fri, May 12, 2023 at 12:20 AM Junxiao Chang > wrote: > > > > > > > > This alone doesn't fix mapcounting for PTE-mapped H

RE: [PATCH v1 0/2] udmabuf: Add back support for mapping hugetlb pages

2023-06-22 Thread Kasireddy, Vivek
Hi David, > > The first patch ensures that the mappings needed for handling mmap > > operation would be managed by using the pfn instead of struct page. > > The second patch restores support for mapping hugetlb pages where > > subpages of a hugepage are not directly used anymore (main reason > > f

RE: [PATCH v1 0/2] udmabuf: Add back support for mapping hugetlb pages

2023-06-26 Thread Kasireddy, Vivek
Hi Peter, > > On Fri, Jun 23, 2023 at 06:13:02AM +0000, Kasireddy, Vivek wrote: > > Hi David, > > > > > > The first patch ensures that the mappings needed for handling mmap > > > > operation would be managed by using the pfn instead of struct page. >

RE: [PATCH v1 0/2] udmabuf: Add back support for mapping hugetlb pages

2023-06-26 Thread Kasireddy, Vivek
Hi David, > On 26.06.23 19:52, Peter Xu wrote: > > On Mon, Jun 26, 2023 at 07:45:37AM +0000, Kasireddy, Vivek wrote: > >> Hi Peter, > >> > >>> > >>> On Fri, Jun 23, 2023 at 06:13:02AM +0000, Kasireddy, Vivek wrote: > >>>> Hi David

RE: [PATCH v1 0/2] udmabuf: Add back support for mapping hugetlb pages

2023-06-28 Thread Kasireddy, Vivek
Hi David, > > On 27.06.23 08:37, Kasireddy, Vivek wrote: > > Hi David, > > > > Hi! > > sorry for taking a bit longer to reply lately. No problem. > > [...] > > >>> Sounds right, maybe it needs to go back to the old GUP solution, though

RE: [RFC v1 1/3] mm/mmu_notifier: Add a new notifier for mapping updates (new pages)

2023-07-20 Thread Kasireddy, Vivek
Hi Alistair, > > > diff --git a/mm/hugetlb.c b/mm/hugetlb.c > > index 64a3239b6407..1f2f0209101a 100644 > > --- a/mm/hugetlb.c > > +++ b/mm/hugetlb.c > > @@ -6096,8 +6096,12 @@ vm_fault_t hugetlb_fault(struct mm_struct > *mm, struct vm_area_struct *vma, > > * hugetlb_no_page will dro
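
A hedged reconstruction of what the quoted mm/hugetlb.c hunk adds; the notifier was only ever an RFC and never merged, and the exact call may differ:

    /* After hugetlb_fault() installs a new page, tell listeners such as
     * udmabuf that the mapping at this address now points elsewhere. */
    mmu_notifier_update_mapping(vma->vm_mm, address,
                                pte_pfn(huge_ptep_get(ptep)));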

RE: [RFC v1 1/3] mm/mmu_notifier: Add a new notifier for mapping updates (new pages)

2023-07-24 Thread Kasireddy, Vivek
Hi Alistair, > > > "Kasireddy, Vivek" writes: > > > Hi Alistair, > > > >> > >> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c > >> > index 64a3239b6407..1f2f0209101a 100644 > >> > --- a/mm/hugetlb.c > >> >

RE: [RFC v1 1/3] mm/mmu_notifier: Add a new notifier for mapping updates (new pages)

2023-07-24 Thread Kasireddy, Vivek
Hi Jason, > > On Mon, Jul 24, 2023 at 07:54:38AM +0000, Kasireddy, Vivek wrote: > > > > I'm not at all familiar with the udmabuf use case but that sounds > > > brittle and effectively makes this notifier udmabuf specific right? > > Oh, Qemu uses the u

RE: [RFC v1 1/3] mm/mmu_notifier: Add a new notifier for mapping updates (new pages)

2023-07-24 Thread Kasireddy, Vivek
Hi Alistair, > >> > >> Yes, although obviously as I think you point out below you wouldn't be > >> able to take any sleeping locks in mmu_notifier_update_mapping(). > > Yes, I understand that, but I am not sure how we can prevent any potential > > notifier callback from taking sleeping locks other

RE: [RFC v1 1/3] mm/mmu_notifier: Add a new notifier for mapping updates (new pages)

2023-07-25 Thread Kasireddy, Vivek
Hi Hugh, > > On Mon, 24 Jul 2023, Kasireddy, Vivek wrote: > > Hi Jason, > > > On Mon, Jul 24, 2023 at 07:54:38AM +0000, Kasireddy, Vivek wrote: > > > > > > > > I'm not at all familiar with the udmabuf use case but that sounds > > > >

RE: [RFC v1 1/3] mm/mmu_notifier: Add a new notifier for mapping updates (new pages)

2023-07-25 Thread Kasireddy, Vivek
Hi Jason, > > > > > > > > I'm not at all familiar with the udmabuf use case but that sounds > > > > > brittle and effectively makes this notifier udmabuf specific right? > > > > Oh, Qemu uses the udmabuf driver to provide Host Graphics > components > > > > (such as Spice, Gstreamer, UI, etc) zero-

RE: [RFC v1 1/3] mm/mmu_notifier: Add a new notifier for mapping updates (new pages)

2023-07-27 Thread Kasireddy, Vivek
Hi Jason, > > On Tue, Jul 25, 2023 at 10:44:09PM +0000, Kasireddy, Vivek wrote: > > > If you still need the memory mapped then you re-call hmm_range_fault > > > and re-obtain it. hmm_range_fault will resolve all the races and you > > > get new pages. > >

RE: [RFC v1 1/3] mm/mmu_notifier: Add a new notifier for mapping updates (new pages)

2023-07-28 Thread Kasireddy, Vivek
Hi Peter, > > > > > > > I'm not at all familiar with the udmabuf use case but that sounds > > > > > > > brittle and effectively makes this notifier udmabuf specific > > > > > > > right? > > > > > > Oh, Qemu uses the udmabuf driver to provide Host Graphics > > > components > > > > > > (such as Spi

RE: [RFC v1 1/3] mm/mmu_notifier: Add a new notifier for mapping updates (new pages)

2023-07-28 Thread Kasireddy, Vivek
Hi Jason, > > > > > If you still need the memory mapped then you re-call > hmm_range_fault > > > > > and re-obtain it. hmm_range_fault will resolve all the races and you > > > > > get new pages. > > > > > > > IIUC, for my udmabuf use-case, it looks like calling hmm_range_fault > > > > immediately

RE: [RFC v1 1/3] mm/mmu_notifier: Add a new notifier for mapping updates (new pages)

2023-07-31 Thread Kasireddy, Vivek
Hi Jason, > > > > Later the importer decides it needs the memory again so it again asks > > > for the dmabuf to be present, which does hmm_range_fault and gets > > > whatever is appropriate at the time. > > Unless I am missing something, I think just doing the above still won't > > solve > > the

RE: [RFC v1 1/3] mm/mmu_notifier: Add a new notifier for mapping updates (new pages)

2023-08-01 Thread Kasireddy, Vivek
Hi Peter, > > > > > > > > > > > I'm not at all familiar with the udmabuf use case but that > sounds > > > > > > > > > brittle and effectively makes this notifier udmabuf specific > right? > > > > > > > > Oh, Qemu uses the udmabuf driver to provide Host Graphics > > > > > components > > > > > > > >

RE: [RFC v1 1/3] mm/mmu_notifier: Add a new notifier for mapping updates (new pages)

2023-08-01 Thread Kasireddy, Vivek
>> On 01.08.23 14:19, Jason Gunthorpe wrote: > >>>>> On Tue, Aug 01, 2023 at 05:32:38AM +0000, Kasireddy, Vivek wrote: > >>>>> > >>>>>>> You get another invalidate because the memfd removes the zero > pages > >>>>>

RE: [RFC v1 1/3] mm/mmu_notifier: Add a new notifier for mapping updates (new pages)

2023-08-03 Thread Kasireddy, Vivek
Hi Jason, > > > Right, the "the zero pages are changed into writable pages" in your > > > above comment just might not apply, because there won't be any page > > > replacement (hopefully :) ). > > > If the page replacement does not happen when there are new writes to the > > area where the hole p

RE: [RFC v1 1/3] mm/mmu_notifier: Add a new notifier for mapping updates (new pages)

2023-08-03 Thread Kasireddy, Vivek
Hi Peter, > > Ok, I'll keep your use-case in mind but AFAICS, the process that creates > > the udmabuf can be considered the owner. So, I think it makes sense that > > the owner's VMA range can be registered (via mmu_notifiers) for updates. > > No need to have your special attention on this; my u

RE: [RFC v1 2/3] udmabuf: Replace pages when there is FALLOC_FL_PUNCH_HOLE in memfd

2023-08-03 Thread Kasireddy, Vivek
Hi Daniel, > > On Tue, Jul 18, 2023 at 01:28:57AM -0700, Vivek Kasireddy wrote: > > When a hole is punched in the memfd or when a page is replaced for > > any reason, the udmabuf driver needs to get notified in order to > > update its list of pages with the new page. To accomplish this, we > > fi

RE: [RFC v1 1/3] mm/mmu_notifier: Add a new notifier for mapping updates (new pages)

2023-08-03 Thread Kasireddy, Vivek
Hi Alistair, David, Jason, > >> Right, the "the zero pages are changed into writable pages" in your > >> above comment just might not apply, because there won't be any > page > >> replacement (hopefully :) ). > > > If the page replacement does not happen when there are new wri

RE: [RFC v1 1/3] mm/mmu_notifier: Add a new notifier for mapping updates (new pages)

2023-08-04 Thread Kasireddy, Vivek
Hi David, > > > Right, the "the zero pages are changed into writable pages" in your > above comment just might not apply, because there won't be any > >> page > replacement (hopefully :) ). > >> > >>> If the page replacement does not happen when there are new > w

RE: [PATCH] udmabuf: revert 'Add support for mapping hugepages (v4)'

2023-06-12 Thread Kasireddy, Vivek
Hi Mike, Sorry for the late reply; I just got back from vacation. If it is unsafe to directly use the subpages of a hugetlb page, then reverting this patch seems like the only option for addressing this issue immediately. So, this patch is Acked-by: Vivek Kasireddy As far as the use-case is conc

RE: [PATCH] udmabuf: revert 'Add support for mapping hugepages (v4)'

2023-06-13 Thread Kasireddy, Vivek
Hi David, > > On 12.06.23 09:10, Kasireddy, Vivek wrote: > > Hi Mike, > > Hi Vivek, > > > > > Sorry for the late reply; I just got back from vacation. > > If it is unsafe to directly use the subpages of a hugetlb page, then > > reverting

RE: [RFC v1 1/3] mm/mmu_notifier: Add a new notifier for mapping updates (new pages)

2023-08-08 Thread Kasireddy, Vivek
Hi Jason, > > > No, adding HMM_PFN_REQ_WRITE still doesn't help in fixing the issue. > > Although, I do not have THP enabled (or built-in), shmem does not evict > > the pages after hole punch as noted in the comment in shmem_fallocate(): > > This is the source of all your problems. > > Things t

RE: [RFC v1 1/3] mm/mmu_notifier: Add a new notifier for mapping updates (new pages)

2023-08-15 Thread Kasireddy, Vivek
Hi Jason, > > > > > > > No, adding HMM_PFN_REQ_WRITE still doesn't help in fixing the issue. > > > > Although, I do not have THP enabled (or built-in), shmem does not evict > > > > the pages after hole punch as noted in the comment in > shmem_fallocate(): > > > > > > This is the source of all your

RE: [PATCH v1 0/3] udmabuf: Add support for page migration out of movable zone or CMA

2023-08-21 Thread Kasireddy, Vivek
Hi Jason, > > This patch series adds support for migrating pages associated with > > a udmabuf out of the movable zone or CMA to avoid breaking features > > such as memory hotunplug. > > > > The first patch exports check_and_migrate_movable_pages() function > > out of GUP so that the udmabuf drive

RE: [RFC v1 1/3] mm/mmu_notifier: Add a new notifier for mapping updates (new pages)

2023-08-21 Thread Kasireddy, Vivek
Hi Alistair, > >> > > > No, adding HMM_PFN_REQ_WRITE still doesn't help in fixing the > issue. > >> > > > Although, I do not have THP enabled (or built-in), shmem does not > evict > >> > > > the pages after hole punch as noted in the comment in > >> shmem_fallocate(): > >> > > > >> > > This is the

RE: [PATCH v1 0/3] udmabuf: Add support for page migration out of movable zone or CMA

2023-08-23 Thread Kasireddy, Vivek
Hi David, > > >> - Add a new API to the backing store/allocator to longterm-pin the page. > >>For example, something along the lines of > shmem_pin_mapping_page_longterm() > >>for shmem as suggested by Daniel. A similar one needs to be added for > >>hugetlbfs as well. > > > > This may

RE: [RFC v1 1/3] mm/mmu_notifier: Add a new notifier for mapping updates (new pages)

2023-08-23 Thread Kasireddy, Vivek
Hi Alistair, > > > >> >> > > > No, adding HMM_PFN_REQ_WRITE still doesn't help in fixing the > >> issue. > >> >> > > > Although, I do not have THP enabled (or built-in), shmem does > not > >> evict > >> >> > > > the pages after hole punch as noted in the comment in > >> >> shmem_fallocate(): > >>

RE: [RFC v1 1/3] mm/mmu_notifier: Add a new notifier for mapping updates (new pages)

2023-07-18 Thread Kasireddy, Vivek
Hi Jason, > > On Tue, Jul 18, 2023 at 01:28:56AM -0700, Vivek Kasireddy wrote: > > Currently, there does not appear to be any mechanism for letting > > drivers or other kernel entities know about updates made in a > > mapping particularly when a new page is faulted in. Providing > > notifications

RE: [PATCH v2 2/2] udmabuf: Add back support for mapping hugetlb pages (v2)

2023-07-18 Thread Kasireddy, Vivek
Hi Mike, > > On 07/18/23 01:26, Vivek Kasireddy wrote: > > A user or admin can configure a VMM (Qemu) Guest's memory to be > > backed by hugetlb pages for various reasons. However, a Guest OS > > would still allocate (and pin) buffers that are backed by regular > > 4k sized pages. In order to map

RE: [RFC v1 1/3] mm/mmu_notifier: Add a new notifier for mapping updates (new pages)

2023-07-18 Thread Kasireddy, Vivek
Hi Jason, > > On Wed, Jul 19, 2023 at 12:05:29AM +0000, Kasireddy, Vivek wrote: > > > > If there is no change to the PTEs then it is hard to see why this > > > would be part of a mmu_notifier. > > IIUC, the PTEs do get changed but only when a new page is faulted

RE: [PATCH v4 1/5] udmabuf: direct map pfn when first page fault

2024-08-28 Thread Kasireddy, Vivek
Hi Huan, > Subject: [PATCH v4 1/5] udmabuf: direct map pfn when first page fault > > The current udmabuf mmap uses a page fault to populate the vma. > > However, the current udmabuf has already obtained and pinned the folio > upon completion of the creation. This means that the physical memory ha
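
A hedged sketch of the approach under review; the field names (folios, offsets, pagecount) are taken from the udmabuf driver of that era, and the exact code differs across patch revisions:

    /* Everything is pinned at create time, so a fault can insert the
     * PFN directly rather than populating one page at a time. */
    static vm_fault_t udmabuf_vm_fault(struct vm_fault *vmf)
    {
            struct vm_area_struct *vma = vmf->vma;
            struct udmabuf *ubuf = vma->vm_private_data;
            pgoff_t pgoff = vmf->pgoff;
            unsigned long pfn;

            if (pgoff >= ubuf->pagecount)
                    return VM_FAULT_SIGBUS;

            pfn = folio_pfn(ubuf->folios[pgoff]);
            pfn += ubuf->offsets[pgoff] >> PAGE_SHIFT;
            return vmf_insert_pfn(vma, vmf->address, pfn);
    }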

RE: [PATCH v4 3/5] udmabuf: fix vmap_udmabuf error page set

2024-08-28 Thread Kasireddy, Vivek
Hi Huan, > Subject: [PATCH v4 3/5] udmabuf: fix vmap_udmabuf error page set > > Currently vmap_udmabuf sets the pages array by each folio. > But, ubuf->folios only contains the folio's head page. > > That means we repeatedly mapped the folio head page to the vmalloc area. > > Due to udmabuf can
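
A hedged sketch of the corrected loop in vmap_udmabuf(): pick the right page inside each folio via the stored offset, instead of storing the head page repeatedly:

    pgoff_t pg;

    for (pg = 0; pg < ubuf->pagecount; pg++)
            pages[pg] = folio_page(ubuf->folios[pg],
                                   ubuf->offsets[pg] >> PAGE_SHIFT);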

RE: [PATCH v4 4/5] udmabuf: udmabuf_create codestyle cleanup

2024-08-28 Thread Kasireddy, Vivek
Hi Huan, > Subject: [PATCH v4 4/5] udmabuf: udmabuf_create codestyle cleanup > > There are some variables in udmabuf_create that are only used inside the > loop. Therefore, there is no need to declare them outside the scope. > This patch moves them into the loop. > > It is difficult to understand the

RE: [PATCH v4 5/5] udmabuf: remove udmabuf_folio

2024-08-28 Thread Kasireddy, Vivek
Hi Huan, > Subject: [PATCH v4 5/5] udmabuf: remove udmabuf_folio > > Currently, udmabuf handles folios by creating an unpin list to record > each folio obtained from the list and unpinning them when released. To > maintain this approach, many data structures have been established. > > However, ma

RE: [PATCH v5 1/7] udmabuf: pre-fault when first page fault

2024-09-06 Thread Kasireddy, Vivek
Hi Huan, > Subject: [PATCH v5 1/7] udmabuf: pre-fault when first page fault > > The current udmabuf mmap uses a page fault to populate the vma. > > However, the current udmabuf has already obtained and pinned the folio > upon completion of the creation. This means that the physical memory has > a

RE: [PATCH v5 4/7] udmabuf: udmabuf_create pin folio codestyle cleanup

2024-09-06 Thread Kasireddy, Vivek
Hi Huan, > Subject: [PATCH v5 4/7] udmabuf: udmabuf_create pin folio codestyle > cleanup > > This patch splits pin folios into a single function: udmabuf_pin_folios. > > When recording folio and offset into udmabuf_folio and offsets, the outer > loop of this patch iterates through folios, while the in

RE: [PATCH v5 5/7] udmabuf: introduce udmabuf init and deinit helper

2024-09-06 Thread Kasireddy, Vivek
Hi Huan, > Subject: [PATCH v5 5/7] udmabuf: introduce udmabuf init and deinit helper > > After udmabuf is allocated, its resources need to be initialized, > including various array structures. The current array structure has > already been greatly expanded. > > Also, before udmabuf needs to be k

RE: [PATCH v5 6/7] udmabuf: remove udmabuf_folio

2024-09-06 Thread Kasireddy, Vivek
Hi Huan, > Subject: [PATCH v5 6/7] udmabuf: remove udmabuf_folio > > Currently, udmabuf handles folios by creating an unpin list to record > each folio obtained from the list and unpinning them when released. To > maintain this approach, many data structures have been established. > > However, ma

RE: [PATCH v5 7/7] udmabuf: reuse folio array when pin folios

2024-09-06 Thread Kasireddy, Vivek
Hi Huan, > Subject: [PATCH v5 7/7] udmabuf: reuse folio array when pin folios > > When invoking memfd_pin_folios, we need to offer an array to save each folio > which we pinned. > > The current way is to dynamically alloc an array, get folios, save them into > udmabuf and then free it. > > If the size is tiny, a

RE: [PATCH v5 4/7] udmabuf: udmabuf_create pin folio codestyle cleanup

2024-09-08 Thread Kasireddy, Vivek
Hi Huan, > Subject: Re: [PATCH v5 4/7] udmabuf: udmabuf_create pin folio codestyle > cleanup > > > On 2024/9/6 16:17, Kasireddy, Vivek wrote: > > Hi Huan, > > > >> Subject: [PATCH v5 4/7] udmabuf: udmabuf_create pin folio codestyle > >> cleanup >

RE: [PATCH v1 0/3] udmabuf: Add support for page migration out of movable zone or CMA

2023-08-27 Thread Kasireddy, Vivek
Hi Jason, David, > > > Sure, we can simply always fail when we detect ZONE_MOVABLE or > MIGRATE_CMA. > > Maybe that keeps at least some use cases working. > > That seems fairly reasonable AFAICS, failing udmabuf_create() if we detect one or more pages are in ZONE_MOVABLE or MIGRATE_CMA would not
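
A hedged sketch of the fallback being agreed on; the helper names existed in kernels of that era, but the exact placement inside udmabuf_create() is illustrative:

    /* Refuse to create the udmabuf if any pinned page could later be
     * migrated away by memory hotunplug or a CMA allocation. */
    if (is_zone_movable_page(page) || is_migrate_cma_page(page)) {
            ret = -EINVAL;
            goto err;
    }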

RE: [PATCH v1 0/3] udmabuf: Add support for page migration out of movable zone or CMA

2023-08-27 Thread Kasireddy, Vivek
Hi Jason, David, > > > Sure, we can simply always fail when we detect ZONE_MOVABLE or > > MIGRATE_CMA. > > > Maybe that keeps at least some use cases working. > > > > That seems fairly reasonable > AFAICS, failing udmabuf_create() if we detect one or more pages are in > ZONE_MOVABLE or MIGRATE_CMA

RE: [RFC v1 1/3] mm/mmu_notifier: Add a new notifier for mapping updates (new pages)

2023-08-27 Thread Kasireddy, Vivek
Hi Alistair, > > > > > > >> >> > > > No, adding HMM_PFN_REQ_WRITE still doesn't help in fixing the > > >> issue. > > >> >> > > > Although, I do not have THP enabled (or built-in), shmem does > > not > > >> evict > > >> >> > > > the pages after hole punch as noted in the comment in > > >> >> shmem

RE: [syzbot] [mm?] kernel BUG in filemap_unaccount_folio

2023-09-10 Thread Kasireddy, Vivek
Hi Fengwei, > > Add udmabuf maintainers. > > On 9/7/2023 2:51 AM, syzbot wrote: > > Hello, > > > > syzbot found the following issue on: > > > > HEAD commit: db906f0ca6bb Merge tag 'phy-for-6.6' of git://git.kernel.o.. > > git tree: upstream > > console+strace: https://syzkaller.appspot.

RE: [PATCH v1 1/3] mm/gup: Introduce pin_user_pages_fd() for pinning shmem/hugetlbfs file pages

2023-10-17 Thread Kasireddy, Vivek
Hi David, > > For drivers that would like to longterm-pin the pages associated > > with a file, the pin_user_pages_fd() API provides an option to > > not only FOLL_PIN the pages but also to check and migrate them > > if they reside in movable zone or CMA block. For now, this API > > can only work

RE: [PATCH v16 3/9] mm/gup: Introduce memfd_pin_folios() for pinning memfd folios

2024-07-05 Thread Kasireddy, Vivek
Hi Andrew and SJ, > > On Fri, 5 Jul 2024 13:48:25 -0700 SeongJae Park wrote: > > > > + * memfd_pin_folios() - pin folios associated with a memfd > > [...] > > > + for (i = 0; i < nr_found; i++) { > > > + /* > > > + * As there ca
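
For reference, the signature of the API under discussion as merged in mm/gup.c:

    long memfd_pin_folios(struct file *memfd, loff_t start, loff_t end,
                          struct folio **folios, unsigned int max_folios,
                          pgoff_t *offset);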

RE: virtio_gpu_cmd_resource_flush

2024-07-06 Thread Kasireddy, Vivek
Hi Josh, > It looks like the virtio-gpu flush should be fenced, but on the host side the > received flush cmd doesn't have the fence flag set, and no fence_id. So, > I have to reply right away instead of waiting for scanout to complete. > Is that expected? Then what's the right way to vsync the

RE: virtio_gpu_cmd_resource_flush

2024-07-08 Thread Kasireddy, Vivek
Hi Josh, > > > If virgl=true (which means blob=false at the moment), then things work > > very differently. > Yes, we're using virglrenderer. The flushed resources are backed by host Virgl is not my forte. Someone working on virgl should be able to help you. Thanks, Vivek > allocated buffers.

RE: [PATCH v16 3/9] mm/gup: Introduce memfd_pin_folios() for pinning memfd folios

2024-07-13 Thread Kasireddy, Vivek
Hi Andrew, > > > Hi Andrew and SJ, > > > > > > > > > > > > > I didn't look deep into the patch, so unsure if that's a valid fix, > > > > though. > > > > May I ask your thoughts? > > > > > > Perhaps we should propagate the errno which was returned by > > > try_grab_folio()? > > > > > > I'll do it

RE: [PATCH v1 4/5] drm/virtio: Import prime buffers from other devices as guest blobs

2024-07-20 Thread Kasireddy, Vivek
Hi Dmitry, > > +static void virtgpu_dma_buf_move_notify(struct dma_buf_attachment > *attach) > > +{ > > + struct drm_gem_object *obj = attach->importer_priv; > > + struct virtio_gpu_device *vgdev = obj->dev->dev_private; > > + struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj); > > +
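
For context, a dynamic dma-buf importer receives move_notify through its attach ops; this is roughly (simplified) what the patch wires up:

    static const struct dma_buf_attach_ops virtgpu_dma_buf_attach_ops = {
            .allow_peer2peer = true,
            .move_notify = virtgpu_dma_buf_move_notify,
    };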

RE: [PATCH 0/5] udmbuf bug fix and some improvements

2024-08-01 Thread Kasireddy, Vivek
Hi Huan, > This patchset attempts to fix some errors in udmabuf and remove the > upin_list structure. > > Some of these fixes just gather patches which I uploaded before. > > Patch1 > === > Try to remove page fault mmap and direct map it. > Due to current udmabuf has already obtained and pinned t

RE: [PATCH v2 1/4] udmabuf: cancel mmap page fault, direct map it

2024-08-09 Thread Kasireddy, Vivek
Hi Huan, > > The current udmabuf mmap uses a page fault mechanism to populate the vma. > > However, the current udmabuf has already obtained and pinned the folio upon completion of the creation. This means that the physical memory has already been acquired, rather than being accessed dynami

RE: [PATCH v2 2/4] udmabuf: change folios array from kmalloc to kvmalloc

2024-08-09 Thread Kasireddy, Vivek
Hi Huan, > > When PAGE_SIZE is 4096 and MAX_PAGE_ORDER is 10 on a 64-bit machine, > page_alloc only supports 4MB. > Anything above this triggers the warning and returns NULL. > > udmabuf can change the size limit; if it is changed to 3072 (3GB) and a > 3GB udmabuf is then allocated, creation will fail. > > [ 4080.876581] [ c
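
The essence of the change, as a sketch with field and variable names assumed from the driver:

    /* kmalloc_array() is bounded by MAX_PAGE_ORDER, so large folio
     * arrays fail; kvmalloc_array() falls back to vmalloc and makes
     * multi-GB udmabufs creatable. Pair with kvfree() on release. */
    ubuf->folios = kvmalloc_array(pgcnt, sizeof(*ubuf->folios), GFP_KERNEL);
    if (!ubuf->folios)
            return -ENOMEM;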

RE: [PATCH v2 3/4] fix vmap_udmabuf error page set

2024-08-09 Thread Kasireddy, Vivek
Hi Huan, > > Currently vmap_udmabuf sets the pages array by each folio. > But, ubuf->folios only contains the folio's head page. > > That means we repeatedly mapped the folio head page to the vmalloc area. > > This patch fixes it, setting each folio's pages correctly, so that the pages array contains the right

RE: [PATCH v2 4/4] udmabuf: remove folio unpin list

2024-08-09 Thread Kasireddy, Vivek
Hi Huan, > > Currently, udmabuf handles folios by creating an unpin list to record > each folio obtained from the list and unpinning them when released. To > maintain this approach, many data structures have been established. > > However, maintaining this type of data structure requires a signifi

RE: [PATCH v3 1/5] udmabuf: cancel mmap page fault, direct map it

2024-08-16 Thread Kasireddy, Vivek
Hi Huan, > > The current udmabuf mmap uses a page fault to populate the vma. > > However, the current udmabuf has already obtained and pinned the folio upon completion of the creation. This means that the physical memory has already been acquired, rather than being accessed dynamically. The >

RE: [PATCH v3 3/5] fix vmap_udmabuf error page set

2024-08-16 Thread Kasireddy, Vivek
Hi Huan, > Subject: [PATCH v3 3/5] fix vmap_udmabuf error page set Please prepend a "udmabuf:" to the subject line and improve the wording. > > Currently vmap_udmabuf sets the pages array by each folio. > But, ubuf->folios only contains the folio's head page. > > That means we repeatedly mapped
