Hi Gerd,
>
> On Thu, Jul 29, 2021 at 01:16:57AM -0700, Vivek Kasireddy wrote:
> > This feature enables the Guest to wait to know when a resource
> > is completely consumed by the Host.
>
> virtio spec update?
>
> What are the exact semantics?
[Kasireddy, Vivek]
> Looks like you don't need has_out_fence, you can just use
> vgdev->ddev->mode_config.deferred_out_fence instead.
[Kasireddy, Vivek] Right, I don't need has_out_fence; will fix it.
Thanks,
Vivek
>
> take care,
> Gerd
https://gitlab.freedesktop.org/wayland/weston/-/merge_requests/668
>
> Uh I kinda wanted to discuss this a bit more before we jump into typing
> code, but well I guess not that much work yet.
[Kasireddy, Vivek] Right, it wasn't a lot of work :)
>
> So maybe I'm not understanding th
> > > > https://gitlab.freedesktop.org/wayland/weston/-/merge_requests/668
> > >
> > > Uh I kinda wanted to discuss this a bit more before we jump into typing
> > > code, but well I guess not that much work yet.
> > [Kasireddy, Vivek] Right, it wasn'
ase ...
>
> virtio_gpu_primary_plane_update() will send RESOURCE_FLUSH only for
> DIRTYFB and both SET_SCANOUT + RESOURCE_FLUSH for page-flip, and I
> think for the page-flip case the host (aka qemu) doesn't get the
> "wait until old framebuffer is not in use any more"
ts.
> >>> In order for this to happen, the dma-fence that the Guest KMS waits on --
> >>> before
> sending
> >>> pageflip completion -- cannot be tied to a wl_buffer.release event. This
> >>> means that,
> the
> >>> Guest compositor has t
old framebuffer is not in use any more" right yet.
> > [Kasireddy, Vivek] As you know, with the GTK UI backend and this patch
> > series:
> > https://lists.nongnu.org/archive/html/qemu-devel/2021-06/msg06745.html
> > we do create a sync file fd -- after the Blit -- and
means that,
> the
> > >>> Guest compositor has to be forced to use a new buffer for its next
> > >>> repaint cycle
> when it
> > >>> gets a pageflip completion.
> > >>
> > >> Is that really the only solution?
> > >
> >>> pageflip completion -- cannot be tied to a wl_buffer.release event.
> > > > >>> This means
> that,
> > > the
> > > > >>> Guest compositor has to be forced to use a new buffer for its next
> > > > >>>
Hi Daniel,
> On Fri, Aug 06, 2021 at 07:27:13AM +0000, Kasireddy, Vivek wrote:
> > Hi Daniel,
> >
> > > > > > >>> The solution:
> > > > > > >>> - To ensure full framerate, the Guest compositor has to start
> > > >
Hi Daniel,
> On Tue, Aug 10, 2021 at 08:21:09AM +0000, Kasireddy, Vivek wrote:
> > Hi Daniel,
> >
> > > On Fri, Aug 06, 2021 at 07:27:13AM +0000, Kasireddy, Vivek wrote:
> > > > Hi Daniel,
> > > >
> > > > > > > > >
Hi Michel,
> On 2021-08-10 10:30 a.m., Daniel Vetter wrote:
> > On Tue, Aug 10, 2021 at 08:21:09AM +0000, Kasireddy, Vivek wrote:
> >>> On Fri, Aug 06, 2021 at 07:27:13AM +0000, Kasireddy, Vivek wrote:
> >>>>>>>
> >>>>>>
in virtio
> terms) *can* create a shared mapping. So, the guest still needs to
> send transfer
> commands, and then the device can shortcut the transfer commands on the host
> side in
> case a shared mapping exists.
[Kasireddy, Vivek] Ok. IIUC, are you saying that the
Hi Gerd, Daniel,
> -Original Message-
> From: Daniel Vetter
> Sent: Monday, February 08, 2021 1:39 AM
> To: Gerd Hoffmann
> Cc: Daniel Vetter ; Kasireddy, Vivek
> ;
> virtualizat...@lists.linux-foundation.org; dri-devel@lists.freedesktop.org;
> Vette
Hi Gerd,
> -Original Message-
> From: Gerd Hoffmann
> Sent: Tuesday, February 09, 2021 12:45 AM
> To: Kasireddy, Vivek
> Cc: Daniel Vetter ;
> virtualizat...@lists.linux-foundation.org; dri-
> de...@lists.freedesktop.org; Vetter, Daniel ;
> daniel.vet..
Hi Gerd,
> > > You don't have to use the rendering pipeline. You can let the i915
> > > gpu render into a dma-buf shared with virtio-gpu, then use
> > > virtio-gpu only for buffer sharing with the host.
[Kasireddy, Vivek] Just to confirm my understanding of w
Hi Christian,
>
> Hi Vivek,
>
> > [Kasireddy, Vivek] What if I do mmap() on the fd followed by mlock()
> > or mmap() followed by get_user_pages()? If it still fails, would
> > ioremapping the device memory and poking at the backing storage be an
> > option? Or
Hi Gerd,
>
> On Fri, Feb 12, 2021 at 08:15:12AM +, Kasireddy, Vivek wrote:
> > Hi Gerd,
> > [Kasireddy, Vivek] Just to confirm my understanding of what you are
> > suggesting, are you saying that we need to either have Weston allocate
> > scanout buffers (GBM su
Hi Gerd,
> On Tue, May 11, 2021 at 01:36:08AM -0700, Vivek Kasireddy wrote:
> > This feature enables the Guest to wait until a flush has been
> > performed on a buffer it has submitted to the Host.
>
> This needs a virtio-spec update documenting the new feature.
[Kasired
't need the previous one? That is likewise
> linked to a command, although it is set_scanout this time.
[Kasireddy, Vivek] Mainly for page-flipping, but I'd also like fbcon and Xorg,
which do frontbuffer rendering/updates, to work seamlessly as well.
>
> So, right now qemu simply que
Hi Gerd,
> > [Kasireddy, Vivek] Correct, that is exactly what I want -- make the Guest
> > wait
> > until it gets notified that the Host is completely done processing/using
> > the fb.
> > However, there can be two resources the guest can be made to wait on: wait
thing)
> > 3. Pin the buffer with the most lenient approach
> >
> > Even the non-blocking interim stage is dangerous, since it'll just
> > result in other buffers (e.g. when triple-buffering) getting unbound
> > and we're back to the same stall. Note that t
Hi Tvrtko,
>
> On 15/03/2022 07:28, Kasireddy, Vivek wrote:
> > Hi Tvrtko, Daniel,
> >
> >>
> >> On 11/03/2022 09:39, Daniel Vetter wrote:
> >>> On Mon, 7 Mar 2022 at 21:38, Vivek Kasireddy
> >>> wrote:
> >>>>
Hi Tvrtko,
>
> On 16/03/2022 07:37, Kasireddy, Vivek wrote:
> > Hi Tvrtko,
> >
> >>
> >> On 15/03/2022 07:28, Kasireddy, Vivek wrote:
> >>> Hi Tvrtko, Daniel,
> >>>
> >>>>
> >>>> On 11/03/2022
Hi Tvrtko,
>
> On 18/02/2022 03:47, Kasireddy, Vivek wrote:
> > Hi Tvrtko,
> >
> >>
> >> On 17/02/2022 07:50, Vivek Kasireddy wrote:
> >>> While looking for next holes suitable for an allocation, although,
> >>> it is highly unlike
> > alignment = 0;
> >
> > - once = mode & DRM_MM_INSERT_ONCE;
> > - mode &= ~DRM_MM_INSERT_ONCE;
> > -
> > remainder_mask = is_power_of_2(alignment) ? alignment - 1 : 0;
> > - for (hole = first_hole(mm, range_start, range_end, si
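For context, the code quoted above is the allocation path that scans candidate holes; the long-standing drm_mm_for_each_hole() helper expresses that walk, and the patch under discussion adds an iterator that skips holes which cannot fit the request. A minimal sketch of the plain walk (illustrative only, not the patch itself):

    #include <drm/drm_mm.h>

    /* Sketch: return the first hole in @mm large enough for @size bytes.
     * The patch replaces this kind of full scan with an iterator that only
     * visits holes of a suitable size. */
    static struct drm_mm_node *first_fitting_hole(struct drm_mm *mm, u64 size)
    {
            struct drm_mm_node *entry;
            u64 hole_start, hole_end;

            drm_mm_for_each_hole(entry, mm, hole_start, hole_end) {
                    if (hole_end - hole_start >= size)
                            return entry;
            }
            return NULL;
    }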
> > enum virtio_gpu_ctrl_type {
> > VIRTIO_GPU_UNDEFINED = 0,
>
> Where is the virtio-spec update for that?
[Kasireddy, Vivek] I was going to do that if there'd a consensus over
DRM_CAP_RELEASE_FENCE.
Otherwise, I don't think VIRTIO_GPU_F_RELEASE_FENCE is needed.
Thanks,
Vivek
>
> thanks,
> Gerd
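For reference, userspace would probe such a capability with libdrm's drmGetCap(). DRM_CAP_RELEASE_FENCE is only the name proposed in this thread and is not part of the upstream uAPI, so the value below is a placeholder:

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <xf86drm.h>

    #ifndef DRM_CAP_RELEASE_FENCE
    #define DRM_CAP_RELEASE_FENCE 0x99  /* hypothetical value, not upstream */
    #endif

    int main(void)
    {
            uint64_t val = 0;
            int fd = open("/dev/dri/card0", O_RDWR);

            if (fd < 0)
                    return 1;
            if (drmGetCap(fd, DRM_CAP_RELEASE_FENCE, &val) == 0 && val)
                    printf("buffer release is signalled separately from page flip\n");
            close(fd);
            return 0;
    }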
Hi Pekka,
Thank you for reviewing this patch.
> On Mon, 13 Sep 2021 16:35:26 -0700
> Vivek Kasireddy wrote:
>
> > If a driver supports this capability, it means that there would be an
> > additional signalling mechanism for a page flip completion in addition
> > to out_fence or DRM_MODE_PAGE_FL
does not know that userspace can handle pageflip
> > > completing "too early", then it has no choice but to wait until the old
> > > buffer is really free before signalling pageflip completion.
> > >
> > > Wouldn't that make sense?
> > [V
memory buffer of type
> BLOB_MEM_GUEST [this is the common way to receive responses with
> virtgpu]. As such, there is no context specific read(..)
> implementation either -- just a poll(..) implementation.
[Kasireddy, Vivek] Given my limited understanding of virtio_gpu 3D/Virgl, I am
Hi Daniel, Greg,
If it is the same or a similar crash reported here:
https://lists.freedesktop.org/archives/dri-devel/2021-November/330018.html
and here:
https://lists.freedesktop.org/archives/dri-devel/2021-November/330212.html
then the fix is already merged:
https://git.kernel.org/pub/scm/linux
> virtio_gpu_fence_release is added to free virtio-gpu-fence
> upon release of dma_fence.
>
> Cc: Gurchetan Singh
> Cc: Gerd Hoffmann
> Cc: Vivek Kasireddy
> Signed-off-by: Dongwon Kim
> ---
> drivers/gpu/drm/virtio/virtgpu_fence.c | 8
> 1 file changed, 8 insertions(+)
>
> diff --g
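The diff is cut off above; based on the commit message, the shape of such a hook is roughly the sketch below (struct virtio_gpu_fence and its embedded dma_fence member are assumed from the in-tree driver, so treat this as illustrative rather than the actual patch):

    #include <linux/dma-fence.h>
    #include <linux/slab.h>

    /* Sketch: free the driver's fence object when the last dma_fence
     * reference is dropped, instead of relying on dma_fence_free().
     * struct virtio_gpu_fence comes from the driver's virtgpu_drv.h. */
    static void example_virtio_gpu_fence_release(struct dma_fence *f)
    {
            struct virtio_gpu_fence *fence =
                    container_of(f, struct virtio_gpu_fence, f);

            kfree(fence);
    }

    /* Wired up via the driver's dma_fence_ops:
     *        .release = example_virtio_gpu_fence_release,
     */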
pu_framebuffer(plane->state->fb);
> bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
> - if (vgfb->fence) {
> - struct virtio_gpu_object_array *objs;
>
> + if (!bo)
> + return;
[Kasireddy, Vivek] I think you can drop the above check as
ails, it falls back to non-fence path so it
> won't fail for primary-plane-update.
>
> For cursor plane update, it returns if fence is NULL but we could change
> it to just proceed and just make it skip waiting like,
[Kasireddy, Vivek] But cursor plane update is always tied to a
Hi Tvrtko,
> -Original Message-
> From: Tvrtko Ursulin
> Sent: Wednesday, February 02, 2022 5:04 AM
> To: Kasireddy, Vivek ;
> dri-devel@lists.freedesktop.org
> Subject: Re: [PATCH 1/2] drm/mm: Add an iterator to optimally walk over holes
> for an
> allocation
t.
>
> Was the need for this just a consequence of insufficient locking in the
> i915 patch?
[Kasireddy, Vivek] Partly, yes; but I figured since we are anyway doing
if (!entry || ..), it makes sense to dereference entry and extract the rb_node
after this check.
Thanks,
Vivek
>
> Reg
Hi Gerd,
Any further comments on this?
Thanks,
Vivek
>
> Hi Gerd,
>
> > > [Kasireddy, Vivek] Correct, that is exactly what I want -- make the
> > > Guest wait until it gets notified that the Host is completely done
> > > processing/using the
> fb.
> >
is this need to have userspace also check for position
> info updates added by patch #1)?
[Kasireddy, Vivek] Yes, that is exactly the reason why this property is needed.
In other words, Mutter does not seem to look at suggested_x/y values (or
position info) if hotplug_mode_property is not the
's not documented anywhere, and it's also not done with any
> piece of common code. Which all looks really fishy.
[Kasireddy, Vivek] AFAIU, this property appears to be useful only for virtual
GPU drivers to share the Host output(s) layout with the Guest compositor. The
sugg
Hi David,
>
> On 13.06.23 10:26, Kasireddy, Vivek wrote:
> > Hi David,
> >
> >>
> >> On 12.06.23 09:10, Kasireddy, Vivek wrote:
> >>> Hi Mike,
> >>
> >> Hi Vivek,
> >>
> >>>
> >>> Sorry for the
Hi Gerd,
>
> On Mon, May 15, 2023 at 10:04:42AM -0700, Mike Kravetz wrote:
> > On 05/12/23 16:29, Mike Kravetz wrote:
> > > On 05/12/23 14:26, James Houghton wrote:
> > > > On Fri, May 12, 2023 at 12:20 AM Junxiao Chang
> wrote:
> > > >
> > > > This alone doesn't fix mapcounting for PTE-mapped H
Hi David,
> > The first patch ensures that the mappings needed for handling mmap
> > operation would be managed by using the pfn instead of struct page.
> > The second patch restores support for mapping hugetlb pages where
> > subpages of a hugepage are not directly used anymore (main reason
> > f
Hi Peter,
>
> On Fri, Jun 23, 2023 at 06:13:02AM +, Kasireddy, Vivek wrote:
> > Hi David,
> >
> > > > The first patch ensures that the mappings needed for handling mmap
> > > > operation would be managed by using the pfn instead of struct page.
>
Hi David,
> On 26.06.23 19:52, Peter Xu wrote:
> > On Mon, Jun 26, 2023 at 07:45:37AM +, Kasireddy, Vivek wrote:
> >> Hi Peter,
> >>
> >>>
> >>> On Fri, Jun 23, 2023 at 06:13:02AM +, Kasireddy, Vivek wrote:
> >>>> Hi David
Hi David,
>
> On 27.06.23 08:37, Kasireddy, Vivek wrote:
> > Hi David,
> >
>
> Hi!
>
> sorry for taking a bit longer to reply lately.
No problem.
>
> [...]
>
> >>> Sounds right, maybe it needs to go back to the old GUP solution, though
Hi Alistair,
>
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 64a3239b6407..1f2f0209101a 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -6096,8 +6096,12 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
> > * hugetlb_no_page will dro
Hi Alistair,
>
>
> "Kasireddy, Vivek" writes:
>
> > Hi Alistair,
> >
> >>
> >> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> >> > index 64a3239b6407..1f2f0209101a 100644
> >> > --- a/mm/hugetlb.c
> >> >
Hi Jason,
>
> On Mon, Jul 24, 2023 at 07:54:38AM +, Kasireddy, Vivek wrote:
>
> > > I'm not at all familiar with the udmabuf use case but that sounds
> > > brittle and effectively makes this notifier udmabuf specific right?
> > Oh, Qemu uses the u
Hi Alistair,
> >>
> >> Yes, although obviously as I think you point out below you wouldn't be
> >> able to take any sleeping locks in mmu_notifier_update_mapping().
> > Yes, I understand that, but I am not sure how we can prevent any potential
> > notifier callback from taking sleeping locks other
Hi Hugh,
>
> On Mon, 24 Jul 2023, Kasireddy, Vivek wrote:
> > Hi Jason,
> > > On Mon, Jul 24, 2023 at 07:54:38AM +0000, Kasireddy, Vivek wrote:
> > >
> > > > > I'm not at all familiar with the udmabuf use case but that sounds
> > > >
Hi Jason,
> > >
> > > > > I'm not at all familiar with the udmabuf use case but that sounds
> > > > > brittle and effectively makes this notifier udmabuf specific right?
> > > > Oh, Qemu uses the udmabuf driver to provide Host Graphics
> components
> > > > (such as Spice, Gstreamer, UI, etc) zero-
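For readers unfamiliar with that flow: the VMM carves the guest framebuffer out of a memfd and asks /dev/udmabuf to wrap it in a dma-buf that host components can import. A minimal userspace sketch (names and sizes are illustrative):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <linux/udmabuf.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
            const __u64 size = 2 * 1024 * 1024;
            int memfd, devfd, dmabuf_fd;
            struct udmabuf_create create;

            memfd = memfd_create("guest-ram", MFD_ALLOW_SEALING);
            if (memfd < 0 || ftruncate(memfd, size) < 0)
                    return 1;
            /* udmabuf requires the memfd to be sealed against shrinking. */
            if (fcntl(memfd, F_ADD_SEALS, F_SEAL_SHRINK) < 0)
                    return 1;

            devfd = open("/dev/udmabuf", O_RDWR);
            if (devfd < 0)
                    return 1;

            create = (struct udmabuf_create) {
                    .memfd  = memfd,
                    .flags  = UDMABUF_FLAGS_CLOEXEC,
                    .offset = 0,
                    .size   = size,
            };
            dmabuf_fd = ioctl(devfd, UDMABUF_CREATE, &create);

            /* dmabuf_fd (if >= 0) can now be shared with Spice/GStreamer/UI. */
            return dmabuf_fd < 0;
    }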
Hi Jason,
>
> On Tue, Jul 25, 2023 at 10:44:09PM +, Kasireddy, Vivek wrote:
> > > If you still need the memory mapped then you re-call hmm_range_fault
> > > and re-obtain it. hmm_range_fault will resolve all the races and you
> > > get new pages.
>
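The re-call that is being suggested here is the standard hmm_range_fault() retry loop; a condensed sketch of that pattern (error handling trimmed, flags illustrative):

    #include <linux/hmm.h>
    #include <linux/mm.h>
    #include <linux/mmu_notifier.h>

    /* Sketch: (re)fault a range and collect fresh pfns, retrying whenever
     * the interval notifier reports that the range changed underneath us. */
    static int refetch_range(struct mmu_interval_notifier *mni,
                             unsigned long start, unsigned long npages,
                             unsigned long *pfns)
    {
            struct hmm_range range = {
                    .notifier = mni,
                    .start = start,
                    .end = start + npages * PAGE_SIZE,
                    .hmm_pfns = pfns,
                    .default_flags = HMM_PFN_REQ_FAULT,
            };
            int ret;

    retry:
            range.notifier_seq = mmu_interval_read_begin(mni);
            mmap_read_lock(mni->mm);
            ret = hmm_range_fault(&range);
            mmap_read_unlock(mni->mm);
            if (ret == -EBUSY)
                    goto retry;
            if (ret)
                    return ret;

            /* A driver lock is normally taken here before the retry check so
             * the pfns cannot be invalidated between the check and their use. */
            if (mmu_interval_read_retry(mni, range.notifier_seq))
                    goto retry;

            return 0;
    }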
Hi Peter,
> > > > > > > I'm not at all familiar with the udmabuf use case but that sounds
> > > > > > > brittle and effectively makes this notifier udmabuf specific
> > > > > > > right?
> > > > > > Oh, Qemu uses the udmabuf driver to provide Host Graphics
> > > components
> > > > > > (such as Spi
Hi Jason,
> > > > > If you still need the memory mapped then you re-call
> hmm_range_fault
> > > > > and re-obtain it. hmm_range_fault will resolve all the races and you
> > > > > get new pages.
> > >
> > > > IIUC, for my udmabuf use-case, it looks like calling hmm_range_fault
> > > > immediately
Hi Jason,
>
> > > Later the importer decides it needs the memory again so it again asks
> > > for the dmabuf to be present, which does hmm_range_fault and gets
> > > whatever is appropriate at the time.
> > Unless I am missing something, I think just doing the above still won't
> > solve
> > the
Hi Peter,
> >
> > > > > > > > > I'm not at all familiar with the udmabuf use case but that
> sounds
> > > > > > > > > brittle and effectively makes this notifier udmabuf specific
> right?
> > > > > > > > Oh, Qemu uses the udmabuf driver to provide Host Graphics
> > > > > components
> > > > > > > >
>> On 01.08.23 14:19, Jason Gunthorpe wrote:
> >>>>> On Tue, Aug 01, 2023 at 05:32:38AM +, Kasireddy, Vivek wrote:
> >>>>>
> >>>>>>> You get another invalidate because the memfd removes the zero
> pages
> >>>>>
Hi Jason,
> > > Right, the "the zero pages are changed into writable pages" in your
> > > above comment just might not apply, because there won't be any page
> > > replacement (hopefully :) ).
>
> > If the page replacement does not happen when there are new writes to the
> > area where the hole p
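For concreteness, the hole punch being discussed is just fallocate(FALLOC_FL_PUNCH_HOLE) on the memfd backing the udmabuf; the next write fault after the punch is what brings in a fresh page. A small userspace illustration (offsets arbitrary):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
            const off_t size = 8 * 4096;
            int memfd = memfd_create("backing", MFD_ALLOW_SEALING);

            if (memfd < 0 || ftruncate(memfd, size) < 0)
                    return 1;

            /* Punch out the third page; shmem may later fill it with a fresh
             * (initially zero) page on the next write fault. */
            if (fallocate(memfd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                          2 * 4096, 4096) < 0)
                    return 1;

            close(memfd);
            return 0;
    }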
Hi Peter,
> > Ok, I'll keep your use-case in mind but AFAICS, the process that creates
> > the udmabuf can be considered the owner. So, I think it makes sense that
> > the owner's VMA range can be registered (via mmu_notifiers) for updates.
>
> No need to have your special attention on this; my u
Hi Daniel,
>
> On Tue, Jul 18, 2023 at 01:28:57AM -0700, Vivek Kasireddy wrote:
> > When a hole is punched in the memfd or when a page is replaced for
> > any reason, the udmabuf driver needs to get notified in order to
> > update its list of pages with the new page. To accomplish this, we
> > fi
Hi Alistair, David, Jason,
> >> Right, the "the zero pages are changed into writable pages" in your
> >> above comment just might not apply, because there won't be any
> page
> >> replacement (hopefully :) ).
>
> > If the page replacement does not happen when there are new wri
Hi David,
> >
> Right, the "the zero pages are changed into writable pages" in your
> above comment just might not apply, because there won't be any
> >> page
> replacement (hopefully :) ).
> >>
> >>> If the page replacement does not happen when there are new
> w
Hi Mike,
Sorry for the late reply; I just got back from vacation.
If it is unsafe to directly use the subpages of a hugetlb page, then reverting
this patch seems like the only option for addressing this issue immediately.
So, this patch is
Acked-by: Vivek Kasireddy
As far as the use-case is conc
Hi David,
>
> On 12.06.23 09:10, Kasireddy, Vivek wrote:
> > Hi Mike,
>
> Hi Vivek,
>
> >
> > Sorry for the late reply; I just got back from vacation.
> > If it is unsafe to directly use the subpages of a hugetlb page, then
> > reverting
Hi Jason,
>
> > No, adding HMM_PFN_REQ_WRITE still doesn't help in fixing the issue.
> > Although, I do not have THP enabled (or built-in), shmem does not evict
> > the pages after hole punch as noted in the comment in shmem_fallocate():
>
> This is the source of all your problems.
>
> Things t
Hi Jason,
> > >
> > > > No, adding HMM_PFN_REQ_WRITE still doesn't help in fixing the issue.
> > > > Although, I do not have THP enabled (or built-in), shmem does not evict
> > > > the pages after hole punch as noted in the comment in
> shmem_fallocate():
> > >
> > > This is the source of all your
Hi Jason,
> > This patch series adds support for migrating pages associated with
> > a udmabuf out of the movable zone or CMA to avoid breaking features
> > such as memory hotunplug.
> >
> > The first patch exports check_and_migrate_movable_pages() function
> > out of GUP so that the udmabuf drive
Hi Alistair,
> >> > > > No, adding HMM_PFN_REQ_WRITE still doesn't help in fixing the
> issue.
> >> > > > Although, I do not have THP enabled (or built-in), shmem does not
> evict
> >> > > > the pages after hole punch as noted in the comment in
> >> shmem_fallocate():
> >> > >
> >> > > This is the
Hi David,
>
> >> - Add a new API to the backing store/allocator to longterm-pin the page.
> >>For example, something along the lines of
> shmem_pin_mapping_page_longterm()
> >>for shmem as suggested by Daniel. A similar one needs to be added for
> >>hugetlbfs as well.
> >
> > This may
Hi Alistair,
> >
> >> >> > > > No, adding HMM_PFN_REQ_WRITE still doesn't help in fixing the
> >> issue.
> >> >> > > > Although, I do not have THP enabled (or built-in), shmem does
> not
> >> evict
> >> >> > > > the pages after hole punch as noted in the comment in
> >> >> shmem_fallocate():
> >>
Hi Jason,
>
> On Tue, Jul 18, 2023 at 01:28:56AM -0700, Vivek Kasireddy wrote:
> > Currently, there does not appear to be any mechanism for letting
> > drivers or other kernel entities know about updates made in a
> > mapping particularly when a new page is faulted in. Providing
> > notifications
Hi Mike,
>
> On 07/18/23 01:26, Vivek Kasireddy wrote:
> > A user or admin can configure a VMM (Qemu) Guest's memory to be
> > backed by hugetlb pages for various reasons. However, a Guest OS
> > would still allocate (and pin) buffers that are backed by regular
> > 4k sized pages. In order to map
Hi Jason,
>
> On Wed, Jul 19, 2023 at 12:05:29AM +, Kasireddy, Vivek wrote:
>
> > > If there is no change to the PTEs then it is hard to see why this
> > > would be part of a mmu_notifier.
> > IIUC, the PTEs do get changed but only when a new page is faulted
Hi Huan,
> Subject: [PATCH v4 1/5] udmabuf: direct map pfn when first page fault
>
> The current udmabuf mmap uses a page fault to populate the vma.
>
> However, the current udmabuf has already obtained and pinned the folio
> upon completion of the creation. This means that the physical memory ha
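For reference, the lazy path being replaced looks roughly like the sketch below; the structure and field names are assumed for illustration (this is not the actual udmabuf code), the vma is expected to be VM_PFNMAP, and order-0 folios are assumed for simplicity:

    #include <linux/mm.h>

    struct example_buf {
            struct folio **folios;  /* pinned at create time */
            pgoff_t pagecount;
    };

    /* Lazy population: one pfn inserted per fault. The patch instead maps
     * the whole range at mmap() time, since the memory is already pinned. */
    static vm_fault_t example_fault(struct vm_fault *vmf)
    {
            struct example_buf *buf = vmf->vma->vm_private_data;

            if (vmf->pgoff >= buf->pagecount)
                    return VM_FAULT_SIGBUS;

            return vmf_insert_pfn(vmf->vma, vmf->address,
                                  folio_pfn(buf->folios[vmf->pgoff]));
    }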
Hi Huan,
> Subject: [PATCH v4 3/5] udmabuf: fix vmap_udmabuf error page set
>
> Currently vmap_udmabuf sets the pages array from each folio.
> But ubuf->folios only contains each folio's head page.
>
> That means we repeatedly map the folio head page into the vmalloc area.
>
> Due to udmabuf can
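The gist of the fix is to fill the pages array with every page of every folio rather than each folio's head page only; a minimal sketch (helper name assumed, per-folio start offsets ignored for brevity):

    #include <linux/mm.h>
    #include <linux/slab.h>
    #include <linux/vmalloc.h>

    /* Sketch: expand an array of pinned folios into a flat pages[] array
     * that can be handed to vmap(). */
    static struct page **folios_to_pages(struct folio **folios, u32 nr_folios,
                                         u32 pgcnt)
    {
            struct page **pages;
            u32 i, j, k = 0;

            pages = kvmalloc_array(pgcnt, sizeof(*pages), GFP_KERNEL);
            if (!pages)
                    return NULL;

            for (i = 0; i < nr_folios && k < pgcnt; i++)
                    for (j = 0; j < folio_nr_pages(folios[i]) && k < pgcnt; j++)
                            pages[k++] = folio_page(folios[i], j);

            /* caller: vmap(pages, pgcnt, VM_MAP, PAGE_KERNEL), kvfree() later */
            return pages;
    }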
Hi Huan,
> Subject: [PATCH v4 4/5] udmabuf: udmabuf_create codestyle cleanup
>
> There are some variables in udmabuf_create that are only used inside the
> loop. Therefore, there is no need to declare them outside the scope.
> This patch moves them into the loop.
>
> It is difficult to understand the
Hi Huan,
> Subject: [PATCH v4 5/5] udmabuf: remove udmabuf_folio
>
> Currently, udmabuf handles folios by creating an unpin list to record
> each folio obtained from the list and unpinning them when released. To
> maintain this approach, many data structures have been established.
>
> However, ma
Hi Huan,
> Subject: [PATCH v5 1/7] udmabuf: pre-fault when first page fault
>
> The current udmabuf mmap uses a page fault to populate the vma.
>
> However, the current udmabuf has already obtained and pinned the folio
> upon completion of the creation. This means that the physical memory has
> a
Hi Huan,
> Subject: [PATCH v5 4/7] udmabuf: udmabuf_create pin folio codestyle
> cleanup
>
> This patch split pin folios into single function: udmabuf_pin_folios.
>
> When recording folios and offsets into udmabuf_folio and offsets, the outer
> loop of this patch iterates through folios, while the in
Hi Huan,
> Subject: [PATCH v5 5/7] udmabuf: introduce udmabuf init and deinit helper
>
> After udmabuf is allocated, its resources need to be initialized,
> including various array structures. The current array structure has
> already been greatly expanded.
>
> Also, before udmabuf needs to be k
Hi Huan,
> Subject: [PATCH v5 6/7] udmabuf: remove udmabuf_folio
>
> Currently, udmabuf handles folios by creating an unpin list to record
> each folio obtained from the list and unpinning them when released. To
> maintain this approach, many data structures have been established.
>
> However, ma
Hi Huan,
> Subject: [PATCH v5 7/7] udmabuf: reuse folio array when pin folios
>
> When invoking memfd_pin_folios, we need to provide an array to save each folio
> that we pinned.
>
> The current way is to dynamically allocate an array, get the folios, save them
> into udmabuf and then free the array.
>
> If the size is tiny, a
Hi Huan,
> Subject: Re: [PATCH v5 4/7] udmabuf: udmabuf_create pin folio codestyle
> cleanup
>
>
> 在 2024/9/6 16:17, Kasireddy, Vivek 写道:
> > Hi Huan,
> >
> >> Subject: [PATCH v5 4/7] udmabuf: udmabuf_create pin folio codestyle
> >> cleanup
>
Hi Jason, David,
>
> > Sure, we can simply always fail when we detect ZONE_MOVABLE or
> MIGRATE_CMA.
> > Maybe that keeps at least some use cases working.
>
> That seems fairly reasonable
AFAICS, failing udmabuf_create() if we detect one or more pages are in
ZONE_MOVABLE or MIGRATE_CMA would not
Hi Jason, David,
> > > Sure, we can simply always fail when we detect ZONE_MOVABLE or
> > MIGRATE_CMA.
> > > Maybe that keeps at least some use cases working.
> >
> > That seems fairly reasonable
> AFAICS, failing udmabuf_create() if we detect one or more pages are in
> ZONE_MOVABLE or MIGRATE_CMA
Hi Alistair,
>
> > >
> > >> >> > > > No, adding HMM_PFN_REQ_WRITE still doesn't help in fixing the
> > >> issue.
> > >> >> > > > Although, I do not have THP enabled (or built-in), shmem does
> > not
> > >> evict
> > >> >> > > > the pages after hole punch as noted in the comment in
> > >> >> shmem
Hi Fengwei,
>
> Add udmabuf maintainers.
>
> On 9/7/2023 2:51 AM, syzbot wrote:
> > Hello,
> >
> > syzbot found the following issue on:
> >
> > HEAD commit:db906f0ca6bb Merge tag 'phy-for-6.6' of git://git.kernel.o..
> > git tree: upstream
> > console+strace: https://syzkaller.appspot.
Hi David,
> > For drivers that would like to longterm-pin the pages associated
> > with a file, the pin_user_pages_fd() API provides an option to
> > not only FOLL_PIN the pages but also to check and migrate them
> > if they reside in movable zone or CMA block. For now, this API
> > can only work
Hi Andrew and SJ,
>
> On Fri, 5 Jul 2024 13:48:25 -0700 SeongJae Park wrote:
>
> > > + * memfd_pin_folios() - pin folios associated with a memfd
> > [...]
> > > + for (i = 0; i < nr_found; i++) {
> > > + /*
> > > + * As there ca
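For readers following along, a condensed sketch of how a driver is expected to use the API whose kerneldoc is quoted above (the signature assumed here is the one that went upstream; error handling trimmed):

    #include <linux/memfd.h>
    #include <linux/mm.h>

    /* Sketch: pin the folios backing bytes [0, size) of @memfd. The final
     * argument returns the page offset of the requested start within the
     * first pinned folio. */
    static long pin_memfd_range(struct file *memfd, loff_t size,
                                struct folio **folios, unsigned int max_folios)
    {
            pgoff_t offset;
            long nr;

            nr = memfd_pin_folios(memfd, 0, size - 1, folios, max_folios,
                                  &offset);
            if (nr <= 0)
                    return nr;

            /* ... use the folios; drop the pins with unpin_folios(folios, nr) ... */
            return nr;
    }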
Hi Josh,
> It looks like the virtio-gpu flush should be fenced, but on the host side the
> received flush cmd doesn't have the fence flag set, and no fence_id. So,
> I have to reply right away instead of waiting for scanout to complete.
> Is that expected? then what's the right way to vsync the
Hi Josh,
>
> > If virgl=true (which means blob=false at the moment), then things work
> > very differently.
> Yes, we're using virglrenderer. The flushed resources are backed by host
> allocated buffers.
Virgl is not my forte. Someone working on virgl should be able to help you.
Thanks,
Vivek
Hi Andrew,
>
> > Hi Andrew and SJ,
> >
> > >
> > > >
> > > > I didn't look deep into the patch, so unsure if that's a valid fix,
> > > > though.
> > > > May I ask your thoughts?
> > >
> > > Perhaps we should propagate the errno which was returned by
> > > try_grab_folio()?
> > >
> > > I'll do it
Hi Dmitry,
> > +static void virtgpu_dma_buf_move_notify(struct dma_buf_attachment
> *attach)
> > +{
> > + struct drm_gem_object *obj = attach->importer_priv;
> > + struct virtio_gpu_device *vgdev = obj->dev->dev_private;
> > + struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
> > +
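The quoted hunk is truncated; the general shape of a dynamic importer's move_notify callback is sketched below (generic names, not the actual virtio-gpu patch): it runs with the dma-buf's reservation lock held and only invalidates the importer's cached mapping, which is rebuilt on next use.

    #include <linux/dma-buf.h>
    #include <linux/dma-resv.h>
    #include <drm/drm_gem.h>

    static void example_move_notify(struct dma_buf_attachment *attach)
    {
            struct drm_gem_object *obj = attach->importer_priv;

            dma_resv_assert_held(attach->dmabuf->resv);

            /* Drop the cached sg-table / address-space mapping for @obj here;
             * the next submission re-maps the attachment under the same lock. */
    }

    /* Registered through the importer's attach ops, e.g.:
     *        static const struct dma_buf_attach_ops example_attach_ops = {
     *                .allow_peer2peer = true,
     *                .move_notify = example_move_notify,
     *        };
     */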
Hi Huan,
> This patchset attempts to fix some errors in udmabuf and remove the
> unpin_list structure.
>
> Some of these fixes just gather patches which I uploaded before.
>
> Patch1
> ===
> Try to remove the page-fault mmap and map it directly.
> Because the current udmabuf has already obtained and pinned t
Hi Huan,
>
> The current udmabuf mmap uses a page fault mechanism to populate the
> vma.
>
> However, the current udmabuf has already obtained and pinned the folio
> upon completion of the creation. This means that the physical memory has
> already been acquired, rather than being accessed dynami
Hi Huan,
>
> When PAGE_SIZE 4096, MAX_PAGE_ORDER 10, 64bit machine,
> page_alloc only support 4MB.
> If above this, trigger this warn and return NULL.
>
> udmabuf can change size limit, if change it to 3072(3GB), and then alloc
> 3GB udmabuf, will fail create.
>
> [ 4080.876581] [ c
Hi Huan,
>
> Currently vmap_udmabuf sets the pages array from each folio.
> But ubuf->folios only contains each folio's head page.
>
> That means we repeatedly map the folio head page into the vmalloc area.
>
> This patch fixes it by setting each folio's pages correctly, so that the pages
> array contains the right
Hi Huan,
>
> Currently, udmabuf handles folios by creating an unpin list to record
> each folio obtained from the list and unpinning them when released. To
> maintain this approach, many data structures have been established.
>
> However, maintaining this type of data structure requires a signifi
Hi Huan,
>
> The current udmabuf mmap uses a page fault to populate the vma.
>
> However, the current udmabuf has already obtained and pinned the folio
> upon completion of the creation. This means that the physical memory has
> already been acquired, rather than being accessed dynamically. The
>
Hi Huan,
> Subject: [PATCH v3 3/5] fix vmap_udmabuf error page set
Please prepend a "udmabuf:" to the subject line and improve the wording.
>
> Currently vmap_udmabuf sets the pages array from each folio.
> But ubuf->folios only contains each folio's head page.
>
> That means we repeatedly mapped