> Subject: [PATCH 2/3] udmabuf: use sgtable-based scatterlist wrappers
>
> Use common wrappers operating directly on the struct sg_table objects to
> fix incorrect use of scatterlist sync calls. dma_sync_sg_for_*()
> functions have to be called with the number of elements originally passed
> to d
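For context, the sgtable wrappers take the whole struct sg_table and pick the correct element count internally, whereas the raw dma_sync_sg_for_*() calls are easy to misuse with the DMA-mapped nents instead of orig_nents. A minimal sketch of the difference, assuming a mapped struct sg_table *sgt and the mapping device dev:

    /* Error-prone: the sync must be given orig_nents, i.e. the count
     * originally passed to dma_map_sg(), not the mapped nents. */
    dma_sync_sg_for_cpu(dev, sgt->sgl, sgt->orig_nents, DMA_FROM_DEVICE);

    /* The sgtable wrappers take the whole table and do this for you. */
    dma_sync_sgtable_for_cpu(dev, sgt, DMA_FROM_DEVICE);
    dma_sync_sgtable_for_device(dev, sgt, DMA_TO_DEVICE);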
Hi Dmitry,
> Subject: Re: [PATCH] drm/virtio: Fix NULL pointer deref in
> virtgpu_dma_buf_free_obj()
>
> On 5/2/25 02:24, Vivek Kasireddy wrote:
> > There is a chance that obj->dma_buf would be NULL by the time
> > virtgpu_dma_buf_free_obj() is called. This can happen for imported
> > prime objec
Hi Huan,
> Subject: Re: [PATCH 1/2] Revert "udmabuf: fix vmap_udmabuf error page set"
>
> From 38aa11d92f209e7529736f3e11e08dfc804bdfae Mon Sep 17 00:00:00
> 2001
> From: Huan Yang
> Date: Tue, 15 Apr 2025 10:04:18 +0800
> Subject: [PATCH 1/2] Revert "udmabuf: fix vmap_udmabuf error page set"
>
Hi Huan,
> Subject: [PATCH 2/2] udmabuf: fix vmap missed offset page
>
> Before invoking vmap, we need to offer a pages pointer array holding each
> page that needs to be mapped into the vmalloc area.
>
> But currently vmap_udmabuf only sets each folio's head page into pages,
> missing each folio's offset pages when iterating.
>
> Th
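For context, the fix described here amounts to storing every page of each pinned folio rather than just the head page before calling vmap(). A rough sketch of that loop, assuming udmabuf's per-page folios[] and offsets[] arrays (not the exact patch):

    for (pg = 0; pg < ubuf->pagecount; pg++)
        pages[pg] = folio_page(ubuf->folios[pg],
                               ubuf->offsets[pg] >> PAGE_SHIFT);

    vaddr = vmap(pages, ubuf->pagecount, VM_MAP, PAGE_KERNEL);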
Hi Huan,
> Subject: [PATCH 1/2] Revert "udmabuf: fix vmap_udmabuf error page set"
>
> This reverts commit 18d7de823b7150344d242c3677e65d68c5271b04.
>
> This was a misuse of vmap_pfn(): vmap_pfn() only allows non-page-based
> usage, i.e. PCIe BARs and the like.
The commit message can be improved
Hi Dmitry,
> Subject: Re: [PATCH v1 2/2] drm/virtio: Fix missed dmabuf unpinning in error
> path of prepare_fb()
>
> On 3/26/25 08:14, Kasireddy, Vivek wrote:
> ...
> >> static int virtio_gpu_plane_prepare_fb(struct drm_plane *plane,
> >>
Hi Dmitry,
> Subject: [PATCH v1 2/2] drm/virtio: Fix missed dmabuf unpinning in error path
> of prepare_fb()
>
> Unpin imported dmabuf on fence allocation failure in prepare_fb().
>
> Fixes: 4a696a2ee646 ("drm/virtio: Add prepare and cleanup routines for
> imported dmabuf obj")
> Cc: # v6.14+
>
> Subject: Re: [PATCH] drm/virtio: Fix flickering issue seen with imported
> dmabufs
>
> On 3/25/25 23:10, Vivek Kasireddy wrote:
> > We need to save the reservation object pointer associated with the
> > imported dmabuf in the newly created GEM object to allow
> > drm_gem_plane_helper_prepare_fb(
Hi Christian,
> Subject: Re: [PATCH] udmabuf: fix a buf size overflow issue during udmabuf
> creation
>
> On 25.03.25 at 07:23, Kasireddy, Vivek wrote:
> > Hi Christian,
> >
> >> On 21.03.25 at 17:41, Xiaogang.Chen wrote:
> >>> From: Xiaogang Chen
Hi Christian,
> On 21.03.25 at 17:41, Xiaogang.Chen wrote:
> > From: Xiaogang Chen
> >
> > by casting size_limit_mb to u64 when calculating pglimit.
> >
> > Signed-off-by: Xiaogang Chen
>
> Reviewed-by: Christian König
>
> If nobody objects I'm going to push that to drm-misc-fixes.
No objection.
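For reference, the overflow being fixed is the usual multiply-before-widen problem when the module parameter and the multiplication are both 32-bit; a sketch of the corrected pglimit computation (based on udmabuf's size_limit_mb parameter):

    /* Without the u64 cast the multiply can overflow before the shift. */
    pglimit = ((u64)size_limit_mb * 1024 * 1024) >> PAGE_SHIFT;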
Hi Jason,
> Subject: Re: [PATCH 0/4] cover-letter: Allow MMIO regions to be exported
> through dmabuf
>
> On Wed, Feb 26, 2025 at 07:55:07AM +0000, Kasireddy, Vivek wrote:
>
> > > Is there any update or ETA for the v3? Are there any ways we can help?
>
> > I be
Hi Wei Lin,
[...]
>
> Yeah, the mmap handler is really needed as a debugging tool given that
> the importer would not be able to provide access to the dmabuf's
> underlying memory via the CPU in any other way.
>
>
>
> - Rather than handle different regions w
>
> syzbot has found a reproducer for the following issue on:
>
> HEAD commit:69e858e0b8b2 Merge tag 'uml-for-linus-6.14-rc1' of git://g..
> git tree: upstream
> console+strace: https://syzkaller.appspot.com/x/log.txt?x=1431cb2458
> kernel config: https://syzkaller.appspot.com/x/.c
> Subject: Re: [syzbot] [mm?] kernel BUG in alloc_hugetlb_folio_reserve
>
> syzbot has found a reproducer for the following issue on:
>
> HEAD commit:69e858e0b8b2 Merge tag 'uml-for-linus-6.14-rc1' of git://g..
> git tree: upstream
> console+strace: https://syzkaller.appspot.com/x/log.t
Hi David,
> Subject: Re: [PATCH v2 1/2] mm/memfd: reserve hugetlb folios before
> allocation
>
> On 14.01.25 09:08, Vivek Kasireddy wrote:
> > There are cases when we try to pin a folio but discover that it has
> > not been faulted-in. So, we try to allocate it in memfd_alloc_folio()
> > but ther
Hi Andrew,
> Subject: Re: [PATCH v2 1/2] mm/memfd: reserve hugetlb folios before
> allocation
>
>
> > There are cases when we try to pin a folio but discover that it has
> > not been faulted-in. So, we try to allocate it in memfd_alloc_folio()
> > but there is a chance that we might encounter a
Hi Dmitry,
> Subject: Re: [PATCH] drm/virtio: Lock the VGA resources during initialization
>
> On 12/11/24 09:43, Vivek Kasireddy wrote:
> > +static int __init virtio_gpu_driver_init(void)
> > +{
> > + struct pci_dev *pdev;
> > + int ret;
> > +
> > + pdev = pci_get_device(PCI_VENDOR_ID_REDH
Hi Christian,
> Subject: Re: [PATCH 0/4] cover-letter: Allow MMIO regions to be exported
> through dmabuf
>
> >>
> >>> I will resend the patch series. I was experiencing issues with my email
> >>> client, which inadvertently split the series into two separate emails.
> >>
> >> Alternatively I c
Hi Wei Lin,
> Subject: Re: [PATCH 0/4] cover-letter: Allow MMIO regions to be exported
> through dmabuf
> >>
> >> From: Wei Lin Guay
> >>
> >> This is another attempt to revive the patches posted by Jason
> >> Gunthorpe and Vivek Kasireddy, at
> >> https://patchwork.kernel.org/project/linux-media
Hi Christian,
> Subject: Re: [PATCH 0/4] cover-letter: Allow MMIO regions to be exported
> through dmabuf
>
>
>> I will resend the patch series. I was experiencing issues with my email
>> client, which inadvertently split the series into two separate emails.
>
>
> Alternatively I can also
Hi Wei Lin,
> Subject: [PATCH 0/4] cover-letter: Allow MMIO regions to be exported
> through dmabuf
>
> From: Wei Lin Guay
>
> This is another attempt to revive the patches posted by Jason
> Gunthorpe and Vivek Kasireddy, at
> https://patchwork.kernel.org/project/linux-media/cover/0-v2-
> 47261
Hi Jann,
> Subject: [PATCH v2 0/3] fixes for udmabuf (memfd sealing checks and a leak)
>
> I have tested that patches 2 and 3 work using the following reproducers.
> I did not write a reproducer for the issue described in patch 1.
>
> Reproducer for F_SEAL_FUTURE_WRITE not being respected:
> ```
> Subject: [PATCH v2 1/3] udmabuf: fix racy memfd sealing check
>
> The current check_memfd_seals() is racy: Since we first do
> check_memfd_seals() and then udmabuf_pin_folios() without holding any
> relevant lock across both, F_SEAL_WRITE can be set in between.
> This is problematic because we c
Hi Jann,
> Subject: [PATCH 3/3] udmabuf: fix memory leak on last export_udmabuf()
> error path
>
> In export_udmabuf(), if dma_buf_fd() fails because the FD table is full, a
> dma_buf owning the udmabuf has already been created; but the error
> handling
> in udmabuf_create() will tear down the ud
> Subject: [PATCH 2/3] udmabuf: also check for F_SEAL_FUTURE_WRITE
>
> When F_SEAL_FUTURE_WRITE was introduced, it was overlooked that
> udmabuf
> must reject memfds with this flag, just like ones with F_SEAL_WRITE.
> Fix it by adding F_SEAL_FUTURE_WRITE to SEALS_DENIED.
>
> Fixes: ab3948f58ff8 (
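The fix is essentially one extra bit in the driver's denied-seals mask; roughly, assuming udmabuf's existing SEALS_DENIED define:

    /* Reject memfds that are, or can later become, write-sealed. */
    #define SEALS_DENIED (F_SEAL_WRITE | F_SEAL_FUTURE_WRITE)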
Hi Jann,
> Subject: [PATCH 1/3] udmabuf: fix racy memfd sealing check
>
> The current check_memfd_seals() is racy: Since we first do
> check_memfd_seals() and then udmabuf_pin_folios() without holding any
> relevant lock across both, F_SEAL_WRITE can be set in between.
> This is problematic becau
Hi Jann, Julian,
> Subject: udmabuf: check_memfd_seals() is racy
>
> Hi!
>
> Julian Orth reported at
> https://bugzilla.kernel.org/show_bug.cgi?id=219106 that
Thank you for reporting this bug.
> udmabuf_create() checks for F_SEAL_WRITE in a racy way, so a udmabuf
> can end up holding reference
Hi Dmitry,
> Subject: [PATCH v1] drm/virtio: Factor out common dmabuf unmapping code
>
> Move out dmabuf detachment and unmapping into separate function. This
> removes duplicated code and there is no need to check the GEM's kref now,
> since both bo->attached and bo->sgt are unset under held res
> Subject: [PATCH v1] drm/virtio: Set missing bo->attached flag
>
> VirtIO-GPU driver now supports detachment of shmem BOs from host, but
> doing it only for imported dma-bufs. Mark all shmem BOs as attached, not
> just dma-bufs. This is a minor correction since detachment of a non-dmabuf
> BOs no
Hi Dmitry,
> Subject: Re: [PATCH v5 0/5] drm/virtio: Import scanout buffers from other
> devices
>
> Hello, Vivek
>
> All patches applied to misc-next with a small modification, thanks!
Thank you so much for taking the time to test, review and merge this series!!
>
> Note: While verifying move
Hi Dmitry,
> >> Wondering if it could be a problem with my guest kernel config. I
> >> attached my config to the email, please try to boot guest with my config
> >> if you'll have time.
> > Sure, let me try to test with your config. Could you also please share your
> > Qemu launch parameters?
>
>
Hi Dmitry,
> Subject: Re: [PATCH v4 4/5] drm/virtio: Import prime buffers from other
> devices as guest blobs
>
> > struct drm_gem_object *virtgpu_gem_prime_import(struct drm_device
> *dev,
> > struct dma_buf *buf)
> > {
> > + struct virtio_gpu_devi
Hi Dmitry,
> Subject: Re: [PATCH v4 2/5] drm/virtio: Add a helper to map and note the
> dma addrs and lengths
>
> > +int virtgpu_dma_buf_import_sgt(struct virtio_gpu_mem_entry **ents,
> > + unsigned int *nents,
> > + struct virtio_gpu_object *bo,
Hi Dmitry,
> Subject: Re: [PATCH v2 2/5] drm/virtio: Add a helper to map and note the
> dma addrs and lengths
> >> ...
> >>> After rebasing v2 of this patch series on top of the above patch, I see
> >>> that
> >>> this use-case works as expected with Qemu master. Let me send out v3,
> >>> which w
Hi Dmitry,
> Subject: Re: [PATCH v2 2/5] drm/virtio: Add a helper to map and note the
> dma addrs and lengths
>
> ...
> > After rebasing v2 of this patch series on top of the above patch, I see that
> > this use-case works as expected with Qemu master. Let me send out v3,
> > which would be a reb
Hi Dmitry,
> > Subject: Re: [PATCH v2 2/5] drm/virtio: Add a helper to map and note the
> > dma addrs and lengths
> >
> > On 10/29/24 09:18, Kasireddy, Vivek wrote:
> > >>>> BTW, is any DG2 GPU suitable for testing of this patchset? Will I be
> > &g
Hi Dmitry,
> Subject: Re: [PATCH v2 2/5] drm/virtio: Add a helper to map and note the
> dma addrs and lengths
>
> On 10/29/24 09:18, Kasireddy, Vivek wrote:
> >>>> BTW, is any DG2 GPU suitable for testing of this patchset? Will I be
> >>>> able to
Hi Bjorn,
> Subject: Re: [PATCH v2 1/5] PCI/P2PDMA: Don't enforce ACS check for
> functions of same device
>
> On Wed, Oct 30, 2024 at 03:20:02PM -0600, Logan Gunthorpe wrote:
> > On 2024-10-30 12:46, Bjorn Helgaas wrote:
> > > On Fri, Oct 25, 2024 at 06:57:37
Hi Dmitry,
> Subject: Re: [PATCH v2 2/5] drm/virtio: Add a helper to map and note the
> dma addrs and lengths
> > +long virtgpu_dma_buf_import_sgt(struct virtio_gpu_mem_entry
> >> **ents,
> > + unsigned int *nents,
> > + struc
Hi Bjorn,
> Subject: Re: [PATCH v2 1/5] PCI/P2PDMA: Don't enforce ACS check for
> functions of same device
>
> On Thu, Oct 24, 2024 at 05:58:48AM +, Kasireddy, Vivek wrote:
> > > Subject: Re: [PATCH v2 1/5] PCI/P2PDMA: Don't enforce ACS check for
> > >
Hi Bjorn,
> Subject: Re: [PATCH v2 1/5] PCI/P2PDMA: Don't enforce ACS check for
> functions of same device
>
> On Sun, Oct 20, 2024 at 10:21:29PM -0700, Vivek Kasireddy wrote:
> > Functions of the same PCI device (such as a PF and a VF) share the
> > same bus and have a common root port and typic
Hi Logan,
> Subject: Re: [PATCH v2 1/5] PCI/P2PDMA: Don't enforce ACS check for
> functions of same device
>
>
>
> On 2024-10-22 09:16, Bjorn Helgaas wrote:
> > On Sun, Oct 20, 2024 at 10:21:29PM -0700, Vivek Kasireddy wrote:
> >> Functions of the same PCI device (such as a PF and a VF) share t
Hi Dmitry,
> Subject: Re: [PATCH v2 2/5] drm/virtio: Add a helper to map and note the
> dma addrs and lengths
>
> On 10/22/24 07:51, Kasireddy, Vivek wrote:
> > Hi Dmitry,
> >
> >>
> >> On 8/13/24 06:49, Vivek Kasireddy wrote:
> >>> +long
Hi Dmitry,
>
> On 8/13/24 06:49, Vivek Kasireddy wrote:
> > +long virtgpu_dma_buf_import_sgt(struct virtio_gpu_mem_entry **ents,
> > + unsigned int *nents,
> > + struct virtio_gpu_object *bo,
> > + struct dma_buf_attach
Hi Dmitry,
> Subject: [PATCH v3 2/2] drm/virtio: New fence for every plane update
>
> From: Dongwon Kim
>
> Having a fence linked to a virtio_gpu_framebuffer in the plane update
> sequence would cause conflict when several planes referencing the same
> framebuffer (e.g. Xorg screen covering mul
> Subject: [PATCH v3 1/2] drm/virtio: Use
> drm_gem_plane_helper_prepare_fb()
>
> From: Dongwon Kim
>
> Use drm_gem_plane_helper_prepare_fb() helper for explicit framebuffer
> synchronization. We need to wait for explicit fences in a case of
> Venus and native contexts when guest user space uses
Hi Matt,
>
> On Fri, Oct 11, 2024 at 07:40:26PM -0700, Vivek Kasireddy wrote:
> > For BOs of type ttm_bo_type_sg, that are backed by PCI BAR addresses
> > associated with a VF, we need to adjust and translate these addresses
> > to LMEM addresses to make the BOs usable by the PF. Otherwise, the B
Hi Logan,
>
> On 2024-10-11 20:40, Vivek Kasireddy wrote:
> > Functions of the same PCI device (such as a PF and a VF) share the
> > same bus and have a common root port and typically, the PF provisions
> > resources for the VF. Therefore, they can be considered compatible
> > as far as P2P acces
>
> Some of these fixes just gather patches which I uploaded before.
>
> Every patch has passed the udmabuf self-test suite's tests.
> Suggested by Kasireddy, Vivek
> Patch6 modified the unpin function, therefore running the udmabuf
> self-test program in a loop did not re
Hi Huan,
> Subject: [PATCH v6 7/7] udmabuf: reuse folio array when pin folios
>
> When invoking memfd_pin_folios, we need to offer an array to save each
> folio which we pinned.
>
> The current way is to dynamically allocate an array, get the folios, save
> them into the udmabuf and then free it.
>
> If the size
Hi Huan,
> Subject: [PATCH v6 6/7] udmabuf: remove udmabuf_folio
>
> Currently, udmabuf handles folios by creating an unpin list to record
> each folio obtained from the list and unpinning them when released. To
> maintain this approach, many data structures have been established.
>
> However, ma
Hi Huan,
> Subject: [PATCH v6 4/7] udmabuf: udmabuf_create pin folio codestyle
> cleanup
>
> This patch aims to simplify the pinning of folios during udmabuf
> creation. No functional changes.
>
> This patch moves the memfd pin folio to udmabuf_pin_folios and modifies
> the original loop cond
Hi Huan,
> Subject: [PATCH v6 3/7] udmabuf: fix vmap_udmabuf error page set
>
> Currently vmap_udmabuf sets the pages array from each folio.
> But ubuf->folios only contains each folio's head page.
>
> That means we repeatedly mapped the folio head page into the vmalloc area.
>
> Due to udmabuf can
Hi Dmitry,
> Subject: [PATCH v2 0/5] drm/virtio: Import scanout buffers from other
> devices
>
> Having virtio-gpu import scanout buffers (via prime) from other
> devices means that we'd be adding a head to headless GPUs assigned
> to a Guest VM or additional heads to regular GPU devices that are
Hi Huan,
> Subject: Re: [PATCH v5 4/7] udmabuf: udmabuf_create pin folio codestyle
> cleanup
>
>
> On 2024/9/6 16:17, Kasireddy, Vivek wrote:
> > Hi Huan,
> >
> >> Subject: [PATCH v5 4/7] udmabuf: udmabuf_create pin folio codestyle
> >> cleanup
>
Hi Huan,
> Subject: [PATCH v5 7/7] udmabuf: reuse folio array when pin folios
>
> When invoking memfd_pin_folios, we need to offer an array to save each
> folio which we pinned.
>
> The current way is to dynamically allocate an array, get the folios, save
> them into the udmabuf and then free it.
>
> If the size is tiny, a
Hi Huan,
> Subject: [PATCH v5 6/7] udmabuf: remove udmabuf_folio
>
> Currently, udmabuf handles folios by creating an unpin list to record
> each folio obtained from the list and unpinning them when released. To
> maintain this approach, many data structures have been established.
>
> However, ma
Hi Huan,
> Subject: [PATCH v5 5/7] udmabuf: introduce udmabuf init and deinit helper
>
> After udmabuf is allocated, its resources need to be initialized,
> including various array structures. The current array structure has
> already been greatly expanded.
>
> Also, before udmabuf needs to be k
Hi Huan,
> Subject: [PATCH v5 4/7] udmabuf: udmabuf_create pin folio codestyle
> cleanup
>
> This patch splits folio pinning into a single function: udmabuf_pin_folios.
>
> When recording each folio and offset into udmabuf_folio and offsets, the outer
> loop of this patch iterates through folios, while the in
Hi Huan,
> Subject: [PATCH v5 1/7] udmabuf: pre-fault when first page fault
>
> The current udmabuf mmap uses a page fault to populate the vma.
>
> However, the current udmabuf has already obtained and pinned the folio
> upon completion of the creation. This means that the physical memory has
> a
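Since every folio is already pinned when the udmabuf is created, the mmap path can insert all PFNs up front instead of populating the VMA lazily from page faults. A rough sketch of that idea (hypothetical helper, not the exact patch; assumes the vma was set up with VM_PFNMAP as udmabuf's mmap does):

    static int udmabuf_prefault(struct udmabuf *ubuf, struct vm_area_struct *vma)
    {
        unsigned long addr = vma->vm_start;
        pgoff_t pg;

        for (pg = vma->vm_pgoff; pg < ubuf->pagecount && addr < vma->vm_end;
             pg++, addr += PAGE_SIZE) {
            unsigned long pfn = folio_pfn(ubuf->folios[pg]) +
                                (ubuf->offsets[pg] >> PAGE_SHIFT);

            if (vmf_insert_pfn(vma, addr, pfn) != VM_FAULT_NOPAGE)
                return -EFAULT;
        }
        return 0;
    }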
Hi Huan,
> Subject: [PATCH v4 5/5] udmabuf: remove udmabuf_folio
>
> Currently, udmabuf handles folios by creating an unpin list to record
> each folio obtained from the list and unpinning them when released. To
> maintain this approach, many data structures have been established.
>
> However, ma
Hi Huan,
> Subject: [PATCH v4 4/5] udmabuf: udmabuf_create codestyle cleanup
>
> There are some variables in udmabuf_create that are only used inside the
> loop. Therefore, there is no need to declare them outside the scope.
> This patch moves them into the loop.
>
> It is difficult to understand the
Hi Huan,
> Subject: [PATCH v4 3/5] udmabuf: fix vmap_udmabuf error page set
>
> Currently vmap_udmabuf sets the pages array from each folio.
> But ubuf->folios only contains each folio's head page.
>
> That means we repeatedly mapped the folio head page into the vmalloc area.
>
> Due to udmabuf can
Hi Huan,
> Subject: [PATCH v4 1/5] udmabuf: direct map pfn when first page fault
>
> The current udmabuf mmap uses a page fault to populate the vma.
>
> However, the current udmabuf has already obtained and pinned the folio
> upon completion of the creation. This means that the physical memory ha
Hi Huan,
>
> Currently, udmabuf handles folios by creating an unpin list to record
> each folio obtained from the list and unpinning them when released. To
> maintain this approach, many data structures have been established.
>
> However, maintaining this type of data structure requires a signifi
Hi Huan,
>
> There are some variables in udmabuf_create that are only used inside the
> loop. Therefore, there is no need to declare them outside the scope.
> This patch moves them into the loop.
>
> It is difficult to understand the loop condition of the code that adds
> folio to the unpin_list.
>
>
Hi Huan,
> Subject: [PATCH v3 3/5] fix vmap_udmabuf error page set
Please prepend a "udmabuf:" to the subject line and improve the wording.
>
> Currently vmap_udmabuf sets the pages array from each folio.
> But ubuf->folios only contains each folio's head page.
>
> That means we repeatedly mapped
Hi Huan,
>
> The current udmabuf mmap uses a page fault to populate the vma.
>
> However, the current udmabuf has already obtained and pinned the folio
> upon completion of the creation. This means that the physical memory has
> already been acquired, rather than being accessed dynamically. The
>
Hi Huan,
>
> Currently, udmabuf handles folios by creating an unpin list to record
> each folio obtained from the list and unpinning them when released. To
> maintain this approach, many data structures have been established.
>
> However, maintaining this type of data structure requires a signifi
Hi Huan,
>
> Currently vmap_udmabuf sets the pages array from each folio.
> But ubuf->folios only contains each folio's head page.
>
> That means we repeatedly mapped the folio head page into the vmalloc area.
>
> This patch fixes it, setting each folio's pages correctly, so that the pages array
> contains right
Hi Huan,
>
> With PAGE_SIZE 4096 and MAX_PAGE_ORDER 10 on a 64-bit machine,
> page_alloc only supports 4MB allocations.
> Above this, it triggers this warning and returns NULL.
>
> udmabuf can change its size limit; if it is changed to 3072 (3GB) and a
> 3GB udmabuf is then allocated, creation fails.
>
> [ 4080.876581] [ c
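The arithmetic behind that warning, assuming the failing allocation is the per-page bookkeeping array rather than the buffer itself: with 4 KiB pages and MAX_PAGE_ORDER 10, the largest buddy allocation is 2^10 pages, and a 3 GiB udmabuf needs far more pointer storage than that:

    3 GiB / 4 KiB per page      = 786432 entries
    786432 * 8 bytes (64-bit)   ~ 6 MiB needed for the pointer array
    2^MAX_PAGE_ORDER * 4 KiB    = 4 MiB kmalloc()/page_alloc ceiling

so a plain kmalloc() of that array cannot succeed; a kvmalloc()-style allocation sidesteps the limit.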
Hi Huan,
>
> The current udmabuf mmap uses a page fault mechanism to populate the
> vma.
>
> However, the current udmabuf has already obtained and pinned the folio
> upon completion of the creation. This means that the physical memory has
> already been acquired, rather than being accessed dynami
Hi Huan,
> This patchset attempts to fix some errors in udmabuf and remove the
> unpin_list structure.
>
> Some of these fixes just gather patches which I uploaded before.
>
> Patch1
> ===
> Try to remove the page-fault mmap path and map the buffer directly.
> Due to current udmabuf has already obtained and pinned t
Hi Dmitry,
> > +static void virtgpu_dma_buf_move_notify(struct dma_buf_attachment
> *attach)
> > +{
> > + struct drm_gem_object *obj = attach->importer_priv;
> > + struct virtio_gpu_device *vgdev = obj->dev->dev_private;
> > + struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
> > +
Hi Andrew,
>
> > Hi Andrew and SJ,
> >
> > >
> > > >
> > > > I didn't look deep into the patch, so unsure if that's a valid fix,
> > > > though.
> > > > May I ask your thoughts?
> > >
> > > Perhaps we should propagate the errno which was returned by
> > > try_grab_folio()?
> > >
> > > I'll do it
Hi Josh,
>
> > If virgl=true (which means blob=false at the moment), then things work
> > very differently.
> Yes, we're using virglrenderer. The flushed resources are backed by host
> allocated buffers.
Virgl is not my forte. Someone working on virgl should be able to help you.
Thanks,
Vivek
Hi Josh,
> It looks like the virtio-gpu flush should be fenced, but on the host side the
> received flush cmd doesn't have the fence flag set, and no fence_id. So,
> I have to reply right away instead of waiting for scanout to complete.
> Is that expected? then what's the right way to vsync the
Hi Andrew and SJ,
>
> On Fri, 5 Jul 2024 13:48:25 -0700 SeongJae Park wrote:
>
> > > + * memfd_pin_folios() - pin folios associated with a memfd
> > [...]
> > > + for (i = 0; i < nr_found; i++) {
> > > + /*
> > > + * As there ca
Hi Andrew,
> Subject: [PATCH v16 0/9] mm/gup: Introduce memfd_pin_folios() for pinning
> memfd folios
>
> Currently, some drivers (e.g., Udmabuf) that want to longterm-pin
> the pages/folios associated with a memfd, do so by simply taking a
> reference on them. This is not desirable because the pa
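For readers new to the API this cover letter introduces, a minimal usage sketch (error handling trimmed; the memfd file and byte range are placeholders):

    struct folio *folios[64];
    pgoff_t offset;
    long nr;

    /* Pin the folios backing [start, end] of the memfd, migrating them out
     * of movable/CMA areas first if necessary. */
    nr = memfd_pin_folios(memfd_file, start, end, folios,
                          ARRAY_SIZE(folios), &offset);
    if (nr < 0)
        return nr;

    /* ... use folios[0..nr-1], then release with unpin_folios(folios, nr). */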
Hi Gurchetan,
>
> On Thu, May 30, 2024 at 12:21 AM Kasireddy, Vivek
> mailto:vivek.kasire...@intel.com> > wrote:
>
>
> Hi Gurchetan,
>
> >
> > On Fri, May 24, 2024 at 11:33 AM Kasireddy, Vivek
> > mailto:vivek.kasire...@i
Hi Oscar,
>
> On Thu, Jun 13, 2024 at 02:42:05PM -0700, Vivek Kasireddy wrote:
> > For drivers that would like to longterm-pin the folios associated
> > with a memfd, the memfd_pin_folios() API provides an option to
> > not only pin the folios via FOLL_PIN but also to check and migrate
> > them i
>
> HEAD commit:9d99040b1bc8 Add linux-next specific files for 20240529
> git tree: linux-next
> console+strace: https://syzkaller.appspot.com/x/log.txt?x=14c083e698
> kernel config: https://syzkaller.appspot.com/x/.config?x=735e953fee00ec19
> dashboard link:
> https://syzkaller.app
Hi Gurchetan,
>
> On Fri, May 24, 2024 at 11:33 AM Kasireddy, Vivek
> mailto:vivek.kasire...@intel.com> > wrote:
>
>
> Hi,
>
> Sorry, my previous reply got messed up as a result of HTML
> formatting. This is
> a
> From: Arnd Bergmann
>
> There is no !CONFIG_MMU version of vmf_insert_pfn():
>
> arm-linux-gnueabi-ld: drivers/dma-buf/udmabuf.o: in function
> `udmabuf_vm_fault':
> udmabuf.c:(.text+0xaa): undefined reference to `vmf_insert_pfn'
>
> Fixes: f7254e043ff1 ("udmabuf: use vmf_insert_pfn and VM_PF
Hi,
Sorry, my previous reply got messed up as a result of HTML formatting. This is
a plain text version of the same reply.
>
>
> Having virtio-gpu import scanout buffers (via prime) from other
> devices means that we'd be adding a head to headless GPUs assigned
> to a Guest VM
Hi Gurchetan,
Thank you for taking a look at this patch series!
On Thu, Mar 28, 2024 at 2:01 AM Vivek Kasireddy
mailto:vivek.kasire...@intel.com>> wrote:
Having virtio-gpu import scanout buffers (via prime) from other
devices means that we'd be adding a head to headless GPUs assigned
to a Gues
Hi Gerd, Dave,
>
> On Thu, May 23, 2024 at 01:13:11PM GMT, Dave Airlie wrote:
> > Hey
> >
> > Gerd, do you have any time to look at this series again, I think at
> > v14 we should probably consider landing it.
>
> Phew. Didn't follow recent MM changes closely, don't know much about
> folios bey
Hi Rob,
>
> On Mon, May 13, 2024 at 11:27 AM Christian König
> wrote:
> >
> > On 10.05.24 at 18:34, Zack Rusin wrote:
> > > Hey,
> > >
> > > so this is a bit of a silly problem but I'd still like to solve it
> > > properly. The tldr is that virtualized drivers abuse
> > > drm_driver::gem_prime_
Hi Jason,
>
> On Tue, Apr 30, 2024 at 04:24:50PM -0600, Alex Williamson wrote:
> > > +static vm_fault_t vfio_pci_dma_buf_fault(struct vm_fault *vmf)
> > > +{
> > > + struct vm_area_struct *vma = vmf->vma;
> > > + struct vfio_pci_dma_buf *priv = vma->vm_private_data;
> > > + pgoff_t pgoff = vmf->p
Hi David,
>
> On 25.02.24 08:56, Vivek Kasireddy wrote:
> > Currently, some drivers (e.g., Udmabuf) that want to longterm-pin
> > the pages/folios associated with a memfd, do so by simply taking a
> > reference on them. This is not desirable because the pages/folios
> > may reside in Movable zone
Hi Andrew,
>
> On 1/26/24 1:25 AM, Kasireddy, Vivek wrote:
> >>>> Currently this driver creates a SGT table using the CPU as the
> >>>> target device, then performs the dma_sync operations against
> >>>> that SGT. This is backwards to how DM
> >> Currently this driver creates a SGT table using the CPU as the
> >> target device, then performs the dma_sync operations against
> >> that SGT. This is backwards to how DMA-BUFs are supposed to behave.
> >> This may have worked for the case where these buffers were given
> >> only back to the
Hi Andrew,
> Currently this driver creates a SGT table using the CPU as the
> target device, then performs the dma_sync operations against
> that SGT. This is backwards to how DMA-BUFs are supposed to behave.
> This may have worked for the case where these buffers were given
> only back to the sam
Acked-by: Vivek Kasireddy
>
> Now that we do not need to call dma_coerce_mask_and_coherent() on our
> miscdevice device, use the module_misc_device() helper for registering and
> module init/exit.
>
> Signed-off-by: Andrew Davis
> ---
> drivers/dma-buf/udmabuf.c | 30 +
Hi Andrew,
> When a device attaches to and maps our buffer we need to keep track
> of this mapping/device. This is needed for synchronization with these
> devices when beginning and ending CPU access for instance. Add a list
> that tracks device mappings as part of {map,unmap}_udmabuf().
>
> Sign
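A sketch of the bookkeeping this describes, with illustrative names rather than the patch's own:

    struct udmabuf_attachment {
        struct device *dev;        /* importer that mapped the buffer */
        struct sg_table *table;    /* its DMA mapping */
        struct list_head list;     /* linked off the udmabuf */
    };

    /* Called from begin/end_cpu_access: sync every live device mapping. */
    static void udmabuf_sync_mappings_for_cpu(struct list_head *attachments,
                                              enum dma_data_direction dir)
    {
        struct udmabuf_attachment *a;

        list_for_each_entry(a, attachments, list)
            dma_sync_sgtable_for_cpu(a->dev, a->table, dir);
    }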
> Hej,
>
> debug dma code is not happy with virtio gpu (arm64 VM):
>
> [ 305.881733] [ cut here ]
> [ 305.883117] DMA-API: virtio-pci :07:00.0: mapping sg segment longer
> than device claims to support [len=262144] [max=65536]
> [ 305.885976] WARNING: CPU: 8 PID: 20
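For anyone hitting the same splat: DMA debug checks each scatterlist segment against the device's advertised limit, dma_get_max_seg_size(), which defaults to 64 KiB and matches the [max=65536] above. If the exporter legitimately builds larger segments, the device doing the mapping has to advertise a bigger limit, e.g. (sketch, value illustrative):

    /* Raise the advertised segment limit on the mapping device. */
    dma_set_max_seg_size(dev, SZ_4M);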
Hi David,
>
> On 12.12.23 08:38, Vivek Kasireddy wrote:
> > For drivers that would like to longterm-pin the folios associated
> > with a memfd, the memfd_pin_folios() API provides an option to
> > not only pin the folios via FOLL_PIN but also to check and migrate
> > them if they reside in movabl
Hi David,
> >
> >> On 05.12.23 06:35, Vivek Kasireddy wrote:
> >>> For drivers that would like to longterm-pin the pages associated
> >>> with a memfd, the pin_user_pages_fd() API provides an option to
> >>> not only pin the pages via FOLL_PIN but also to check and migrate
> >>> them if they resid
Hi David,
> On 05.12.23 06:35, Vivek Kasireddy wrote:
> > For drivers that would like to longterm-pin the pages associated
> > with a memfd, the pin_user_pages_fd() API provides an option to
> > not only pin the pages via FOLL_PIN but also to check and migrate
> > them if they reside in movable zo
Hi,
> > +struct page *memfd_alloc_page(struct file *memfd, pgoff_t idx)
> > +{
> > +#ifdef CONFIG_HUGETLB_PAGE
> > + struct folio *folio;
> > + int err;
> > +
> > + if (is_file_hugepages(memfd)) {
> > + folio = alloc_hugetlb_folio_nodemask(hstate_file(memfd),
> > +