Hi,
> 1. virtio_gpu_pci_quirk():
>
> * what is the explicit framebuffer removal about ?
unregister boot framebuffer (typically efifb or vesafb on x86).
> * why is it necessary to rename the device with "pci:" prefix ?
>
> does it only work w/ pci transport ?
> what's the backgrou
Hi,
> https://github.com/intel/gvt-linux/blob/topic/gvt-xengt/drivers/gpu/drm/i915/gvt/xengt.c
> But it's hard for some customers to contribute their own "hypervisor"
> module to the upstream Linux kernel. I am thinking what would be a
> better solution here? The MPT layer in the kernel helps a
On Thu, Jul 29, 2021 at 01:16:57AM -0700, Vivek Kasireddy wrote:
> This feature enables the Guest to wait to know when a resource
> is completely consumed by the Host.
virtio spec update?
What are the exact semantics?
Why a new command? Can't you simply fence one of the commands sent
anyway (se
Hi,
> + bool has_out_fence;
> + if (virtio_has_feature(vgdev->vdev, VIRTIO_GPU_F_OUT_FENCE)) {
> + vgdev->has_out_fence = true;
> + vgdev->ddev->mode_config.deferred_out_fence = true;
Looks like you don't need has_out_fence, you can just use
vgdev->ddev->mode_co
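A minimal sketch of that suggestion, assuming the proposed (not yet upstream) VIRTIO_GPU_F_OUT_FENCE feature bit and the deferred_out_fence mode_config field from this patch series:

/* sketch only: feature bit and mode_config field come from the proposed patch */
static void virtio_gpu_init_out_fence(struct virtio_gpu_device *vgdev)
{
	if (virtio_has_feature(vgdev->vdev, VIRTIO_GPU_F_OUT_FENCE))
		vgdev->ddev->mode_config.deferred_out_fence = true;
}

/* check the mode_config flag directly instead of a driver-private bool */
static bool virtio_gpu_has_out_fence(struct virtio_gpu_device *vgdev)
{
	return vgdev->ddev->mode_config.deferred_out_fence;
}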
Hi,
> - We fix virtio to send out the completion event at the end of this entire
> pipeline, i.e. virtio code needs to take care of sending out the
> crtc_state->event correctly.
That sounds sensible to me. Fence the virtio commands, make sure (on
the host side) the command completes only
Hi,
> > That sounds sensible to me. Fence the virtio commands, make sure (on
> > the host side) the command completes only when the work is actually done
> > not only submitted. Has recently been added to qemu for RESOURCE_FLUSH
> > (aka frontbuffer rendering) and doing the same for SET_SCANOU
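To make the idea concrete: attach the pending event to the fence of the flush/scanout command and deliver it only once that fence signals. A rough, hypothetical sketch, not the actual virtio-gpu code path:

#include <linux/dma-fence.h>
#include <linux/slab.h>
#include <drm/drm_crtc.h>
#include <drm/drm_vblank.h>

struct fenced_event {
	struct dma_fence_cb cb;
	struct drm_crtc *crtc;
	struct drm_pending_vblank_event *event;
};

static void send_event_on_fence(struct dma_fence *fence,
				struct dma_fence_cb *cb)
{
	struct fenced_event *fe = container_of(cb, struct fenced_event, cb);
	unsigned long flags;

	/* deliver crtc_state->event only after the host completed the command */
	spin_lock_irqsave(&fe->crtc->dev->event_lock, flags);
	drm_crtc_send_vblank_event(fe->crtc, fe->event);
	spin_unlock_irqrestore(&fe->crtc->dev->event_lock, flags);
	kfree(fe);
}

/*
 * The atomic update path would then register this on the fence attached
 * to the SET_SCANOUT / RESOURCE_FLUSH command, roughly:
 *   dma_fence_add_callback(&vgfence->f, &fe->cb, send_event_on_fence);
 */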
Hi,
> > virtio_gpu_primary_plane_update() will send RESOURCE_FLUSH only for
> > DIRTYFB and both SET_SCANOUT + RESOURCE_FLUSH for page-flip, and I
> > think for the page-flip case the host (aka qemu) doesn't get the
> > "wait until old framebuffer is not in use any more" right yet.
> [Kasireddy,
> > +
> > + if (vgdev->has_resource_blob) {
> > + params.blob_mem = VIRTGPU_BLOB_MEM_GUEST;
> > + params.blob_flags = VIRTGPU_BLOB_FLAG_USE_SHAREABLE;
> >
>
> This creates some log spam with crosvm + virgl_3d + vanilla linux, since
> transfers don't work for guest
Hi,
> > IIRC the VIRTGPU_BLOB_FLAG_USE_SHAREABLE flag means that the host *can*
> > create a shared mapping (i.e. the host seeing guest-side changes without
> > explicit transfer doesn't cause problems for the guest). It does not
> > mean the host *must* create a shared mapping (note that ther
On Wed, Apr 14, 2021 at 06:36:55AM +, Zhang, Tina wrote:
> Hi Gerd,
>
> Speaking of the modifier, we notice that the virtio-gpu driver's
> mode_config.allow_fb_modifiers = false, which means virtio-gpu doesn't
> support modifier. With mode_config.allow_fb_modifiers=false, the DRM
> Modifier AP
On Wed, Apr 14, 2021 at 04:31:45PM -0700, Gurchetan Singh wrote:
> On Mon, Apr 12, 2021 at 10:36 PM Vivek Kasireddy
> wrote:
>
> > If support for Blob resources is available, then dumb BOs created
> > by the driver can be considered as guest Blobs.
> >
> > v2: Don't skip transfer and flush comman
Hi,
> > > Patches 4 to 8 add the simpledrm driver. It's built on simple DRM helpers
> > > and SHMEM. It supports 16-bit, 24-bit and 32-bit RGB framebuffers. During
> >
> > if support for 8-bit frame buffers would be added?
>
> Is that 8-bit greyscale or 8-bit indexed with 256 entry palett
> > However, a tricky part is that the QEMU vga code does treat VGA_ATT_IW
> > register always as "flip-flop"; the first write is for index and the
> > second write is for the data like palette. Meanwhile, in the current
> > bochs DRM driver, the flip-flop wasn't considered, and it calls only
> >
> > I'm fine to change in any better way, of course, so feel free to
> > modify the patch.
>
> If no one objects, I'll merge it as-is. It's somewhat wrong wrt VGA, but
> apparently what qemu wants.
No objections.
Acked-by: Gerd Hoffmann
FYI: cirrus is in
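For reference, the flip-flop behaviour in question: writes to the attribute controller port alternate between index and data, and a read of the input status register resets the state back to "index". A minimal illustrative sequence (not a patch against bochs):

#include <linux/types.h>
#include <asm/io.h>
#include <video/vga.h>		/* VGA_ATT_IW (0x3C0), VGA_IS1_RC (0x3DA) */

static void vga_attr_write(u8 index, u8 value)
{
	(void)inb(VGA_IS1_RC);		/* reading 0x3DA resets the flip-flop */
	outb(index, VGA_ATT_IW);	/* first write: attribute index */
	outb(value, VGA_ATT_IW);	/* second write: data (e.g. palette entry) */
}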
Hi,
> +static struct sg_table *virtgpu_gem_map_dma_buf(
> + struct dma_buf_attachment *attach,
> + enum dma_data_direction dir)
checkpatch doesn't like that:
-:47: CHECK:OPEN_ENDED_LINE: Lines should not end with a '('
#47: FILE: drivers/gpu/drm/virtio/virtgpu_prime.c:4
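The usual fix is to wrap the declaration so no line ends with '(', e.g.:

/* e.g. as a forward declaration; the same wrapping applies to the definition */
static struct sg_table *
virtgpu_gem_map_dma_buf(struct dma_buf_attachment *attach,
			enum dma_data_direction dir);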
On Wed, Aug 11, 2021 at 01:04:01PM +0900, David Stevens wrote:
> Blob resources without the cross device flag don't have a uuid to share
> with other virtio devices. When exporting such blobs, set uuid_state to
> STATE_ERR so that virtgpu_virtio_get_uuid doesn't hang.
>
> Signed-off-by: David Stev
Is there some specific reason?
commit c66df701e783bc666593e6e665f13670760883ee
Author: Gerd Hoffmann
Date: Thu Aug 29 12:32:57 2019 +0200
drm/virtio: switch from ttm to gem shmem helpers
HTH,
Gerd
On Sun, Aug 15, 2021 at 09:51:02PM -0700, lepton wrote:
> Hi Gerd,
>
> Thanks for your reply. I was aware of that change, but need a fix for
> 5.4 kernel as a temp solution for now.
> If the reason is just that you will move away from ttm soon,then I
> guess a CL like http://crrev.com/c/3092457 sh
On Fri, Aug 13, 2021 at 09:54:41AM +0900, David Stevens wrote:
> Implement virtgpu specific map_dma_buf callback to support mapping
> exported vram object dma-bufs. The dma-buf callback is used directly, as
> vram objects don't have backing pages and thus can't implement the
> drm_gem_object_funcs.
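From that description the callback presumably looks roughly like the sketch below: dispatch vram objects to a vram-specific helper and fall back to the generic GEM path for shmem objects. virtio_gpu_vram_map_dma_buf() is the helper the patch introduces; its exact signature here is an assumption.

#include <drm/drm_prime.h>
#include "virtgpu_drv.h"

static struct sg_table *
virtgpu_gem_map_dma_buf(struct dma_buf_attachment *attach,
			enum dma_data_direction dir)
{
	struct drm_gem_object *obj = attach->dmabuf->priv;
	struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);

	if (virtio_gpu_is_vram(bo))
		return virtio_gpu_vram_map_dma_buf(bo, attach->dev, dir);

	return drm_gem_map_dma_buf(attach, dir);	/* shmem-backed objects */
}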
On Tue, Mar 30, 2021 at 08:04:38PM -0700, Vivek Kasireddy wrote:
> If support for Blob resources is available, then dumb BOs created
> by the driver can be considered as guest Blobs. And, for guest
> Blobs, there is no need to do any transfers or flushes
No. VIRTGPU_BLOB_FLAG_USE_SHAREABLE means
Hi,
> -#define MAX_INLINE_CMD_SIZE 96
> +#define MAX_INLINE_CMD_SIZE 112
Separate patch please.
> --- a/include/uapi/linux/virtio_gpu.h
> +++ b/include/uapi/linux/virtio_gpu.h
> @@ -409,6 +409,7 @@ struct virtio_gpu_set_scanout_blob {
> __le32 width;
> __le32 height;
> __
Hi,
> > No. VIRTGPU_BLOB_FLAG_USE_SHAREABLE means the host (aka device in virtio
> > terms) *can* create a shared mapping. So, the guest still needs to send
> > transfer commands, and then the device can shortcut the transfer
> > commands on the host side in case a shared mappi
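To put the flag's semantics in userspace terms: a hedged sketch of creating a guest-memory blob with USE_SHAREABLE and still issuing a transfer afterwards (the host may shortcut it, but the guest must not skip it). Field usage follows include/uapi/drm/virtgpu_drm.h as I recall it; error handling trimmed.

#include <stdint.h>
#include <xf86drm.h>
#include <drm/virtgpu_drm.h>

/* create a guest-memory blob the host is allowed to map directly */
static int create_shareable_blob(int fd, uint64_t size, uint32_t *bo_handle)
{
	struct drm_virtgpu_resource_create_blob blob = {
		.blob_mem   = VIRTGPU_BLOB_MEM_GUEST,
		.blob_flags = VIRTGPU_BLOB_FLAG_USE_SHAREABLE,
		.size       = size,
	};
	int ret = drmIoctl(fd, DRM_IOCTL_VIRTGPU_RESOURCE_CREATE_BLOB, &blob);

	if (!ret)
		*bo_handle = blob.bo_handle;
	return ret;
}

/* transfers are still required; a shared mapping only lets the host skip work */
static int transfer_rect_to_host(int fd, uint32_t bo_handle,
				 uint32_t x, uint32_t y,
				 uint32_t w, uint32_t h)
{
	struct drm_virtgpu_3d_transfer_to_host xfer = {
		.bo_handle = bo_handle,
		.box = { .x = x, .y = y, .w = w, .h = h, .d = 1 },
	};

	return drmIoctl(fd, DRM_IOCTL_VIRTGPU_TRANSFER_TO_HOST, &xfer);
}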
Balances the qxl_create_bo(..., pinned=true, ...);
call in qxl_release_bo_alloc().
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/qxl/qxl_release.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/gpu/drm/qxl/qxl_release.c
b/drivers/gpu/drm/qxl/qxl_release.c
index 0fcfc952d5e9
Some progress. Not complete though, I still
get an unclean mm warning on shutdown due to
some release objects not being freed yet.
Gerd Hoffmann (4):
drm/qxl: use drmm_mode_config_init
drm/qxl: unpin release objects
drm/qxl: release shadow on shutdown
drm/qxl: handle shadow in primary
qxl_primary_atomic_disable must check whether the framebuffer bo has a
shadow surface and, if so, check the shadow's primary status.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/qxl/qxl_display.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/gpu/drm/qxl/qxl_display.c
In case we have a shadow surface on shutdown release
it so it doesn't leak.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/qxl/qxl_display.c | 4
1 file changed, 4 insertions(+)
diff --git a/drivers/gpu/drm/qxl/qxl_display.c
b/drivers/gpu/drm/qxl/qxl_display.c
index 38d6b59
Signed-off-by: Gerd Hoffmann
Reviewed-by: Daniel Vetter
---
drivers/gpu/drm/qxl/qxl_display.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/qxl/qxl_display.c
b/drivers/gpu/drm/qxl/qxl_display.c
index 012bce0cdb65..38d6b596094d 100644
--- a/drivers/gpu
Hi,
> > > > > + select TRACE_GPU_MEM
> > > > > +#ifdef CONFIG_TRACE_GPU_MEM
That doesn't make sense btw.
> > > > > +#ifdef CONFIG_TRACE_GPU_MEM
> > > > > +static inline void virtio_gpu_trace_total_mem(struct
> > > > > virtio_gpu_device *vgdev,
> > > > > +
On Wed, Jan 20, 2021 at 10:52:11AM -0800, Yiwei Zhang wrote:
> On Wed, Jan 20, 2021 at 5:33 AM Gerd Hoffmann wrote:
> >
> > Hi,
> >
> > > > > > > + select TRACE_GPU_MEM
> >
> > > > > > > +#ifdef CONFIG_TRACE_GPU_MEM
>
On Fri, Jan 22, 2021 at 09:13:42AM +0100, Thomas Zimmermann wrote:
> Hi
>
> Am 20.01.21 um 12:12 schrieb Gerd Hoffmann:
> > Balances the qxl_create_bo(..., pinned=true, ...);
> > call in qxl_release_bo_alloc().
> >
> > Signed-off-by: Gerd Hoffmann
> > ---
it
> to 0 kinda defeats the warning.
Figured the unpin is at the completely wrong place while trying to
reproduce the lockdep splat ...
take care,
Gerd
From 43befab4a935114e8620af62781666fa81288255 Mon Sep 17 00:00:00 2001
From: Gerd Hoffmann
Date: Mon, 25 Jan 2021 13:10:50 +0100
Subje
0x120 [qxl]
drm_minor_release+0x3d/0x60
but I don't think this is the qxl driver's fault.
Gerd Hoffmann (5):
drm/qxl: use drmm_mode_config_init
drm/qxl: unpin release objects
drm/qxl: release shadow on shutdown
drm/qxl: handle shadow in primary destroy
drm/qxl: properly free qxl releases
Balances the qxl_create_bo(..., pinned=true, ...);
call in qxl_release_bo_alloc().
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/qxl/qxl_release.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/gpu/drm/qxl/qxl_release.c
b/drivers/gpu/drm/qxl/qxl_release.c
index c52412724c26
qxl_primary_atomic_disable must check whether the framebuffer bo has a
shadow surface and, if so, check the shadow's primary status.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/qxl/qxl_display.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/gpu/drm/qxl/qxl_display.c
In case we have a shadow surface on shutdown release
it so it doesn't leak.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/qxl/qxl_display.c | 4
1 file changed, 4 insertions(+)
diff --git a/drivers/gpu/drm/qxl/qxl_display.c
b/drivers/gpu/drm/qxl/qxl_display.c
index 38d6b59
Signed-off-by: Gerd Hoffmann
Reviewed-by: Daniel Vetter
Acked-by: Thomas Zimmermann
---
drivers/gpu/drm/qxl/qxl_display.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/qxl/qxl_display.c
b/drivers/gpu/drm/qxl/qxl_display.c
index 012bce0cdb65
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/qxl/qxl_drv.h | 1 +
drivers/gpu/drm/qxl/qxl_kms.c | 22 --
drivers/gpu/drm/qxl/qxl_release.c | 2 ++
3 files changed, 23 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl
> > + /*
> > + * Ask host to release resources (+fill release ring),
> > + * then wait for the release actually happening.
> > + */
> > + qxl_io_notify_oom(qdev);
> > + for (try = 0; try < 20 && atomic_read(&qdev->release_count) > 0; try++)
> > + msleep(20);
>
> A bit icky
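Later revisions of the series indeed replace this poll/msleep loop with a release counter plus wait queue (see the v5 changelog further down); a hedged sketch of that shape, assuming release_event is woken wherever release_count drops:

	/* sketch only, not the actual patch */
	qxl_io_notify_oom(qdev);	/* ask the host to fill the release ring */
	wait_event_timeout(qdev->release_event,
			   atomic_read(&qdev->release_count) == 0,
			   HZ);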
Signed-off-by: Gerd Hoffmann
Reviewed-by: Daniel Vetter
Acked-by: Thomas Zimmermann
---
drivers/gpu/drm/qxl/qxl_display.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/qxl/qxl_display.c
b/drivers/gpu/drm/qxl/qxl_display.c
index 012bce0cdb65
0x120 [qxl]
drm_minor_release+0x3d/0x60
but I don't think this is the qxl driver's fault.
v5:
- add release_event wait queue.
- also cleanup qxl_fence_wait().
Gerd Hoffmann (6):
drm/qxl: use drmm_mode_config_init
drm/qxl: unpin release objects
drm/qxl: release shadow on shutdown
drm/q
In case we have a shadow surface on shutdown release
it so it doesn't leak.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/qxl/qxl_display.c | 4
1 file changed, 4 insertions(+)
diff --git a/drivers/gpu/drm/qxl/qxl_display.c
b/drivers/gpu/drm/qxl/qxl_display.c
index 38d6b59
Balances the qxl_create_bo(..., pinned=true, ...);
call in qxl_release_bo_alloc().
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/qxl/qxl_release.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/gpu/drm/qxl/qxl_release.c
b/drivers/gpu/drm/qxl/qxl_release.c
index c52412724c26
Now that we have the new release_event wait queue we can just
use that in qxl_fence_wait() and simplify the code a lot.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/qxl/qxl_release.c | 42 +++
1 file changed, 4 insertions(+), 38 deletions(-)
diff --git a/drivers
Reorganize qxl_device_fini() a bit.
Add missing unpin() calls.
Count releases. Add wait queue for releases. That way
qxl_device_fini() can easily wait until everything is
ready for proper shutdown.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/qxl/qxl_drv.h | 2 ++
drivers/gpu/drm
qxl_primary_atomic_disable must check whether the framebuffer bo has a
shadow surface and, if so, check the shadow's primary status.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/qxl/qxl_display.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/gpu/drm/qxl/qxl_display.c
Balances the qxl_create_bo(..., pinned=true, ...);
call in qxl_release_bo_alloc().
Signed-off-by: Gerd Hoffmann
Acked-by: Thomas Zimmermann
---
drivers/gpu/drm/qxl/qxl_release.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/gpu/drm/qxl/qxl_release.c
b/drivers/gpu/drm/qxl
v5:
- add release_event wait queue.
- also cleanup qxl_fence_wait().
v6:
- add shadow pinning fix (Thomas).
- use ram for dumb allocation.
Gerd Hoffmann (10):
[hack] silence ttm fini WARNING
Revert "drm/qxl: do not run release if qxl failed to init"
drm/qxl: use drmm_mode_c
Signed-off-by: Gerd Hoffmann
Reviewed-by: Daniel Vetter
Acked-by: Thomas Zimmermann
---
drivers/gpu/drm/qxl/qxl_display.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/qxl/qxl_display.c
b/drivers/gpu/drm/qxl/qxl_display.c
index 012bce0cdb65
kobject: '(null)' ((ptrval)): is not initialized, yet kobject_put() is
being called.
WARNING: CPU: 0 PID: 209 at lib/kobject.c:750 kobject_put+0x3a/0x60
[ ... ]
Call Trace:
ttm_device_fini+0x133/0x1b0 [ttm]
qxl_ttm_fini+0x2f/0x40 [qxl]
---
drivers/gpu/drm/ttm/ttm_device.c | 2 +-
1 file
dumb buffers are shadowed anyway, so there is no need to store them
in device memory. Use QXL_GEM_DOMAIN_CPU (TTM_PL_SYSTEM) instead.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/qxl/qxl_dumb.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/qxl
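A hedged guess at what that one-liner looks like in qxl_mode_dumb_create(); the surrounding call is paraphrased from the driver and may not match the patch hunk exactly:

	/* dumb BOs are shadowed, so plain system RAM (TTM_PL_SYSTEM) is enough */
	r = qxl_gem_object_create_with_handle(qdev, file_priv,
					      QXL_GEM_DOMAIN_CPU,
					      args->size, &surf, &qobj,
					      &handle);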
This reverts commit b91907a6241193465ca92e357adf16822242296d.
The patch is broken: it effectively makes qxl_drm_release() a nop,
because on normal driver shutdown qxl_drm_release() is called
*after* drm_dev_unregister().
Cc: Tong Zhang
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/qxl/qxl_drv.c
Now that we have the new release_event wait queue we can just
use that in qxl_fence_wait() and simplify the code a lot.
Signed-off-by: Gerd Hoffmann
Acked-by: Thomas Zimmermann
---
drivers/gpu/drm/qxl/qxl_release.c | 44 +++
1 file changed, 4 insertions(+), 40
Suggested-by: Thomas Zimmermann
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/qxl/qxl_display.c | 4
1 file changed, 4 insertions(+)
diff --git a/drivers/gpu/drm/qxl/qxl_display.c
b/drivers/gpu/drm/qxl/qxl_display.c
index 60331e31861a..d25fd3acc891 100644
--- a/drivers/gpu/drm/qxl
In case we have a shadow surface on shutdown release
it so it doesn't leak.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/qxl/qxl_display.c | 4
1 file changed, 4 insertions(+)
diff --git a/drivers/gpu/drm/qxl/qxl_display.c
b/drivers/gpu/drm/qxl/qxl_display.c
index 38d6b59
qxl_primary_atomic_disable must check whether the framebuffer bo has a
shadow surface and, if so, check the shadow's primary status.
Signed-off-by: Gerd Hoffmann
Acked-by: Thomas Zimmermann
---
drivers/gpu/drm/qxl/qxl_display.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a
Reorganize qxl_device_fini() a bit.
Add missing unpin() calls.
Count releases. Add wait queue for releases. That way
qxl_device_fini() can easily wait until everything is
ready for proper shutdown.
Signed-off-by: Gerd Hoffmann
Acked-by: Thomas Zimmermann
---
drivers/gpu/drm/qxl/qxl_drv.h
On Thu, Feb 04, 2021 at 03:58:33PM +0100, Christian König wrote:
> ?
>
> What's the background here?
>
> Christian.
>
> Am 04.02.21 um 15:57 schrieb Gerd Hoffmann:
> > kobject: '(null)' ((ptrval)): is not initialized, yet kobject_put()
> >
On Thu, Feb 04, 2021 at 11:30:50AM -0500, Tong Zhang wrote:
> if qxl_device_init() fail, drm device will not be registered,
> in this case, do not run qxl_drm_release()
How do you trigger this?
take care,
Gerd
Hi,
> I smoke-tested the code by running fbdev, Xorg and weston with the
> converted mgag200 driver.
Looks sane to me.
Survived cirrus smoke test too.
Tested-by: Gerd Hoffmann
Acked-by: Gerd Hoffmann
take care,
Gerd
Hi,
> > +/* extract pages referenced by sgt */
> > +static struct page **extr_pgs(struct sg_table *sgt, int *nents, int
> > *last_len)
>
> Nack, this doesn't work on dma-buf. And it'll blow up at runtime when you
> enable the very recently merged CONFIG_DMABUF_DEBUG (would be good to test
> wi
Specifically, do not try to release resources which were
not allocated in the first place.
Cc: Tong Zhang
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/qxl/qxl_display.c | 3 +++
drivers/gpu/drm/qxl/qxl_kms.c | 4
2 files changed, 7 insertions(+)
diff --git a/drivers/gpu/drm/qxl
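From the description the fix presumably boils down to guards like the sketch below in the teardown helpers; helper and field names follow the qxl driver, the exact hunks are assumed:

int qxl_destroy_monitors_object(struct qxl_device *qdev)
{
	if (!qdev->monitors_config_bo)
		return 0;	/* init failed before this BO was ever created */

	qdev->monitors_config = NULL;
	qxl_bo_kunmap(qdev->monitors_config_bo);
	qxl_bo_unpin(qdev->monitors_config_bo);
	qxl_bo_unref(&qdev->monitors_config_bo);
	return 0;
}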
On Sun, Feb 07, 2021 at 07:33:24PM +0100, Thomas Zimmermann wrote:
> Hi
>
> Am 05.02.21 um 10:05 schrieb Gerd Hoffmann:
> >Hi,
> >
> > > I smoke-tested the code by running fbdev, Xorg and weston with the
> > > converted mgag200 driver.
> >
>
Hi,
> > > > Nack, this doesn't work on dma-buf. And it'll blow up at runtime
> > > > when you enable the very recently merged CONFIG_DMABUF_DEBUG (would
> > > > be good to test with that, just to make sure).
> [Kasireddy, Vivek] Although, I have not tested it yet but it looks like this
> will
>
On Mon, Feb 08, 2021 at 12:07:01PM -0500, Tong Zhang wrote:
> Does this patch fix an issue raised previously? Or should they be used
> together?
> https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg2466541.html
>
> IMHO using this patch alone won’t fix the issue
This patch on top of d
> > You don't have to use the rendering pipeline. You can let the i915 gpu
> > render into a dma-buf shared with virtio-gpu, then use virtio-gpu only for
> > buffer sharing with the host.
> [Kasireddy, Vivek] Is this the most viable path forward? I am not sure how
> complex or
> feasible it woul
Hi,
> This is because of the fundamental concept of DMA-buf that the exporter
> needs to setup mappings (both CPU page tables as well as stuff like IOMMU).
> When the guest exports something it would mean that you give the guest
> control over the IOMMU and/or host page tables. And that is not s
On Fri, Feb 12, 2021 at 08:15:12AM +, Kasireddy, Vivek wrote:
> Hi Gerd,
>
> > > > You don't have to use the rendering pipeline. You can let the i915
> > > > gpu render into a dma-buf shared with virtio-gpu, then use
> > > > virtio-gpu only for buffer sharing with the host.
> [Kasireddy, Vive
Move qxl_io_notify_oom() call into wait condition.
That way the driver will call it again if one call
wasn't enough.
Also allows to remove the extra dma_fence_is_signaled()
check and the goto.
Fixes: 5a838e5d5825 ("drm/qxl: simplify qxl_fence_wait")
Signed-off-by: Gerd Hoffmann
-
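A hedged sketch of that change in qxl_fence_wait(): fold the notification into the condition that wait_event_timeout() re-evaluates, so it gets re-sent on every poll:

static bool qxl_notify_and_check(struct qxl_device *qdev,
				 struct dma_fence *fence)
{
	qxl_io_notify_oom(qdev);	/* ask the host to fill the release ring */
	return dma_fence_is_signaled(fence);
}

static signed long qxl_fence_wait_sketch(struct qxl_device *qdev,
					 struct dma_fence *fence,
					 signed long timeout)
{
	/* re-issues the oom notify on every wakeup until the fence signals */
	if (!wait_event_timeout(qdev->release_event,
				qxl_notify_and_check(qdev, fence),
				timeout))
		return 0;

	return timeout;
}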
Mostly around locking.
Gerd Hoffmann (10):
drm/qxl: properly handle device init failures
drm/qxl: more fence wait rework
drm/qxl: use ttm bo priorities
drm/qxl: fix lockdep issue in qxl_alloc_release_reserved
drm/qxl: rename qxl_bo_kmap -> qxl_bo_kmap_locked
drm/qxl: add qxl_bo_k
Call qxl_bo_unpin (which does a reservation) without holding the
release_mutex lock. Fixes lockdep (correctly) warning on a possible
deadlock.
Fixes: 65ffea3c6e73 ("drm/qxl: unpin release objects")
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/qxl/qxl_release.c | 13 ++-
() picks something which can't be evicted and
throws an error after waiting a while without success.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/qxl/qxl_object.h | 1 +
drivers/gpu/drm/qxl/qxl_cmd.c | 2 +-
drivers/gpu/drm/qxl/qxl_display.c | 4 ++--
drivers/gpu/drm/qxl/qxl_gem.c
Add kmap/kunmap variants which reserve (and pin) the bo.
They can be used in case the caller doesn't hold a reservation
for the bo.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/qxl/qxl_object.h | 2 ++
drivers/gpu/drm/qxl/qxl_object.c | 36
2 files ch
Use the correct kmap variant. We don't have a reservation here,
so we can't use the _locked version.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/qxl/qxl_prime.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/g
Try to avoid re-introducing locking bugs.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/qxl/qxl_object.c | 4
1 file changed, 4 insertions(+)
diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c
index 22748b9566af..90d5e5b7f927 100644
--- a/drivers/gpu/drm/qxl
Use the correct kmap variant. We don't hold a reservation here,
so we can't use the _locked variant. We can drop the pin because
qxl_bo_kmap will do that for us.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/qxl/qxl_display.c | 7 ++-
1 file changed, 2 insertions(+), 5
We don't have to map in atomic_update callback then,
making locking a bit less complicated.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/qxl/qxl_display.c | 14 +-
1 file changed, 5 insertions(+), 9 deletions(-)
diff --git a/drivers/gpu/drm/qxl/qxl_display.c
b/drivers/gp
Make clear that these functions should be called with reserved
bo's only. No functional change.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/qxl/qxl_object.h | 4 ++--
drivers/gpu/drm/qxl/qxl_display.c | 14 +++---
drivers/gpu/drm/qxl/qxl_draw.c| 8
drivers/gp
Specifically, do not try to release resources which were
not allocated in the first place.
Cc: Tong Zhang
Tested-by: Tong Zhang
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/qxl/qxl_display.c | 3 +++
drivers/gpu/drm/qxl/qxl_kms.c | 4
2 files changed, 7 insertions(+)
diff --git a
> v2:
> * convert to drm_shadow_plane_state helpers
Looks all sane to me.
Acked-by: Gerd Hoffmann
On Tue, Feb 16, 2021 at 02:46:21PM +0100, Thomas Zimmermann wrote:
>
>
> Am 16.02.21 um 14:27 schrieb Thomas Zimmermann:
> > Hi
> >
> > this is a shadow-buffered plane. Did you consider using the new helpers
> > for shadow-buffered planes? They will map the user BO for you and
> > provide the ma
Mostly around locking.
v2:
- use 'vmap' instead of 'kmap'.
- rework cursor update workflow.
Gerd Hoffmann (11):
drm/qxl: properly handle device init failures
drm/qxl: more fence wait rework
drm/qxl: use ttm bo priorities
drm/qxl: fix lockdep issue in qxl_alloc_re
Specifically, do not try to release resources which were
not allocated in the first place.
Cc: Tong Zhang
Tested-by: Tong Zhang
Signed-off-by: Gerd Hoffmann
Acked-by: Thomas Zimmermann
---
drivers/gpu/drm/qxl/qxl_display.c | 3 +++
drivers/gpu/drm/qxl/qxl_kms.c | 4
2 files changed, 7
Use the correct vmap variant. We don't have a reservation here,
so we can't use the _locked version.
Signed-off-by: Gerd Hoffmann
Acked-by: Thomas Zimmermann
---
drivers/gpu/drm/qxl/qxl_prime.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/g
Add helper functions to create and move the cursor.
Create the cursor_bo in the prepare_fb callback; in the
atomic_commit callback we only send the update command
to the host.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/qxl/qxl_display.c | 248 --
1 file changed
Call qxl_bo_unpin (which does a reservation) without holding the
release_mutex lock. Fixes lockdep (correctly) warning on a possible
deadlock.
Fixes: 65ffea3c6e73 ("drm/qxl: unpin release objects")
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/qxl/qxl_release.c | 13 ++-
() picks something which can't be evicted and
throws an error after waiting a while without success.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/qxl/qxl_object.h | 1 +
drivers/gpu/drm/qxl/qxl_cmd.c | 2 +-
drivers/gpu/drm/qxl/qxl_display.c | 4 ++--
drivers/gpu/drm/qxl/qxl_gem.c
Add vmap/vunmap variants which reserve (and pin) the bo.
They can be used in case the caller doesn't hold a reservation
for the bo.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/qxl/qxl_object.h | 2 ++
drivers/gpu/drm/qxl/qxl_object.c | 36
2 files ch
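A hedged sketch of what such a wrapper looks like, using the dma_buf_map-based vmap interface of that kernel generation; whether the pin happens here or inside the _locked helper is an assumption:

int qxl_bo_vmap(struct qxl_bo *bo, struct dma_buf_map *map)
{
	int r;

	r = qxl_bo_reserve(bo);		/* take the reservation for the caller */
	if (r)
		return r;

	r = qxl_bo_vmap_locked(bo, map);	/* assumed to pin + map */
	qxl_bo_unreserve(bo);
	return r;
}

int qxl_bo_vunmap(struct qxl_bo *bo)
{
	int r;

	r = qxl_bo_reserve(bo);
	if (r)
		return r;

	qxl_bo_vunmap_locked(bo);	/* assumed to unmap + unpin */
	qxl_bo_unreserve(bo);
	return 0;
}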
Try to avoid re-introducing locking bugs.
Signed-off-by: Gerd Hoffmann
Acked-by: Thomas Zimmermann
---
drivers/gpu/drm/qxl/qxl_object.c | 4
1 file changed, 4 insertions(+)
diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c
index 82c3bf195ad6..6e26d70f2f07
Move qxl_io_notify_oom() call into wait condition.
That way the driver will call it again if one call
wasn't enough.
Also allows to remove the extra dma_fence_is_signaled()
check and the goto.
Fixes: 5a838e5d5825 ("drm/qxl: simplify qxl_fence_wait")
Signed-off-by: Gerd Hoffmann
-
Append _locked to make clear that these functions should be called with
reserved bo's only. While at it, also rename kmap -> vmap.
No functional change.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/qxl/qxl_object.h | 4 ++--
drivers/gpu/drm/qxl/qxl_displa
Pure code motion, no functional change.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/qxl/qxl_display.c | 61 +--
1 file changed, 34 insertions(+), 27 deletions(-)
diff --git a/drivers/gpu/drm/qxl/qxl_display.c
b/drivers/gpu/drm/qxl/qxl_display.c
index
Use the correct vmap variant. We don't hold a reservation here,
so we can't use the _locked variant. We can drop the pin because
qxl_bo_vmap will do that for us.
Signed-off-by: Gerd Hoffmann
---
drivers/gpu/drm/qxl/qxl_display.c | 7 ++-
1 file changed, 2 insertions(+), 5
Hi,
> I'm still trying to wrap my head around the qxl cursor code.
>
> Getting vmap out of the commit tail is good, but I feel like this isn't
> going in the right direction overall.
>
> In ast, these helper functions were only good when converting the driver to
> atomic modesetting. So I remo
Hi,
> > Well. I suspect I could easily spend a month cleaning up and partly
> > redesign the qxl driver (specifically qxl_draw.c + qxl_image.c).
> >
> > I'm not sure I'll find the time to actually do that anytime soon.
> > I have plenty of other stuff on my TODO list, and given that the
> > wor
On Thu, Feb 25, 2021 at 10:09:42AM +0100, Daniel Vetter wrote:
> On Wed, Feb 24, 2021 at 11:55 AM Sumera Priyadarsini
> wrote:
> >
> > Add a virtual hardware or vblank-less mode as a module to enable
> > VKMS to emulate virtual graphic drivers. This mode can be enabled
> > by setting enable_virtua
On Thu, Feb 25, 2021 at 11:32:08AM +0100, Daniel Vetter wrote:
> On Thu, Feb 25, 2021 at 11:25:20AM +0100, Gerd Hoffmann wrote:
> > On Thu, Feb 25, 2021 at 10:09:42AM +0100, Daniel Vetter wrote:
> > > On Wed, Feb 24, 2021 at 11:55 AM Sumera Priyadarsini
> > > wrote:
>
On Thu, Mar 04, 2021 at 08:42:55AM +0100, Thomas Zimmermann wrote:
> (cc'ing Gerd)
>
> This might be related to the recent clean-up patches for the BO handling in
> qxl.
Yes, it is. Fixed in drm-misc-next, cherry-picked into drm-misc-fixes,
hopefully lands in -rc2.
take care,
Gerd
r pointless, qxl fix for this one
is already queued in drm-misc-fixes so this would only land after the
qxl fixes ...
But I think using WARN_ON_ONCE() is a good idea in general, especially
in a code path like this where a single bug can easily cause a flood of
stack traces.
On Thu, Mar 04, 2021 at 09:49:28AM +, Colin King wrote:
> From: Colin Ian King
>
> The surface_id struct field in head is not being initialized and
> static analysis warns that this is being passed through to
> dev->monitors_config->heads[i] on an assignment. Clear up this
> warning by initia
On Fri, Mar 05, 2021 at 11:18:19PM +0800, xndcn wrote:
> virtio_gpu_object array is not freed or unlocked in some
> failed cases.
Pushed to drm-misc-next.
thanks,
Gerd