On Wed, Mar 23, 2022 at 4:01 AM Liu Zixian wrote:
> diff --git a/drivers/gpu/drm/virtio/virtgpu_display.c
> b/drivers/gpu/drm/virtio/virtgpu_display.c
> index 5b00310ac..f73352e7b 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_display.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_display.c
> @@ -179,6
Fixes: b7a5e10962 ("virtio-gpu: add 3d/virgl support")
> Signed-off-by: Xiaomeng Tong
Reviewed-by: Chia-I Wu
> ---
> drivers/gpu/drm/virtio/virtgpu_ioctl.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
>
On Fri, Feb 18, 2022 at 7:57 AM Rob Clark wrote:
>
> From: Rob Clark
>
> With native userspace drivers in guest, a lot of GEM objects need to be
> neither shared nor mappable. And in fact making everything mappable
> and/or sharable results in unreasonably high fd usage in host VMM.
>
> Signed-off-by: Rob Clark
Reviewed-by: Chia-I Wu
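For context, a rough userspace-side sketch of what "neither shared nor mappable" means at the blob uapi level: the object is created with no USE_MAPPABLE/USE_SHAREABLE blob flags, so the host VMM never needs to hand out a mapping or an fd for it. Structure and flag names follow include/uapi/drm/virtgpu_drm.h as I understand it; treat the details (especially the elided blob_id/cmd plumbing) as illustrative only.

#include <stdint.h>
#include <xf86drm.h>
#include "virtgpu_drm.h"	/* uapi header path depends on your setup */

static int create_host_only_blob(int fd, uint64_t size, uint32_t *handle)
{
	struct drm_virtgpu_resource_create_blob blob = {
		.blob_mem = VIRTGPU_BLOB_MEM_HOST3D,
		.blob_flags = 0,	/* no USE_MAPPABLE, no USE_SHAREABLE */
		.size = size,
		/* blob_id/cmd plumbing tying this to a host allocation is elided */
	};
	int ret;

	ret = drmIoctl(fd, DRM_IOCTL_VIRTGPU_RESOURCE_CREATE_BLOB, &blob);
	if (ret == 0)
		*handle = blob.bo_handle;
	return ret;
}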
On Fri, Feb 18, 2022 at 9:51 AM Rob Clark wrote:
>
> On Fri, Feb 18, 2022 at 8:42 AM Chia-I Wu wrote:
> >
> > On Fri, Feb 18, 2022 at 7:57 AM Rob Clark wrote:
> > >
> > > From: Rob Clark
> > >
> > > With native userspace drivers in guest
On Wed, Sep 8, 2021 at 6:37 PM Gurchetan Singh
wrote:
>
> We don't want fences from different 3D contexts (virgl, gfxstream,
> venus) to be on the same timeline. With explicit context creation,
> we can specify the number of rings each context wants.
>
> Execbuffer can specify which ring to use.
>
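For context, a minimal userspace sketch of the flow described above, using the context-init uapi posted with this series (field and flag names taken from include/uapi/drm/virtgpu_drm.h; treat them as illustrative, not authoritative):

#include <stdint.h>
#include <xf86drm.h>
#include "virtgpu_drm.h"	/* uapi header path depends on your setup */

static int init_ctx_with_rings(int fd, unsigned int num_rings)
{
	struct drm_virtgpu_context_set_param params[] = {
		{ .param = VIRTGPU_CONTEXT_PARAM_NUM_RINGS, .value = num_rings },
	};
	struct drm_virtgpu_context_init init = {
		.num_params = 1,
		.ctx_set_params = (uintptr_t)params,
	};

	/* each ring gets its own fence timeline on the host */
	return drmIoctl(fd, DRM_IOCTL_VIRTGPU_CONTEXT_INIT, &init);
}

static int submit_on_ring(int fd, void *cmds, uint32_t size, uint32_t ring)
{
	struct drm_virtgpu_execbuffer eb = {
		.flags = VIRTGPU_EXECBUF_RING_IDX,
		.size = size,
		.command = (uintptr_t)cmds,
		.ring_idx = ring,	/* fences from this submit stay on this ring's timeline */
	};

	return drmIoctl(fd, DRM_IOCTL_VIRTGPU_EXECBUFFER, &eb);
}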
.
On Mon, Sep 13, 2021 at 10:48 AM Gurchetan Singh
wrote:
>
>
>
> On Fri, Sep 10, 2021 at 12:33 PM Chia-I Wu wrote:
>>
>> On Wed, Sep 8, 2021 at 6:37 PM Gurchetan Singh
>> wrote:
>> >
>> > We don't want fences from different 3D contexts (vir
On Mon, Sep 13, 2021 at 6:57 PM Gurchetan Singh
wrote:
>
>
>
>
> On Mon, Sep 13, 2021 at 11:52 AM Chia-I Wu wrote:
>>
>> .
>>
>> On Mon, Sep 13, 2021 at 10:48 AM Gurchetan Singh
>> wrote:
>> >
>> >
>> >
>> > On F
On Tue, Sep 14, 2021 at 6:26 PM Gurchetan Singh
wrote:
>
>
>
> On Tue, Sep 14, 2021 at 10:53 AM Chia-I Wu wrote:
>>
>> On Mon, Sep 13, 2021 at 6:57 PM Gurchetan Singh
>> wrote:
>> >
>> >
>> >
>> >
>> > On Mon, Sep 1
On Thu, Oct 21, 2021 at 4:52 AM Gerd Hoffmann wrote:
>
> On Thu, Oct 21, 2021 at 11:55:47AM +0200, Maksym Wezdecki wrote:
> > I get your point. However, we need to make a resource_create ioctl
> > in order to create the corresponding resource on the host.
>
> That used to be the case but isn't true anymore.
> [...] extend the protocol while maintaining backward compatibility.
>
> > What do you think about that?
>
> I still think that switching to blob resources would be the better
> solution. Yes, it's a lot of work and not something which helps
> short-term. But adding a new API
Add Gurchetan Singh and me as reviewers for virtio-gpu.
Signed-off-by: Chia-I Wu
Acked-by: Gurchetan Singh
Cc: David Airlie
Cc: Gerd Hoffmann
---
MAINTAINERS | 2 ++
1 file changed, 2 insertions(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index 3b79fd441dde..5474a0a708a8 100644
--- a
On Tue, Nov 2, 2021 at 6:07 AM Gerd Hoffmann wrote:
>
> On Tue, Nov 02, 2021 at 12:31:39PM +0100, Maksym Wezdecki wrote:
> > From: mwezdeck
> >
> > The idea behind the commit:
> > 1. not pin the pages during resource_create ioctl
> > 2. pin the pages on the first use during:
> > - trans
> ---
> v2: I originally sent this patch on 19 Jun 2020 but it was somehow
> not applied. As I review it now, I see that the bug is actually
> older than I originally thought and so I have updated the Fixes
> tag.
Reviewed-by: Chia-I Wu
They trigger the BUG_ON() in drm_gem_private_object_init otherwise.
Signed-off-by: Chia-I Wu
Cc: Gurchetan Singh
Cc: Gerd Hoffmann
---
drivers/gpu/drm/virtio/virtgpu_vram.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/gpu/drm/virtio/virtgpu_vram.c
b/drivers/gpu/drm/virtio
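For context on why the two added lines are needed: drm_gem_private_object_init() contains BUG_ON((size & (PAGE_SIZE - 1)) != 0), so a VRAM blob size coming from userspace has to be validated or page-aligned before the object is initialized. A hedged sketch of the constraint, not the actual hunk:

#include <linux/errno.h>
#include <linux/mm.h>
#include <drm/drm_gem.h>

/* hypothetical helper for illustration only */
static int virtio_gpu_vram_init_sketch(struct drm_device *dev,
				       struct drm_gem_object *obj, size_t size)
{
	if (!size)
		return -EINVAL;
	size = PAGE_ALIGN(size);	/* avoids BUG_ON((size & (PAGE_SIZE - 1)) != 0) */
	drm_gem_private_object_init(dev, obj, size);
	return 0;
}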
The context might still be missing when DRM_IOCTL_PRIME_FD_TO_HANDLE is
the first ioctl on the drm_file.
Fixes: 72b48ae800da ("drm/virtio: enqueue virtio_gpu_create_context after the
first 3D ioctl")
Cc: Gurchetan Singh
Cc: Gerd Hoffmann
Signed-off-by: Chia-I Wu
---
drivers/gpu/
Gurchetan Singh
Cc: Thomas Zimmermann
Cc: Gerd Hoffmann
Signed-off-by: Chia-I Wu
---
drivers/gpu/drm/virtio/virtgpu_vram.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/gpu/drm/virtio/virtgpu_vram.c
b/drivers/gpu/drm/virtio/virtgpu_vram.c
index d6f215c4ff8d..5cc34e7330fa 100644
-
(Add missing CCs)
On Mon, Apr 29, 2019 at 3:08 PM Chia-I Wu wrote:
>
> This is motivated by having meaningful ftrace events, but it also
> fixes use cases where dma_fence_is_later is called, such as in
> sync_file_merge.
>
> In other drivers, fence creation and cmdbuf s
Suggested-by: Emil Velikov
Reviewed-by: Chia-I Wu
> ---
>
> This patch was suggested in this email thread:
>
> [PATCH] drm/virtio: allocate fences with GFP_KERNEL
> https://www.spinics.net/lists/dri-devel/msg208536.html
>
> drivers/gpu/drm/virtio/virtgpu_drv.h | 2 +-
> dri
Hi,
I am still new to virgl, and missed the last round of discussion about
resource_create_v2.
From the discussion below, semantically resource_create_v2 creates a host
resource object _without_ any storage; memory_create creates a host memory
object which provides the storage. Is that correct?
On Wed, Apr 17, 2019 at 2:57 AM Gerd Hoffmann wrote:
>
> On Fri, Apr 12, 2019 at 04:34:20PM -0700, Chia-I Wu wrote:
> > Hi,
> >
> > I am still new to virgl, and missed the last round of discussion about
> > resource_create_v2.
> >
> > From the discuss
On Fri, Apr 19, 2019 at 9:21 AM Emil Velikov wrote:
>
> On Fri, 19 Apr 2019 at 16:57, Marc-André Lureau
> wrote:
> >
> > This patch does more harm than good, as it breaks both Xwayland and
> > gnome-shell with X11.
> >
> > Xwayland requires DRI3 & DRI3 requires PRIME.
> >
> > X11 crash for obscur
Signed-off-by: Chia-I Wu
---
drivers/gpu/drm/virtio/virtgpu_drv.h | 1 -
drivers/gpu/drm/virtio/virtgpu_fence.c | 17 ++---
2 files changed, 10 insertions(+), 8 deletions(-)
diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h
b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 491dec071
For most drivers, drm_fence_init is followed by drm_fence_emit
immediately. But for our driver, they are done separately. We also
don't know the fence seqno until drm_fence_emit.
Signed-off-by: Chia-I Wu
---
drivers/gpu/drm/virtio/virtgpu_fence.c | 3 +++
1 file changed, 3 insertions(+)
Trace when commands are queued for both ctrlq and cursorq. Trace
when responses are received for ctrlq.
Signed-off-by: Chia-I Wu
---
drivers/gpu/drm/virtio/Makefile | 2 +-
drivers/gpu/drm/virtio/virtgpu_trace.h| 52 +++
drivers/gpu/drm/virtio
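Roughly what such tracepoints look like: one event class shared by the "command queued" and "response received" events. This is a simplified sketch (the real virtgpu_trace.h may differ in fields, and the usual TRACE_SYSTEM/define_trace.h boilerplate is omitted):

#include <linux/tracepoint.h>
#include <linux/virtio.h>
#include <uapi/linux/virtio_gpu.h>

DECLARE_EVENT_CLASS(virtio_gpu_cmd,
	TP_PROTO(struct virtqueue *vq, struct virtio_gpu_ctrl_hdr *hdr),
	TP_ARGS(vq, hdr),
	TP_STRUCT__entry(
		__field(int, dev)
		__field(unsigned int, vq_index)
		__field(u32, type)
		__field(u64, fence_id)
	),
	TP_fast_assign(
		__entry->dev = vq->vdev->index;
		__entry->vq_index = vq->index;
		__entry->type = le32_to_cpu(hdr->type);
		__entry->fence_id = le64_to_cpu(hdr->fence_id);
	),
	TP_printk("vdev=%d vq=%u type=0x%x fence_id=%llu",
		  __entry->dev, __entry->vq_index, __entry->type,
		  (unsigned long long)__entry->fence_id)
);

DEFINE_EVENT(virtio_gpu_cmd, virtio_gpu_cmd_queue,
	TP_PROTO(struct virtqueue *vq, struct virtio_gpu_ctrl_hdr *hdr),
	TP_ARGS(vq, hdr));

DEFINE_EVENT(virtio_gpu_cmd, virtio_gpu_cmd_response,
	TP_PROTO(struct virtqueue *vq, struct virtio_gpu_ctrl_hdr *hdr),
	TP_ARGS(vq, hdr));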
It was changed to GFP_ATOMIC in commit ec2f0577c (add & use
virtio_gpu_queue_fenced_ctrl_buffer) because the allocation happened
with a spinlock held. That was no longer true after commit
9fdd90c0f (add virtio_gpu_alloc_fence()).
Signed-off-by: Chia-I Wu
Cc: Gerd Hoffmann
Cc: Gustavo Pad
It gets the generic states from the adreno core.
This also adds a missing NULL check in msm_gpu_open.
Signed-off-by: Chia-I Wu
---
drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 44 +++
drivers/gpu/drm/msm/msm_debugfs.c | 2 +-
2 files changed, 45 insertions(+), 1
memptrs_bo is used to store msm_rbmemptrs. Size it correctly.
Signed-off-by: Chia-I Wu
---
drivers/gpu/drm/msm/msm_gpu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index 11aac8337066..d23049eb29c4 100644
Ah, thanks. I was on drm-next branch. I will switch to msm-next.
On Thu, Dec 20, 2018 at 11:47 AM Jordan Crouse
wrote:
> On Thu, Dec 20, 2018 at 10:47:02AM -0800, Chia-I Wu wrote:
> > memptrs_bo is used to store msm_rbmemptrs. Size it correctly.
> >
> > Signed-off-by:
On Wed, Jun 19, 2019 at 11:07 PM Gerd Hoffmann wrote:
>
> Use drm_gem_reservation_object_wait() in virtio_gpu_wait_ioctl().
> This also makes the ioctl run lockless.
Userspace has a BO cache to avoid freeing BOs immediately so that they can
be reused for later allocations. The BO cache checks if a BO is
On Wed, Jun 19, 2019 at 11:08 PM Gerd Hoffmann wrote:
>
> Some helper functions to manage an array of gem objects.
>
> v4: make them virtio-private instead of generic helpers.
>
> Signed-off-by: Gerd Hoffmann
> ---
> drivers/gpu/drm/virtio/virtgpu_drv.h | 10 ++
> drivers/gpu/drm/virtio/virt
On Wed, Jun 19, 2019 at 11:08 PM Gerd Hoffmann wrote:
>
> Use gem reservation helpers and direct reservation_object_* calls
> instead of ttm.
>
> v3: Also attach the array of gem objects to the virtio command buffer,
> so we can drop the object references in the completion callback. Needed
> beca
On Wed, Jun 19, 2019 at 11:07 PM Gerd Hoffmann wrote:
>
> Use gem reservation helpers and direct reservation_object_* calls
> instead of ttm.
>
> v3: Due to using the gem reservation object it is initialized and ready
> for use before calling ttm_bo_init, so we can also drop the tricky fence
> log
.
On Wed, Jun 19, 2019 at 11:08 PM Gerd Hoffmann wrote:
>
> virtio-gpu basically needs a sg_table for the bo, to tell the host where
> the backing pages for the object are. So the gem shmem helpers are a
> perfect fit. Some drm_gem_object_funcs need thin wrappers to update the
> host state, but
I tried my best to review this series. I am not really a kernel dev
so please take that with a grain of salt.
On Wed, Jun 19, 2019 at 11:01 PM Gerd Hoffmann wrote:
>
> Hi,
>
> > Also, I strongly recommend you do a very basic igt to exercise this, i.e.
> > allocate some buffers, submit them in
On Fri, Jun 28, 2019 at 5:13 AM Gerd Hoffmann wrote:
>
> Use gem reservation helpers and direct reservation_object_* calls
> instead of ttm.
>
> v5: fix fencing (Chia-I Wu).
> v3: Also attach the array of gem objects to the virtio command buffer,
> so we can drop the obj
On Fri, Jun 28, 2019 at 5:13 AM Gerd Hoffmann wrote:
>
> Use gem reservation helpers and direct reservation_object_* calls
> instead of ttm.
>
> v5: fix fencing (Chia-I Wu).
> v3: Due to using the gem reservation object it is initialized and ready
> for use before calling t
(pressed Send too early)
On Sun, Jun 30, 2019 at 11:20 AM Chia-I Wu wrote:
>
> On Fri, Jun 28, 2019 at 5:13 AM Gerd Hoffmann wrote:
> >
> > Use gem reservation helpers and direct reservation_object_* calls
> > instead of ttm.
> >
> > v5: fix fencing (Chia-I W
On Fri, Jun 28, 2019 at 3:49 AM Gerd Hoffmann wrote:
>
> > > static inline struct virtio_gpu_object*
> > > virtio_gpu_object_ref(struct virtio_gpu_object *bo)
>
> > The last users of these two helpers are removed with this patch. We
> > can remove them.
>
> patch 12/12 does that.
I meant virtio
On Fri, Jun 28, 2019 at 3:34 AM Gerd Hoffmann wrote:
>
> Hi,
>
> > > --- a/drivers/gpu/drm/virtio/virtgpu_drv.h
> > > +++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
> > > @@ -120,9 +120,9 @@ struct virtio_gpu_vbuffer {
> > >
> > > char *resp_buf;
> > > int resp_size;
> > > -
> > >
On Fri, Jun 28, 2019 at 3:05 AM Gerd Hoffmann wrote:
>
> On Wed, Jun 26, 2019 at 04:55:20PM -0700, Chia-I Wu wrote:
> > On Wed, Jun 19, 2019 at 11:07 PM Gerd Hoffmann wrote:
> > >
> > > Use drm_gem_reservation_object_wait() in virtio_gpu_wait_ioctl().
> > >
On Mon, Jul 1, 2019 at 11:04 AM Gurchetan Singh
wrote:
>
>
>
> On Fri, Jun 28, 2019 at 5:14 AM Gerd Hoffmann wrote:
> >
> > Use gem reservation helpers and direct reservation_object_* calls
> > instead of ttm.
> >
> > v5: fix fencing (Chia-I Wu).
>
On Tue, Jul 2, 2019 at 7:19 AM Gerd Hoffmann wrote:
>
> Call reservation_object_* directly instead
> of using ttm_bo_{reserve,unreserve}.
>
> v4: check for EINTR only.
> v3: check for EINTR too.
>
> Signed-off-by: Gerd Hoffmann
> Reviewed-by: Daniel Vetter
> ---
> drivers/gpu/drm/virtio/virtgpu
> v5: some small optimizations (Chia-I Wu).
> v4: make them virtio-private instead of generic helpers.
>
> Signed-off-by: Gerd Hoffmann
> ---
> drivers/gpu/drm/virtio/virtgpu_drv.h | 17 ++
> drivers/gpu/drm/virtio/virtgpu_gem.c | 83
> 2 file
On Tue, Jul 2, 2019 at 7:19 AM Gerd Hoffmann wrote:
>
> Rework fencing workflow, starting with virtio_gpu_execbuffer_ioctl.
> Stop using ttm helpers, use the virtio_gpu_array_* helpers (which work
> on the reservation objects directly) instead.
>
> New workflow:
>
> (1) All gem objects needed by
On Wed, Jul 3, 2019 at 11:31 AM Chia-I Wu wrote:
>
> On Tue, Jul 2, 2019 at 7:19 AM Gerd Hoffmann wrote:
> >
> > Some helper functions to manage an array of gem objects.
> >
> > v6:
> > - add ticket to struct virtio_gpu_object_array.
> > - add v
On Tue, Jul 2, 2019 at 7:19 AM Gerd Hoffmann wrote:
>
> Switch to the virtio_gpu_array_* helper workflow.
>
> Signed-off-by: Gerd Hoffmann
> ---
> drivers/gpu/drm/virtio/virtgpu_drv.h | 2 +-
> drivers/gpu/drm/virtio/virtgpu_ioctl.c | 43 --
> drivers/gpu/drm/virtio/vi
On Tue, Jul 2, 2019 at 7:19 AM Gerd Hoffmann wrote:
>
> Switch to the virtio_gpu_array_* helper workflow.
(just repeating my question on patch 6)
Does this fix the obj refcount issue? When was the issue introduced?
>
> Signed-off-by: Gerd Hoffmann
> ---
> drivers/gpu/drm/virtio/virtgpu_drv.h
On Thu, Jul 4, 2019 at 4:25 AM Gerd Hoffmann wrote:
>
> Hi,
>
> > > if (fence)
> > > virtio_gpu_fence_emit(vgdev, hdr, fence);
> > > + if (vbuf->objs) {
> > > + virtio_gpu_array_add_fence(vbuf->objs, &fence->f);
> > > + virtio_gpu_array_u
On Thu, Jul 4, 2019 at 4:48 AM Gerd Hoffmann wrote:
>
> On Wed, Jul 03, 2019 at 01:05:12PM -0700, Chia-I Wu wrote:
> > On Tue, Jul 2, 2019 at 7:19 AM Gerd Hoffmann wrote:
> > >
> > > Switch to the virtio_gpu_array_* helper workflow.
> > (just repeating my qu
On Thu, Jul 4, 2019 at 4:51 AM Gerd Hoffmann wrote:
>
> Hi,
>
> > > convert_to_hw_box(&box, &args->box);
> > > if (!vgdev->has_virgl_3d) {
> > > virtio_gpu_cmd_transfer_to_host_2d
> > > - (vgdev, qobj, offset,
> > > + (v
On Thu, Jul 4, 2019 at 4:10 AM Gerd Hoffmann wrote:
>
> Hi,
>
> > > - r = ttm_bo_reserve(&bo->tbo, true, false, NULL);
> > > + r = reservation_object_lock_interruptible(bo->gem_base.resv,
> > > NULL);
> > Can you elaborate a bit about how TTM keeps the BOs alive in, for
> > example,
On Fri, Jul 5, 2019 at 1:53 AM Gerd Hoffmann wrote:
>
> On Thu, Jul 04, 2019 at 12:17:48PM -0700, Chia-I Wu wrote:
> > On Thu, Jul 4, 2019 at 4:10 AM Gerd Hoffmann wrote:
> > >
> > > Hi,
> > >
> > > > > - r = ttm_bo_res
virtio_gpu_dequeue_ctrl_func. If virtqueue_notify was called with
the vq lock held, the worker thread would busy wait inside
virtio_gpu_dequeue_ctrl_func.
Signed-off-by: Chia-I Wu
---
drivers/gpu/drm/virtio/virtgpu_vq.c | 19 +--
1 file changed, 13 insertions(+), 6 deletions(-)
diff --git a/drivers
On Thu, Jul 4, 2019 at 11:46 AM Chia-I Wu wrote:
>
> On Thu, Jul 4, 2019 at 4:25 AM Gerd Hoffmann wrote:
> >
> > Hi,
> >
> > > > if (fence)
> > > > virtio_gpu_fence_emit(vg
virtio_gpu_dequeue_ctrl_func. If virtqueue_notify was called with
the vq lock held, the worker thread would have to busy wait inside
virtio_gpu_dequeue_ctrl_func.
v2: fix scrambled commit message
Signed-off-by: Chia-I Wu
---
drivers/gpu/drm/virtio/virtgpu_vq.c | 19 +--
1 file changed, 13
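A sketch of the ordering this change is after (simplified, with the ctrlq vq/qlock fields the driver already has; not the actual hunk): prepare the kick while the queue lock is held, but issue the host notification only after dropping it, so the dequeue worker that takes the same lock never spins behind virtqueue_notify().

#include <linux/virtio.h>
#include <linux/scatterlist.h>
#include <linux/spinlock.h>

/* hypothetical wrapper for illustration only */
static void queue_and_notify_sketch(struct virtio_gpu_device *vgdev,
				    struct scatterlist **sgs,
				    unsigned int outcnt, unsigned int incnt,
				    void *data)
{
	struct virtqueue *vq = vgdev->ctrlq.vq;
	bool notify;
	int ret;

	spin_lock(&vgdev->ctrlq.qlock);
	ret = virtqueue_add_sgs(vq, sgs, outcnt, incnt, data, GFP_ATOMIC);
	notify = (ret == 0) && virtqueue_kick_prepare(vq);
	spin_unlock(&vgdev->ctrlq.qlock);

	if (notify)
		virtqueue_notify(vq);	/* possibly a vmexit, now without the lock */
}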
On Tue, Jul 2, 2019 at 7:19 AM Gerd Hoffmann wrote:
>
> virtio-gpu basically needs a sg_table for the bo, to tell the host where
> the backing pages for the object are. So the gem shmem helpers are a
> perfect fit. Some drm_gem_object_funcs need thin wrappers to update the
> host state, but othe
>
> Also, add a modparam override for debugging and igt.
>
> v2: Send the right version of the patch (ie. the one that actually
> compiles)
>
> Signed-off-by: Rob Clark
Reviewed-by: Chia-I Wu
("drm/virtio: move virtio_gpu_mem_entry initialization to
> new function")
> Signed-off-by: Miaoqian Lin
> ---
> changes in v2:
> - Update Fixes tag.
> - rebase the working tree.
> v1 Link:
> https://lore.kernel.org/all/20211222072649.18169-1-linmq...@gmail.com/
Reviewed-by: Chia-I Wu
virtgpu_object.c | 1 +
> include/drm/drm_gem_shmem_helper.h | 2 ++
> 6 files changed, 10 insertions(+), 3 deletions(-)
Reviewed-by: Chia-I Wu
>
> --
> 2.34.1
>
On Tue, Apr 12, 2022 at 1:48 PM Chia-I Wu wrote:
>
> drm_sched_job and drm_run_job have the same prototype.
>
> v2: rename the class from drm_sched_job_entity to drm_sched_job (Andrey)
>
> Signed-off-by: Chia-I Wu
> Cc: Rob Clark
> Reviewed-by: Andrey Grodzovsky
This
On Tue, Apr 12, 2022 at 2:26 PM Chia-I Wu wrote:
>
> In practice, trace_dma_fence_init called from dma_fence_init is good
> enough and almost no driver calls trace_dma_fence_emit. But drm_sched
> and virtio both have cases where trace_dma_fence_init and
> trace_dma_fence_emit ca
König
> >>> wrote:
> >>>> Am 26.04.22 um 18:32 schrieb Chia-I Wu:
> >>>>> On Tue, Apr 12, 2022 at 2:26 PM Chia-I Wu wrote:
> >>>>>> In practice, trace_dma_fence_init called from dma_fence_init is good
> >>>
On Tue, Apr 26, 2022 at 11:02 AM Christian König
wrote:
>
> Am 26.04.22 um 19:40 schrieb Chia-I Wu:
> > [SNIP]
> >>>> Well I just send a patch to completely remove the trace point.
> >>>>
> >>>> As I said it absolutely doesn't ma
That would be great. I don't have push permission.
On Tue, Apr 26, 2022 at 11:25 AM Andrey Grodzovsky
wrote:
>
> It's ok to land but it wasn't, do you have push permissions to
> drm-misc-next ? If not, I will do it for you.
>
> Andrey
>
> On 2022-04-26 12:29,
On Wed, Apr 27, 2022 at 9:07 AM Rob Clark wrote:
>
> On Tue, Apr 26, 2022 at 11:20 PM Christian König
> wrote:
> >
> > Am 26.04.22 um 20:50 schrieb Chia-I Wu:
> > > On Tue, Apr 26, 2022 at 11:02 AM Christian König
> > > wrote:
> > >> Am
Signed-off-by: Rob Clark
Reviewed-by: Chia-I Wu
Might want to wait for Gurchetan to chime in as he added the mechanism.
> ---
> drivers/gpu/drm/virtio/virtgpu_ioctl.c | 8 +---
> include/uapi/drm/virtgpu_drm.h | 2 ++
> 2 files changed, 7 insertions(+), 3 deletions(-
drm_sched_job and drm_run_job have the same prototype.
Signed-off-by: Chia-I Wu
Cc: Rob Clark
---
.../gpu/drm/scheduler/gpu_scheduler_trace.h | 31 +--
1 file changed, 7 insertions(+), 24 deletions(-)
diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler_trace.h
b/drivers
Otherwise, ring names are marked [UNSAFE-MEMORY].
Signed-off-by: Chia-I Wu
Cc: Rob Clark
---
drivers/gpu/drm/scheduler/gpu_scheduler_trace.h | 12 ++--
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler_trace.h
b/drivers/gpu/drm
In practice, trace_dma_fence_init is good enough and almost no driver
calls trace_dma_fence_emit. But this is still more correct in theory.
Signed-off-by: Chia-I Wu
Cc: Rob Clark
---
drivers/gpu/drm/msm/msm_gpu.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/gpu/drm/msm
On Sat, Apr 9, 2022 at 7:33 AM Christian König wrote:
>
> Am 08.04.22 um 23:12 schrieb Chia-I Wu:
> > In practice, trace_dma_fence_init is good enough and almost no driver
> > calls trace_dma_fence_emit. But this is still more correct in theory.
>
> Well, the reason wh
drm_sched_job and drm_run_job have the same prototype.
v2: rename the class from drm_sched_job_entity to drm_sched_job (Andrey)
Signed-off-by: Chia-I Wu
Cc: Rob Clark
Reviewed-by: Andrey Grodzovsky
---
.../gpu/drm/scheduler/gpu_scheduler_trace.h | 31 +--
1 file changed, 7
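The deduplication boils down to the standard event-class pattern: one DECLARE_EVENT_CLASS plus one DEFINE_EVENT per tracepoint that shares the prototype. A simplified sketch (the real gpu_scheduler_trace.h records more fields):

#include <linux/tracepoint.h>
#include <drm/gpu_scheduler.h>

DECLARE_EVENT_CLASS(drm_sched_job,
	TP_PROTO(struct drm_sched_job *sched_job, struct drm_sched_entity *entity),
	TP_ARGS(sched_job, entity),
	TP_STRUCT__entry(
		__field(u64, id)
		__field(u32, job_count)
	),
	TP_fast_assign(
		__entry->id = sched_job->id;
		__entry->job_count = spsc_queue_count(&entity->job_queue);
	),
	TP_printk("id=%llu, job count:%u",
		  (unsigned long long)__entry->id, __entry->job_count)
);

DEFINE_EVENT(drm_sched_job, drm_sched_job,
	TP_PROTO(struct drm_sched_job *sched_job, struct drm_sched_entity *entity),
	TP_ARGS(sched_job, entity));

DEFINE_EVENT(drm_sched_job, drm_run_job,
	TP_PROTO(struct drm_sched_job *sched_job, struct drm_sched_entity *entity),
	TP_ARGS(sched_job, entity));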
Otherwise, ring names are marked [UNSAFE-MEMORY].
Signed-off-by: Chia-I Wu
Cc: Rob Clark
Reviewed-by: Andrey Grodzovsky
---
drivers/gpu/drm/scheduler/gpu_scheduler_trace.h | 12 ++--
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/scheduler
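The [UNSAFE-MEMORY] marking comes from recording a bare string pointer in the trace entry; the fix is to copy the ring name into the ring buffer with __string()/__assign_str(). A sketch, reduced to a single hypothetical event:

TRACE_EVENT(drm_sched_ring_name_sketch,	/* hypothetical event for illustration */
	TP_PROTO(struct drm_sched_job *sched_job),
	TP_ARGS(sched_job),
	TP_STRUCT__entry(
		__string(name, sched_job->sched->name)	/* was: __field(const char *, name) */
	),
	TP_fast_assign(
		__assign_str(name, sched_job->sched->name);
	),
	TP_printk("ring=%s", __get_str(name))
);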
correct trace_dma_fence_emit when visualizing
fence timelines.
v2: improve commit message (Dmitry)
Signed-off-by: Chia-I Wu
Cc: Rob Clark
Reviewed-by: Dmitry Baryshkov
---
drivers/gpu/drm/msm/msm_gpu.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers
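The emit tracepoint itself is trivial; what matters is where it sits relative to trace_dma_fence_init(). A hedged sketch of the ordering only (the helper name is hypothetical, not the msm_gpu.c code):

#include <linux/dma-fence.h>
#include <trace/events/dma_fence.h>

static void publish_fence_sketch(struct dma_fence *fence)
{
	/* dma_fence_init() already emitted trace_dma_fence_init() */
	trace_dma_fence_emit(fence);	/* emitted once the job actually reaches the GPU */
}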
It is redundant since commit 7c0ffcd40b16 ("drm/msm/gpu: Respect PM QoS
constraints") because dev_pm_qos_update_request triggers get_dev_status.
Signed-off-by: Chia-I Wu
Cc: Rob Clark
---
drivers/gpu/drm/msm/msm_gpu_devfreq.c | 7 ---
1 file changed, 7 deletions(-)
diff --git
Move tracking and busy time calculation to msm_devfreq_get_dev_status.
Signed-off-by: Chia-I Wu
Cc: Rob Clark
---
drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 19 ++--
drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 15 +
drivers/gpu/drm/msm/msm_gpu.h | 9 +++-
drivers
msm_devfreq_idle/msm_devfreq_active.
This logic could potentially be moved into devfreq core.
Fixes: 7c0ffcd40b16 ("drm/msm/gpu: Respect PM QoS constraints")
Signed-off-by: Chia-I Wu
Cc: Rob Clark
---
drivers/gpu/drm/msm/msm_gpu.h | 3 ++
drivers/gpu/drm/msm/msm_gpu_devf
On Fri, Apr 15, 2022 at 5:33 PM Chia-I Wu wrote:
>
> simple_ondemand interacts poorly with clamp_to_idle. It only looks at
> the load since the last get_dev_status call, while it should really look
> at the load over polling_ms. When clamp_to_idle is true, it almost always
> p
pported flag in
mode_config")
Suggested-by: Shao-Chuan Lee
Signed-off-by: Chia-I Wu
---
drivers/gpu/drm/virtio/virtgpu_display.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/gpu/drm/virtio/virtgpu_display.c
b/drivers/gpu/drm/virtio/virtgpu_display.c
index 5c7f198c0712..9ea7611a9e0f
On Thu, Sep 15, 2022 at 4:14 AM Dan Carpenter wrote:
>
> The ->ring_idx_mask variable is a u64 so static checkers, Smatch in
> this case, complain if the BIT() is not also a u64.
>
> drivers/gpu/drm/virtio/virtgpu_ioctl.c:50 virtio_gpu_fence_event_create()
> warn: should '(1 << ring_idx)' be a 64
'(1 << ring_idx)' be a 64 bit type?
>
> Fixes: cd7f5ca33585 ("drm/virtio: implement context init: add
> virtio_gpu_fence_event")
> Signed-off-by: Dan Carpenter
> ---
> v2: Style change. Use BIT_ULL().
Reviewed-by: Chia-I Wu
>
> drivers/gpu/drm/virt
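Why BIT_ULL() matters here: BIT(n) expands to (1UL << (n)), which is only 32 bits wide on 32-bit builds, so bits 32-63 of a u64 mask would be unreachable. Minimal illustration, with assumed parameter names:

#include <linux/bits.h>
#include <linux/types.h>

static bool ring_allowed(u64 ring_idx_mask, u32 ring_idx)
{
	/* BIT(ring_idx) would be (1UL << ring_idx): too narrow on 32-bit builds */
	return ring_idx_mask & BIT_ULL(ring_idx);	/* 1ULL << ring_idx */
}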
e shrinker to evict an
> obj queued up in gpu scheduler.)
>
> Fixes: f371bcc0c2ac ("drm/msm/gem: Unpin buffers earlier")
> Fixes: 025d27239a2f ("drm/msm/gem: Evict active GEM objects when necessary")
> Closes: https://gitlab.freedesktop.org/drm/msm/-/issues/19
>
On Wed, Dec 11, 2019 at 12:42 AM Gerd Hoffmann wrote:
>
> Signed-off-by: Gerd Hoffmann
> ---
> drivers/gpu/drm/virtio/virtgpu_plane.c | 31 ++
> 1 file changed, 17 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c
> b/drivers/gpu/drm/
On Wed, Dec 11, 2019 at 12:42 AM Gerd Hoffmann wrote:
>
> Signed-off-by: Gerd Hoffmann
> ---
> drivers/gpu/drm/virtio/virtgpu_plane.c | 41 +++---
> 1 file changed, 24 insertions(+), 17 deletions(-)
>
> diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c
> b/drivers/gpu/drm/
On Thu, Dec 12, 2019 at 4:53 AM Gerd Hoffmann wrote:
>
> v2: fix src rect handling (Chia-I Wu).
>
> Gerd Hoffmann (3):
> drm/virtio: skip set_scanout if framebuffer didn't change
> virtio-gpu: batch display update commands.
> virtio-gpu: use damage info for
Hi,
On Mon, Dec 9, 2019 at 2:44 PM Chia-I Wu wrote:
>
> On Mon, Dec 2, 2019 at 5:36 PM Gurchetan Singh
> wrote:
> >
> > With the misc device, we should end up using the result of
> > get_arch_dma_ops(..) or dma-direct ops.
> >
> > This can allow us
> .pin = drm_gem_shmem_pin,
> .unpin = drm_gem_shmem_unpin,
> .get_sg_table = drm_gem_shmem_get_sg_table,
> - .vmap = udl_gem_object_vmap,
> + .vmap = drm_gem_shmem_vmap,
> .vunmap = drm_gem_shmem_vunmap,
> - .mmap = udl_gem_obj
This series replaces the global disable_notify state by command-level bools to
control vq kicks. When command batching is applied to more places, this
prevents one process from affecting another process.
Call virtqueue_kick_prepare once in virtio_gpu_enable_notify, not
whenever a command is added. This should be more efficient since
the intention is to batch commands.
Signed-off-by: Chia-I Wu
---
drivers/gpu/drm/virtio/virtgpu_drv.h | 1 -
drivers/gpu/drm/virtio/virtgpu_vq.c | 28
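A sketch of the batching idea (the disable_notify/ctrlq field names follow this series' description and should be treated as assumptions): while notification is disabled, commands are only queued; the single virtqueue_kick_prepare()/virtqueue_notify() pair runs when notification is re-enabled.

/* hypothetical re-implementation for illustration only */
static void virtio_gpu_enable_notify_sketch(struct virtio_gpu_device *vgdev)
{
	struct virtqueue *vq = vgdev->ctrlq.vq;
	bool notify;

	vgdev->disable_notify = false;

	spin_lock(&vgdev->ctrlq.qlock);
	notify = virtqueue_kick_prepare(vq);	/* once for the whole batch */
	spin_unlock(&vgdev->ctrlq.qlock);

	if (notify)
		virtqueue_notify(vq);
}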
bar -> add bar to ctrlq and commit is
caller-controlled
virtio_gpu_{disable,enable}_notify is also replaced by
virtio_gpu_commit_ctrl.
Signed-off-by: Chia-I Wu
---
drivers/gpu/drm/virtio/virtgpu_display.c | 9 ++-
drivers/gpu/drm/virtio/virtgpu_drv.h
bool has_virgl_3d;
> bool has_edid;
> + bool has_indirect;
has_indirect_desc? Either way,
Reviewed-by: Chia-I Wu
>
> struct work_struct config_changed_work;
>
> diff --git a/drivers/gpu/drm/virtio/virtgpu_debugfs.c
> b/drivers/gpu/drm/virtio/virtgp
virtio_gpu_queue_ctrl_sgs queues only. virtio_gpu_commit_ctrl must
be explicitly called. This however means that we need to grab the
spinlock twice.
Signed-off-by: Chia-I Wu
---
drivers/gpu/drm/virtio/virtgpu_vq.c | 29 ++---
1 file changed, 22 insertions(+), 7
Call virtqueue_kick_prepare once in virtio_gpu_enable_notify, not
whenever a command is added. This should be more efficient since
the intention is to batch commands.
Signed-off-by: Chia-I Wu
---
drivers/gpu/drm/virtio/virtgpu_drv.h | 1 -
drivers/gpu/drm/virtio/virtgpu_vq.c | 34
Hi,
This series replaces the global disable_notify state by command-level bools to
control vq kicks. When command batching is applied to more places, this
prevents one process from affecting another process.
v2: update to this convention
virtio_gpu_cmd_foo: add foo and commit
virtio_gpu_add
bar -> add bar but do not commit
virtio_gpu_{disable,enable}_notify is replaced by
virtio_gpu_commit_ctrl.
Signed-off-by: Chia-I Wu
---
drivers/gpu/drm/virtio/virtgpu_drv.h | 42 +-
drivers/gpu/drm/virtio/virtgpu_plane.c | 10 +++---
drivers/gpu/drm/virtio/virtgpu_vq.c
virtio_gpu_free_object(&shmem_obj->base);
> + return ret;
> + }
> +
> + ret = virtio_gpu_object_attach(vgdev, bo, ents, nents);
> if (ret != 0) {
> virtio_gpu_free_object(&shmem_obj->base);
>
The fixes look reasonable.
Reviewed-by: Chia-I Wu
On Tue, Feb 11, 2020 at 5:50 AM Gerd Hoffmann wrote:
>
>
>
> Gerd Hoffmann (2):
> drm/virtio: fix virtio_gpu_execbuffer_ioctl locking
> drm/virtio: fix virtio_gpu_cursor_plane_update().
>
> drivers/gpu/drm/vir
On Wed, Feb 12, 2020 at 3:13 AM Gerd Hoffmann wrote:
>
> Drop the virtio_gpu_{disable,enable}_notify(). Add a new
> virtio_gpu_notify() call instead, which must be called whenever
> the driver wants to make sure the host is notified when needed.
>
> Drop notification from command submission. Add virtio_
On Tue, Feb 11, 2020 at 3:56 PM Gurchetan Singh
wrote:
>
> We currently do it when opening the DRM fd; let's delay it. First step,
> remove the hypercall from initialization.
>
> Signed-off-by: Gurchetan Singh
> ---
> drivers/gpu/drm/virtio/virtgpu_drv.h | 2 ++
> drivers/gpu/drm/virtio/virtgpu_i
On Tue, Feb 11, 2020 at 3:56 PM Gurchetan Singh
wrote:
>
> We only want to create a new virglrenderer context after the first
> 3D ioctl.
>
> Signed-off-by: Gurchetan Singh
> ---
> drivers/gpu/drm/virtio/virtgpu_drv.h | 1 +
> drivers/gpu/drm/virtio/virtgpu_ioctl.c | 5 +
> drivers/gpu/drm/vi
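For context, a sketch of the lazy-create pattern under discussion: each 3D ioctl first makes sure the virglrenderer context exists instead of creating it at open() time. Helper and field names (context_lock, context_created) are assumptions here, not necessarily what finally landed.

static void virtio_gpu_create_context_sketch(struct drm_device *dev,
					     struct drm_file *file)
{
	struct virtio_gpu_device *vgdev = dev->dev_private;
	struct virtio_gpu_fpriv *vfpriv = file->driver_priv;
	static const char name[] = "userctx";

	mutex_lock(&vfpriv->context_lock);
	if (!vfpriv->context_created) {
		virtio_gpu_cmd_context_create(vgdev, vfpriv->ctx_id,
					      sizeof(name) - 1, name);
		vfpriv->context_created = true;
	}
	mutex_unlock(&vfpriv->context_lock);
}

/* each 3D ioctl would then start with virtio_gpu_create_context_sketch(dev, file) */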