Quoting Daniel Vetter (2020-01-16 06:52:42)
> On Wed, Jan 15, 2020 at 08:52:45PM +0000, Chris Wilson wrote:
> > Since we may try and flush the cachelines associated with large buffers
> > (an 8K framebuffer is about 128MiB, even before we try HDR), this leads
> > to unacce
Quoting David Laight (2020-01-16 12:26:45)
> However there is a call from __i915_gem_object_set_pages() that
> is preceded by a lockdep_assert_held() check - so mustn't sleep.
That is a mutex; it's allowed to sleep. The might_sleep() here should
help assuage your fears.
-Chris
Quoting David Laight (2020-01-16 13:58:44)
> From: Chris Wilson
> > Sent: 16 January 2020 12:29
> >
> > Quoting David Laight (2020-01-16 12:26:45)
> > > However there is a call from __i915_gem_object_set_pages() that
> > > is preceded by a lockdep
Quoting Akeem G Abodunrin (2020-01-14 17:45:48)
> diff --git a/drivers/gpu/drm/i915/gt/gen7_renderclear.h
> b/drivers/gpu/drm/i915/gt/gen7_renderclear.h
> new file mode 100644
> index ..4b88dd8d0fd4
> --- /dev/null
> +++ b/drivers/gpu/drm/i915/gt/gen7_renderclear.h
> @@ -0,0 +1,16 @@
>
Quoting Akeem G Abodunrin (2020-01-16 17:46:55)
> +static u32
> +gen7_fill_interface_descriptor(struct batch_chunk *state,
> + const struct batch_vals *bv,
> + const struct cb_kernel *kernel,
> + unsigned int cou
Quoting Piper Fowler-Wright (2020-01-18 20:28:42)
> I have recently (since the New Year) been experiencing regular GPU hangs
> which typically render the system unusable.
>
> During the hangs the kernel buffer is filled with messages of the form
>
> [ 8269.599926] [drm:gen8_reset_engines [i915]]
ter... Worryingly
some of those callbacks may be (implicitly) depending on the global
mutex.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/drm_file.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
index 92d16724f
Quoting Thomas Hellström (VMware) (2020-01-22 21:52:23)
> Hi, Chris,
>
> On 1/22/20 4:56 PM, Chris Wilson wrote:
> > The file is not part of the global drm resource and can be released
> > prior to taking the global mutex to drop the open_count (and potentially
>
Quoting Colin King (2020-01-23 15:14:06)
> From: Colin Ian King
>
> Currently if the call to function context_get_vm_rcu returns
> a null pointer for vm then the error exit path via label err_put
> will call i915_vm_put on the null vm, causing a null pointer
> dereference. Fix this by adding a n
with the
drm_file_free() debug message -- and for good measure mark that up as
reading outside of the mutex.
Signed-off-by: Chris Wilson
Cc: Thomas Hellström (VMware)
---
drivers/gpu/drm/drm_file.c | 8 +++-
1 file changed, 3 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/drm_
Quoting Dan Carpenter (2020-01-24 10:13:12)
> This is always called with IRQs disabled and we don't actually want to
> enable IRQs at the end.
>
> Fixes: a6aa8fca4d79 ("dma-buf/sw-sync: Reduce irqsave/irqrestore from known
> context")
> Signed-off-by: Dan Carpenter
> ---
> drivers/dma-buf/sync_
Quoting Dan Carpenter (2020-01-24 10:31:23)
> On Fri, Jan 24, 2020 at 10:20:56AM +0000, Chris Wilson wrote:
> > Quoting Dan Carpenter (2020-01-24 10:13:12)
> > > This is always called with IRQs disabled and we don't actually want to
> > > enable IRQs at the end.
nd there
may be more, so be cautious.
Signed-off-by: Chris Wilson
Cc: Thomas Hellström (VMware)
Acked-by: Thomas Hellström (VMware)
---
drivers/gpu/drm/drm_file.c | 36 -
drivers/gpu/drm/i915/i915_drv.c | 2 +-
include/drm/drm_file.h | 1 +
on of delaying acquiring the drm_global_mutex for the final
release by using atomic_dec_and_mutex_lock(), leaving the global
serialisation across the device opens.
Signed-off-by: Chris Wilson
Cc: Thomas Hellström (VMware)
---
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 2 +-
drivers/gp
on of delaying acquiring the drm_global_mutex for the final
release by using atomic_dec_and_mutex_lock(), leaving the global
serialisation across the device opens.
Signed-off-by: Chris Wilson
Cc: Thomas Hellström (VMware)
---
atomic_dec_and_mutex_lock needs pairing with mutex_unlock (you
Quoting Thomas Hellström (VMware) (2020-01-24 13:37:47)
> On 1/24/20 2:01 PM, Chris Wilson wrote:
> > Since drm_global_mutex is a true global mutex across devices, we don't
> > want to acquire it unless absolutely necessary. For maintaining the
> > device local ope
("drm: Release filp before global lock")
Signed-off-by: Chris Wilson
Cc: Ben Skeggs
---
drivers/gpu/drm/nouveau/nouveau_drm.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c
b/drivers/gpu/drm/nouveau/nouveau_drm.c
index b6
("drm: Release filp before global lock")
Signed-off-by: Chris Wilson
Cc: Alex Deucher
Cc: Christian König
---
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
b/drivers/gpu/drm/
y evil for the current situation,
Acked-by: Chris Wilson
-Chris
___
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel
Quoting Daniel Vetter (2020-01-27 09:05:57)
> On Sat, Jan 25, 2020 at 04:08:39PM +0000, Chris Wilson wrote:
> > Quoting Wambui Karuga (2020-01-22 12:57:48)
> > > This series is a part of the conversion to the new struct drm_device
> > > based logging macros in drm/i915
Quoting Daniel Vetter (2020-01-28 10:45:58)
> Kinda time to get this sorted. The locking around this really is not
> nice.
>
> Signed-off-by: Daniel Vetter
> ---
> drivers/gpu/drm/drm_drv.c | 6 ++
> include/drm/drm_drv.h | 3 +++
> 2 files changed, 9 insertions(+)
>
> diff --git a/driv
(partially) opt-in like
> with drm_release_noglobal().
>
> Cc: Chris Wilson
> Signed-off-by: Daniel Vetter
> ---
> drivers/gpu/drm/drm_drv.c | 14 +-
> drivers/gpu/drm/drm_file.c | 6 ++
> 2 files changed, 11 insertions(+), 9 deletions(-)
>
> diff -
audit are the various driver
> hooks - by keeping the BKL around if any of them are set we have a
> very simple cop-out!
>
> Note that one of the biggest prep pieces to get here was making
> dev->open_count atomic, which was done in
>
> commit 7e13ad896484a0165a68197a2e
Quoting Chris Wilson (2020-01-28 10:47:59)
> Quoting Daniel Vetter (2020-01-28 10:45:58)
> > Kinda time to get this sorted. The locking around this really is not
> > nice.
> >
> > Signed-off-by: Daniel Vetter
> > ---
> > drivers/gpu/drm/drm_drv.c | 6 ++
Quoting Jani Nikula (2020-01-28 13:48:10)
> On Tue, 28 Jan 2020, Tvrtko Ursulin wrote:
> >> -DRM_DEBUG(
> >> +drm_dbg(&T->drm,
> >
> > This changes DRM_UT_CORE to DRM_UT_DRIVER so our typical drm.debug=0xe
> > becomes much more spammy.
>
> This is what I've instructed Wambui to do in i915. It's
Quoting Daniel Vetter (2020-01-29 08:24:10)
> @@ -378,9 +409,10 @@ int drm_open(struct inode *inode, struct file *filp)
> if (IS_ERR(minor))
> return PTR_ERR(minor);
>
> - mutex_unlock(&drm_global_mutex);
> -
> dev = minor->dev;
> + if (drm_dev_needs_gl
Quoting Linus Torvalds (2020-01-30 16:13:24)
> On Wed, Jan 29, 2020 at 9:58 PM Dave Airlie wrote:
> >
> > It has two known conflicts, one in i915_gem_gtt, where you should juat
> > take what's in the pull (it looks messier than it is),
>
> That doesn't seem right. If I do that, I lose the added G
wn devices.
Reported-by: Taketo Kabe
Closes: https://gitlab.freedesktop.org/drm/intel/issues/1027
Fixes: de09d31dd38a ("page-flags: define PG_reserved behavior on compound
pages")
Signed-off-by: Chris Wilson
Cc: # v4.5+
---
drivers/gpu/drm/drm_pci.c | 23 ++
Quoting Daniel Vetter (2020-02-02 16:43:06)
> On Sun, Feb 02, 2020 at 04:10:09PM +0000, Chris Wilson wrote:
> > drm_pci_alloc/drm_pci_free are very thin wrappers around the core dma
> > facilities, and we have no special reason within the drm layer to behave
> > differently.
Internally for "consistent" maps, we create a temporary struct
drm_dma_handle in order to use our own dma_alloc_coherent wrapper, then
destroy the temporary wrapper. Simplify our logic by removing the
temporary wrapper!
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/drm_b
The drm_pci_alloc routines have been a thin wrapper around the core dma
coherent routines. Remove the crutch of a wrapper and the exported
symbols, marking it for only internal legacy use.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/drm_bufs.c | 5 +++--
drivers/gpu/drm/drm_legacy.h | 23
drm_pci_alloc is a thin wrapper over dma_alloc_coherent. Ditch the
wrapper and just use the dma routines directly.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/r128/ati_pcigart.c | 32 +++---
drivers/gpu/drm/r128/ati_pcigart.h | 2 +-
2 files changed, 17 insertions
action and using the dma functions directly.
Reported-by: Taketo Kabe
Closes: https://gitlab.freedesktop.org/drm/intel/issues/1027
Fixes: de09d31dd38a ("page-flags: define PG_reserved behavior on compound
pages")
Signed-off-by: Chris Wilson
Cc: # v4.5+
---
drivers/gpu/drm/i915/display/i
eak.
>
> Cc: Dan Carpenter
> Cc: Hillf Danton
> Cc: Reported-by: syzbot+0dc774d419e91...@syzkaller.appspotmail.com
> Cc: sta...@vger.kernel.org
> Cc: Emil Velikov
> Cc: Daniel Vetter
> Cc: Sean Paul
> Cc: Chris Wilson
> Cc: Eric Anholt
> Cc: Sam Ravn
Quoting Alex Deucher (2020-02-03 21:49:48)
> On Sun, Feb 2, 2020 at 12:16 PM Chris Wilson wrote:
> >
> > drm_pci_alloc/drm_pci_free are very thin wrappers around the core dma
> > facilities, and we have no special reason within the drm layer to behave
> > differ
(by signaling) on retirement before freeing the
fence, it can do so in a race-free manner.
See also 0fc89b6802ba ("dma-fence: Simply wrap dma_fence_signal_locked
with dma_fence_signal").
Signed-off-by: Chris Wilson
---
drivers/dma-buf/dma-fence.c | 11 +--
1 file changed, 5 ins
The ulterior motive for switching the booleans over to bitops is to
allow use of the allocated flag as a bitlock.
Signed-off-by: Chris Wilson
---
drivers/gpu/drm/drm_mm.c | 36 +++
.../gpu/drm/i915/gem/i915_gem_execbuffer.c| 6 ++--
drivers/gpu/drm/i915
sation if
they so choose.
"It is easier to add synchronisation later, than it is to take it away."
v2: Lots of little fixes, plus a real llseek() implementation so that the
first basic little test cases work!
Testcase: igt/prime_rw
Signed-off-by: Chris Wilson
Cc: Laura Abbott
Cc: Sumit Se
Quoting Daniel Vetter (2019-09-19 16:28:41)
> On Thu, Sep 19, 2019 at 5:09 PM Chris Wilson wrote:
> >
> > It is expected that processes will pass dma-buf fd between drivers, and
> > only using the fd themselves for mmaping and synchronisation ioctls.
> > However,
_resv_get_fences_rcu(struct dma_resv *obj,
> if (pfence_excl)
> *pfence_excl = fence_excl;
> else if (fence_excl)
> - shared[++shared_count] = fence_excl;
> + shared[shared_count++] = fence_excl;
Oops.
Reviewed-by: Chris Wilson
-Chris
Quoting Chris Wilson (2019-09-22 13:17:19)
> Quoting Qiang Yu (2019-09-22 08:49:00)
> > This causes kernel crash when testing lima driver.
> >
> > Cc: Christian König
> > Fixes: b8c036dfc66f ("dma-buf: simplify reservation_object_get_fences_rcu a
>
astian Andrzej Siewior
Given the context though, they are moot.
Reviewed-by: Chris Wilson
-Chris
and let all callers invoke intel_engine_breadcrumbs_irq()
> directly instead of using intel_engine_signal_breadcrumbs().
>
> Reported-by: Clark Williams
> Signed-off-by: Sebastian Andrzej Siewior
All those irq save/restore look annoying; still, the argument is valid.
Reviewed-by: Chris Wilson
Quoting Joonas Lahtinen (2019-10-03 00:28:43)
> + Chris and Tvrtko
It's a trivial warning that's already fixed, in this case by separating
out the poweroff into process context.
c7302f204490 ("drm/i915: Defer final intel_wakeref_put to process context")
-Chris
Quoting Ruhl, Michael J (2019-09-16 20:45:14)
> >-Original Message-
> >From: dri-devel [mailto:dri-devel-boun...@lists.freedesktop.org] On Behalf
> >Of Chris Wilson
> >Sent: Sunday, September 15, 2019 2:46 PM
> >@@ -424,9 +424,9 @@ int drm_mm_reserve_nod
(by signaling) on retirement before freeing the
fence, it can do so in a race-free manner.
See also 0fc89b6802ba ("dma-fence: Simply wrap dma_fence_signal_locked
with dma_fence_signal").
v2: Refactor all 3 enable_signaling paths to use a common function.
Signed-off-by: Chris Wilson
---
dr
Quoting Chris Wilson (2019-10-03 14:19:46)
> Make dma_fence_enable_sw_signaling() behave like its
> dma_fence_add_callback() and dma_fence_default_wait() counterparts and
> perform the test to enable signaling under the fence->lock, along with
> the action to do so. This ensure
(by signaling) on retirement before freeing the
fence, it can do so in a race-free manner.
See also 0fc89b6802ba ("dma-fence: Simply wrap dma_fence_signal_locked
with dma_fence_signal").
v2: Refactor all 3 enable_signaling paths to use a common function.
Signed-off-by: Chris Wilson
---
Ret
Quoting Ruhl, Michael J (2019-10-03 15:12:38)
> >-Original Message-
> >From: Intel-gfx [mailto:intel-gfx-boun...@lists.freedesktop.org] On Behalf Of
> >Chris Wilson
> >Sent: Thursday, October 3, 2019 9:24 AM
> >To: intel-...@lists.freedesktop.org
> >
Quoting Jani Nikula (2019-11-20 16:15:08)
> On Tue, 19 Nov 2019, Randy Dunlap wrote:
> > On 11/19/19 12:46 AM, Stephen Rothwell wrote:
> >> Hi all,
> >>
> >> Changes since 20191118:
> >
> >
> > on x86_64:
> >
> > ERROR: "pm_suspend_target_state" [drivers/gpu/drm/i915/i915.ko] undefined!
> >
> > #
Quoting kernel test robot (2019-11-21 07:19:43)
> Greetings,
>
> 0day kernel testing robot got the below dmesg and the first bad commit is
>
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
>
> commit 2989f6451084aed3f8cc9992477f7a9bf57a3716
>
Quoting Nathan Chancellor (2019-11-23 19:53:22)
> -Wtautological-compare was recently added to -Wall in LLVM, which
> exposed an if statement in i915 that is always false:
>
> ../drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c:1485:22: warning:
> result of comparison of constant 576460752303423487
Quoting Niranjana Vishwanathapura (2019-11-22 20:57:24)
> Shared Virtual Memory (SVM) runtime allocator support allows
> binding a shared virtual address to a buffer object (BO) in the
> device page table through an ioctl call.
The ioctl though is not svm specific, it is to do with "bulk residency
Quoting Dan Carpenter (2019-11-26 12:50:03)
> We should be unmapping "page" instead of "s". This code originally used
> kmap_atomic() before it was changed to kmap(). The two mapping
> functions are different which leads to this common mistake.
>
> Fixes: 3e749f5199e1 ("drm/i915: Avoid atomic co
Quoting Nick Desaulniers (2019-12-02 19:18:20)
> On Sat, Nov 23, 2019 at 12:05 PM Chris Wilson
> wrote:
> >
> > Quoting Nathan Chancellor (2019-11-23 19:53:22)
> > > -Wtautological-compare was recently added to -Wall in LLVM, which
> > > exposed an if st
Quoting i...@dantalion.nl (2019-12-09 08:34:28)
> Hello everyone,
>
> This is my first message on this mailing list so bear with me. I am
> running an Arch based system with kernel 5.3.x, xorg-server 1.20.5 and
> xf86-video-intel 1:2.99.917.
>
> Recently I have been receiving GPU HANGS were my sc
108,7 +108,7 @@ static u32 trifilter(u32 *a)
>
> sort(a, COUNT, sizeof(*a), cmp_u32, NULL);
>
> - sum += mul_u32_u32(a[2], 2);
> + sum = mul_u32_u32(a[2], 2);
/o\
Reviewed-by: Chris Wilson
-Chris
Quoting Christian König (2019-08-21 13:31:42)
> Add a new dma_resv_prune_fences() function to improve memory management.
>
> Signed-off-by: Christian König
> ---
> drivers/dma-buf/dma-resv.c | 37 ++
> drivers/gpu/drm/i915/gem/i915_gem_wait.c | 3 +-
> driv
Quoting Chris Wilson (2019-08-21 15:55:08)
> Quoting Christian König (2019-08-21 13:31:42)
> > Add a new dma_resv_prune_fences() function to improve memory management.
> >
> > Signed-off-by: Christian König
> > ---
> > drivers/dma-bu
Quoting Christian König (2019-08-21 13:31:45)
> @@ -117,17 +120,10 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data,
>
> busy_check_writer(rcu_dereference(obj->base.resv->fence_excl));
>
> /* Translate shared fences to READ set of engines */
> - list = rcu_
Quoting Christian König (2019-08-21 13:31:45)
> @@ -528,20 +352,9 @@ void dma_resv_prune_fences(struct dma_resv *obj)
> dma_fence_put(fence);
> }
>
> - list = dma_resv_get_list(obj);
> - if (!list)
> - return;
> -
> - for (i = 0; i < list->s
Quoting Christian König (2019-08-21 13:31:40)
> Try to recycle an dma_fence_array object by dropping the last
> reference to it without freeing it.
>
> Signed-off-by: Christian König
> ---
> drivers/dma-buf/dma-fence-array.c | 27 +++
> include/linux/dma-fence-array.h |
Quoting Chris Wilson (2019-08-21 16:24:22)
> Quoting Christian König (2019-08-21 13:31:45)
> > @@ -117,17 +120,10 @@ i915_gem_busy_ioctl(struct drm_device *dev, void
> > *data,
> >
> > busy_check_writer(rcu_dereference(obj->base.resv->fence_excl)
Quoting Christian König (2019-08-21 13:31:37)
> Hi everyone,
>
> In previous discussion it surfaced that different drivers use the shared and
> explicit fences in the dma_resv object with different meanings.
>
> This is problematic when we share buffers between those drivers and
> requirements
>
> Unfortunately we can't do this in the usual module init functions,
> because kernel threads don't have an ->mm - we have to wait around for
> some user thread to do this.
>
> Solution is to spawn a worker (but only once). It's horrible, but it
> works.
>
held by insmod/655:
[ 18.513933] #0: 4dccb591 (&dev->mutex){}, at:
device_driver_attach+0x18/0x50
[ 18.513938] #1: 9118ecae (&mm->mmap_sem#2){}, at:
i915_driver_probe+0x8c8/0x1470 [i915]
[ 18.513962] #2: a85b
> Signed-off-by: Lyude Paul
> Cc: Chris Wilson
> ---
> drivers/gpu/drm/i915/Kconfig.debug | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/drivers/gpu/drm/i915/Kconfig.debug
> b/drivers/gpu/drm/i915/Kconfig.debug
> index 00786a142ff0..ad8d3cd63c9f 100644
> -
gh, CONFIG_DMA_API_DEBUG_SG is enabled in the debug configs for
> various distro kernels. Since a WARN_ON() will disable automatic problem
> reporting (and cause any CI with said option enabled to start
> complaining), we really should just fix the problem.
>
> Note that as me and Chri
A preliminary set of tests to exercise the basic dma-fence API on top of
struct dma_fence_array.
Signed-off-by: Chris Wilson
---
drivers/dma-buf/Makefile | 3 +-
drivers/dma-buf/selftests.h | 1 +
drivers/dma-buf/st-dma-fence-array.c | 392 +++
3
to contain itself (even though they have distinct
locks).
In practice, this means that each subsystem gets its own dma-fence-array
class and we can freely use dma-fence-arrays as containers within the
dmabuf core without angering lockdep.
Signed-off-by: Chris Wilson
Cc: Christian König
Cc: Daniel
Quoting Koenig, Christian (2019-08-24 20:04:43)
> Am 24.08.19 um 15:58 schrieb Chris Wilson:
> > In order to allow dma-fence-array as a generic container for fences, we
> > need to allow for it to contain other dma-fence-arrays. By giving each
> > dma-fence-array constructio
Quoting Christian König (2019-08-26 15:57:23)
> The function is supposed to give a hint even if signaling is not enabled.
>
> Signed-off-by: Christian König
> ---
> drivers/dma-buf/dma-fence-array.c | 12 +++-
> 1 file changed, 11 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/dm
Quoting Geert Uytterhoeven (2019-08-27 13:30:04)
> Hi Chris,
>
> When running the new dmabuf-selftests on two different systems, I get:
>
> dma-buf: Running sanitycheck
> dma-buf: Running dma_fence
> sizeof(dma_fence)=48
> dma-buf: Running dma_fence/sanitycheck
> dma-buf: Runn
Quoting Christian König (2019-08-05 16:45:50)
> The reservation object should be capable of handling its internal memory
> management itself. And since we search for a free slot to add the fence
> from the beginning this is actually a waste of time and only minimal helpful.
"From the beginning?" A
Quoting Christian König (2019-08-05 16:45:54)
> @@ -214,16 +214,16 @@ static __poll_t dma_buf_poll(struct file *file,
> poll_table *poll)
> return 0;
>
> retry:
> - seq = read_seqcount_begin(&resv->seq);
> rcu_read_lock();
>
> + fence_excl = rcu_dereference
Quoting Chris Wilson (2019-08-05 16:58:56)
> Quoting Christian König (2019-08-05 16:45:50)
> > The reservation object should be capable of handling its internal memory
> > management itself. And since we search for a free slot to add the fence
> > from the beginning this i
r will see a refcount==0 fence and restart, whereas by
dropping the ref later, that reader has a better chance of getting to
the end before noticing the change.
> Signed-off-by: Christian König
Reviewed-by: Chris Wilson
-Chris
than my own bug... But if we accept it is worth preventing here then the
only odd one out is on a reservation_object_copy_fences() error path,
where the extra delay shouldn't be an issue.
So to double-RCU defer on reservation_object_fini() or not?
For the rest of the mechanical changes,
Reviewed-by: Chris Wilson
-Chris
f-by: Christian König
Reviewed-by: Chris Wilson
I like keeping the reminder about the lack of pruning on idle objects :)
-Chris
Quoting Christian König (2019-08-06 16:01:30)
> Instead of open coding the sequence loop use the new helper.
I've missed something. What reservation_object_fences()?
-Chris
Quoting Christian König (2019-08-06 16:01:32)
> We can add the exclusive fence to the list after making sure we got
> a consistent state.
>
> Signed-off-by: Christian König
Reviewed-by: Chris Wilson
-Chris
erence enforces the
callers do hold rcu_read_lock.
I didn't check all the conversions, just stared at the heart of the
problem.
Reviewed-by: Chris Wilson
-Chris
bj,
> RCU_INIT_POINTER(fobj->shared[i], fence);
> /* pointer update must be visible before we extend the shared_count */
> smp_store_mb(fobj->shared_count, count);
Yup, that's all the mb rules we need to apply for the rcu readers to see
a consistent vi
Quoting Christian König (2019-08-06 16:01:34)
> The only remaining use for this is to protect against setting a new exclusive
> fence while we grab both exclusive and shared. That can also be archived by
> looking if the exclusive fence has changed or not after completing the
> operation.
>
> Sign
Quoting Christian König (2019-08-07 13:08:38)
> Am 06.08.19 um 21:57 schrieb Chris Wilson:
> > If we add to shared-list during the read, ... Hmm, actually we should
> > return num_list, i.e.
> >
> > do {
> > *list = rcu_dereference(obj->fence);
> >
arpenter
Oops,
Reviewed-by: Chris Wilson
Thanks,
-Chris
Quoting Chris Wilson (2019-08-07 13:32:15)
> Quoting Dan Carpenter (2019-08-07 13:28:32)
> > There were several places which check for NULL when they should have
> > been checking for IS_ERR().
> >
> > Fixes: d8af05ff38ae ("drm/i915: Allow sharing the idle-barrier
No one should be adding to the cb_list
that they don't themselves hold a reference for; this only now makes for
a much more spectacular use-after-free. :)
> Signed-off-by: Christian König
Reviewed-by: Chris Wilson
-Chris
struct reservation_object_list **list,
> u32 *shared_count)
> {
> - unsigned int seq;
> -
> do {
> - seq = read_seqcount_begin(&obj->seq);
> *excl = rcu_dereference(obj->fence_excl);
>
König
Reviewed-by: Chris Wilson
-Chris
Quoting Christian König (2019-08-07 14:53:10)
> Instead of open coding the sequence loop use the new helper.
>
> Signed-off-by: Christian König
Reviewed-by: Chris Wilson
-Chris
Quoting Christian König (2019-08-07 14:53:11)
> Other cores don't busy wait any more and we removed the last user of checking
> the seqno for changes. Drop updating the number for shared fences altogether.
>
> Signed-off-by: Christian König
Reviewed-by: Chris Wilson
> --
Quoting Dan Carpenter (2019-08-08 11:32:36)
> We can't free "workload" until after the printk or it's a use after
> free.
>
> Fixes: 2089a76ade90 ("drm/i915/gvt: Checking workload's gma earlier")
> Signed-off-by: Dan Carpenter
That'
Quoting Hugh Dickins (2019-08-08 16:54:16)
> On Thu, 8 Aug 2019, Al Viro wrote:
> > On Wed, Aug 07, 2019 at 08:30:02AM +0200, Christoph Hellwig wrote:
> > > On Tue, Aug 06, 2019 at 12:50:10AM -0700, Hugh Dickins wrote:
> > > > Though personally I'm averse to managing "f"objects through
> > > > "m"i
Quoting Lionel Landwerlin (2019-08-09 12:30:30)
> diff --git a/include/uapi/drm/drm.h b/include/uapi/drm/drm.h
> index 8a5b2f8f8eb9..1ce83853f997 100644
> --- a/include/uapi/drm/drm.h
> +++ b/include/uapi/drm/drm.h
> @@ -785,6 +785,22 @@ struct drm_syncobj_timeline_array {
> __u32 pad;
> }
Quoting Lionel Landwerlin (2019-08-09 12:30:30)
> +int drm_syncobj_binary_ioctl(struct drm_device *dev, void *data,
> +struct drm_file *file_private)
> +{
> + struct drm_syncobj_binary_array *args = data;
> + struct drm_syncobj **syncobjs;
> + u32 __use