On 27/11/2015 14:10, Chris Wilson wrote:
> On Fri, Nov 27, 2015 at 01:53:34PM +, Tvrtko Ursulin wrote:
> > P.S. And just realised this work is competing with the scheduler
> > which changes all this again.
>
> On the other hand, there are regressions to be solved before more
> features.
> -Chris
want us to experiment with?
Dmitry.
-Original Message-
From: Chris Wilson [mailto:ch...@chris-wilson.co.uk]
Sent: Wednesday, November 4, 2015 5:48 PM
To: Gong, Zhipeng
Cc: intel-gfx@lists.freedesktop.org; Rogozhkin, Dmitry V
Subject: Re: [Intel-gfx] [PATCH] RFC drm/i915: Slaughter the
> From: Chris Wilson [mailto:ch...@chris-wilson.co.uk]
>
> Do you also have relative perf statistics like op/s we can compare to
> make sure we aren't just stalling the whole system?
>
Could you please provide the commands to check that?
>
> How much cpu time is left in the i915_wait_
On Tue, Nov 03, 2015 at 03:06:36AM +, Gong, Zhipeng wrote:
> It seems there are some gaps between this patch and the first patch.
> For example, the first patch does not contain this line:
> if (req->ring->seqno_barrier)

Ah, that was in the context I hope...

> I have tried to apply this patch. And h
> Cc: intel-gfx@lists.freedesktop.org; Rogozhkin, Dmitry V
> Subject: Re: [Intel-gfx] [PATCH] RFC drm/i915: Slaughter the thundering
> i915_wait_request herd
>
> On Mon, Nov 02, 2015 at 03:28:22PM +, Chris Wilson wrote:
> > That should keep the worker alive for a further 10 jiffies, hopefully
On Mon, Nov 02, 2015 at 03:28:22PM +, Chris Wilson wrote:
> That should keep the worker alive for a further 10 jiffies, hopefully
> long enough for the next wait to occur. The cost is that it keeps the
> interrupt asserted (and to avoid that requires a little rearrangement and
> probably a dedic
On Mon, Nov 02, 2015 at 02:00:47PM +, Gong, Zhipeng wrote:
> Attach the perf data for BDW async1 and async5 with or without patch.
Hmm, I can see it is the i915_spin_request() consuming the time, but I
was hoping to get the callgraph so I could see where the call to
i915_wait_request was orig
Yeah, very likely. I wonder how easy it would be to sort out the issue
with inter-ring synchronization on BDW, in the expectation of the KMD
scheduler from John Harrison?
Chris-

The patch cannot be applied to the latest drm-intel-nightly directly,
so I modified it slightly to make it apply. The patch helps a lot on
HSW, but only a little on BDW. The test transcodes 26 streams, which
creates 244 threads.

CPU util | w/o patch | w/ patch
---
One particularly stressful scenario consists of many independent tasks
all competing for GPU time and waiting upon the results (e.g. realtime
transcoding of many, many streams). One bottleneck in particular is that
each client waits on its own results, but every client is woken up after
every batch