On 08/12/15 14:33, Chris Wilson wrote:
> On Tue, Dec 08, 2015 at 02:03:39PM +0000, Tvrtko Ursulin wrote:
> > On 08/12/15 10:44, Chris Wilson wrote:
> > > On Mon, Dec 07, 2015 at 03:08:28PM +0000, Tvrtko Ursulin wrote:
> > > > Equally, why wouldn't we wake up all waiters for which the requests
> > > > have been completed?
> > >
> > > Because we no longer tr
Hi,

On 03/12/15 16:22, Chris Wilson wrote:
> One particularly stressful scenario consists of many independent tasks
> all competing for GPU time and waiting upon the results (e.g. realtime
> transcoding of many, many streams). One bottleneck in particular is that
> each client waits on its own results, but every client is woken up after
> every batch
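The cost of that wake-everyone behaviour can be sketched with a small simulation (illustrative Python, not i915 code; the round-robin completion model and the parameters are assumptions): with a broadcast scheme, every completed batch wakes all sleeping clients, each of which must poll its own request and usually go back to sleep, whereas a targeted scheme wakes only the request's owner.

```python
def simulate(num_clients, batches_per_client, broadcast):
    """Count client wakeups under broadcast vs targeted completion signalling.

    Hypothetical model: each client sleeps until all of its own batches
    have completed; batches complete one client at a time.
    """
    # Remaining batches per still-waiting client.
    pending = {c: batches_per_client for c in range(num_clients)}
    wakeups = 0
    while pending:
        owner = next(iter(pending))   # one batch completes for some client
        if broadcast:
            wakeups += len(pending)   # every sleeping client is woken to poll
        else:
            wakeups += 1              # only the owner of the request is woken
        pending[owner] -= 1
        if pending[owner] == 0:
            del pending[owner]        # client has all its results, stops waiting
    return wakeups

if __name__ == "__main__":
    # 32 clients, 100 batches each: broadcast wakes clients 52800 times,
    # targeted signalling only 3200 times.
    print(simulate(32, 100, broadcast=True))
    print(simulate(32, 100, broadcast=False))
```

In this model the broadcast cost grows quadratically with the number of clients (each completion wakes every remaining waiter), which is the bottleneck the quoted commit message describes for many independent transcoding streams.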