Quoting Chris Wilson (2018-08-08 19:53:33)
> Quoting Tvrtko Ursulin (2018-08-08 13:40:41)
> >
> > On 07/08/2018 16:02, Chris Wilson wrote:
> > > Quoting Tvrtko Ursulin (2018-08-07 10:08:28)
> > >>
> > >> On 07/08/2018 08:29, Chris Wilson wrote:
> > >>> + /*
> > >>> + * The active request is now effectively the start of a new client
> > >>> + * stream, so give it the equivalent small priority bump to prevent
> > >>> + * it being gazumped a second [...]
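
The comment quoted above concerns the unwind path after a preemption. Below is a minimal, self-contained sketch of that idea; the names (toy_request, TOY_PRIORITY_NEWCLIENT, toy_unwind) are illustrative stand-ins, not the i915 structures or API. The request that was actually executing when the preemption hit is treated as the start of a new client stream and picks up a one-shot priority bump as it is requeued, so a peer at the same base priority cannot gazump it a second time.

/* toy_unwind.c -- illustrative sketch only, not the i915 implementation */
#include <stdbool.h>
#include <stdio.h>

/* hypothetical one-shot boost bit kept in the low bits of the priority */
#define TOY_PRIORITY_NEWCLIENT (1 << 0)
#define TOY_PRIORITY_SHIFT     1

struct toy_request {
	const char *name;
	int prio;        /* base priority << TOY_PRIORITY_SHIFT | boost bits */
	bool was_active; /* was executing when the preemption hit */
};

/*
 * After a preemption every in-flight request is unwound back onto the
 * queue.  The one that was actually running is now effectively the start
 * of a new client stream, so give it the same one-shot bump a new client
 * would get before it is re-inserted.
 */
static void toy_unwind(struct toy_request *rq, int count)
{
	for (int i = 0; i < count; i++) {
		if (rq[i].was_active && !(rq[i].prio & TOY_PRIORITY_NEWCLIENT))
			rq[i].prio |= TOY_PRIORITY_NEWCLIENT;
		/* re-insert rq[i] into the priority list at rq[i].prio here */
	}
}

int main(void)
{
	struct toy_request q[] = {
		{ "active",  0 << TOY_PRIORITY_SHIFT, true  },
		{ "pending", 0 << TOY_PRIORITY_SHIFT, false },
	};

	toy_unwind(q, 2);
	for (int i = 0; i < 2; i++)
		printf("%s: prio=%d\n", q[i].name, q[i].prio);
	return 0;
}

Run as written, the previously active request comes back with the boost bit set while its idle peer does not, which is the behaviour the quoted comment describes.
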
On 07/08/2018 08:29, Chris Wilson wrote:
Taken from an idea used for FQ_CODEL, we give the first request of a
new request flow a small priority boost. These flows are likely to
correspond with short, interactive tasks and so be more latency sensitive
than the longer free-running queues. As soon as the client has more than
one request in [...]
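
A similar standalone sketch of the FQ_CODEL-style rule described in the commit message, again with illustrative names (toy_client, toy_queue_priority) rather than the real driver code: only a request that starts a new flow, i.e. one whose client has no request still outstanding, picks up the small boost, while a client that already has work queued keeps its base priority.

/* toy_newclient.c -- illustrative sketch only, not the i915 API */
#include <stdbool.h>
#include <stdio.h>

#define TOY_PRIORITY_NEWCLIENT (1 << 0)
#define TOY_PRIORITY_SHIFT     1

struct toy_request {
	int prio;
	bool completed;
};

struct toy_client {
	struct toy_request *last; /* most recently queued request, if any */
};

static int toy_queue_priority(const struct toy_client *client, int base_prio)
{
	int prio = base_prio << TOY_PRIORITY_SHIFT;

	/*
	 * Only the first request of a new flow gets the bump: either the
	 * client has no previous request, or its previous request has
	 * already completed.  A client with work still queued stays at
	 * its base priority.
	 */
	if (!client->last || client->last->completed)
		prio |= TOY_PRIORITY_NEWCLIENT;

	return prio;
}

int main(void)
{
	struct toy_request outstanding = { .prio = 0, .completed = false };
	struct toy_client fresh = { .last = NULL };
	struct toy_client busy = { .last = &outstanding };

	printf("fresh client: prio=%d\n", toy_queue_priority(&fresh, 0)); /* boosted */
	printf("busy client:  prio=%d\n", toy_queue_priority(&busy, 0));  /* not boosted */
	return 0;
}

Gating the boost on the previous request having completed means a continuously busy client never re-earns it: bulk submitters settle back into plain priority order, while short interactive bursts jump the queue once at the start of each new flow.
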