On Sat, 29 Oct 2011 11:35:13 +0200
Daniel Vetter <dan...@ffwll.ch> wrote:

> On Fri, Oct 28, 2011 at 10:55:27PM -0700, Ben Widawsky wrote:
> > There is already a list of requests outstanding for a given client.
> > Keeping a count is easy, and will give some information necessary to
> > enable a more fair throttling scheme.
> > 
> > For now a client is uniquely identified by its file descriptor, however
> > this may change in the future with new process APIs.
> > 
> > Signed-off-by: Ben Widawsky <b...@bwidawsk.net>
> > ---
> >  drivers/gpu/drm/i915/i915_drv.h |    1 +
> >  drivers/gpu/drm/i915/i915_gem.c |    8 ++++++++
> >  2 files changed, 9 insertions(+), 0 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/i915_drv.h 
> > b/drivers/gpu/drm/i915/i915_drv.h
> > index 06a37f4..a251d22 100644
> > --- a/drivers/gpu/drm/i915/i915_drv.h
> > +++ b/drivers/gpu/drm/i915/i915_drv.h
> > @@ -919,6 +919,7 @@ struct drm_i915_file_private {
> >     struct {
> >             struct spinlock lock;
> >             struct list_head request_list;
> > +           atomic_t outstanding_requests;
> 
> Here's your bikeshed:
> 
> Is the spinlock not sufficient to protect the count? I'm asking because
> atomic_ts are pretty hard to extend to more fancy scheme (e.g. taking
> actual gpu time into account or comparing this with other processes
> outstanding_request to make better decisions).
> -Daniel

In a previous version, I reset the outstanding count to 0 without holding the lock
in i915_gem_release. In the version I sent to the list, it seems the atomic isn't
actually needed, since every update already happens under the spinlock.
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx