Hi, Chris,

On Fri, Mar 10, 2017 at 05:17:17PM +0000, Chris Wilson wrote:
> On Thu, Mar 09, 2017 at 07:27:24PM +0800, changbin...@intel.com wrote:
> > From: Changbin Du <changbin...@intel.com>
> > 
> > GVTg introduced the context status notifier to schedule GVTg
> > workloads. At that time, the notifier was bound to the GVTg
> > context only, so GVTg was not aware of host workloads.
> > 
> > Now we are going to improve GVTg's guest workload scheduling policy
> > and add GuC emulation support for new Gen graphics. Both of these
> > features require notification for all contexts running on the
> > hardware (but do not alter host workloads), so this patch makes the
> > following changes.
> > 
> > The change is simple:
> >   1. Move the context status notifier head from i915_gem_context to
> >      intel_engine_cs, which means there is one notifier head per
> >      engine instead of per context. The execlist driver still calls
> >      the notifier for each context sched-in/out event on the current
> >      engine.
> >   2. On the GVTg side, bind a notifier_block for each physical
> >      engine at GVTg initialization time. GVTg can then hear all
> >      context status events.
> > 
> > In this patch, GVTg does nothing for host context events, but a
> > handler will be added there later. In any case, the notifier
> > callback is a no-op if there is no active vGPU.
> > 
> > Since intel_gvt_init() is called at an early initialization stage
> > and requires the status notifier head to be initialized, it is
> > initialized in intel_engine_setup().
> > 
> > Signed-off-by: Changbin Du <changbin...@intel.com>
> Reviewed-by: Chris Wilson <ch...@chris-wilson.co.uk>
> 
> I presume you will apply this via gvt?
>
Sure, I'll sync with Zhenyu about this, then update the patch with the
'bonus newline' below fixed. Thanks.

> > diff --git a/drivers/gpu/drm/i915/gvt/scheduler.c b/drivers/gpu/drm/i915/gvt/scheduler.c
> > index d3a56c9..64875ec 100644
> > --- a/drivers/gpu/drm/i915/gvt/scheduler.c
> > +++ b/drivers/gpu/drm/i915/gvt/scheduler.c
> > @@ -127,15 +127,14 @@ static int populate_shadow_context(struct intel_vgpu_workload *workload)
> >     return 0;
> >  }
> >  
> > +
> 
> Bonus newline
> 
> >  static int shadow_context_status_change(struct notifier_block *nb,
> >             unsigned long action, void *data)
> >  {
> > -   struct intel_vgpu *vgpu = container_of(nb,
> > -                   struct intel_vgpu, shadow_ctx_notifier_block);
> > -   struct drm_i915_gem_request *req =
> > -           (struct drm_i915_gem_request *)data;
> > -   struct intel_gvt_workload_scheduler *scheduler =
> > -           &vgpu->gvt->scheduler;
> > +   struct drm_i915_gem_request *req = (struct drm_i915_gem_request *)data;
> > +   struct intel_gvt *gvt = container_of(nb, struct intel_gvt,
> > +                           shadow_ctx_notifier_block[req->engine->id]);
> > +   struct intel_gvt_workload_scheduler *scheduler = &gvt->scheduler;
> >     struct intel_vgpu_workload *workload =
> >             scheduler->current_workload[req->engine->id];
> >  
> 
> -- 
> Chris Wilson, Intel Open Source Technology Centre

-- 
Thanks,
Changbin Du
