On 2022-10-14 04:15, Christian König wrote:
> Setting this flag on a scheduler fence prevents pipelining of jobs
> depending on this fence. In other words we always insert a full CPU
> round trip before dependen jobs are pushed to the pipeline.

"dependent"

> 
> Signed-off-by: Christian König <christian.koe...@amd.com>
> CC: sta...@vger.kernel.org # 5.19+
> ---
>  drivers/gpu/drm/scheduler/sched_entity.c | 3 ++-
>  include/drm/gpu_scheduler.h              | 9 +++++++++
>  2 files changed, 11 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
> index 191c56064f19..43d337d8b153 100644
> --- a/drivers/gpu/drm/scheduler/sched_entity.c
> +++ b/drivers/gpu/drm/scheduler/sched_entity.c
> @@ -385,7 +385,8 @@ static bool drm_sched_entity_add_dependency_cb(struct drm_sched_entity *entity)
>       }
>  
>       s_fence = to_drm_sched_fence(fence);
> -     if (s_fence && s_fence->sched == sched) {
> +     if (s_fence && s_fence->sched == sched &&
> +         !test_bit(DRM_SCHED_FENCE_DONT_PIPELINE, &fence->flags)) {
>  
>               /*
>                * Fence is from the same scheduler, only need to wait for
> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
> index 0fca8f38bee4..f01d14b231ed 100644
> --- a/include/drm/gpu_scheduler.h
> +++ b/include/drm/gpu_scheduler.h
> @@ -32,6 +32,15 @@
>  
>  #define MAX_WAIT_SCHED_ENTITY_Q_EMPTY msecs_to_jiffies(1000)
>  
> +/**
> + * DRM_SCHED_FENCE_DONT_PIPELINE - Prefent dependency pipelining

"Prevent"

> + *
> + * Setting this flag on a scheduler fence prevents pipelining of jobs depending
> + * on this fence. In other words we always insert a full CPU round trip before
> + * dependen jobs are pushed to the hw queue.

"dependent"

> + */
> +#define DRM_SCHED_FENCE_DONT_PIPELINE        DMA_FENCE_FLAG_USER_BITS
> +
>  struct drm_gem_object;
>  
>  struct drm_gpu_scheduler;
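
For anyone following along: the flag is meant to be set by the driver on a
job's scheduler fence before any dependent job can pick that fence up as a
dependency. A minimal sketch of such usage follows; only set_bit(), the
dma_fence flags field and DRM_SCHED_FENCE_DONT_PIPELINE itself come from
this patch, while the surrounding function and the drm_sched_job_arm()
timing note are my assumptions, not part of the patch:

#include <linux/bitops.h>
#include <drm/gpu_scheduler.h>

/*
 * Hypothetical driver-side sketch: mark a job's finished fence so that
 * jobs depending on it are not pipelined on the same ring, but instead
 * take the full CPU round trip through the scheduler.
 */
static void example_mark_dont_pipeline(struct drm_sched_job *job)
{
	struct dma_fence *f = &job->s_fence->finished;

	/*
	 * Set the bit after drm_sched_job_arm() has initialized the
	 * fences, and before the fence is published anywhere a
	 * dependent job could pick it up.
	 */
	set_bit(DRM_SCHED_FENCE_DONT_PIPELINE, &f->flags);
}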

With those corrections,

Acked-by: Luben Tuikov <luben.tui...@amd.com>

Regards,
Luben
