Hi Zack-

> On Mar 18, 2022, at 1:43 PM, Zack Rusin <z...@kde.org> wrote:
> 
> From: Zack Rusin <za...@vmware.com>
> 
> vmw_move assumed that buffers to be moved would always be
> vmw_buffer_objects, but after the introduction of the new placement
> for MOB page tables that's no longer the case.
> The resulting invalid read had no practical consequences because the
> memory isn't used unless the object actually is a vmw_buffer_object.
> Fix it by moving the cast to the spot where its result is used.
> 
> Signed-off-by: Zack Rusin <za...@vmware.com>
> Fixes: f6be23264bba ("drm/vmwgfx: Introduce a new placement for MOB page tables")
> Reported-by: Chuck Lever III <chuck.le...@oracle.com>
> Reviewed-by: Martin Krastev <krast...@vmware.com>

After applying this patch, I am not able to reproduce the
KASAN splat I reported earlier this week.

Tested-by: Chuck Lever <chuck.le...@oracle.com>


> ---
> drivers/gpu/drm/vmwgfx/vmwgfx_resource.c | 14 ++++++--------
> 1 file changed, 6 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
> index 708899ba2102..6542f1498651 100644
> --- a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
> +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
> @@ -859,22 +859,21 @@ void vmw_query_move_notify(struct ttm_buffer_object *bo,
>       struct ttm_device *bdev = bo->bdev;
>       struct vmw_private *dev_priv;
> 
> -
>       dev_priv = container_of(bdev, struct vmw_private, bdev);
> 
>       mutex_lock(&dev_priv->binding_mutex);
> 
> -     dx_query_mob = container_of(bo, struct vmw_buffer_object, base);
> -     if (!dx_query_mob || !dx_query_mob->dx_query_ctx) {
> -             mutex_unlock(&dev_priv->binding_mutex);
> -             return;
> -     }
> -
>       /* If BO is being moved from MOB to system memory */
>       if (new_mem->mem_type == TTM_PL_SYSTEM &&
>           old_mem->mem_type == VMW_PL_MOB) {
>               struct vmw_fence_obj *fence;
> 
> +             dx_query_mob = container_of(bo, struct vmw_buffer_object, base);
> +             if (!dx_query_mob || !dx_query_mob->dx_query_ctx) {
> +                     mutex_unlock(&dev_priv->binding_mutex);
> +                     return;
> +             }
> +
>               (void) vmw_query_readback_all(dx_query_mob);
>               mutex_unlock(&dev_priv->binding_mutex);
> 
> @@ -888,7 +887,6 @@ void vmw_query_move_notify(struct ttm_buffer_object *bo,
>               (void) ttm_bo_wait(bo, false, false);
>       } else
>               mutex_unlock(&dev_priv->binding_mutex);
> -
> }
> 
> /**
> -- 
> 2.32.0
> 

--
Chuck Lever
