[PATCH v6 15/16] drm: drm_crc: fix a kernel-doc markup

2021-01-14 Thread Mauro Carvalho Chehab
A function has a different name in its prototype than
in its kernel-doc markup:

../include/drm/drm_crtc.h:1257: warning: expecting prototype for 
drm_crtc_alloc_with_planes(). Prototype was for drmm_crtc_alloc_with_planes() 
instead

Signed-off-by: Mauro Carvalho Chehab 
---
 include/drm/drm_crtc.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/drm/drm_crtc.h b/include/drm/drm_crtc.h
index 540e2e43ec93..13eeba2a750a 100644
--- a/include/drm/drm_crtc.h
+++ b/include/drm/drm_crtc.h
@@ -1232,7 +1232,7 @@ void *__drmm_crtc_alloc_with_planes(struct drm_device 
*dev,
const char *name, ...);
 
 /**
- * drm_crtc_alloc_with_planes - Allocate and initialize a new CRTC object with
+ * drmm_crtc_alloc_with_planes - Allocate and initialize a new CRTC object with
  *specified primary and cursor planes.
  * @dev: DRM device
  * @type: the type of the struct which contains struct &drm_crtc
-- 
2.29.2

___
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel


[PATCH v6 00/16] Fix several bad kernel-doc markups

2021-01-14 Thread Mauro Carvalho Chehab
Hi Jon,

This series has three parts:

1)  10 remaining fixup patches from the series I sent back on Dec 1st:

   parport: fix a kernel-doc markup
   rapidio: fix a kernel-doc markup
   fs: fix kernel-doc markups
   pstore/zone: fix a kernel-doc markup
   firmware: stratix10-svc: fix kernel-doc markups
   connector: fix a kernel-doc markup
   lib/crc7: fix a kernel-doc markup
   memblock: fix kernel-doc markups
   w1: fix a kernel-doc markup
   selftests: kselftest_harness.h: partially fix kernel-doc markups

2) The patch adding the new check to ensure that the kernel-doc
   markup will be used for the right declaration;

3) 5 additional patches, produced against next-20210114 with new
   problems detected after the original series: 

 net: tipc: fix a couple kernel-doc markups
 net: cfg80211: fix a kerneldoc markup
 reset: core: fix a kernel-doc markup
 drm: drm_crc: fix a kernel-doc markup
 platform/surface: aggregator: fix a kernel-doc markup

It probably makes sense to merge at least the first 11 patches
via the doc tree, as they should apply cleanly there, and to have
the last 5 patches merged via each maintainer's tree.

-

Kernel-doc has always been limited by a probably badly documented
rule:

The kernel-doc markup should appear *immediately before* the
function or data structure that it documents.

In other words, if a C file were to contain something like this:

/**
 * foo - function foo
 * @args: foo args
 */
static inline void bar(int args);

/**
 * bar - function bar
 * @args: foo args
 */
static inline void foo(void *args);


The output (in ReST format) will be:

.. c:function:: void bar (int args)

   function foo

**Parameters**

``int args``
  foo args


.. c:function:: void foo (void *args)

   function bar

**Parameters**

``void *args``
  foo args

Which is clearly a wrong result.  Before this changeset,
not even a warning was produced in such cases.

As placing such markups just before the documented
data is a common practice, in most cases this is fine.

However, as code is patched, identifiers may be
renamed, and people may forget to update the kernel-doc
markups to match.

This has been happening for quite a while, as there are
lots of files with kernel-doc problems.

This series addresses those issues and adds a patch at the
end that enforces that the identifier matches the
kernel-doc markup, preventing this problem from
recurring as time goes by.
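The check added by patch 11 can be sketched in a few lines. This is only an illustration (in Python, with made-up regexes); the real implementation lives in the Perl script scripts/kernel-doc:

```python
import re

# Toy re-implementation of the check: for each kernel-doc comment,
# extract the documented identifier and compare it against the name
# in the declaration that follows the comment.
DOC_RE = re.compile(r"/\*\*\s*\n\s*\*\s*(\w+)\s*-", re.M)
PROTO_RE = re.compile(r"\*/\s*\n[^(\n]*?(\w+)\s*\(")

def check_kernel_doc(source: str) -> list[tuple[str, str]]:
    """Return (documented, actual) name pairs that do not match."""
    mismatches = []
    for doc in DOC_RE.finditer(source):
        proto = PROTO_RE.search(source, doc.start())
        if proto and doc.group(1) != proto.group(1):
            mismatches.append((doc.group(1), proto.group(1)))
    return mismatches

sample = """
/**
 * foo - function foo
 * @args: foo args
 */
static inline void bar(int args);
"""
print(check_kernel_doc(sample))  # -> [('foo', 'bar')]
```

Run against the swapped foo/bar example above, the sketch flags the mismatch that kernel-doc previously let through silently.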

This series is based on current upstream tree.

@maintainers: feel free to pick the patches and
apply them directly to your trees, as all patches in
this series are independent of one another.

--

v6:
  - rebased on the top of next-20210114 and added a few extra fixes

v5:
  - The completion.h patch was replaced by another one which drops
an obsolete macro;
  - Some typos got fixed and review tags got added;
  - Dropped patches that were already merged at linux-next.

v4:

  - Patches got rebased and got some acks.

Mauro Carvalho Chehab (16):
  parport: fix a kernel-doc markup
  rapidio: fix a kernel-doc markup
  fs: fix kernel-doc markups
  pstore/zone: fix a kernel-doc markup
  firmware: stratix10-svc: fix kernel-doc markups
  connector: fix a kernel-doc markup
  lib/crc7: fix a kernel-doc markup
  memblock: fix kernel-doc markups
  w1: fix a kernel-doc markup
  selftests: kselftest_harness.h: partially fix kernel-doc markups
  scripts: kernel-doc: validate kernel-doc markup with the actual names
  net: tipc: fix a couple kernel-doc markups
  net: cfg80211: fix a kerneldoc markup
  reset: core: fix a kernel-doc markup
  drm: drm_crc: fix a kernel-doc markup
  platform/surface: aggregator: fix a kernel-doc markup

 drivers/parport/share.c   |  2 +-
 .../surface/aggregator/ssh_request_layer.c|  2 +-
 drivers/rapidio/rio.c |  2 +-
 drivers/reset/core.c  |  4 +-
 fs/dcache.c   | 73 ++-
 fs/inode.c|  4 +-
 fs/pstore/zone.c  |  2 +-
 fs/seq_file.c |  5 +-
 fs/super.c| 12 +--
 include/drm/drm_crtc.h|  2 +-
 include/linux/connector.h |  2 +-
 .../firmware/intel/stratix10-svc-client.h | 10 +--
 include/linux/memblock.h  |  4 +-
 include/linux/parport.h   | 31 
 include/linux/w1.h|  2 +-
 include/net/cfg80211.h|  2 +-
 lib/crc7.c|  2 +-
 net/tipc/link.c   |  2 +-
 net/tipc/node.c   |  2 +-
 scripts/kernel-doc| 62 
 tools/te

[PATCH] drm: Include in drm_cache.c

2021-01-14 Thread Thomas Zimmermann
The function drm_need_swiotlb() needs mem_encrypt_active() from
. The include got lost when the code was recently
refactored.

Signed-off-by: Thomas Zimmermann 
Fixes: 3abc66706385 ("drm: Implement drm_need_swiotlb() in drm_cache.c")
Reported-by: kernel test robot 
Cc: Thomas Zimmermann 
Cc: Christian König 
Cc: Daniel Vetter 
Cc: Maarten Lankhorst 
Cc: Maxime Ripard 
Cc: David Airlie 
Cc: Daniel Vetter 
Cc: dri-devel@lists.freedesktop.org
---
 drivers/gpu/drm/drm_cache.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/drm_cache.c b/drivers/gpu/drm/drm_cache.c
index 49551a7fa22f..79a50ef1250f 100644
--- a/drivers/gpu/drm/drm_cache.c
+++ b/drivers/gpu/drm/drm_cache.c
@@ -30,6 +30,7 @@
 
 #include 
 #include 
+#include 
 #include 
 
 #include 
-- 
2.29.2



Re: [PATCH v6 15/16] drm: drm_crc: fix a kernel-doc markup

2021-01-14 Thread Simon Ser
On Thursday, January 14th, 2021 at 9:04 AM, Mauro Carvalho Chehab 
 wrote:

> A function has a different name in its prototype than
> in its kernel-doc markup:
>
>   ../include/drm/drm_crtc.h:1257: warning: expecting prototype for 
> drm_crtc_alloc_with_planes(). Prototype was for drmm_crtc_alloc_with_planes() 
> instead
>
> Signed-off-by: Mauro Carvalho Chehab 

Acked-by: Simon Ser 


Re: [PATCH v4 4/7] dt-bindings: display: imx: hdmi: Convert binding to YAML

2021-01-14 Thread Philipp Zabel
On Thu, 2021-01-14 at 08:44 +0200, Laurent Pinchart wrote:
> Convert the i.MX6 HDMI TX text binding to YAML.
> 
> Signed-off-by: Laurent Pinchart 
> Reviewed-by: Rob Herring 
> ---
> Changes since v3:
> 
> - Use port instead of port-base
> 
> Changes since v1:
> 
> - Only specify maxItems for clocks
> - Drop reg and interrupts as they're checked in the base schema
> - Rebase on top of OF graph schema, dropped redundant properties
> - Fix identation for enum entries
> - Drop clock-names items, use maxItems only
> ---
>  .../bindings/display/imx/fsl,imx6-hdmi.yaml   | 126 ++
>  .../devicetree/bindings/display/imx/hdmi.txt  |  65 -
>  2 files changed, 126 insertions(+), 65 deletions(-)
>  create mode 100644 
> Documentation/devicetree/bindings/display/imx/fsl,imx6-hdmi.yaml
>  delete mode 100644 Documentation/devicetree/bindings/display/imx/hdmi.txt
> 
> diff --git a/Documentation/devicetree/bindings/display/imx/fsl,imx6-hdmi.yaml 
> b/Documentation/devicetree/bindings/display/imx/fsl,imx6-hdmi.yaml
> new file mode 100644
> index ..af7fe9c4d196
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/display/imx/fsl,imx6-hdmi.yaml
> @@ -0,0 +1,126 @@
> +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
> +%YAML 1.2
> +---
> +$id: http://devicetree.org/schemas/display/imx/fsl,imx6-hdmi.yaml#
> +$schema: http://devicetree.org/meta-schemas/core.yaml#
> +
> +title: Freescale i.MX6 DWC HDMI TX Encoder
> +
> +maintainers:
> +  - Philipp Zabel 

Acked-by: Philipp Zabel 

regards
Philipp


Re: [PATCH] drm: Include in drm_cache.c

2021-01-14 Thread Christian König

Am 14.01.21 um 09:05 schrieb Thomas Zimmermann:

The function drm_need_swiotlb() needs mem_encrypt_active() from
. The include got lost when the code was recently
refactored.

Signed-off-by: Thomas Zimmermann 
Fixes: 3abc66706385 ("drm: Implement drm_need_swiotlb() in drm_cache.c")
Reported-by: kernel test robot 
Cc: Thomas Zimmermann 
Cc: Christian König 
Cc: Daniel Vetter 
Cc: Maarten Lankhorst 
Cc: Maxime Ripard 
Cc: David Airlie 
Cc: Daniel Vetter 
Cc: dri-devel@lists.freedesktop.org


You are typing faster than me :)

Reviewed-by: Christian König 


---
  drivers/gpu/drm/drm_cache.c | 1 +
  1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/drm_cache.c b/drivers/gpu/drm/drm_cache.c
index 49551a7fa22f..79a50ef1250f 100644
--- a/drivers/gpu/drm/drm_cache.c
+++ b/drivers/gpu/drm/drm_cache.c
@@ -30,6 +30,7 @@
  
  #include 

  #include 
+#include 
  #include 
  
  #include 




Re: [PATCH] drm: Include in drm_cache.c

2021-01-14 Thread Thomas Zimmermann

Hi

Am 14.01.21 um 09:23 schrieb Christian König:

Am 14.01.21 um 09:05 schrieb Thomas Zimmermann:

The function drm_need_swiotlb() needs mem_encrypt_active() from
. The include got lost when the code was recently
refactored.

Signed-off-by: Thomas Zimmermann 
Fixes: 3abc66706385 ("drm: Implement drm_need_swiotlb() in drm_cache.c")
Reported-by: kernel test robot 
Cc: Thomas Zimmermann 
Cc: Christian König 
Cc: Daniel Vetter 
Cc: Maarten Lankhorst 
Cc: Maxime Ripard 
Cc: David Airlie 
Cc: Daniel Vetter 
Cc: dri-devel@lists.freedesktop.org


You are typing faster than me :)

Reviewed-by: Christian König 


Pushed to -misc-next. Thanks for the fast reply.

Best regards
Thomas




---
  drivers/gpu/drm/drm_cache.c | 1 +
  1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/drm_cache.c b/drivers/gpu/drm/drm_cache.c
index 49551a7fa22f..79a50ef1250f 100644
--- a/drivers/gpu/drm/drm_cache.c
+++ b/drivers/gpu/drm/drm_cache.c
@@ -30,6 +30,7 @@
  #include 
  #include 
+#include 
  #include 
  #include 


___
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel


--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer





[PATCH] drm/vc4: Initialize vc4_drm_driver with CMA helper defaults

2021-01-14 Thread Thomas Zimmermann
The function vc4_prime_import_sg_table() is an otherwise empty wrapper
around CMA's drm_gem_cma_prime_import_sg_table(). Removing it in favor
of the latter allows initializing vc4_drm_driver with CMA's initializer
macro.

Signed-off-by: Thomas Zimmermann 
---
 drivers/gpu/drm/vc4/vc4_bo.c  | 14 --
 drivers/gpu/drm/vc4/vc4_drv.c |  7 +--
 drivers/gpu/drm/vc4/vc4_drv.h |  3 ---
 3 files changed, 1 insertion(+), 23 deletions(-)

diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
index 28e48ef2d295..fddaeb0b09c1 100644
--- a/drivers/gpu/drm/vc4/vc4_bo.c
+++ b/drivers/gpu/drm/vc4/vc4_bo.c
@@ -738,20 +738,6 @@ static const struct drm_gem_object_funcs 
vc4_gem_object_funcs = {
.vm_ops = &vc4_vm_ops,
 };
 
-struct drm_gem_object *
-vc4_prime_import_sg_table(struct drm_device *dev,
- struct dma_buf_attachment *attach,
- struct sg_table *sgt)
-{
-   struct drm_gem_object *obj;
-
-   obj = drm_gem_cma_prime_import_sg_table(dev, attach, sgt);
-   if (IS_ERR(obj))
-   return obj;
-
-   return obj;
-}
-
 static int vc4_grab_bin_bo(struct vc4_dev *vc4, struct vc4_file *vc4file)
 {
int ret;
diff --git a/drivers/gpu/drm/vc4/vc4_drv.c b/drivers/gpu/drm/vc4/vc4_drv.c
index d9b3bba7f2b7..556ad0f02a0d 100644
--- a/drivers/gpu/drm/vc4/vc4_drv.c
+++ b/drivers/gpu/drm/vc4/vc4_drv.c
@@ -180,12 +180,7 @@ static struct drm_driver vc4_drm_driver = {
 
.gem_create_object = vc4_create_object,
 
-   .prime_handle_to_fd = drm_gem_prime_handle_to_fd,
-   .prime_fd_to_handle = drm_gem_prime_fd_to_handle,
-   .gem_prime_import_sg_table = vc4_prime_import_sg_table,
-   .gem_prime_mmap = drm_gem_prime_mmap,
-
-   .dumb_create = vc4_dumb_create,
+   DRM_GEM_CMA_DRIVER_OPS_WITH_DUMB_CREATE(vc4_dumb_create),
 
.ioctls = vc4_drm_ioctls,
.num_ioctls = ARRAY_SIZE(vc4_drm_ioctls),
diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h
index 0d9c0ecc4769..a7500716cf3f 100644
--- a/drivers/gpu/drm/vc4/vc4_drv.h
+++ b/drivers/gpu/drm/vc4/vc4_drv.h
@@ -801,9 +801,6 @@ int vc4_get_hang_state_ioctl(struct drm_device *dev, void 
*data,
 struct drm_file *file_priv);
 int vc4_label_bo_ioctl(struct drm_device *dev, void *data,
   struct drm_file *file_priv);
-struct drm_gem_object *vc4_prime_import_sg_table(struct drm_device *dev,
-struct dma_buf_attachment 
*attach,
-struct sg_table *sgt);
 int vc4_bo_cache_init(struct drm_device *dev);
 int vc4_bo_inc_usecnt(struct vc4_bo *bo);
 void vc4_bo_dec_usecnt(struct vc4_bo *bo);
-- 
2.29.2



Re: [Intel-gfx] [PATCH] drm-buf: Add debug option

2021-01-14 Thread Daniel Vetter
On Wed, Jan 13, 2021 at 10:08 PM Chris Wilson  wrote:
> Quoting Daniel Vetter (2021-01-13 20:50:11)
> > On Wed, Jan 13, 2021 at 4:43 PM Chris Wilson  
> > wrote:
> > >
> > > Quoting Daniel Vetter (2021-01-13 14:06:04)
> > > > We have too many people abusing the struct page they can get at but
> > > > really shouldn't in importers. Aside from that the backing page might
> > > > simply not exist (for dynamic p2p mappings) looking at it and using it
> > > > e.g. for mmap can also wreak the page handling of the exporter
> > > > completely. Importers really must go through the proper interface like
> > > > dma_buf_mmap for everything.
> > >
> > > If the exporter doesn't want to expose the struct page, why are they
> > > setting it in the exported sg_table?
> >
> > You need to store it somewhere, otherwise the dma-api doesn't work.
> > Essentially this achieves clearing/resetting the struct page pointer,
> > without additional allocations somewhere, or tons of driver changes
> > (since presumably the driver does keep track of the struct page
> > somewhere too).
>
> Only for mapping, and that's before the export -- if there's even a
> struct page to begin with.
>
> > Also as long as we have random importers looking at struct page we
> > can't just remove it, or crashes everywhere. So it has to be some
> > debug option you can disable.
>
> Totally agreed that nothing generic can rely on pages being transported
> via dma-buf, and memfd is there if you do want a suitable transport. The
> one I don't know about is dma-buf heap, do both parties there consent to
> transport pages via the dma-buf? i.e. do they have special cases for
> import/export between heaps?

heaps shouldn't be any different wrt the interface exposed to
importers. Adding John just in case I missed something.

I think the only problem we have is that the first import for ttm
simply pulled out the struct page and ignored the sgtable otherwise,
then that got copy-pasted to other places, and we still have some of
that left. Although it's a lot better. So largely the problem is
importers being a bit silly.

I also think I should change the default y to 'default y if
DMA_API_DEBUG' or something like that, to make sure it's actually
enabled often enough.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


Re: [PATCH 00/31] Rid W=1 warnings from Video

2021-01-14 Thread Jani Nikula
On Wed, 13 Jan 2021, Lee Jones  wrote:
> On Wed, 13 Jan 2021, Sam Ravnborg wrote:
>
>> Hi Lee,
>> 
>> On Wed, Jan 13, 2021 at 02:49:38PM +, Lee Jones wrote:
>> > This set is part of a larger effort attempting to clean-up W=1
>> > kernel builds, which are currently overwhelmingly riddled with
>> > niggly little warnings.
>> > 
>> > This patch-set clears all of the W=1 warnings currently residing
>> > in drivers/video.
>> 
>> I am sorry to say that I expect most of your nice patches to clash
>> with patches that are already present in drm-misc-next.
>> 
>> drivers/video/ are warning free with W=1 in drm-misc-next today.
>> 
>> I do not know why drm-misc-next is not yet pulled into linux-next.
>
> Well that kinda sucks.  What are the chances of that?
>
> Most of my patches fix issues that have been there for years!

We auto-update the for-linux-next and for-linux-next-fixes branches, and
they seem to be up-to-date [1].

How recent are the fixes, maybe because of this: [2]?

BR,
Jani.


[1] https://cgit.freedesktop.org/drm/drm-misc
[2] http://lore.kernel.org/r/20210114113107.62210...@canb.auug.org.au

-- 
Jani Nikula, Intel Open Source Graphics Center


Re: [PATCH 00/31] Rid W=1 warnings from Video

2021-01-14 Thread Daniel Vetter
On Thu, Jan 14, 2021 at 10:04 AM Jani Nikula
 wrote:
>
> On Wed, 13 Jan 2021, Lee Jones  wrote:
> > On Wed, 13 Jan 2021, Sam Ravnborg wrote:
> >
> >> Hi Lee,
> >>
> >> On Wed, Jan 13, 2021 at 02:49:38PM +, Lee Jones wrote:
> >> > This set is part of a larger effort attempting to clean-up W=1
> >> > kernel builds, which are currently overwhelmingly riddled with
> >> > niggly little warnings.
> >> >
> >> > This patch-set clears all of the W=1 warnings currently residing
> >> > in drivers/video.
> >>
> >> I am sorry to say that I expect most of your nice patches to clash
> >> with patches that are already present in drm-misc-next.
> >>
> >> drivers/video/ are warning free with W=1 in drm-misc-next today.
> >>
> >> I do not know why drm-misc-next is not yet pulled into linux-next.
> >
> > Well that kinda sucks.  What are the chances of that?
> >
> > Most of my patches fix issues that have been there for years!

I planned to go through them all today, let's see what's still needed.

> We auto-update the for-linux-next and for-linux-next-fixes branches, and
> they seem to be up-to-date [1].

It only happened last week instead of right after -rc1 due to some
confusion, but it should have been in linux-next for a few days
already.

> How recent are the fixes, maybe because of this: [2]?
>
> BR,
> Jani.
>
>
> [1] https://cgit.freedesktop.org/drm/drm-misc
> [2] http://lore.kernel.org/r/20210114113107.62210...@canb.auug.org.au

A patch for that just got committed, so the window for drm-misc-next
to be excluded should have been very small.
-Daniel

>
> --
> Jani Nikula, Intel Open Source Graphics Center



-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


Re: [PATCH v3 1/3] drm/amd/display: Add module parameter for freesync video mode

2021-01-14 Thread Pekka Paalanen
On Mon, 4 Jan 2021 16:07:58 -0500
Aurabindo Pillai  wrote:

> [Why&How]
> Adds a module parameter to enable experimental freesync video mode modeset
> optimization. Enabling this mode allows the driver to skip a full modeset when
> freesync compatible modes are requested by userspace. This parameter also
> adds some standard modes based on the connected monitor's VRR range.
> 
> Signed-off-by: Aurabindo Pillai 
> Acked-by: Christian König 
> Reviewed-by: Shashank Sharma 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu.h |  1 +
>  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 12 
>  2 files changed, 13 insertions(+)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> index 100a431f0792..12b13a90eddf 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> @@ -177,6 +177,7 @@ extern int amdgpu_gpu_recovery;
>  extern int amdgpu_emu_mode;
>  extern uint amdgpu_smu_memory_pool_size;
>  extern uint amdgpu_dc_feature_mask;
> +extern uint amdgpu_exp_freesync_vid_mode;
>  extern uint amdgpu_dc_debug_mask;
>  extern uint amdgpu_dm_abm_level;
>  extern struct amdgpu_mgpu_info mgpu_info;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c 
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> index b48d7a3c2a11..2badbc8b2294 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> @@ -158,6 +158,7 @@ int amdgpu_mes;
>  int amdgpu_noretry = -1;
>  int amdgpu_force_asic_type = -1;
>  int amdgpu_tmz;
> +uint amdgpu_exp_freesync_vid_mode;
>  int amdgpu_reset_method = -1; /* auto */
>  int amdgpu_num_kcq = -1;
>  
> @@ -786,6 +787,17 @@ module_param_named(abmlevel, amdgpu_dm_abm_level, uint, 
> 0444);
>  MODULE_PARM_DESC(tmz, "Enable TMZ feature (-1 = auto, 0 = off (default), 1 = 
> on)");
>  module_param_named(tmz, amdgpu_tmz, int, 0444);
>  
> +/**
> + * DOC: experimental_freesync_video (uint)
> + * Enables the optimization to adjust front porch timing to achieve seamless 
> mode change experience
> + * when setting a freesync supported mode for which full modeset is not 
> needed.
> + * The default value: 0 (off).
> + */
> +MODULE_PARM_DESC(
> + freesync_video,
> + "Enable freesync modesetting optimization feature (0 = off (default), 1 
> = on)");
> +module_param_named(freesync_video, amdgpu_exp_freesync_vid_mode, uint, 0444);
> +
>  /**
>   * DOC: reset_method (int)
>   * GPU reset method (-1 = auto (default), 0 = legacy, 1 = mode0, 2 = mode1, 
> 3 = mode2, 4 = baco)

Hi,

please document somewhere that ends up in git history (commit message,
code comments, description of the parameter would be the best but maybe
there isn't enough space?) what Christian König explained in

https://lists.freedesktop.org/archives/dri-devel/2020-December/291254.html

that this is a stop-gap feature intended to be removed as soon as
possible (when a better solution comes up, which could be years).

So far I have not seen a single mention of this intention in your patch
submissions, and I think it is very important to make known.

I also did not see an explanation of why this instead of manufacturing
these video modes in userspace (an idea mentioned by Christian in the
referenced email). I think that too should be part of a commit message.


Thanks,
pq




Re: [Intel-gfx] [PATCH] drm-buf: Add debug option

2021-01-14 Thread Chris Wilson
Quoting Daniel Vetter (2021-01-14 09:02:57)
> On Wed, Jan 13, 2021 at 10:08 PM Chris Wilson  
> wrote:
> > Quoting Daniel Vetter (2021-01-13 20:50:11)
> > > On Wed, Jan 13, 2021 at 4:43 PM Chris Wilson  
> > > wrote:
> > > >
> > > > Quoting Daniel Vetter (2021-01-13 14:06:04)
> > > > > We have too many people abusing the struct page they can get at but
> > > > > really shouldn't in importers. Aside from that the backing page might
> > > > > simply not exist (for dynamic p2p mappings) looking at it and using it
> > > > > e.g. for mmap can also wreak the page handling of the exporter
> > > > > completely. Importers really must go through the proper interface like
> > > > > dma_buf_mmap for everything.
> > > >
> > > > If the exporter doesn't want to expose the struct page, why are they
> > > > setting it in the exported sg_table?
> > >
> > > You need to store it somewhere, otherwise the dma-api doesn't work.
> > > Essentially this achieves clearing/resetting the struct page pointer,
> > > without additional allocations somewhere, or tons of driver changes
> > > (since presumably the driver does keep track of the struct page
> > > somewhere too).
> >
> > Only for mapping, and that's before the export -- if there's even a
> > struct page to begin with.
> >
> > > Also as long as we have random importers looking at struct page we
> > > can't just remove it, or crashes everywhere. So it has to be some
> > > debug option you can disable.
> >
> > Totally agreed that nothing generic can rely on pages being transported
> > via dma-buf, and memfd is there if you do want a suitable transport. The
> > one I don't know about is dma-buf heap, do both parties there consent to
> > transport pages via the dma-buf? i.e. do they have special cases for
> > import/export between heaps?
> 
> heaps shouldn't be any different wrt the interface exposed to
> importers. Adding John just in case I missed something.
> 
> I think the only problem we have is that the first import for ttm
> simply pulled out the struct page and ignored the sgtable otherwise,
> then that got copy-pasted to other places, and we still have some of
> that left. Although it's a lot better. So largely the problem is
> importers being a bit silly.
> 
> I also think I should change the default y to 'default y if
> DMA_API_DEBUG' or something like that, to make sure it's actually
> enabled often enough.

It felt overly draconian, but other than the open question of dma-buf
heaps (which I realise that we need some CI coverage for), I can't
think of a good reason to argue for hiding a struct page transport
within dma-buf.

The only other problem I see with the implementation is that there's
nothing that says that each dmabuf->ops->map_dma_buf() returns a new
sg_table, so we may end up undoing the xor. Or should each dma-buf
return a fresh dma-mapping for iommu isolation?
-Chris


Re: [PATCH 00/35] Add HMM-based SVM memory manager to KFD

2021-01-14 Thread Daniel Vetter
On Thu, Jan 14, 2021 at 4:27 AM Jerome Glisse  wrote:
>
> On Wed, Jan 13, 2021 at 09:31:11PM +0100, Daniel Vetter wrote:
> > On Wed, Jan 13, 2021 at 5:56 PM Jerome Glisse  wrote:
> > > On Fri, Jan 08, 2021 at 03:40:07PM +0100, Daniel Vetter wrote:
> > > > On Thu, Jan 07, 2021 at 11:25:41AM -0500, Felix Kuehling wrote:
> > > > > Am 2021-01-07 um 4:23 a.m. schrieb Daniel Vetter:
> > > > > > On Wed, Jan 06, 2021 at 10:00:52PM -0500, Felix Kuehling wrote:
> > > > > >> This is the first version of our HMM based shared virtual memory 
> > > > > >> manager
> > > > > >> for KFD. There are still a number of known issues that we're 
> > > > > >> working through
> > > > > >> (see below). This will likely lead to some pretty significant 
> > > > > >> changes in
> > > > > >> MMU notifier handling and locking on the migration code paths. So 
> > > > > >> don't
> > > > > >> get hung up on those details yet.
> > > > > >>
> > > > > >> But I think this is a good time to start getting feedback. We're 
> > > > > >> pretty
> > > > > >> confident about the ioctl API, which is both simple and extensible 
> > > > > >> for the
> > > > > >> future. (see patches 4,16) The user mode side of the API can be 
> > > > > >> found here:
> > > > > >> https://github.com/RadeonOpenCompute/ROCT-Thunk-Interface/blob/fxkamd/hmm-wip/src/svm.c
> > > > > >>
> > > > > >> I'd also like another pair of eyes on how we're interfacing with 
> > > > > >> the GPU VM
> > > > > >> code in amdgpu_vm.c (see patches 12,13), retry page fault handling 
> > > > > >> (24,25),
> > > > > >> and some retry IRQ handling changes (32).
> > > > > >>
> > > > > >>
> > > > > >> Known issues:
> > > > > >> * won't work with IOMMU enabled, we need to dma_map all pages 
> > > > > >> properly
> > > > > >> * still working on some race conditions and random bugs
> > > > > >> * performance is not great yet
> > > > > > Still catching up, but I think there's another one for your list:
> > > > > >
> > > > > >  * hmm gpu context preempt vs page fault handling. I've had a short
> > > > > >discussion about this one with Christian before the holidays, 
> > > > > > and also
> > > > > >some private chats with Jerome. It's nasty since no easy fix, 
> > > > > > much less
> > > > > >a good idea what's the best approach here.
> > > > >
> > > > > Do you have a pointer to that discussion or any more details?
> > > >
> > > > Essentially if you're handling an hmm page fault from the gpu, you can
> > > > deadlock by calling dma_fence_wait on a (chain of, possibly) other 
> > > > command
> > > > submissions or compute contexts with dma_fence_wait. Which deadlocks if
> > > > you can't preempt while you have that page fault pending. Two solutions:
> > > >
> > > > - your hw can (at least for compute ctx) preempt even when a page fault 
> > > > is
> > > >   pending
> > > >
> > > > - lots of screaming in trying to come up with an alternate solution. 
> > > > They
> > > >   all suck.
> > > >
> > > > Note that the dma_fence_wait is hard requirement, because we need that 
> > > > for
> > > > mmu notifiers and shrinkers, disallowing that would disable dynamic 
> > > > memory
> > > > management. Which is the current "ttm is self-limited to 50% of system
> > > > memory" limitation Christian is trying to lift. So that's really not
> > > > a restriction we can lift, at least not in upstream where we need to 
> > > > also
> > > > support old style hardware which doesn't have page fault support and
> > > > really has no other option to handle memory management than
> > > > dma_fence_wait.
> > > >
> > > > Thread was here:
> > > >
> > > > https://lore.kernel.org/dri-devel/CAKMK7uGgoeF8LmFBwWh5mW1k4xWjuUh3hdSFpVH1NBM7K0=e...@mail.gmail.com/
> > > >
> > > > There's a few ways to resolve this (without having preempt-capable
> > > > hardware), but they're all supremely nasty.
> > > > -Daniel
> > > >
> > >
> > > I had a new idea; I wanted to think more about it but have not yet,
> > > so anyway here it is. Add a new callback to dma fence which asks the
> > > question: can it deadlock? Any time a GPU driver has a pending page
> > > fault (ie something calling into the mm) it answers yes, otherwise
> > > no. The GPU shrinker would ask the question before waiting on any
> > > dma-fence and back off if it gets yes. The shrinker can still try many
> > > dma buf objects for which it does not get a yes on the associated fence.
> >
> > Having that answer on a given fence isn't enough, you still need to
> > forward that information through the entire dependency graph, across
> > drivers. That's the hard part, since that dependency graph is very
> > implicit in the code, and we'd need to first roll it out across all
> > drivers.
>
> Here I am saying: do not wait on fences for which you are not sure.
> Only wait on fences for which you are 100% certain you cannot
> deadlock. So if you can never be sure about a dma fence then never
> wait on dma-fences in the shrinker. However most drivers should have
> enough information in their shrinker to 

[PATCH] drm/amd/display: Simplify bool comparison

2021-01-14 Thread Yang Li
Fix the following coccicheck warning:
./drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c:1228:9-20:
WARNING: Comparison to bool

Reported-by: Abaci Robot 
Signed-off-by: Yang Li 
---
 drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c 
b/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
index 319dec5..3827501 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
@@ -1219,13 +1219,13 @@ static bool CalculatePrefetchSchedule(
dml_print("DML: prefetch_bw_equ: %f\n", 
prefetch_bw_equ);
 
if (prefetch_bw_equ > 0) {
-   if (GPUVMEnable == true) {
+   if (GPUVMEnable) {
Tvm_equ = dml_max3(*Tno_bw + 
PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / prefetch_bw_equ, 
Tvm_trips, LineTime / 4);
} else {
Tvm_equ = LineTime / 4;
}
 
-   if ((GPUVMEnable == true || myPipe->DCCEnable 
== true)) {
+   if ((GPUVMEnable || myPipe->DCCEnable)) {
Tr0_equ = dml_max4(
(MetaRowByte + 
PixelPTEBytesPerRow * HostVMInefficiencyFactor) / prefetch_bw_equ,
Tr0_trips,
-- 
1.8.3.1

___
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel


[PATCH] drm/amd/display: Simplify bool comparison

2021-01-14 Thread Yang Li
Fix the following coccicheck warning:
./drivers/gpu/drm/amd/display/dc/dml/dcn21/display_mode_vba_21.c:5121:14-38:
WARNING: Comparison to bool

Reported-by: Abaci Robot 
Signed-off-by: Yang Li 
---
 .../amd/display/dc/dml/dcn21/display_mode_vba_21.c | 44 +++---
 1 file changed, 22 insertions(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_mode_vba_21.c 
b/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_mode_vba_21.c
index 86ff24d..0bcec11 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_mode_vba_21.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_mode_vba_21.c
@@ -5121,48 +5121,48 @@ void dml21_ModeSupportAndSystemConfigurationFull(struct 
display_mode_lib *mode_l
for (j = 0; j < 2; j++) {
enum dm_validation_status status = DML_VALIDATION_OK;
 
-   if (mode_lib->vba.ScaleRatioAndTapsSupport != true) {
+   if (!mode_lib->vba.ScaleRatioAndTapsSupport) {
status = DML_FAIL_SCALE_RATIO_TAP;
-   } else if 
(mode_lib->vba.SourceFormatPixelAndScanSupport != true) {
+   } else if 
(!mode_lib->vba.SourceFormatPixelAndScanSupport) {
status = DML_FAIL_SOURCE_PIXEL_FORMAT;
-   } else if (locals->ViewportSizeSupport[i][0] != true) {
+   } else if (!locals->ViewportSizeSupport[i][0]) {
status = DML_FAIL_VIEWPORT_SIZE;
-   } else if (locals->DIOSupport[i] != true) {
+   } else if (!locals->DIOSupport[i]) {
status = DML_FAIL_DIO_SUPPORT;
-   } else if (locals->NotEnoughDSCUnits[i] != false) {
+   } else if (locals->NotEnoughDSCUnits[i]) {
status = DML_FAIL_NOT_ENOUGH_DSC;
-   } else if (locals->DSCCLKRequiredMoreThanSupported[i] 
!= false) {
+   } else if (locals->DSCCLKRequiredMoreThanSupported[i]) {
status = DML_FAIL_DSC_CLK_REQUIRED;
-   } else if (locals->ROBSupport[i][0] != true) {
+   } else if (!locals->ROBSupport[i][0]) {
status = DML_FAIL_REORDERING_BUFFER;
-   } else if (locals->DISPCLK_DPPCLK_Support[i][j] != 
true) {
+   } else if (!locals->DISPCLK_DPPCLK_Support[i][j]) {
status = DML_FAIL_DISPCLK_DPPCLK;
-   } else if (locals->TotalAvailablePipesSupport[i][j] != 
true) {
+   } else if (!locals->TotalAvailablePipesSupport[i][j]) {
status = DML_FAIL_TOTAL_AVAILABLE_PIPES;
-   } else if (mode_lib->vba.NumberOfOTGSupport != true) {
+   } else if (!mode_lib->vba.NumberOfOTGSupport) {
status = DML_FAIL_NUM_OTG;
-   } else if (mode_lib->vba.WritebackModeSupport != true) {
+   } else if (!mode_lib->vba.WritebackModeSupport) {
status = DML_FAIL_WRITEBACK_MODE;
-   } else if (mode_lib->vba.WritebackLatencySupport != 
true) {
+   } else if (!mode_lib->vba.WritebackLatencySupport) {
status = DML_FAIL_WRITEBACK_LATENCY;
-   } else if 
(mode_lib->vba.WritebackScaleRatioAndTapsSupport != true) {
+   } else if 
(!mode_lib->vba.WritebackScaleRatioAndTapsSupport) {
status = DML_FAIL_WRITEBACK_SCALE_RATIO_TAP;
-   } else if (mode_lib->vba.CursorSupport != true) {
+   } else if (!mode_lib->vba.CursorSupport) {
status = DML_FAIL_CURSOR_SUPPORT;
-   } else if (mode_lib->vba.PitchSupport != true) {
+   } else if (!mode_lib->vba.PitchSupport) {
status = DML_FAIL_PITCH_SUPPORT;
-   } else if 
(locals->TotalVerticalActiveBandwidthSupport[i][0] != true) {
+   } else if 
(!locals->TotalVerticalActiveBandwidthSupport[i][0]) {
status = DML_FAIL_TOTAL_V_ACTIVE_BW;
-   } else if (locals->PTEBufferSizeNotExceeded[i][j] != 
true) {
+   } else if (!locals->PTEBufferSizeNotExceeded[i][j]) {
status = DML_FAIL_PTE_BUFFER_SIZE;
-   } else if (mode_lib->vba.NonsupportedDSCInputBPC != 
false) {
+   } else if (mode_lib->vba.NonsupportedDSCInputBPC) {
status = DML_FAIL_DSC_INPUT_BPC;
-   } else if ((mode_lib->vba.HostVMEnable != false
-   

[PATCH] drm/ttm: use compound pages for THP

2021-01-14 Thread David Stevens
Compound page metadata is necessary for page reference counting to work
properly on pages besides the head page. Without it, put_page
corresponding to the last outstanding get_page call on a tail page will
end up freeing that page, even if the bo still references the page.

Signed-off-by: David Stevens 
---
 drivers/gpu/drm/ttm/ttm_pool.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
index 7b2f60616750..09239b93dc2c 100644
--- a/drivers/gpu/drm/ttm/ttm_pool.c
+++ b/drivers/gpu/drm/ttm/ttm_pool.c
@@ -83,7 +83,6 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool 
*pool, gfp_t gfp_flags,
gfp_flags |= GFP_TRANSHUGE_LIGHT | __GFP_NORETRY |
__GFP_KSWAPD_RECLAIM;
gfp_flags &= ~__GFP_MOVABLE;
-   gfp_flags &= ~__GFP_COMP;
}
 
if (!pool->use_dma_alloc) {
-- 
2.30.0.284.gd98b1dd5eaa7-goog



[PATCH] drm/hisilicon: Fix build error

2021-01-14 Thread Tian Tao
Fix the following errors:
drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c:
In function ‘hibmc_hw_map’:
drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c:213:25:
error: ‘dev’ undeclared (first use in this function);

Fixes: 4d4dad21cc7bee "drm/hibmc: Remove references to struct drm_device.pdev"
Signed-off-by: Tian Tao 
---
 drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c 
b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c
index 0a2edc7..abd6250 100644
--- a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c
+++ b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c
@@ -210,7 +210,7 @@ static void hibmc_hw_config(struct hibmc_drm_private *priv)
 
 static int hibmc_hw_map(struct hibmc_drm_private *priv)
 {
-   struct pci_dev *pdev = dev->pdev;
+   struct drm_device *dev = &priv->dev;
struct pci_dev *pdev = to_pci_dev(dev->dev);
resource_size_t addr, size, ioaddr, iosize;
 
-- 
2.7.4



Re: [PATCH RFC] drm/vc4: hdmi: Avoid ASoC error messages on startup

2021-01-14 Thread Maxime Ripard
Hi Stefan,

On Sat, Jan 09, 2021 at 12:41:57PM +0100, Stefan Wahren wrote:
> Hi Maxime,
> 
> Am 29.12.20 um 16:36 schrieb Stefan Wahren:
> > During startup of Raspberry Pi 4 there seems to be a race between
> > VC4 probing and Pulseaudio trying to open its PCM device:
> >
> > ASoC: error at snd_soc_dai_startup on fef05700.hdmi: -19
> >
> > Avoid these errors by returning EPROBE_DEFER in this situation.
> >
> > Signed-off-by: Stefan Wahren 
> 
> should i resend without RFC?

Yeah, it looks reasonable enough to me :)

I'd like to get Mark's opinion before merging though

Maxime




Re: [PATCH V3] dt-bindings: gpu: Convert v3d to json-schema

2021-01-14 Thread Maxime Ripard
Hi,

On Sat, Jan 09, 2021 at 11:50:32AM +0100, Stefan Wahren wrote:
> This converts the v3d bindings to yaml format.
> 
> Signed-off-by: Stefan Wahren 
> ---
> 
> Changes in V3:
> - drop redundant maxItems in case we already have items defined
> - fix order of reg-names enum
> - tag required items in description
> - add reg-names to required properties
> - drop clock-names
> 
>  .../devicetree/bindings/gpu/brcm,bcm-v3d.txt   | 33 --
>  .../devicetree/bindings/gpu/brcm,bcm-v3d.yaml  | 72 
> ++
>  2 files changed, 72 insertions(+), 33 deletions(-)
>  delete mode 100644 Documentation/devicetree/bindings/gpu/brcm,bcm-v3d.txt
>  create mode 100644 Documentation/devicetree/bindings/gpu/brcm,bcm-v3d.yaml
> 
> diff --git a/Documentation/devicetree/bindings/gpu/brcm,bcm-v3d.txt 
> b/Documentation/devicetree/bindings/gpu/brcm,bcm-v3d.txt
> deleted file mode 100644
> index b2df82b..000
> --- a/Documentation/devicetree/bindings/gpu/brcm,bcm-v3d.txt
> +++ /dev/null
> @@ -1,33 +0,0 @@
> -Broadcom V3D GPU
> -
> -Only the Broadcom V3D 3.x and newer GPUs are covered by this binding.
> -For V3D 2.x, see brcm,bcm-vc4.txt.
> -
> -Required properties:
> -- compatible:Should be "brcm,7268-v3d" or "brcm,7278-v3d"
> -- reg:   Physical base addresses and lengths of the register 
> areas
> -- reg-names: Names for the register areas.  The "hub" and "core0"
> -   register areas are always required.  The "gca" register area
> -   is required if the GCA cache controller is present.  The
> -   "bridge" register area is required if an external reset
> -   controller is not present.
> -- interrupts:The interrupt numbers.  The first interrupt is for the 
> hub,
> -   while the following interrupts are separate interrupt lines
> -   for the cores (if they don't share the hub's interrupt).
> -   See bindings/interrupt-controller/interrupts.txt
> -
> -Optional properties:
> -- clocks:The core clock the unit runs on
> -- resets:The reset line for v3d, if not using a mapping of the bridge
> -   See bindings/reset/reset.txt
> -
> -v3d {
> - compatible = "brcm,7268-v3d";
> - reg = <0xf1204000 0x100>,
> -   <0xf120 0x4000>,
> -   <0xf1208000 0x4000>,
> -   <0xf1204100 0x100>;
> - reg-names = "bridge", "hub", "core0", "gca";
> - interrupts = <0 78 4>,
> -  <0 77 4>;
> -};
> diff --git a/Documentation/devicetree/bindings/gpu/brcm,bcm-v3d.yaml 
> b/Documentation/devicetree/bindings/gpu/brcm,bcm-v3d.yaml
> new file mode 100644
> index 000..3b543d4
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/gpu/brcm,bcm-v3d.yaml
> @@ -0,0 +1,72 @@
> +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
> +%YAML 1.2
> +---
> +$id: http://devicetree.org/schemas/gpu/brcm,bcm-v3d.yaml#
> +$schema: http://devicetree.org/meta-schemas/core.yaml#
> +
> +title: Broadcom V3D GPU Bindings
> +
> +maintainers:
> +  - Eric Anholt 
> +  - Nicolas Saenz Julienne 
> +
> +properties:
> +  $nodename:
> +pattern: '^gpu@[a-f0-9]+$'
> +
> +  compatible:
> +enum:
> +  - brcm,7268-v3d
> +  - brcm,7278-v3d
> +
> +  reg:
> +items:
> +  - description: hub register (required)
> +  - description: core0 register (required)
> +  - description: GCA cache controller register (if GCA controller 
> present)
> +  - description: bridge register (if no external reset controller)
> +minItems: 2

maxItems will be set to 2 in this case, while it would be 4 I guess?

Looks fine otherwise

Maxime




Re: [PATCH 14/31] video: fbdev: aty: mach64_ct: Remove some set but unused variables

2021-01-14 Thread Daniël Mantione

Hi,

If I remember correctly, the removed lines have to do with the VGA/accelerator 
mode of the chip. The current driver always runs the chip in accelerator 
mode. Suppose you wanted to support high-resolution hardware text 
modes with the driver (fbdev bpp=0); then you would need to switch the 
chip into VGA mode, and then the removed lines become relevant.


I did some experiments with this when I was working on the driver, but 
because the documentation was silent about the behaviour of extended 
CRTC registers in VGA mode, I failed to make hardware text modes work 
properly.


The #if 0 was there so the code was already in place in case I or someone else 
wanted to pick it up again.


Best regards,

Daniël Mantione

Op Wed, 13 Jan 2021, schreef Lee Jones:


Fixes the following W=1 kernel build warning(s):

drivers/video/fbdev/aty/mach64_ct.c: In function ‘aty_init_pll_ct’:
drivers/video/fbdev/aty/mach64_ct.c:405:46: warning: variable ‘vga_dsp_on_off’ 
set but not used [-Wunused-but-set-variable]
drivers/video/fbdev/aty/mach64_ct.c:405:30: warning: variable ‘vga_dsp_config’ 
set but not used [-Wunused-but-set-variable]
drivers/video/fbdev/aty/mach64_ct.c:405:18: warning: variable ‘dsp_on_off’ set 
but not used [-Wunused-but-set-variable]

Cc: daniel.manti...@freepascal.org
Cc: dri-devel@lists.freedesktop.org
Cc: linux-fb...@vger.kernel.org
Signed-off-by: Lee Jones 
---
drivers/video/fbdev/aty/mach64_ct.c | 19 ++-
1 file changed, 2 insertions(+), 17 deletions(-)

diff --git a/drivers/video/fbdev/aty/mach64_ct.c 
b/drivers/video/fbdev/aty/mach64_ct.c
index f87cc81f4fa2b..23eececa1e9d7 100644
--- a/drivers/video/fbdev/aty/mach64_ct.c
+++ b/drivers/video/fbdev/aty/mach64_ct.c
@@ -402,7 +402,7 @@ static int aty_init_pll_ct(const struct fb_info *info, 
union aty_pll *pll)
struct atyfb_par *par = (struct atyfb_par *) info->par;
u8 mpost_div, xpost_div, sclk_post_div_real;
u32 q, memcntl, trp;
-   u32 dsp_config, dsp_on_off, vga_dsp_config, vga_dsp_on_off;
+   u32 dsp_config;
#ifdef DEBUG
int pllmclk, pllsclk;
#endif
@@ -488,25 +488,10 @@ static int aty_init_pll_ct(const struct fb_info *info, 
union aty_pll *pll)

/* Allow BIOS to override */
dsp_config = aty_ld_le32(DSP_CONFIG, par);
-   dsp_on_off = aty_ld_le32(DSP_ON_OFF, par);
-   vga_dsp_config = aty_ld_le32(VGA_DSP_CONFIG, par);
-   vga_dsp_on_off = aty_ld_le32(VGA_DSP_ON_OFF, par);

if (dsp_config)
pll->ct.dsp_loop_latency = (dsp_config & DSP_LOOP_LATENCY) >> 
16;
-#if 0
-   FIXME: is it relevant for us?
-   if ((!dsp_on_off && !M64_HAS(RESET_3D)) ||
-   ((dsp_on_off == vga_dsp_on_off) &&
-   (!dsp_config || !((dsp_config ^ vga_dsp_config) & 
DSP_XCLKS_PER_QW {
-   vga_dsp_on_off &= VGA_DSP_OFF;
-   vga_dsp_config &= VGA_DSP_XCLKS_PER_QW;
-   if (ATIDivide(vga_dsp_on_off, vga_dsp_config, 5, 1) > 24)
-   pll->ct.fifo_size = 32;
-   else
-   pll->ct.fifo_size = 24;
-   }
-#endif
+
/* Exit if the user does not want us to tamper with the clock
rates of her chip. */
if (par->mclk_per == 0) {
--
2.25.1


Re: [PATCH] drm/panfrost: Use delayed timer as default in devfreq profile

2021-01-14 Thread Lukasz Luba




On 1/13/21 3:35 PM, Steven Price wrote:

On 05/01/2021 16:41, Lukasz Luba wrote:

Devfreq framework supports 2 modes for monitoring devices.
Use delayed timer as default instead of deferrable timer
in order to monitor the GPU status regardless of CPU idle.

Signed-off-by: Lukasz Luba 


Looks reasonable to me.

Reviewed-by: Steven Price 

I'll apply to drm-misc-next.


Thank you Steven!

Regards,
Lukasz


Re: [PATCH v5 04/21] gpu: host1x: Remove cancelled waiters immediately

2021-01-14 Thread Dmitry Osipenko
13.01.2021 01:20, Mikko Perttunen wrote:
> On 1/13/21 12:07 AM, Dmitry Osipenko wrote:
>> 11.01.2021 16:00, Mikko Perttunen wrote:
>>> -void host1x_intr_put_ref(struct host1x *host, unsigned int id, void
>>> *ref)
>>> +void host1x_intr_put_ref(struct host1x *host, unsigned int id, void
>>> *ref,
>>> + bool flush)
>>>   {
>>>   struct host1x_waitlist *waiter = ref;
>>>   struct host1x_syncpt *syncpt;
>>>   -    while (atomic_cmpxchg(&waiter->state, WLS_PENDING,
>>> WLS_CANCELLED) ==
>>> -   WLS_REMOVED)
>>> -    schedule();
>>> +    atomic_cmpxchg(&waiter->state, WLS_PENDING, WLS_CANCELLED);
>>>     syncpt = host->syncpt + id;
>>> -    (void)process_wait_list(host, syncpt,
>>> -    host1x_syncpt_load(host->syncpt + id));
>>> +
>>> +    spin_lock(&syncpt->intr.lock);
>>> +    if (atomic_cmpxchg(&waiter->state, WLS_CANCELLED, WLS_HANDLED) ==
>>> +    WLS_CANCELLED) {
>>> +    list_del(&waiter->list);
>>> +    kref_put(&waiter->refcount, waiter_release);
>>> +    }
>>> +    spin_unlock(&syncpt->intr.lock);
>>> +
>>> +    if (flush) {
>>> +    /* Wait until any concurrently executing handler has
>>> finished. */
>>> +    while (atomic_read(&waiter->state) != WLS_HANDLED)
>>> +    cpu_relax();
>>> +    }
>>
>> A busy-loop shouldn't be used in kernel unless there is a very good
>> reason. The wait_event() should be used instead.
>>
>> But please don't hurry to update this patch, we may need or want to
>> retire the host1x-waiter and then these all waiter-related patches won't
>> be needed.
>>
> 
> Yes, we should improve the intr code to remove all this complexity. But
> let's merge this first to get a functional baseline and do larger design
> changes in follow-up patches.
> 
> It is cumbersome for me to develop further series (of which I have
> several under work and planning) with this baseline series not being
> merged. The uncertainty on the approval of the UAPI design also makes it
> hard to know whether it makes sense for me to work on top of this code
> or not, so I'd like to focus on what's needed to get this merged instead
> of further redesign of the driver at this time.

Is this patch (and some others) necessary for the new UAPI? If not,
could we please narrow down the patches to the minimum that is needed
for trying out the new UAPI?


Re: [PATCH 0/3] drm/sun4i: de2/de3: CSC improvements

2021-01-14 Thread Maxime Ripard
On Sun, Jan 10, 2021 at 09:19:44PM +0100, Jernej Skrabec wrote:
> This short series reworks CSC handling to remove duplicated constants
> (patch 1 and 2) and adds BT2020 encoding to DE3 (patch 3).
> 
> Please take a look.

Applied, thanks
Maxime




[PATCH] drm/amd/display: Simplify bool comparison

2021-01-14 Thread Yang Li
Fix the following coccicheck warning:
./drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20.c:3141:30-39:
WARNING: Comparison to bool

Reported-by: Abaci Robot 
Signed-off-by: Yang Li 
---
 drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20.c 
b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20.c
index 45f0289..f33e3de 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20.c
@@ -3138,7 +3138,7 @@ static void CalculateFlipSchedule(
4.0 * (TimeForFetchingMetaPTEImmediateFlip / 
LineTime + 0.125),
1) / 4.0;
 
-   if ((GPUVMEnable == true || DCCEnable == true)) {
+   if ((GPUVMEnable || DCCEnable)) {
mode_lib->vba.ImmediateFlipBW[0] = 
BandwidthAvailableForImmediateFlip
* ImmediateFlipBytes / 
TotImmediateFlipBytes;
TimeForFetchingRowInVBlankImmediateFlip = dml_max(
-- 
1.8.3.1



[PATCH v3 6/7] drm/msm/a5xx: Disable flat shading optimization

2021-01-14 Thread AngeloGioacchino Del Regno
From: Konrad Dybcio 

Port over the command from downstream to prevent undefined
behaviour.

Signed-off-by: Konrad Dybcio 
Signed-off-by: AngeloGioacchino Del Regno 

---
 drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c 
b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
index 24ab51bb5a01..23fc851756de 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
@@ -791,6 +791,9 @@ static int a5xx_hw_init(struct msm_gpu *gpu)
adreno_is_a540(adreno_gpu))
gpu_write(gpu, REG_A5XX_UCHE_DBG_ECO_CNTL_2, regbit);
 
+   /* Disable All flat shading optimization (ALLFLATOPTDIS) */
+   gpu_rmw(gpu, REG_A5XX_VPC_DBG_ECO_CNTL, 0, (1 << 10));
+
/* Protect registers from the CP */
gpu_write(gpu, REG_A5XX_CP_PROTECT_CNTL, 0x0007);
 
-- 
2.29.2



[PATCH v3 3/7] drm/msm/a5xx: Add support for Adreno 508, 509, 512 GPUs

2021-01-14 Thread AngeloGioacchino Del Regno
The Adreno 508/509/512 GPUs are stripped-down versions of the Adreno
5xx found in mid-range SoCs such as SDM630, SDM636, SDM660 and
SDA variants; these SoCs are usually provided with ZAP firmware,
but they have no available GPMU.

Signed-off-by: AngeloGioacchino Del Regno 

Tested-by: Martin Botka 
Reviewed-by: Jordan Crouse 
---
 drivers/gpu/drm/msm/adreno/a5xx_gpu.c  | 172 ++---
 drivers/gpu/drm/msm/adreno/a5xx_power.c|   4 +-
 drivers/gpu/drm/msm/adreno/adreno_device.c |  52 +++
 drivers/gpu/drm/msm/adreno/adreno_gpu.h|  15 ++
 4 files changed, 223 insertions(+), 20 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c 
b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
index 8c96fc0fc1b7..04ffd84c1190 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
@@ -222,7 +222,7 @@ static void a5xx_submit(struct msm_gpu *gpu, struct 
msm_gem_submit *submit)
a5xx_preempt_trigger(gpu);
 }
 
-static const struct {
+static const struct adreno_five_hwcg_regs {
u32 offset;
u32 value;
 } a5xx_hwcg[] = {
@@ -318,16 +318,124 @@ static const struct {
{REG_A5XX_RBBM_CLOCK_DELAY_TSE_RAS_RBBM, 0x4000},
{REG_A5XX_RBBM_CLOCK_DELAY_GPC, 0x0200},
{REG_A5XX_RBBM_CLOCK_DELAY_VFD, 0x}
+}, a50x_hwcg[] = {
+   {REG_A5XX_RBBM_CLOCK_CNTL_SP0, 0x0222},
+   {REG_A5XX_RBBM_CLOCK_CNTL2_SP0, 0x0220},
+   {REG_A5XX_RBBM_CLOCK_HYST_SP0, 0xF3CF},
+   {REG_A5XX_RBBM_CLOCK_DELAY_SP0, 0x0080},
+   {REG_A5XX_RBBM_CLOCK_CNTL_TP0, 0x},
+   {REG_A5XX_RBBM_CLOCK_CNTL2_TP0, 0x},
+   {REG_A5XX_RBBM_CLOCK_CNTL3_TP0, 0x},
+   {REG_A5XX_RBBM_CLOCK_HYST_TP0, 0x},
+   {REG_A5XX_RBBM_CLOCK_HYST2_TP0, 0x},
+   {REG_A5XX_RBBM_CLOCK_HYST3_TP0, 0x},
+   {REG_A5XX_RBBM_CLOCK_DELAY_TP0, 0x},
+   {REG_A5XX_RBBM_CLOCK_DELAY2_TP0, 0x},
+   {REG_A5XX_RBBM_CLOCK_DELAY3_TP0, 0x},
+   {REG_A5XX_RBBM_CLOCK_CNTL2_UCHE, 0x},
+   {REG_A5XX_RBBM_CLOCK_CNTL3_UCHE, 0x},
+   {REG_A5XX_RBBM_CLOCK_CNTL4_UCHE, 0x0022},
+   {REG_A5XX_RBBM_CLOCK_CNTL_UCHE, 0x},
+   {REG_A5XX_RBBM_CLOCK_HYST_UCHE, 0x00F4},
+   {REG_A5XX_RBBM_CLOCK_DELAY_UCHE, 0x0002},
+   {REG_A5XX_RBBM_CLOCK_CNTL_RB0, 0x},
+   {REG_A5XX_RBBM_CLOCK_CNTL2_RB0, 0x0022},
+   {REG_A5XX_RBBM_CLOCK_CNTL_CCU0, 0x0000},
+   {REG_A5XX_RBBM_CLOCK_CNTL_RAC, 0x0552},
+   {REG_A5XX_RBBM_CLOCK_CNTL2_RAC, 0x0050},
+   {REG_A5XX_RBBM_CLOCK_HYST_RB_CCU0, 0x04040404},
+   {REG_A5XX_RBBM_CLOCK_HYST_RAC, 0x07444044},
+   {REG_A5XX_RBBM_CLOCK_DELAY_RB_CCU_L1_0, 0x0002},
+   {REG_A5XX_RBBM_CLOCK_DELAY_RAC, 0x00010011},
+   {REG_A5XX_RBBM_CLOCK_CNTL_TSE_RAS_RBBM, 0x0422},
+   {REG_A5XX_RBBM_CLOCK_MODE_GPC, 0x0222},
+   {REG_A5XX_RBBM_CLOCK_MODE_VFD, 0x},
+   {REG_A5XX_RBBM_CLOCK_HYST_TSE_RAS_RBBM, 0x},
+   {REG_A5XX_RBBM_CLOCK_HYST_GPC, 0x04104004},
+   {REG_A5XX_RBBM_CLOCK_HYST_VFD, 0x},
+   {REG_A5XX_RBBM_CLOCK_DELAY_HLSQ, 0x},
+   {REG_A5XX_RBBM_CLOCK_DELAY_TSE_RAS_RBBM, 0x4000},
+   {REG_A5XX_RBBM_CLOCK_DELAY_GPC, 0x0200},
+   {REG_A5XX_RBBM_CLOCK_DELAY_VFD, 0x},
+}, a512_hwcg[] = {
+   {REG_A5XX_RBBM_CLOCK_CNTL_SP0, 0x0222},
+   {REG_A5XX_RBBM_CLOCK_CNTL_SP1, 0x0222},
+   {REG_A5XX_RBBM_CLOCK_CNTL2_SP0, 0x0220},
+   {REG_A5XX_RBBM_CLOCK_CNTL2_SP1, 0x0220},
+   {REG_A5XX_RBBM_CLOCK_HYST_SP0, 0xF3CF},
+   {REG_A5XX_RBBM_CLOCK_HYST_SP1, 0xF3CF},
+   {REG_A5XX_RBBM_CLOCK_DELAY_SP0, 0x0080},
+   {REG_A5XX_RBBM_CLOCK_DELAY_SP1, 0x0080},
+   {REG_A5XX_RBBM_CLOCK_CNTL_TP0, 0x},
+   {REG_A5XX_RBBM_CLOCK_CNTL_TP1, 0x},
+   {REG_A5XX_RBBM_CLOCK_CNTL2_TP0, 0x},
+   {REG_A5XX_RBBM_CLOCK_CNTL2_TP1, 0x},
+   {REG_A5XX_RBBM_CLOCK_CNTL3_TP0, 0x},
+   {REG_A5XX_RBBM_CLOCK_CNTL3_TP1, 0x},
+   {REG_A5XX_RBBM_CLOCK_HYST_TP0, 0x},
+   {REG_A5XX_RBBM_CLOCK_HYST_TP1, 0x},
+   {REG_A5XX_RBBM_CLOCK_HYST2_TP0, 0x},
+   {REG_A5XX_RBBM_CLOCK_HYST2_TP1, 0x},
+   {REG_A5XX_RBBM_CLOCK_HYST3_TP0, 0x},
+   {REG_A5XX_RBBM_CLOCK_HYST3_TP1, 0x},
+   {REG_A5XX_RBBM_CLOCK_DELAY_TP0, 0x},
+   {REG_A5XX_RBBM_CLOCK_DELAY_TP1, 0x},
+   {REG_A5XX_RBBM_CLOCK_DELAY2_TP0, 0x},
+   {REG_A5XX_RBBM_CLOCK_DELAY2_TP1, 0x},
+   {REG_A5XX_RBBM_CLOCK_DELAY3_TP0, 0x},
+   {REG_A5XX_RBBM_CLOCK_DELAY3_TP1, 0x},
+   {REG_A5XX_RBBM_CLOCK_CNTL_UCHE, 0x},
+   {REG_A5XX_RBBM_CLOCK_CNTL2_UCHE, 0x},
+   {REG_A5XX_RBBM_CLOCK_CNTL3_UCHE, 0x},
+   {REG_A5XX_RBBM_CLOCK_CNTL4_UCHE, 0x0022},
+   {

[PATCH v4] drm/sun4i: tcon: fix inverted DCLK polarity

2021-01-14 Thread Giulio Benetti
From: Giulio Benetti 

During commit 88bc4178568b ("drm: Use new
DRM_BUS_FLAG_*_(DRIVE|SAMPLE)_(POS|NEG)EDGE flags") DRM_BUS_FLAG_*
macros were changed to avoid ambiguity, but precisely because of this
ambiguity the previous DRM_BUS_FLAG_PIXDATA_(POS/NEG)EDGE flags were used meaning
_SAMPLE_ rather than _DRIVE_. This leads to DCLK inversion and needs to be fixed;
instead of swapping phase values, let's adopt an easier approach Maxime
suggested:
It turned out that bit 26 of SUN4I_TCON0_IO_POL_REG is dedicated to
inverting DCLK polarity, and this makes things much easier than before. So
let's handle DCLK polarity by adding SUN4I_TCON0_IO_POL_DCLK_POSITIVE as
bit 26 and activating it according to bus_flags, the same way as is done for
all the other signal polarities.

Fixes: 88bc4178568b ("drm: Use new DRM_BUS_FLAG_*_(DRIVE|SAMPLE)_(POS|NEG)EDGE 
flags")
Suggested-by: Maxime Ripard 
Signed-off-by: Giulio Benetti 
---
V2->V3:
- squash 2 patches into 1
V3->V4:
- add SUN4I_TCON0_IO_POL_DCLK_POSITIVE to regmap_update_bits()
---
 drivers/gpu/drm/sun4i/sun4i_tcon.c | 21 ++---
 drivers/gpu/drm/sun4i/sun4i_tcon.h |  1 +
 2 files changed, 3 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/sun4i/sun4i_tcon.c 
b/drivers/gpu/drm/sun4i/sun4i_tcon.c
index eaaf5d70e352..6e454d316852 100644
--- a/drivers/gpu/drm/sun4i/sun4i_tcon.c
+++ b/drivers/gpu/drm/sun4i/sun4i_tcon.c
@@ -569,30 +569,13 @@ static void sun4i_tcon0_mode_set_rgb(struct sun4i_tcon 
*tcon,
if (info->bus_flags & DRM_BUS_FLAG_DE_LOW)
val |= SUN4I_TCON0_IO_POL_DE_NEGATIVE;
 
-   /*
-* On A20 and similar SoCs, the only way to achieve Positive Edge
-* (Rising Edge), is setting dclk clock phase to 2/3(240°).
-* By default TCON works in Negative Edge(Falling Edge),
-* this is why phase is set to 0 in that case.
-* Unfortunately there's no way to logically invert dclk through
-* IO_POL register.
-* The only acceptable way to work, triple checked with scope,
-* is using clock phase set to 0° for Negative Edge and set to 240°
-* for Positive Edge.
-* On A33 and similar SoCs there would be a 90° phase option,
-* but it divides also dclk by 2.
-* Following code is a way to avoid quirks all around TCON
-* and DOTCLOCK drivers.
-*/
if (info->bus_flags & DRM_BUS_FLAG_PIXDATA_DRIVE_POSEDGE)
-   clk_set_phase(tcon->dclk, 240);
-
-   if (info->bus_flags & DRM_BUS_FLAG_PIXDATA_DRIVE_NEGEDGE)
-   clk_set_phase(tcon->dclk, 0);
+   val |= SUN4I_TCON0_IO_POL_DCLK_POSITIVE;
 
regmap_update_bits(tcon->regs, SUN4I_TCON0_IO_POL_REG,
   SUN4I_TCON0_IO_POL_HSYNC_POSITIVE |
   SUN4I_TCON0_IO_POL_VSYNC_POSITIVE |
+  SUN4I_TCON0_IO_POL_DCLK_POSITIVE |
   SUN4I_TCON0_IO_POL_DE_NEGATIVE,
   val);
 
diff --git a/drivers/gpu/drm/sun4i/sun4i_tcon.h 
b/drivers/gpu/drm/sun4i/sun4i_tcon.h
index cfbf4e6c1679..0ce71d10a31b 100644
--- a/drivers/gpu/drm/sun4i/sun4i_tcon.h
+++ b/drivers/gpu/drm/sun4i/sun4i_tcon.h
@@ -113,6 +113,7 @@
 #define SUN4I_TCON0_IO_POL_REG 0x88
 #define SUN4I_TCON0_IO_POL_DCLK_PHASE(phase)   ((phase & 3) << 28)
 #define SUN4I_TCON0_IO_POL_DE_NEGATIVE BIT(27)
+#define SUN4I_TCON0_IO_POL_DCLK_POSITIVE   BIT(26)
 #define SUN4I_TCON0_IO_POL_HSYNC_POSITIVE  BIT(25)
 #define SUN4I_TCON0_IO_POL_VSYNC_POSITIVE  BIT(24)
 
-- 
2.25.1



Re: [PATCH v3] drm/sun4i: de2: Reimplement plane z position setting logic

2021-01-14 Thread Maxime Ripard
Hi,

On Wed, Jan 06, 2021 at 09:46:30PM +0100, Jernej Skrabec wrote:
> From: Roman Stratiienko 
> 
> To set the blending channel order register, software needs to know the state and
> position of each channel, which is impossible at plane commit stage.
> 
> Move this procedure to atomic_flush stage, where all necessary information
> is available.

Expanding a bit on what the blending order register is supposed to be
doing, why it's impossible at the plane commit, and why atomic_flush is
a better option would be nice

> Fixes: f88c5ee77496 ("drm/sun4i: Implement zpos for DE2")
> Fixes: d8b3f454dab4 ("drm/sun4i: sun8i: Avoid clearing blending order at each 
> atomic commit")
> Signed-off-by: Roman Stratiienko 
> [rebased, addressed comments]
> Signed-off-by: Jernej Skrabec 
> ---
>  drivers/gpu/drm/sun4i/sun8i_mixer.c| 57 +-
>  drivers/gpu/drm/sun4i/sun8i_mixer.h|  5 +++
>  drivers/gpu/drm/sun4i/sun8i_ui_layer.c | 42 +++
>  drivers/gpu/drm/sun4i/sun8i_vi_layer.c | 42 +++
>  4 files changed, 64 insertions(+), 82 deletions(-)
> 
> diff --git a/drivers/gpu/drm/sun4i/sun8i_mixer.c 
> b/drivers/gpu/drm/sun4i/sun8i_mixer.c
> index 5b42cf25cc86..d2153b10b08d 100644
> --- a/drivers/gpu/drm/sun4i/sun8i_mixer.c
> +++ b/drivers/gpu/drm/sun4i/sun8i_mixer.c
> @@ -250,6 +250,50 @@ int sun8i_mixer_drm_format_to_hw(u32 format, u32 
> *hw_format)
>  
>  static void sun8i_mixer_commit(struct sunxi_engine *engine)
>  {
> + struct sun8i_mixer *mixer = engine_to_sun8i_mixer(engine);
> + int channel_by_zpos[SUN8I_MIXER_MAX_CHANNELS];
> + u32 base = sun8i_blender_base(mixer);
> + u32 route = 0, pipe_ctl = 0;
> + unsigned int channel_count;
> + int i, j;
> +
> + channel_count = mixer->cfg->vi_num + mixer->cfg->ui_num;
> +
> + DRM_DEBUG_DRIVER("Update blender routing\n");
> +
> + for (i = 0; i < SUN8I_MIXER_MAX_CHANNELS; i++)
> + channel_by_zpos[i] = -1;
> +
> + for (i = 0; i < channel_count; i++) {
> + int zpos = mixer->channel_zpos[i];

Why do we need the channel_zpos in the mixer structure, this looks
related to the state itself, so we should store it into a custom state
structure

> + if (zpos >= 0 && zpos < channel_count)
> + channel_by_zpos[zpos] = i;
> + }
> +
> + j = 0;
> + for (i = 0; i < channel_count; i++) {
> + int ch = channel_by_zpos[i];
> +
> + if (ch >= 0) {
> + pipe_ctl |= SUN8I_MIXER_BLEND_PIPE_CTL_EN(j);
> + route |= ch << SUN8I_MIXER_BLEND_ROUTE_PIPE_SHIFT(j);
> + j++;
> + }
> + }

Similarly, having a comment somewhere to explain that algorithm would be
nice.

> + /*
> +  * Set fill color of bottom plane to black. Generally not needed
> +  * except when VI plane is at bottom (zpos = 0) and enabled.
> +  */
> + pipe_ctl |= SUN8I_MIXER_BLEND_PIPE_CTL_FC_EN(0);
> +
> + regmap_write(mixer->engine.regs,
> +  SUN8I_MIXER_BLEND_PIPE_CTL(base), pipe_ctl);
> +
> + regmap_write(mixer->engine.regs,
> +  SUN8I_MIXER_BLEND_ROUTE(base), route);
> +
>   DRM_DEBUG_DRIVER("Committing changes\n");
>  
>   regmap_write(engine->regs, SUN8I_MIXER_GLOBAL_DBUFF,
> @@ -479,23 +523,16 @@ static int sun8i_mixer_bind(struct device *dev, struct 
> device *master,
>   regmap_write(mixer->engine.regs, SUN8I_MIXER_BLEND_BKCOLOR(base),
>SUN8I_MIXER_BLEND_COLOR_BLACK);
>  
> - /*
> -  * Set fill color of bottom plane to black. Generally not needed
> -  * except when VI plane is at bottom (zpos = 0) and enabled.
> -  */
> - regmap_write(mixer->engine.regs, SUN8I_MIXER_BLEND_PIPE_CTL(base),
> -  SUN8I_MIXER_BLEND_PIPE_CTL_FC_EN(0));
>   regmap_write(mixer->engine.regs, SUN8I_MIXER_BLEND_ATTR_FCOLOR(base, 0),
>SUN8I_MIXER_BLEND_COLOR_BLACK);
>  
>   plane_cnt = mixer->cfg->vi_num + mixer->cfg->ui_num;
> - for (i = 0; i < plane_cnt; i++)
> + for (i = 0; i < plane_cnt; i++) {
> + mixer->channel_zpos[i] = -1;
>   regmap_write(mixer->engine.regs,
>SUN8I_MIXER_BLEND_MODE(base, i),
>SUN8I_MIXER_BLEND_MODE_DEF);
> -
> - regmap_update_bits(mixer->engine.regs, SUN8I_MIXER_BLEND_PIPE_CTL(base),
> -SUN8I_MIXER_BLEND_PIPE_CTL_EN_MSK, 0);
> + }
>  
>   return 0;
>  
> diff --git a/drivers/gpu/drm/sun4i/sun8i_mixer.h 
> b/drivers/gpu/drm/sun4i/sun8i_mixer.h
> index 7576b523fdbb..7b378d6e4dd9 100644
> --- a/drivers/gpu/drm/sun4i/sun8i_mixer.h
> +++ b/drivers/gpu/drm/sun4i/sun8i_mixer.h
> @@ -12,6 +12,8 @@
>  
>  #include "sunxi_engine.h"
>  
> +#define SUN8I_MIXER_MAX_CHANNELS 5
> +
>  #define SUN8I_MIXER_SIZE(w, h)   (((h) - 1) << 16 | ((w) 
> - 1))
>  #define SUN8I_MIXER_COORD(x, y)  

memory leak in fbcon_set_font

2021-01-14 Thread syzbot
Hello,

syzbot found the following issue on:

HEAD commit:e609571b Merge tag 'nfs-for-5.11-2' of git://git.linux-nfs..
git tree:   upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=165261e0d0
kernel config:  https://syzkaller.appspot.com/x/.config?x=850b6de5f8959443
dashboard link: https://syzkaller.appspot.com/bug?extid=2f2c18881a450f22d1bf
compiler:   gcc (GCC) 10.1.0-syz 20200507
syz repro:  https://syzkaller.appspot.com/x/repro.syz?x=16ab20c750
C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=1008b770d0

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+2f2c18881a450f22d...@syzkaller.appspotmail.com

BUG: memory leak
unreferenced object 0x88811813ea00 (size 512):
  comm "syz-executor939", pid 10246, jiffies 4294971847 (age 34.510s)
  hex dump (first 32 bytes):
b0 55 1f 9b 00 00 00 00 00 01 00 00 06 00 00 00  .U..
11 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  
  backtrace:
[<062fad90>] kmalloc include/linux/slab.h:557 [inline]
[<062fad90>] fbcon_set_font+0x128/0x370 drivers/video/fbdev/core/fbcon.c:2454
[] con_font_set drivers/tty/vt/vt.c:4667 [inline]
[] con_font_op+0x497/0x740 drivers/tty/vt/vt.c:4711
[] vt_io_ioctl drivers/tty/vt/vt_ioctl.c:596 [inline]
[] vt_ioctl+0xeab/0x19d0 drivers/tty/vt/vt_ioctl.c:817
[<369331c6>] tty_ioctl+0x6c3/0xc40 drivers/tty/tty_io.c:2658
[] vfs_ioctl fs/ioctl.c:48 [inline]
[] __do_sys_ioctl fs/ioctl.c:753 [inline]
[] __se_sys_ioctl fs/ioctl.c:739 [inline]
[] __x64_sys_ioctl+0xfc/0x140 fs/ioctl.c:739
[<705a3959>] do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
[] entry_SYSCALL_64_after_hwframe+0x44/0xa9


Re: [PATCH v5 1/6] mfd: rt4831: Adds support for Richtek RT4831 core

2021-01-14 Thread ChiYuan Huang
Lee Jones wrote on Wednesday, January 13, 2021 at 8:21 PM:
>
> On Thu, 17 Dec 2020, cy_huang wrote:
>
> > From: ChiYuan Huang 
> >
> > This adds support Richtek RT4831 core. It includes four channel WLED driver
> > and Display Bias Voltage outputs.
> >
> > Signed-off-by: ChiYuan Huang 
> > ---
> > since v5
> > - Rename file name from rt4831-core.c to rt4831.c
> > - Change RICHTEK_VID to RICHTEK_VENDOR_ID.
> > - Change gpio_desc nameing from 'enable' to 'enable_gpio' in probe.
> > - Change variable 'val' to the meaningful name 'chip_id'.
> > - Refine the error log when vendor id is not matched.
> > - Remove of_match_ptr.
> >
> > since v2
> > - Refine Kconfig descriptions.
> > - Add copyright.
> > - Refine error logs in probe.
> > - Refine comment lines in remove and shutdown.
> > ---
> >  drivers/mfd/Kconfig  |  10 +
> >  drivers/mfd/Makefile |   1 +
> >  drivers/mfd/rt4831.c | 124 
> > +++
> >  3 files changed, 135 insertions(+)
> >  create mode 100644 drivers/mfd/rt4831.c
> >
> > diff --git a/drivers/mfd/Kconfig b/drivers/mfd/Kconfig
> > index 8b99a13..dfb2640 100644
> > --- a/drivers/mfd/Kconfig
> > +++ b/drivers/mfd/Kconfig
> > @@ -1088,6 +1088,16 @@ config MFD_RDC321X
> > southbridge which provides access to GPIOs and Watchdog using the
> > southbridge PCI device configuration space.
> >
> > +config MFD_RT4831
> > + tristate "Richtek RT4831 four channel WLED and Display Bias Voltage"
> > + depends on I2C
> > + select MFD_CORE
> > + select REGMAP_I2C
> > + help
> > +   This enables support for the Richtek RT4831 that includes 4 channel
> > +   WLED driving and Display Bias Voltage. It's commonly used to provide
> > +   power to the LCD display and LCD backlight.
> > +
> >  config MFD_RT5033
> >   tristate "Richtek RT5033 Power Management IC"
> >   depends on I2C
> > diff --git a/drivers/mfd/Makefile b/drivers/mfd/Makefile
> > index 1780019..28d247b 100644
> > --- a/drivers/mfd/Makefile
> > +++ b/drivers/mfd/Makefile
> > @@ -235,6 +235,7 @@ obj-$(CONFIG_MFD_MENF21BMC)   += menf21bmc.o
> >  obj-$(CONFIG_MFD_HI6421_PMIC)+= hi6421-pmic-core.o
> >  obj-$(CONFIG_MFD_HI655X_PMIC)   += hi655x-pmic.o
> >  obj-$(CONFIG_MFD_DLN2)   += dln2.o
> > +obj-$(CONFIG_MFD_RT4831) += rt4831.o
> >  obj-$(CONFIG_MFD_RT5033) += rt5033.o
> >  obj-$(CONFIG_MFD_SKY81452)   += sky81452.o
> >
> > diff --git a/drivers/mfd/rt4831.c b/drivers/mfd/rt4831.c
> > new file mode 100644
> > index ..2bf8364
> > --- /dev/null
> > +++ b/drivers/mfd/rt4831.c
> > @@ -0,0 +1,124 @@
> > +// SPDX-License-Identifier: GPL-2.0+
> > +/*
> > + * Copyright (c) 2020 Richtek Technology Corp.
>
> Nit: If you respin this, please bump the date.
>
Okay.
> > + * Author: ChiYuan Huang 
> > + */
> > +
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +
> > +#define RT4831_REG_REVISION  0x01
> > +#define RT4831_REG_ENABLE0x08
> > +#define RT4831_REG_I2CPROT   0x15
> > +
> > +#define RICHTEK_VENDOR_ID0x03
> > +#define RT4831_VID_MASK  GENMASK(1, 0)
> > +#define RT4831_RESET_MASKBIT(7)
> > +#define RT4831_I2CSAFETMR_MASK   BIT(0)
> > +
> > +static const struct mfd_cell rt4831_subdevs[] = {
> > + OF_MFD_CELL("rt4831-backlight", NULL, NULL, 0, 0, 
> > "richtek,rt4831-backlight"),
> > + MFD_CELL_NAME("rt4831-regulator")
> > +};
> > +
> > +static bool rt4831_is_accessible_reg(struct device *dev, unsigned int reg)
> > +{
> > + if (reg >= RT4831_REG_REVISION && reg <= RT4831_REG_I2CPROT)
> > + return true;
> > + return false;
> > +}
> > +
> > +static const struct regmap_config rt4831_regmap_config = {
> > + .reg_bits = 8,
> > + .val_bits = 8,
> > + .max_register = RT4831_REG_I2CPROT,
> > +
> > + .readable_reg = rt4831_is_accessible_reg,
> > + .writeable_reg = rt4831_is_accessible_reg,
> > +};
> > +
> > +static int rt4831_probe(struct i2c_client *client)
> > +{
> > + struct gpio_desc *enable_gpio;
> > + struct regmap *regmap;
> > + unsigned int chip_id;
> > + int ret;
> > +
> > + enable_gpio = devm_gpiod_get_optional(&client->dev, "enable", 
> > GPIOD_OUT_HIGH);
> > + if (IS_ERR(enable_gpio)) {
> > + dev_err(&client->dev, "Failed to get 'enable' GPIO\n");
> > + return PTR_ERR(enable_gpio);
> > + }
> > +
> > + regmap = devm_regmap_init_i2c(client, &rt4831_regmap_config);
> > + if (IS_ERR(regmap)) {
> > + dev_err(&client->dev, "Failed to initialize regmap\n");
> > + return PTR_ERR(regmap);
> > + }
> > +
> > + ret = regmap_read(regmap, RT4831_REG_REVISION, &chip_id);
> > + if (ret) {
> > + dev_err(&client->dev, "Failed to get H/W revision\n");
> > + return ret;
> > + }
> > +
> > + if ((chip_id & RT4831_VID_MASK) != RICHTEK_VENDOR_ID) {
> > + dev_err(&client->dev, "Chip vendor ID 0x%02x not matched\

Re: [PATCH] drm/ttm: use compound pages for THP

2021-01-14 Thread David Stevens
On Wed, Jan 13, 2021 at 5:58 PM Christian König
 wrote:
>
> Am 13.01.21 um 09:47 schrieb David Stevens:
> > Compound page metadata is necessary for page reference counting to work
> > properly on pages besides the head page. Without it, put_page
> > corresponding to the last outstanding get_page call on a tail page will
> > end up freeing that page, even if the bo still references the page.
>
> NAK, I should add a comment since I can't count any more how many times
> people came up with this.
>
> Calling put_page() on a TTM allocated page is strictly illegal in the
> first place since they are not reference counted.
>
> Please don't tell me somebody once more tried to mmap() pages from a
> DMA-buf created sg_table into a process address space?

I ran into this on a system using the currently in development
virtio-gpu blob resources [1]. The implementation passes dma buffers
allocated by the host gpu to KVM_SET_USER_MEMORY_REGION, so the guest
can directly access the buffers without the need for an intermediate
copy.

[1] 
https://patchwork.kernel.org/project/dri-devel/cover/20200814024000.2485-1-gurchetansi...@chromium.org/

> Regards,
> Christian.
>
> >
> > Signed-off-by: David Stevens 
> > ---
> >   drivers/gpu/drm/ttm/ttm_pool.c | 1 -
> >   1 file changed, 1 deletion(-)
> >
> > diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
> > index 7b2f60616750..09239b93dc2c 100644
> > --- a/drivers/gpu/drm/ttm/ttm_pool.c
> > +++ b/drivers/gpu/drm/ttm/ttm_pool.c
> > @@ -83,7 +83,6 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool 
> > *pool, gfp_t gfp_flags,
> >   gfp_flags |= GFP_TRANSHUGE_LIGHT | __GFP_NORETRY |
> >   __GFP_KSWAPD_RECLAIM;
> >   gfp_flags &= ~__GFP_MOVABLE;
> > - gfp_flags &= ~__GFP_COMP;
> >   }
> >
> >   if (!pool->use_dma_alloc) {
>
___
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel


[PATCH v3 1/2] dma-fence: allow signaling drivers to set fence timestamp

2021-01-14 Thread Veera Sundaram Sankaran
Some drivers have the hardware capability to get the precise HW
timestamp of the events that trigger their fences. The delta between
the event HW timestamp and the current HW reference timestamp can be
used to calculate the timestamp in the kernel's CLOCK_MONOTONIC time
domain, which factors out any software and IRQ latencies. Add a
timestamp variant of the fence signal function,
dma_fence_signal_timestamp, to allow drivers to set the precise
timestamp for fences.

Changes in v2:
- Add a new fence signal variant instead of modifying fence struct

Changes in v3:
- Add timestamp domain information to commit-text and
dma_fence_signal_timestamp documentation

Signed-off-by: Veera Sundaram Sankaran 
---
 drivers/dma-buf/dma-fence.c | 70 -
 include/linux/dma-fence.h   |  3 ++
 2 files changed, 66 insertions(+), 7 deletions(-)

diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
index 7475e09..b83e9fa 100644
--- a/drivers/dma-buf/dma-fence.c
+++ b/drivers/dma-buf/dma-fence.c
@@ -312,22 +312,25 @@ void __dma_fence_might_wait(void)
 
 
 /**
- * dma_fence_signal_locked - signal completion of a fence
+ * dma_fence_signal_timestamp_locked - signal completion of a fence
  * @fence: the fence to signal
+ * @timestamp: fence signal timestamp in kernel's CLOCK_MONOTONIC time domain
  *
  * Signal completion for software callbacks on a fence, this will unblock
  * dma_fence_wait() calls and run all the callbacks added with
  * dma_fence_add_callback(). Can be called multiple times, but since a fence
  * can only go from the unsignaled to the signaled state and not back, it will
- * only be effective the first time.
+ * only be effective the first time. Set the timestamp provided as the fence
+ * signal timestamp.
  *
- * Unlike dma_fence_signal(), this function must be called with &dma_fence.lock
- * held.
+ * Unlike dma_fence_signal_timestamp(), this function must be called with
+ * &dma_fence.lock held.
  *
  * Returns 0 on success and a negative error value when @fence has been
  * signalled already.
  */
-int dma_fence_signal_locked(struct dma_fence *fence)
+int dma_fence_signal_timestamp_locked(struct dma_fence *fence,
+   ktime_t timestamp)
 {
struct dma_fence_cb *cur, *tmp;
struct list_head cb_list;
@@ -341,7 +344,7 @@ int dma_fence_signal_locked(struct dma_fence *fence)
/* Stash the cb_list before replacing it with the timestamp */
list_replace(&fence->cb_list, &cb_list);
 
-   fence->timestamp = ktime_get();
+   fence->timestamp = timestamp;
set_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT, &fence->flags);
trace_dma_fence_signaled(fence);
 
@@ -352,6 +355,59 @@ int dma_fence_signal_locked(struct dma_fence *fence)
 
return 0;
 }
+EXPORT_SYMBOL(dma_fence_signal_timestamp_locked);
+
+/**
+ * dma_fence_signal_timestamp - signal completion of a fence
+ * @fence: the fence to signal
+ * @timestamp: fence signal timestamp in kernel's CLOCK_MONOTONIC time domain
+ *
+ * Signal completion for software callbacks on a fence, this will unblock
+ * dma_fence_wait() calls and run all the callbacks added with
+ * dma_fence_add_callback(). Can be called multiple times, but since a fence
+ * can only go from the unsignaled to the signaled state and not back, it will
+ * only be effective the first time. Set the timestamp provided as the fence
+ * signal timestamp.
+ *
+ * Returns 0 on success and a negative error value when @fence has been
+ * signalled already.
+ */
+int dma_fence_signal_timestamp(struct dma_fence *fence, ktime_t timestamp)
+{
+   unsigned long flags;
+   int ret;
+
+   if (!fence)
+   return -EINVAL;
+
+   spin_lock_irqsave(fence->lock, flags);
+   ret = dma_fence_signal_timestamp_locked(fence, timestamp);
+   spin_unlock_irqrestore(fence->lock, flags);
+
+   return ret;
+}
+EXPORT_SYMBOL(dma_fence_signal_timestamp);
+
+/**
+ * dma_fence_signal_locked - signal completion of a fence
+ * @fence: the fence to signal
+ *
+ * Signal completion for software callbacks on a fence, this will unblock
+ * dma_fence_wait() calls and run all the callbacks added with
+ * dma_fence_add_callback(). Can be called multiple times, but since a fence
+ * can only go from the unsignaled to the signaled state and not back, it will
+ * only be effective the first time.
+ *
+ * Unlike dma_fence_signal(), this function must be called with &dma_fence.lock
+ * held.
+ *
+ * Returns 0 on success and a negative error value when @fence has been
+ * signalled already.
+ */
+int dma_fence_signal_locked(struct dma_fence *fence)
+{
+   return dma_fence_signal_timestamp_locked(fence, ktime_get());
+}
 EXPORT_SYMBOL(dma_fence_signal_locked);
 
 /**
@@ -379,7 +435,7 @@ int dma_fence_signal(struct dma_fence *fence)
tmp = dma_fence_begin_signalling();
 
spin_lock_irqsave(fence->lock, flags);
-   ret = dma_fence_signal

[PATCH v3 5/7] drm/msm/a5xx: Fix VPC protect value in gpu_write()

2021-01-14 Thread AngeloGioacchino Del Regno
From: Konrad Dybcio 

The upstream API for some reason takes log2 of the argument instead
of using it as-is, whereas the downstream CAF kernel does the latter.

Hence, a mistake was made when porting: 4 is the value that is
supposed to end up in the register, but passing 4 yields
log2(4) = 2. Changing the argument to 16 (= 2^4) fixes the issue.

Signed-off-by: Konrad Dybcio 
Signed-off-by: AngeloGioacchino Del Regno 

---
 drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c 
b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
index 66980f4cd93e..24ab51bb5a01 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
@@ -821,7 +821,7 @@ static int a5xx_hw_init(struct msm_gpu *gpu)
 
/* VPC */
gpu_write(gpu, REG_A5XX_CP_PROTECT(14), ADRENO_PROTECT_RW(0xE68, 8));
-   gpu_write(gpu, REG_A5XX_CP_PROTECT(15), ADRENO_PROTECT_RW(0xE70, 4));
+   gpu_write(gpu, REG_A5XX_CP_PROTECT(15), ADRENO_PROTECT_RW(0xE70, 16));
 
/* UCHE */
gpu_write(gpu, REG_A5XX_CP_PROTECT(16), ADRENO_PROTECT_RW(0xE80, 16));
-- 
2.29.2



[PATCH v3 4/7] drm/msm/a5xx: Reset VBIF before PC only on A510 and A530

2021-01-14 Thread AngeloGioacchino Del Regno
Resetting the VBIF before power collapse is done to avoid getting
bogus FIFO entries during the suspend sequence or the subsequent
resume, but this is doable only on Adreno 510 and Adreno 530, as the
other units tend to lock up. On Adreno 508 in particular, the GPU
shows lockups and severe slowdowns after processing the first frame.

Not executing the RBBM SW reset before suspend stops the lockup
issue from happening on at least Adreno 508/509/512.

Signed-off-by: AngeloGioacchino Del Regno 

Reviewed-by: Jordan Crouse 
---
 drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 8 +---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c 
b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
index 04ffd84c1190..66980f4cd93e 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
@@ -1360,10 +1360,12 @@ static int a5xx_pm_suspend(struct msm_gpu *gpu)
 
/*
 * Reset the VBIF before power collapse to avoid issue with FIFO
-* entries
+* entries on Adreno A510 and A530 (the others will tend to lock up)
 */
-   gpu_write(gpu, REG_A5XX_RBBM_BLOCK_SW_RESET_CMD, 0x003C);
-   gpu_write(gpu, REG_A5XX_RBBM_BLOCK_SW_RESET_CMD, 0x);
+   if (adreno_is_a510(adreno_gpu) || adreno_is_a530(adreno_gpu)) {
+   gpu_write(gpu, REG_A5XX_RBBM_BLOCK_SW_RESET_CMD, 0x003C);
+   gpu_write(gpu, REG_A5XX_RBBM_BLOCK_SW_RESET_CMD, 0x);
+   }
 
ret = msm_gpu_pm_suspend(gpu);
if (ret)
-- 
2.29.2



[PATCH v2 2/2] drm/msm/dp: unplug interrupt missed after irq_hpd handler

2021-01-14 Thread Kuogee Hsieh
HPD unplug interrupts are missed in the scenario of an irq_hpd
followed by an unplug interrupt roughly 10 ms later.
Since both AUX_SW_RESET and DP_SW_RESET clear pending HPD interrupts,
the irq_hpd handler should not issue either an aux or a sw reset, to
avoid accidentally clearing a following unplug interrupt. This patch
also postpones handling of irq_hpd until the connected state if it
arrived in the connection-pending state.

Changes in V2:
-- add postpone handling of irq_hpd until connected state
-- check DP_TRAINING_1 instead of DP_TRAINING_NONE

Signed-off-by: Kuogee Hsieh 
---
 drivers/gpu/drm/msm/dp/dp_aux.c |  7 ---
 drivers/gpu/drm/msm/dp/dp_catalog.c | 24 
 drivers/gpu/drm/msm/dp/dp_ctrl.c| 15 ++-
 drivers/gpu/drm/msm/dp/dp_display.c |  7 +++
 4 files changed, 41 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/msm/dp/dp_aux.c b/drivers/gpu/drm/msm/dp/dp_aux.c
index 19b35ae..1c6e1d2 100644
--- a/drivers/gpu/drm/msm/dp/dp_aux.c
+++ b/drivers/gpu/drm/msm/dp/dp_aux.c
@@ -336,7 +336,6 @@ static ssize_t dp_aux_transfer(struct drm_dp_aux *dp_aux,
ssize_t ret;
int const aux_cmd_native_max = 16;
int const aux_cmd_i2c_max = 128;
-   int const retry_count = 5;
struct dp_aux_private *aux = container_of(dp_aux,
struct dp_aux_private, dp_aux);
 
@@ -378,12 +377,6 @@ static ssize_t dp_aux_transfer(struct drm_dp_aux *dp_aux,
ret = dp_aux_cmd_fifo_tx(aux, msg);
 
if (ret < 0) {
-   if (aux->native) {
-   aux->retry_cnt++;
-   if (!(aux->retry_cnt % retry_count))
-   dp_catalog_aux_update_cfg(aux->catalog);
-   dp_catalog_aux_reset(aux->catalog);
-   }
usleep_range(400, 500); /* at least 400us to next try */
goto unlock_exit;
}
diff --git a/drivers/gpu/drm/msm/dp/dp_catalog.c 
b/drivers/gpu/drm/msm/dp/dp_catalog.c
index 44f0c57..b1a9b1b 100644
--- a/drivers/gpu/drm/msm/dp/dp_catalog.c
+++ b/drivers/gpu/drm/msm/dp/dp_catalog.c
@@ -190,6 +190,18 @@ int dp_catalog_aux_clear_hw_interrupts(struct dp_catalog 
*dp_catalog)
return 0;
 }
 
+/**
+ * dp_catalog_aux_reset() - reset AUX controller
+ *
+ * @aux: DP catalog structure
+ *
+ * return: void
+ *
+ * This function reset AUX controller
+ *
+ * NOTE: reset AUX controller will also clear any pending HPD related 
interrupts
+ * 
+ */
 void dp_catalog_aux_reset(struct dp_catalog *dp_catalog)
 {
u32 aux_ctrl;
@@ -483,6 +495,18 @@ int dp_catalog_ctrl_set_pattern(struct dp_catalog 
*dp_catalog,
return 0;
 }
 
+/**
+ * dp_catalog_ctrl_reset() - reset DP controller
+ *
+ * @dp_catalog: DP catalog structure
+ *
+ * return: void
+ *
+ * This function reset the DP controller
+ *
+ * NOTE: reset DP controller will also clear any pending HPD related interrupts
+ * 
+ */
 void dp_catalog_ctrl_reset(struct dp_catalog *dp_catalog)
 {
u32 sw_reset;
diff --git a/drivers/gpu/drm/msm/dp/dp_ctrl.c b/drivers/gpu/drm/msm/dp/dp_ctrl.c
index e3462f5..5ac155d 100644
--- a/drivers/gpu/drm/msm/dp/dp_ctrl.c
+++ b/drivers/gpu/drm/msm/dp/dp_ctrl.c
@@ -1296,7 +1296,8 @@ static int dp_ctrl_setup_main_link(struct dp_ctrl_private 
*ctrl,
 * transitioned to PUSH_IDLE. In order to start transmitting
 * a link training pattern, we have to first do soft reset.
 */
-   dp_catalog_ctrl_reset(ctrl->catalog);
+   if (*training_step == DP_TRAINING_1)
+   dp_catalog_ctrl_reset(ctrl->catalog);
 
ret = dp_ctrl_link_train(ctrl, cr, training_step);
 
@@ -1491,15 +1492,18 @@ static int dp_ctrl_deinitialize_mainlink(struct 
dp_ctrl_private *ctrl)
return 0;
 }
 
+static void dp_ctrl_link_idle_reset(struct dp_ctrl_private *ctrl)
+{
+   dp_ctrl_push_idle(&ctrl->dp_ctrl);
+   dp_catalog_ctrl_reset(ctrl->catalog);
+}
+
 static int dp_ctrl_link_maintenance(struct dp_ctrl_private *ctrl)
 {
int ret = 0;
struct dp_cr_status cr;
int training_step = DP_TRAINING_NONE;
 
-   dp_ctrl_push_idle(&ctrl->dp_ctrl);
-   dp_catalog_ctrl_reset(ctrl->catalog);
-
ctrl->dp_ctrl.pixel_rate = ctrl->panel->dp_mode.drm_mode.clock;
 
ret = dp_ctrl_setup_main_link(ctrl, &cr, &training_step);
@@ -1626,6 +1630,7 @@ void dp_ctrl_handle_sink_request(struct dp_ctrl *dp_ctrl)
 
if (sink_request & DP_TEST_LINK_TRAINING) {
dp_link_send_test_response(ctrl->link);
+   dp_ctrl_link_idle_reset(ctrl);
if (dp_ctrl_link_maintenance(ctrl)) {
DRM_ERROR("LM failed: TEST_LINK_TRAINING\n");
return;
@@ -1679,7 +1684,7 @@ int dp_ctrl_on_link(struct dp_ctrl *dp_ctrl)
break;
}
 
-   training_step = DP_TRAINING_NONE;
+   training_step = DP_TRAINING_1;
rc = dp_ctrl_setup_main_link(

Re: [PATCH V3] dt-bindings: gpu: Convert v3d to json-schema

2021-01-14 Thread Maxime Ripard
On Wed, Jan 13, 2021 at 01:53:38PM +0100, Stefan Wahren wrote:
> Hi Maxime,
> 
> Am 13.01.21 um 10:15 schrieb Maxime Ripard:
> > Hi,
> >
> > On Sat, Jan 09, 2021 at 11:50:32AM +0100, Stefan Wahren wrote:
> >> This converts the v3d bindings to yaml format.
> >>
> >> Signed-off-by: Stefan Wahren 
> >> ---
> ...
> >> diff --git a/Documentation/devicetree/bindings/gpu/brcm,bcm-v3d.yaml 
> >> b/Documentation/devicetree/bindings/gpu/brcm,bcm-v3d.yaml
> >> new file mode 100644
> >> index 000..3b543d4
> >> --- /dev/null
> >> +++ b/Documentation/devicetree/bindings/gpu/brcm,bcm-v3d.yaml
> >> @@ -0,0 +1,72 @@
> >> +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
> >> +%YAML 1.2
> >> +---
> >> +$id: http://devicetree.org/schemas/gpu/brcm,bcm-v3d.yaml#
> >> +$schema: http://devicetree.org/meta-schemas/core.yaml#
> >> +
> >> +title: Broadcom V3D GPU Bindings
> >> +
> >> +maintainers:
> >> +  - Eric Anholt 
> >> +  - Nicolas Saenz Julienne 
> >> +
> >> +properties:
> >> +  $nodename:
> >> +pattern: '^gpu@[a-f0-9]+$'
> >> +
> >> +  compatible:
> >> +enum:
> >> +  - brcm,7268-v3d
> >> +  - brcm,7278-v3d
> >> +
> >> +  reg:
> >> +items:
> >> +  - description: hub register (required)
> >> +  - description: core0 register (required)
> >> +  - description: GCA cache controller register (if GCA controller 
> >> present)
> >> +  - description: bridge register (if no external reset controller)
> >> +minItems: 2
> > maxItems will be set to 2 in this case, while it would be 4 I guess?
> 
> This confuses me. Based on this patch [1] by Rob, i would assume that
> maxItems is derived from item list length.
> 
> [1] -
> https://lists.freedesktop.org/archives/dri-devel/2020-December/292309.html

Yeah, you're right

My understanding was that maxItems was set to whatever minItems was if
maxItems was missing, but dt-validate also checks for whether it's a
list and will fill it like you said:

https://github.com/devicetree-org/dt-schema/blob/master/dtschema/lib.py#L258

Maxime




[PATCH v2 1/2] drm/msm/dp: return fail when both link lane and rate are 0 at dpcd read

2021-01-14 Thread Kuogee Hsieh
Some dongles set both link lane count and rate to 0 during the dpcd
receiver capability read if there is no monitor attached to the
dongle. Therefore return failure to prevent the driver from trying to
populate the monitor further.

Changes in V2:
-- split this patch into two. Move postpone irq_handle into next patch
-- add Fixes tag

Fixes: 78f94fbb6122 ("drm/msm/dp: fix connect/disconnect handled at irq_hpd")

Signed-off-by: Kuogee Hsieh 
---
 drivers/gpu/drm/msm/dp/dp_panel.c | 12 +---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/msm/dp/dp_panel.c 
b/drivers/gpu/drm/msm/dp/dp_panel.c
index 97dca3e..d1780bc 100644
--- a/drivers/gpu/drm/msm/dp/dp_panel.c
+++ b/drivers/gpu/drm/msm/dp/dp_panel.c
@@ -167,12 +167,18 @@ int dp_panel_read_sink_caps(struct dp_panel *dp_panel,
panel = container_of(dp_panel, struct dp_panel_private, dp_panel);
 
rc = dp_panel_read_dpcd(dp_panel);
+   if (rc) {
+   DRM_ERROR("read dpcd failed %d\n", rc);
+   return rc;
+   }
+
bw_code = drm_dp_link_rate_to_bw_code(dp_panel->link_info.rate);
-   if (rc || !is_link_rate_valid(bw_code) ||
+   if (!is_link_rate_valid(bw_code) ||
!is_lane_count_valid(dp_panel->link_info.num_lanes) ||
(bw_code > dp_panel->max_bw_code)) {
-   DRM_ERROR("read dpcd failed %d\n", rc);
-   return rc;
+   DRM_ERROR("Illegal link rate=%d lane=%d\n", 
dp_panel->link_info.rate,
+   dp_panel->link_info.num_lanes);
+   return -EINVAL;
}
 
if (dp_panel->dfp_present) {
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project



Re: [PATCH RFT 0/2] drm/vc4: Clear the display list on startup

2021-01-14 Thread Maxime Ripard
Hi Ryutaroh, Lucas,

On Wed, Jan 13, 2021 at 09:51:59AM +0900, Ryutaroh Matsumoto wrote:
> Hi Lucas,
> 
> > week-end, so I cannot test before early next week. However I'm Ccing
> > Ryutaroh Matsumoto who could also reproduce it. Maybe he is in a better
> > position to test this (@Ryutaroh: I bounced the patches to you).
> 
> Should I apply PATCH RFT 0--2/2 to
> https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-5.10.7.tar.xz
> or some branch of someone's git repo?
> 
On my Raspberry Pi 4B 8GB model, all versions up to upstream 5.10.6
> fail to boot on USB MSD due to changes to the USB handling by raspi
> introduced in 5.10 series, and failed to boot from an SD card up to 5.10.3
> or so. No working WiFi connections neither on my raspi 4b.
> 
> I am unsure if I can test the patch reliably. But I will try it within a few 
> days.

We just had another lead on this one and that patch doesn't seem to
prove useful to fix it, so it's probably better to hold off the testing
at this point

Thanks!
Maxime




[PATCH v3 7/7] drm/msm/a5xx: Disable UCHE global filter

2021-01-14 Thread AngeloGioacchino Del Regno
From: Konrad Dybcio 

Port over the command from downstream to prevent undefined
behaviour.

Signed-off-by: Konrad Dybcio 
Signed-off-by: AngeloGioacchino Del Regno 

---
 drivers/gpu/drm/msm/adreno/a5xx.xml.h | 2 ++
 drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 3 +++
 2 files changed, 5 insertions(+)

diff --git a/drivers/gpu/drm/msm/adreno/a5xx.xml.h 
b/drivers/gpu/drm/msm/adreno/a5xx.xml.h
index 346cc6ff3a36..7b9fcfe95c04 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx.xml.h
+++ b/drivers/gpu/drm/msm/adreno/a5xx.xml.h
@@ -2367,6 +2367,8 @@ static inline uint32_t A5XX_VSC_RESOLVE_CNTL_Y(uint32_t 
val)
 
 #define REG_A5XX_UCHE_ADDR_MODE_CNTL   0x0e80
 
+#define REG_A5XX_UCHE_MODE_CNTL
0x0e81
+
 #define REG_A5XX_UCHE_SVM_CNTL 0x0e82
 
 #define REG_A5XX_UCHE_WRITE_THRU_BASE_LO   0x0e87
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c 
b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
index 23fc851756de..7e553d3efeb2 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
@@ -754,6 +754,9 @@ static int a5xx_hw_init(struct msm_gpu *gpu)
adreno_is_a512(adreno_gpu))
gpu_rmw(gpu, REG_A5XX_RB_DBG_ECO_CNTL, 0, (1 << 9));
 
+   /* Disable UCHE global filter as SP can invalidate/flush independently 
*/
+   gpu_write(gpu, REG_A5XX_UCHE_MODE_CNTL, BIT(29));
+
/* Enable USE_RETENTION_FLOPS */
gpu_write(gpu, REG_A5XX_CP_CHICKEN_DBG, 0x0200);
 
-- 
2.29.2



Re: [PATCH 2/2] drm/msm/dp: unplug interrupt missed after irq_hpd handler

2021-01-14 Thread khsieh

On 2021-01-11 11:54, Stephen Boyd wrote:

Quoting Kuogee Hsieh (2021-01-07 12:30:25)

There is HPD unplug interrupts missed at scenario of an irq_hpd
followed by unplug interrupts with around 10 ms in between.
Since both AUX_SW_RESET and DP_SW_RESET clear pending HPD interrupts,
irq_hpd handler should not issues either aux or sw reset to avoid
following unplug interrupt be cleared accidentally.


So the problem is that we're resetting the DP aux phy in the middle of
the HPD state machine transitioning states?


yes, after the aux reset, the hw clears pending hpd interrupts


Signed-off-by: Kuogee Hsieh 
---
diff --git a/drivers/gpu/drm/msm/dp/dp_catalog.c 
b/drivers/gpu/drm/msm/dp/dp_catalog.c

index 44f0c57..9c0ce98 100644
--- a/drivers/gpu/drm/msm/dp/dp_catalog.c
+++ b/drivers/gpu/drm/msm/dp/dp_catalog.c
@@ -190,6 +190,18 @@ int dp_catalog_aux_clear_hw_interrupts(struct 
dp_catalog *dp_catalog)

return 0;
 }

+/**
+ * dp_catalog_aux_reset() - reset AUX controller
+ *
+ * @aux: DP catalog structure
+ *
+ * return: void
+ *
+ * This function reset AUX controller
+ *
+ * NOTE: reset AUX controller will also clear any pending HPD related 
interrupts

+ *
+ */
 void dp_catalog_aux_reset(struct dp_catalog *dp_catalog)
 {
u32 aux_ctrl;
@@ -483,6 +495,18 @@ int dp_catalog_ctrl_set_pattern(struct dp_catalog 
*dp_catalog,

return 0;
 }

+/**
+ * dp_catalog_ctrl_reset() - reset DP controller
+ *
+ * @aux: DP catalog structure


It's called dp_catalog though.

registers access are through dp_catalog_



+ *
+ * return: void
+ *
+ * This function reset DP controller


resets the


+ *
+ * NOTE: reset DP controller will also clear any pending HPD related 
interrupts

+ *
+ */
 void dp_catalog_ctrl_reset(struct dp_catalog *dp_catalog)
 {
u32 sw_reset;
diff --git a/drivers/gpu/drm/msm/dp/dp_ctrl.c 
b/drivers/gpu/drm/msm/dp/dp_ctrl.c

index e3462f5..f96c415 100644
--- a/drivers/gpu/drm/msm/dp/dp_ctrl.c
+++ b/drivers/gpu/drm/msm/dp/dp_ctrl.c
@@ -1296,7 +1296,8 @@ static int dp_ctrl_setup_main_link(struct 
dp_ctrl_private *ctrl,

 * transitioned to PUSH_IDLE. In order to start transmitting
 * a link training pattern, we have to first do soft reset.
 */
-   dp_catalog_ctrl_reset(ctrl->catalog);
+   if (*training_step != DP_TRAINING_NONE)


Can we check for the positive value instead? i.e.
DP_TRAINING_1/DP_TRAINING_2


+   dp_catalog_ctrl_reset(ctrl->catalog);

ret = dp_ctrl_link_train(ctrl, cr, training_step);




Re: [PATCH v5 15/21] drm/tegra: Add new UAPI to header

2021-01-14 Thread Dmitry Osipenko
13.01.2021 21:56, Mikko Perttunen пишет:
> On 1/13/21 8:14 PM, Dmitry Osipenko wrote:
>> 11.01.2021 16:00, Mikko Perttunen пишет:
>>> +struct drm_tegra_submit_buf {
>>> +    /**
>>> + * @mapping_id: [in]
>>> + *
>>> + * Identifier of the mapping to use in the submission.
>>> + */
>>> +    __u32 mapping_id;
>>
>> I'm now in process of trying out the UAPI using grate drivers and this
>> becomes the first obstacle.
>>
>> Looks like this is not going to work well for older Tegra SoCs, in
>> particular for T20, which has a small GART.
>>
>> Given that the usefulness of the partial mapping feature is very
>> questionable until it will be proven with a real userspace, we should
>> start with a dynamic mappings that are done at a time of job submission.
>>
>> DRM already should have everything necessary for creating and managing
>> caches of mappings, grate kernel driver has been using drm_mm_scan for a
>> long time now for that.
>>
>> It should be fine to support the static mapping feature, but it should
>> be done separately with the drm_mm integration, IMO.
>>
>> What do think?
>>
> 
> Can you elaborate on the requirements to be able to use GART? Are there
> any other reasons this would not work on older chips?

We have all DRM devices in a single address space on T30+, hence having
duplicated mappings for each device should be a bit wasteful.

> I think we should keep CHANNEL_MAP and mapping_ids, but if e.g. for GART
> we cannot do mapping immediately at CHANNEL_MAP time, we can just treat
> it as a "registration" call for the GEM object - potentially no-op like
> direct physical addressing is. We can then do whatever is needed at
> submit time. This way we can have the best of both worlds.

I have some thoughts now, but nothing concrete yet. Maybe we will need
to create a per-SoC ops for MM.

I'll finish with trying what we currently have to see what else is
missing and then we will decide what to do about it.

> Note that partial mappings are already not present in this version of
> the UAPI.

Oh, right :) I haven't got closely to this part of reviewing yet.


Re: [PATCH 1/2] drm/msm/dp: postpone irq_hpd event during connection pending state

2021-01-14 Thread Stephen Boyd
Quoting khs...@codeaurora.org (2021-01-13 09:44:24)
> On 2021-01-11 11:55, Stephen Boyd wrote:
> > Quoting Kuogee Hsieh (2021-01-07 12:30:24)
> >> irq_hpd event can only be executed at connected state. Therefore
> >> irq_hpd event should be postponed if it happened at connection
> >> pending state. This patch also make sure both link rate and lane
> > 
> > Why does it happen at connection pending state?
> plug in need two state to complete it.
> advance to connection pending state once link training completed and 
> sent uevent notification to frame work.
> transition to connected state after frame work provided resolution 
> timing and start transmit video panel.
> Therefore irq_hpd should not be handled if it occurred before connected 
> state.

Sure that's what's going on in the patch but you didn't answer my
question. Why does irq_hpd happen before connected state?


[PATCH v5] drm/sun4i: tcon: fix inverted DCLK polarity

2021-01-14 Thread Giulio Benetti
From: Giulio Benetti 

In commit 88bc4178568b ("drm: Use new
DRM_BUS_FLAG_*_(DRIVE|SAMPLE)_(POS|NEG)EDGE flags") the DRM_BUS_FLAG_*
macros were changed to avoid ambiguity, but precisely because of that
ambiguity the previous DRM_BUS_FLAG_PIXDATA_(POS/NEG)EDGE flags had
been used to mean _SAMPLE_, not _DRIVE_. This leads to DCLK inversion
and needs to be fixed, but instead of swapping phase values, let's
adopt an easier approach Maxime suggested:
it turns out that bit 26 of SUN4I_TCON0_IO_POL_REG is dedicated to
inverting DCLK polarity, which makes things much easier than before. So
let's handle DCLK polarity by adding SUN4I_TCON0_IO_POL_DCLK_DRIVE_NEGEDGE
as bit 26 and activating it according to bus_flags, the same way it is
done for all the other signal polarities.

Fixes: 88bc4178568b ("drm: Use new DRM_BUS_FLAG_*_(DRIVE|SAMPLE)_(POS|NEG)EDGE 
flags")
Suggested-by: Maxime Ripard 
Signed-off-by: Giulio Benetti 
---
V2->V3:
- squash 2 patches into 1
V3->V4:
- add SUN4I_TCON0_IO_POL_DCLK_POSITIVE to regmap_update_bits()
V4->V5:
polarity is still wrong so:
- let's use SUN4I_TCON0_IO_POL_DCLK_DRIVE_NEGEDGE macro
  instead of _DCLK_POSITIVE (that would make sense only in relation to DCLK)
- invert condition using _NEGEDGE instead of _POSEDGE and then matching with
  register bit SUN4I_TCON0_IO_POL_DCLK_DRIVE_NEGEDGE
- correct commit log according to V4->V5 changes
---
 drivers/gpu/drm/sun4i/sun4i_tcon.c | 21 ++---
 drivers/gpu/drm/sun4i/sun4i_tcon.h |  1 +
 2 files changed, 3 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/sun4i/sun4i_tcon.c 
b/drivers/gpu/drm/sun4i/sun4i_tcon.c
index eaaf5d70e352..c172ccfff7e5 100644
--- a/drivers/gpu/drm/sun4i/sun4i_tcon.c
+++ b/drivers/gpu/drm/sun4i/sun4i_tcon.c
@@ -569,30 +569,13 @@ static void sun4i_tcon0_mode_set_rgb(struct sun4i_tcon 
*tcon,
if (info->bus_flags & DRM_BUS_FLAG_DE_LOW)
val |= SUN4I_TCON0_IO_POL_DE_NEGATIVE;
 
-   /*
-* On A20 and similar SoCs, the only way to achieve Positive Edge
-* (Rising Edge), is setting dclk clock phase to 2/3(240°).
-* By default TCON works in Negative Edge(Falling Edge),
-* this is why phase is set to 0 in that case.
-* Unfortunately there's no way to logically invert dclk through
-* IO_POL register.
-* The only acceptable way to work, triple checked with scope,
-* is using clock phase set to 0° for Negative Edge and set to 240°
-* for Positive Edge.
-* On A33 and similar SoCs there would be a 90° phase option,
-* but it divides also dclk by 2.
-* Following code is a way to avoid quirks all around TCON
-* and DOTCLOCK drivers.
-*/
-   if (info->bus_flags & DRM_BUS_FLAG_PIXDATA_DRIVE_POSEDGE)
-   clk_set_phase(tcon->dclk, 240);
-
if (info->bus_flags & DRM_BUS_FLAG_PIXDATA_DRIVE_NEGEDGE)
-   clk_set_phase(tcon->dclk, 0);
+   val |= SUN4I_TCON0_IO_POL_DCLK_DRIVE_NEGEDGE;
 
regmap_update_bits(tcon->regs, SUN4I_TCON0_IO_POL_REG,
   SUN4I_TCON0_IO_POL_HSYNC_POSITIVE |
   SUN4I_TCON0_IO_POL_VSYNC_POSITIVE |
+  SUN4I_TCON0_IO_POL_DCLK_DRIVE_NEGDGE |
   SUN4I_TCON0_IO_POL_DE_NEGATIVE,
   val);
 
diff --git a/drivers/gpu/drm/sun4i/sun4i_tcon.h 
b/drivers/gpu/drm/sun4i/sun4i_tcon.h
index cfbf4e6c1679..c5ac1b02482c 100644
--- a/drivers/gpu/drm/sun4i/sun4i_tcon.h
+++ b/drivers/gpu/drm/sun4i/sun4i_tcon.h
@@ -113,6 +113,7 @@
 #define SUN4I_TCON0_IO_POL_REG 0x88
 #define SUN4I_TCON0_IO_POL_DCLK_PHASE(phase)   ((phase & 3) << 28)
 #define SUN4I_TCON0_IO_POL_DE_NEGATIVE BIT(27)
+#define SUN4I_TCON0_IO_POL_DCLK_DRIVE_NEGEDGE  BIT(26)
 #define SUN4I_TCON0_IO_POL_HSYNC_POSITIVE  BIT(25)
 #define SUN4I_TCON0_IO_POL_VSYNC_POSITIVE  BIT(24)
 
-- 
2.25.1



[PATCH v3 2/2] drm/drm_vblank: set the dma-fence timestamp during send_vblank_event

2021-01-14 Thread Veera Sundaram Sankaran
The explicit out-fences in crtc are signaled as part of vblank event,
indicating all framebuffers present on the Atomic Commit request are
scanned out on the screen. Though the fence signal and the vblank event
notification happens at the same time, triggered by the same hardware
vsync event, the timestamps set in the two differ. With drivers
supporting precise vblank timestamps, the difference between the two
timestamps would be even higher. This might have an impact on user-mode
frameworks using these fence timestamps for purposes other than simple
buffer usage. For instance, the Android framework [1] uses the
retire-fences as an alternative to vblank when frame-updates are in
progress. Set the fence timestamp during send vblank event using a new
drm_send_event_timestamp_locked variant to avoid discrepancies.

[1] https://android.googlesource.com/platform/frameworks/native/+/master/
services/surfaceflinger/Scheduler/Scheduler.cpp#397

Changes in v2:
- Use drm_send_event_timestamp_locked to update fence timestamp
- add more information to commit text

Changes in v3:
- use same backend helper function for variants of drm_send_event to
avoid code duplications

Signed-off-by: Veera Sundaram Sankaran 
---
 drivers/gpu/drm/drm_file.c   | 71 
 drivers/gpu/drm/drm_vblank.c |  9 +-
 include/drm/drm_file.h   |  3 ++
 3 files changed, 70 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
index 0ac4566..b8348ca 100644
--- a/drivers/gpu/drm/drm_file.c
+++ b/drivers/gpu/drm/drm_file.c
@@ -775,20 +775,19 @@ void drm_event_cancel_free(struct drm_device *dev,
 EXPORT_SYMBOL(drm_event_cancel_free);
 
 /**
- * drm_send_event_locked - send DRM event to file descriptor
+ * drm_send_event_helper - send DRM event to file descriptor
  * @dev: DRM device
  * @e: DRM event to deliver
+ * @timestamp: timestamp to set for the fence event in kernel's CLOCK_MONOTONIC
+ * time domain
  *
- * This function sends the event @e, initialized with drm_event_reserve_init(),
- * to its associated userspace DRM file. Callers must already hold
- * &drm_device.event_lock, see drm_send_event() for the unlocked version.
- *
- * Note that the core will take care of unlinking and disarming events when the
- * corresponding DRM file is closed. Drivers need not worry about whether the
- * DRM file for this event still exists and can call this function upon
- * completion of the asynchronous work unconditionally.
+ * This helper function sends the event @e, initialized with
+ * drm_event_reserve_init(), to its associated userspace DRM file.
+ * The timestamp variant of dma_fence_signal is used when the caller
+ * sends a valid timestamp.
  */
-void drm_send_event_locked(struct drm_device *dev, struct drm_pending_event *e)
+void drm_send_event_helper(struct drm_device *dev,
+   struct drm_pending_event *e, ktime_t timestamp)
 {
assert_spin_locked(&dev->event_lock);
 
@@ -799,7 +798,10 @@ void drm_send_event_locked(struct drm_device *dev, struct 
drm_pending_event *e)
}
 
if (e->fence) {
-   dma_fence_signal(e->fence);
+   if (timestamp)
+   dma_fence_signal_timestamp(e->fence, timestamp);
+   else
+   dma_fence_signal(e->fence);
dma_fence_put(e->fence);
}
 
@@ -814,6 +816,51 @@ void drm_send_event_locked(struct drm_device *dev, struct 
drm_pending_event *e)
wake_up_interruptible_poll(&e->file_priv->event_wait,
EPOLLIN | EPOLLRDNORM);
 }
+
+/**
+ * drm_send_event_timestamp_locked - send DRM event to file descriptor
+ * @dev: DRM device
+ * @e: DRM event to deliver
+ * @timestamp: timestamp to set for the fence event in kernel's CLOCK_MONOTONIC
+ * time domain
+ *
+ * This function sends the event @e, initialized with drm_event_reserve_init(),
+ * to its associated userspace DRM file. Callers must already hold
+ * &drm_device.event_lock.
+ *
+ * Note that the core will take care of unlinking and disarming events when the
+ * corresponding DRM file is closed. Drivers need not worry about whether the
+ * DRM file for this event still exists and can call this function upon
+ * completion of the asynchronous work unconditionally.
+ */
+void drm_send_event_timestamp_locked(struct drm_device *dev,
+   struct drm_pending_event *e, ktime_t timestamp)
+{
+   WARN_ON(!timestamp);
+
+   drm_send_event_helper(dev, e, timestamp);
+
+}
+EXPORT_SYMBOL(drm_send_event_timestamp_locked);
+
+/**
+ * drm_send_event_locked - send DRM event to file descriptor
+ * @dev: DRM device
+ * @e: DRM event to deliver
+ *
+ * This function sends the event @e, initialized with drm_event_reserve_init(),
+ * to its associated userspace DRM file. Callers must already hold
+ * &drm_device.event_lock, see drm_send_event() for the unlocked version.
+ *
+ * Note that the core will take care of unlinking and disarming events when the
+ * corresponding DRM file is closed. Drivers need not worry about whether the
+ * DRM file for this event still exists and can call this function upon
+ * completion of the asynchronous work unconditionally.
+ */

[PATCH v3 0/7] Add support for Adreno 508/509/512

2021-01-14 Thread AngeloGioacchino Del Regno
In this patch series, we are adding support for lower-end Adreno 5
series GPUs, such as the A508, A509 and A512 found in the Qualcomm
SDM630, SDM636 and SDM660 SoCs.

On a side note, adding support for these three units also adds 99% of
what is required for another two GPUs, the A505 and A506, but even
though adding them requires literally two lines of code, none of us has
any SoC equipped with these ones, hence we wouldn't be able to test.
Even though there is basically no reason for them not to work correctly
on the kernel side, I chose to avoid adding the two "magic" lines.

Anyway, this patchset also addresses some issues that we've found in
the A5XX part of the Adreno driver, regarding a logic mistake in one
of the VPC protect values and a forced overwrite of the register named
A5XX_PC_DBG_ECO_CNTL, forcing vtxFifo and primFifo threshold settings
that were valid only for higher-end GPUs.

This patch series has been tested on the following devices:
 - Sony Xperia XA2 Ultra (SDM630 Nile Discovery)
 - Sony Xperia 10(SDM630 Ganges Kirin)
 - Sony Xperia 10 Plus   (SDM636 Ganges Mermaid)

Changes in v3:
 - Rebased on 5.11-rc3
 - Changed emails to reflect new ones
 - Tested on F(x)Tec Pro1 and Xperia XZ Premium (MSM8998)

Changes in v2:
 - Define REG_A5XX_UCHE_MODE_CNTL and fix open-coded
   REG_A5XX_VPC_DBG_ECO_CNTL in the all flat shading optimization
   disablement commit, as requested by Rob Clark.

AngeloGioacchino Del Regno (4):
  drm/msm/a5xx: Remove overwriting A5XX_PC_DBG_ECO_CNTL register
  drm/msm/a5xx: Separate A5XX_PC_DBG_ECO_CNTL write from main branch
  drm/msm/a5xx: Add support for Adreno 508, 509, 512 GPUs
  drm/msm/a5xx: Reset VBIF before PC only on A510 and A530

Konrad Dybcio (3):
  drm/msm/a5xx: Fix VPC protect value in gpu_write()
  drm/msm/a5xx: Disable flat shading optimization
  drm/msm/a5xx: Disable UCHE global filter

 drivers/gpu/drm/msm/adreno/a5xx.xml.h  |   2 +
 drivers/gpu/drm/msm/adreno/a5xx_gpu.c  | 195 ++---
 drivers/gpu/drm/msm/adreno/a5xx_power.c|   4 +-
 drivers/gpu/drm/msm/adreno/adreno_device.c |  52 ++
 drivers/gpu/drm/msm/adreno/adreno_gpu.h|  15 ++
 5 files changed, 241 insertions(+), 27 deletions(-)

-- 
2.29.2



Re: [PATCH 2/2] drm/msm/dp: unplug interrupt missed after irq_hpd handler

2021-01-14 Thread Stephen Boyd
Quoting khs...@codeaurora.org (2021-01-13 09:48:25)
> On 2021-01-11 11:54, Stephen Boyd wrote:
> > Quoting Kuogee Hsieh (2021-01-07 12:30:25)
> >> There is HPD unplug interrupts missed at scenario of an irq_hpd
> >> followed by unplug interrupts with around 10 ms in between.
> >> Since both AUX_SW_RESET and DP_SW_RESET clear pending HPD interrupts,
> >> irq_hpd handler should not issues either aux or sw reset to avoid
> >> following unplug interrupt be cleared accidentally.
> > 
> > So the problem is that we're resetting the DP aux phy in the middle of
> > the HPD state machine transitioning states?
> > 
> yes, after reset aux, hw clear pending hpd interrupts
> >> 
> >> Signed-off-by: Kuogee Hsieh 
> >> ---
> >> diff --git a/drivers/gpu/drm/msm/dp/dp_catalog.c 
> >> b/drivers/gpu/drm/msm/dp/dp_catalog.c
> >> index 44f0c57..9c0ce98 100644
> >> --- a/drivers/gpu/drm/msm/dp/dp_catalog.c
> >> +++ b/drivers/gpu/drm/msm/dp/dp_catalog.c
> >> @@ -190,6 +190,18 @@ int dp_catalog_aux_clear_hw_interrupts(struct 
> >> dp_catalog *dp_catalog)
> >> return 0;
> >>  }
> >> 
> >> +/**
> >> + * dp_catalog_aux_reset() - reset AUX controller
> >> + *
> >> + * @aux: DP catalog structure
> >> + *
> >> + * return: void
> >> + *
> >> + * This function reset AUX controller
> >> + *
> >> + * NOTE: reset AUX controller will also clear any pending HPD related 
> >> interrupts
> >> + *
> >> + */
> >>  void dp_catalog_aux_reset(struct dp_catalog *dp_catalog)
> >>  {
> >> u32 aux_ctrl;
> >> @@ -483,6 +495,18 @@ int dp_catalog_ctrl_set_pattern(struct dp_catalog 
> >> *dp_catalog,
> >> return 0;
> >>  }
> >> 
> >> +/**
> >> + * dp_catalog_ctrl_reset() - reset DP controller
> >> + *
> >> + * @aux: DP catalog structure
> > 
> > It's called dp_catalog though.
> registers access are through dp_catalog_

Agreed. The variable is not called 'aux' though, it's called
'dp_catalog'.

> > 
> >> + *
> >> + * return: void
> >> + *
> >> + * This function reset DP controller
> > 
> > resets the
> > 
> >> + *
> >> + * NOTE: reset DP controller will also clear any pending HPD related 
> >> interrupts
> >> + *
> >> + */
> >>  void dp_catalog_ctrl_reset(struct dp_catalog *dp_catalog)

Right here.

> >>  {
> >> u32 sw_reset;
> >> diff --git a/drivers/gpu/drm/msm/dp/dp_ctrl.c 
> >> b/drivers/gpu/drm/msm/dp/dp_ctrl.c
> >> index e3462f5..f96c415 100644
> >> --- a/drivers/gpu/drm/msm/dp/dp_ctrl.c
> >> +++ b/drivers/gpu/drm/msm/dp/dp_ctrl.c
> >> @@ -1296,7 +1296,8 @@ static int dp_ctrl_setup_main_link(struct 
> >> dp_ctrl_private *ctrl,
> >>  * transitioned to PUSH_IDLE. In order to start transmitting
> >>  * a link training pattern, we have to first do soft reset.
> >>  */
> >> -   dp_catalog_ctrl_reset(ctrl->catalog);
> >> +   if (*training_step != DP_TRAINING_NONE)
> > 
> > Can we check for the positive value instead? i.e.
> > DP_TRAINING_1/DP_TRAINING_2
> > 

Any answer?


Re: [PATCH v2 0/2] fix missing unplug interrupt problem

2021-01-14 Thread khsieh

On 2021-01-13 12:25, Stephen Boyd wrote:

Quoting Kuogee Hsieh (2021-01-13 10:59:58)

Both AUX_SW_RESET and DP_SW_RESET clear pending HPD interrupts.
Therefore the irq_hpd handler should not issue either an aux or a sw
reset, to avoid the following unplug interrupt being cleared accidentally.

Kuogee Hsieh (2):
  drm/msm/dp: return fail when both link lane and rate are 0 at dpcd
read
  drm/msm/dp: unplug interrupt missed after irq_hpd handler


It won't apply to the drm msm tree. Please rebase and resend.

Both V1 patches were already picked up by Rob.
I will drop the V2 patches.


[PATCH v3 2/7] drm/msm/a5xx: Separate A5XX_PC_DBG_ECO_CNTL write from main branch

2021-01-14 Thread AngeloGioacchino Del Regno
The "main" if branch where we program the other registers for the
Adreno 5xx family of GPUs should not contain the PC_DBG_ECO_CNTL
register programming, because its logic differs from that of all the
other registers.

A later commit will make the full rationale for this clear.

Signed-off-by: AngeloGioacchino Del Regno 

Reviewed-by: Jordan Crouse 
---
 drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 9 ++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c 
b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
index 81506d2539b0..8c96fc0fc1b7 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
@@ -609,8 +609,6 @@ static int a5xx_hw_init(struct msm_gpu *gpu)
gpu_write(gpu, REG_A5XX_CP_MERCIU_SIZE, 0x20);
gpu_write(gpu, REG_A5XX_CP_ROQ_THRESHOLDS_2, 0x4030);
gpu_write(gpu, REG_A5XX_CP_ROQ_THRESHOLDS_1, 0x20100D0A);
-   gpu_write(gpu, REG_A5XX_PC_DBG_ECO_CNTL,
- (0x200 << 11 | 0x200 << 22));
} else {
gpu_write(gpu, REG_A5XX_CP_MEQ_THRESHOLDS, 0x40);
if (adreno_is_a530(adreno_gpu))
@@ -619,9 +617,14 @@ static int a5xx_hw_init(struct msm_gpu *gpu)
gpu_write(gpu, REG_A5XX_CP_MERCIU_SIZE, 0x400);
gpu_write(gpu, REG_A5XX_CP_ROQ_THRESHOLDS_2, 0x8060);
gpu_write(gpu, REG_A5XX_CP_ROQ_THRESHOLDS_1, 0x40201B16);
+   }
+
+   if (adreno_is_a510(adreno_gpu))
+   gpu_write(gpu, REG_A5XX_PC_DBG_ECO_CNTL,
+ (0x200 << 11 | 0x200 << 22));
+   else
gpu_write(gpu, REG_A5XX_PC_DBG_ECO_CNTL,
  (0x400 << 11 | 0x300 << 22));
-   }
 
if (adreno_gpu->info->quirks & ADRENO_QUIRK_TWO_PASS_USE_WFI)
gpu_rmw(gpu, REG_A5XX_PC_DBG_ECO_CNTL, 0, (1 << 8));
-- 
2.29.2



Re: [PATCH 1/2] drm/msm/dp: postpone irq_hpd event during connection pending state

2021-01-14 Thread Stephen Boyd
Quoting khs...@codeaurora.org (2021-01-13 15:44:32)
> On 2021-01-13 12:22, Stephen Boyd wrote:
> > Quoting khs...@codeaurora.org (2021-01-13 09:44:24)
> >> On 2021-01-11 11:55, Stephen Boyd wrote:
> >> > Quoting Kuogee Hsieh (2021-01-07 12:30:24)
> >> >> irq_hpd event can only be executed at connected state. Therefore
> >> >> irq_hpd event should be postponed if it happened at connection
> >> >> pending state. This patch also make sure both link rate and lane
> >> >
> >> > Why does it happen at connection pending state?
> >> plug in need two state to complete it.
> >> advance to connection pending state once link training completed and
> >> sent uevent notification to frame work.
> >> transition to connected state after frame work provided resolution
> >> timing and start transmit video panel.
> >> Therefore irq_hpd should not be handled if it occurred before 
> >> connected
> >> state.
> > 
> > Sure that's what's going on in the patch but you didn't answer my
> > question. Why does irq_hpd happen before connected state?
> 
> I have no idea why it happen this way.
> during debug 
> https://partnerissuetracker.corp.google.com/issues/170598152
> I saw two different scenario
> 1) irq_hpd followed by unplug with less than 20 ms in between. this one 
> fixed by this patch set.
> 2) plug followed by irq_hpd around 300ms in between. it does not cause 
> problem. but it should be handled in order (after connected state).

Ok. So nobody understands why the hardware is acting this way and we're
papering over the problem by forcing the HPD state to be what we think
it should be? That's not great.


[PATCH] drm: kirin: use devm_platform_ioremap_resource() to simplify code

2021-01-14 Thread Tian Tao
Use devm_platform_ioremap_resource() to simplify the code.

Signed-off-by: Tian Tao 
---
 drivers/gpu/drm/hisilicon/kirin/dw_drm_dsi.c| 5 ++---
 drivers/gpu/drm/hisilicon/kirin/kirin_drm_ade.c | 4 +---
 2 files changed, 3 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/hisilicon/kirin/dw_drm_dsi.c 
b/drivers/gpu/drm/hisilicon/kirin/dw_drm_dsi.c
index 00e87c2..b00d5f6 100644
--- a/drivers/gpu/drm/hisilicon/kirin/dw_drm_dsi.c
+++ b/drivers/gpu/drm/hisilicon/kirin/dw_drm_dsi.c
@@ -817,7 +817,6 @@ static int dsi_parse_dt(struct platform_device *pdev, 
struct dw_dsi *dsi)
 {
struct dsi_hw_ctx *ctx = dsi->ctx;
struct device_node *np = pdev->dev.of_node;
-   struct resource *res;
int ret;
 
/*
@@ -834,8 +833,8 @@ static int dsi_parse_dt(struct platform_device *pdev, 
struct dw_dsi *dsi)
return PTR_ERR(ctx->pclk);
}
 
-   res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-   ctx->base = devm_ioremap_resource(&pdev->dev, res);
+   ctx->base = devm_platform_ioremap_resource(pdev, 0);
+
if (IS_ERR(ctx->base)) {
DRM_ERROR("failed to remap dsi io region\n");
return PTR_ERR(ctx->base);
diff --git a/drivers/gpu/drm/hisilicon/kirin/kirin_drm_ade.c 
b/drivers/gpu/drm/hisilicon/kirin/kirin_drm_ade.c
index aa6c53f..6892743 100644
--- a/drivers/gpu/drm/hisilicon/kirin/kirin_drm_ade.c
+++ b/drivers/gpu/drm/hisilicon/kirin/kirin_drm_ade.c
@@ -840,7 +840,6 @@ static struct drm_plane_funcs ade_plane_funcs = {
 static void *ade_hw_ctx_alloc(struct platform_device *pdev,
  struct drm_crtc *crtc)
 {
-   struct resource *res;
struct device *dev = &pdev->dev;
struct device_node *np = pdev->dev.of_node;
struct ade_hw_ctx *ctx = NULL;
@@ -852,8 +851,7 @@ static void *ade_hw_ctx_alloc(struct platform_device *pdev,
return ERR_PTR(-ENOMEM);
}
 
-   res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-   ctx->base = devm_ioremap_resource(dev, res);
+   ctx->base = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(ctx->base)) {
DRM_ERROR("failed to remap ade io base\n");
return ERR_PTR(-EIO);
-- 
2.7.4



Re: [PATCH 1/2] drm/msm/dp: postpone irq_hpd event during connection pending state

2021-01-14 Thread Stephen Boyd
Quoting khs...@codeaurora.org (2021-01-13 09:44:24)
> On 2021-01-11 11:55, Stephen Boyd wrote:
> > Quoting Kuogee Hsieh (2021-01-07 12:30:24)
> >> irq_hpd event can only be executed at connected state. Therefore
> >> irq_hpd event should be postponed if it happened at connection
> >> pending state. This patch also make sure both link rate and lane
> > 
> > Why does it happen at connection pending state?
> plug in need two state to complete it.
> advance to connection pending state once link training completed and 
> sent uevent notification to frame work.
> transition to connected state after frame work provided resolution 
> timing and start transmit video panel.
> Therefore irq_hpd should not be handled if it occurred before connected 
> state.
> > 
> >> are valid before start link training.
> > 
> > Can this part about link rate and lane being valid be split off into
> > another patch?
> > 
> ok, i will spilt this patch into two.
> I will merge irq_hpd event part into 2nd patch (drm/msm/dp: unplug 
> interrupt missed after irq_hpd handler).

It looks like Rob already picked this patch up

https://gitlab.freedesktop.org/drm/msm/-/commit/2b5f09cadfc576817c0450e01d454f750909b103


Re: [PATCH 1/2] drm/msm/dp: postpone irq_hpd event during connection pending state

2021-01-14 Thread khsieh

On 2021-01-11 11:55, Stephen Boyd wrote:

Quoting Kuogee Hsieh (2021-01-07 12:30:24)

irq_hpd event can only be executed at connected state. Therefore
irq_hpd event should be postponed if it happened at connection
pending state. This patch also make sure both link rate and lane


Why does it happen at connection pending state?

Plug-in needs two states to complete.
It advances to the connection pending state once link training has
completed and a uevent notification has been sent to the framework,
then transitions to the connected state after the framework has
provided the resolution timing and video transmission to the panel
has started.
Therefore irq_hpd should not be handled if it occurs before the
connected state.



are valid before start link training.


Can this part about link rate and lane being valid be split off into
another patch?


Ok, I will split this patch into two.
I will merge the irq_hpd event part into the 2nd patch (drm/msm/dp: unplug
interrupt missed after irq_hpd handler).




Signed-off-by: Kuogee Hsieh 
---


Any fixes tag?


 drivers/gpu/drm/msm/dp/dp_display.c |  7 +++
 drivers/gpu/drm/msm/dp/dp_panel.c   | 12 +---
 2 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/msm/dp/dp_display.c 
b/drivers/gpu/drm/msm/dp/dp_display.c

index 6e971d5..3bc7ed2 100644
--- a/drivers/gpu/drm/msm/dp/dp_display.c
+++ b/drivers/gpu/drm/msm/dp/dp_display.c
@@ -693,6 +693,13 @@ static int dp_irq_hpd_handle(struct 
dp_display_private *dp, u32 data)

return 0;
}

+   if (state == ST_CONNECT_PENDING) {
+   /* wait until ST_CONNECTED */
+   dp_add_event(dp, EV_IRQ_HPD_INT, 0, 1); /* delay = 1 
*/

+   mutex_unlock(&dp->event_mutex);
+   return 0;
+   }
+
ret = dp_display_usbpd_attention_cb(&dp->pdev->dev);
if (ret == -ECONNRESET) { /* cable unplugged */
dp->core_initialized = false;



[PATCH v6] drm/sun4i: tcon: fix inverted DCLK polarity

2021-01-14 Thread Giulio Benetti
From: Giulio Benetti 

In commit 88bc4178568b ("drm: Use new
DRM_BUS_FLAG_*_(DRIVE|SAMPLE)_(POS|NEG)EDGE flags") the DRM_BUS_FLAG_*
macros were changed to avoid ambiguity, but precisely because of that
ambiguity the previous DRM_BUS_FLAG_PIXDATA_(POS/NEG)EDGE flags had
been used to mean _SAMPLE_, not _DRIVE_. This leads to DCLK inversion
and needs to be fixed, but instead of swapping phase values, let's
adopt an easier approach Maxime suggested:
it turns out that bit 26 of SUN4I_TCON0_IO_POL_REG is dedicated to
inverting DCLK polarity, which makes things much easier than before. So
let's handle DCLK polarity by adding SUN4I_TCON0_IO_POL_DCLK_DRIVE_NEGEDGE
as bit 26 and activating it according to bus_flags, the same way it is
done for all the other signal polarities.

Fixes: 88bc4178568b ("drm: Use new DRM_BUS_FLAG_*_(DRIVE|SAMPLE)_(POS|NEG)EDGE 
flags")
Suggested-by: Maxime Ripard 
Signed-off-by: Giulio Benetti 
---
V2->V3:
- squash 2 patches into 1
V3->V4:
- add SUN4I_TCON0_IO_POL_DCLK_POSITIVE to regmap_update_bits() as suggested by 
Maxime
V4->V5:
polarity is still wrong so:
- let's use SUN4I_TCON0_IO_POL_DCLK_DRIVE_NEGEDGE macro
  instead of _DCLK_POSITIVE (that would make sense only in relation to DCLK)
- invert condition using _NEGEDGE instead of _POSEDGE and then matching with
  register bit SUN4I_TCON0_IO_POL_DCLK_DRIVE_NEGEDGE
- correct commit log according to V4->V5 changes
V5->V6:
- fix typo in SUN4I_TCON0_IO_POL_DCLK_DRIVE_NEGEDGE as suggested by Marjan
---
 drivers/gpu/drm/sun4i/sun4i_tcon.c | 21 ++---
 drivers/gpu/drm/sun4i/sun4i_tcon.h |  1 +
 2 files changed, 3 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/sun4i/sun4i_tcon.c 
b/drivers/gpu/drm/sun4i/sun4i_tcon.c
index eaaf5d70e352..6b9af4c08cd6 100644
--- a/drivers/gpu/drm/sun4i/sun4i_tcon.c
+++ b/drivers/gpu/drm/sun4i/sun4i_tcon.c
@@ -569,30 +569,13 @@ static void sun4i_tcon0_mode_set_rgb(struct sun4i_tcon 
*tcon,
if (info->bus_flags & DRM_BUS_FLAG_DE_LOW)
val |= SUN4I_TCON0_IO_POL_DE_NEGATIVE;
 
-   /*
-* On A20 and similar SoCs, the only way to achieve Positive Edge
-* (Rising Edge), is setting dclk clock phase to 2/3(240°).
-* By default TCON works in Negative Edge(Falling Edge),
-* this is why phase is set to 0 in that case.
-* Unfortunately there's no way to logically invert dclk through
-* IO_POL register.
-* The only acceptable way to work, triple checked with scope,
-* is using clock phase set to 0° for Negative Edge and set to 240°
-* for Positive Edge.
-* On A33 and similar SoCs there would be a 90° phase option,
-* but it divides also dclk by 2.
-* Following code is a way to avoid quirks all around TCON
-* and DOTCLOCK drivers.
-*/
-   if (info->bus_flags & DRM_BUS_FLAG_PIXDATA_DRIVE_POSEDGE)
-   clk_set_phase(tcon->dclk, 240);
-
if (info->bus_flags & DRM_BUS_FLAG_PIXDATA_DRIVE_NEGEDGE)
-   clk_set_phase(tcon->dclk, 0);
+   val |= SUN4I_TCON0_IO_POL_DCLK_DRIVE_NEGEDGE;
 
regmap_update_bits(tcon->regs, SUN4I_TCON0_IO_POL_REG,
   SUN4I_TCON0_IO_POL_HSYNC_POSITIVE |
   SUN4I_TCON0_IO_POL_VSYNC_POSITIVE |
+  SUN4I_TCON0_IO_POL_DCLK_DRIVE_NEGEDGE |
   SUN4I_TCON0_IO_POL_DE_NEGATIVE,
   val);
 
diff --git a/drivers/gpu/drm/sun4i/sun4i_tcon.h 
b/drivers/gpu/drm/sun4i/sun4i_tcon.h
index cfbf4e6c1679..c5ac1b02482c 100644
--- a/drivers/gpu/drm/sun4i/sun4i_tcon.h
+++ b/drivers/gpu/drm/sun4i/sun4i_tcon.h
@@ -113,6 +113,7 @@
 #define SUN4I_TCON0_IO_POL_REG 0x88
 #define SUN4I_TCON0_IO_POL_DCLK_PHASE(phase)   ((phase & 3) << 28)
 #define SUN4I_TCON0_IO_POL_DE_NEGATIVE BIT(27)
+#define SUN4I_TCON0_IO_POL_DCLK_DRIVE_NEGEDGE  BIT(26)
 #define SUN4I_TCON0_IO_POL_HSYNC_POSITIVE  BIT(25)
 #define SUN4I_TCON0_IO_POL_VSYNC_POSITIVE  BIT(24)
 
-- 
2.25.1

___
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel


[PATCH] drm/amdgpu: Remove repeated assignment to max_slave_planes

2021-01-14 Thread ZhiJie.Zhang
Signed-off-by: ZhiJie.Zhang 
---
 drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c 
b/drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c
index 3f63822b8e28..9a86d43a6233 100644
--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c
@@ -1272,7 +1272,6 @@ static bool underlay_create(struct dc_context *ctx, 
struct resource_pool *pool)
 
/* update the public caps to indicate an underlay is available */
ctx->dc->caps.max_slave_planes = 1;
-   ctx->dc->caps.max_slave_planes = 1;
 
return true;
 }
-- 
2.29.2



Re: [PATCH v2 0/2] fix missing unplug interrupt problem

2021-01-14 Thread Stephen Boyd
Quoting Kuogee Hsieh (2021-01-13 10:59:58)
> Both AUX_SW_RESET and DP_SW_RESET clear pending HPD interrupts.
> Therefore the irq_hpd handler should not issue either an aux or sw reset
> to avoid the following unplug interrupt being cleared accidentally.
> 
> Kuogee Hsieh (2):
>   drm/msm/dp: return fail when both link lane and rate are 0 at dpcd
> read
>   drm/msm/dp: unplug interrupt missed after irq_hpd handler

It won't apply to the drm msm tree. Please rebase and resend.


Re: [PATCH v5] drm/sun4i: tcon: fix inverted DCLK polarity

2021-01-14 Thread Giulio Benetti

Hi Marjan,

On 1/14/21 8:58 AM, Marjan Pascolo wrote:

Hi Giulio,

You made a typo

Il 13/01/2021 17:05, Giulio Benetti ha scritto:

From: Giulio Benetti 

During commit 88bc4178568b ("drm: Use new
DRM_BUS_FLAG_*_(DRIVE|SAMPLE)_(POS|NEG)EDGE flags") DRM_BUS_FLAG_*
macros have been changed to avoid ambiguity but just because of this
ambiguity previous DRM_BUS_FLAG_PIXDATA_(POS/NEG)EDGE were used meaning
_SAMPLE_ not _DRIVE_. This leads to DCLK inversion and needs fixing, but
instead of swapping phase values, let's adopt an easier approach Maxime
suggested:
It turned out that bit 26 of SUN4I_TCON0_IO_POL_REG is dedicated to
invert DCLK polarity and this makes things really easier than before. So
let's handle DCLK polarity by adding SUN4I_TCON0_IO_POL_DCLK_DRIVE_NEGEDGE
as bit 26 and activating according to bus_flags the same way it is done
for all the other signals polarity.

Fixes: 88bc4178568b ("drm: Use new DRM_BUS_FLAG_*_(DRIVE|SAMPLE)_(POS|NEG)EDGE 
flags")
Suggested-by: Maxime Ripard 
Signed-off-by: Giulio Benetti 
---
V2->V3:
- squash 2 patches into 1
V3->V4:
- add SUN4I_TCON0_IO_POL_DCLK_POSITIVE to regmap_update_bits()
V4->V5:
polarity is still wrong so:
- let's use SUN4I_TCON0_IO_POL_DCLK_DRIVE_NEGEDGE macro
instead of _DCLK_POSITIVE (which would make sense only in relation to DCLK)
- invert condition using _NEGEDGE instead of _POSEDGE and then matching with
register bit SUN4I_TCON0_IO_POL_DCLK_DRIVE_NEGEDGE
- correct commit log according to V4->V5 changes
---
   drivers/gpu/drm/sun4i/sun4i_tcon.c | 21 ++---
   drivers/gpu/drm/sun4i/sun4i_tcon.h |  1 +
   2 files changed, 3 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/sun4i/sun4i_tcon.c 
b/drivers/gpu/drm/sun4i/sun4i_tcon.c
index eaaf5d70e352..c172ccfff7e5 100644
--- a/drivers/gpu/drm/sun4i/sun4i_tcon.c
+++ b/drivers/gpu/drm/sun4i/sun4i_tcon.c
@@ -569,30 +569,13 @@ static void sun4i_tcon0_mode_set_rgb(struct sun4i_tcon 
*tcon,
if (info->bus_flags & DRM_BUS_FLAG_DE_LOW)
val |= SUN4I_TCON0_IO_POL_DE_NEGATIVE;
   
-	/*

-* On A20 and similar SoCs, the only way to achieve Positive Edge
-* (Rising Edge), is setting dclk clock phase to 2/3(240°).
-* By default TCON works in Negative Edge(Falling Edge),
-* this is why phase is set to 0 in that case.
-* Unfortunately there's no way to logically invert dclk through
-* IO_POL register.
-* The only acceptable way to work, triple checked with scope,
-* is using clock phase set to 0° for Negative Edge and set to 240°
-* for Positive Edge.
-* On A33 and similar SoCs there would be a 90° phase option,
-* but it divides also dclk by 2.
-* Following code is a way to avoid quirks all around TCON
-* and DOTCLOCK drivers.
-*/
-   if (info->bus_flags & DRM_BUS_FLAG_PIXDATA_DRIVE_POSEDGE)
-   clk_set_phase(tcon->dclk, 240);
-
if (info->bus_flags & DRM_BUS_FLAG_PIXDATA_DRIVE_NEGEDGE)
-   clk_set_phase(tcon->dclk, 0);
+   val |= SUN4I_TCON0_IO_POL_DCLK_DRIVE_NEGEDGE;
   
   	regmap_update_bits(tcon->regs, SUN4I_TCON0_IO_POL_REG,

   SUN4I_TCON0_IO_POL_HSYNC_POSITIVE |
   SUN4I_TCON0_IO_POL_VSYNC_POSITIVE |

Here Below you missed an 'E'


yes, thank you, going to send v6.

Best regards
--
Giulio Benetti
Benetti Engineering sas


+  SUN4I_TCON0_IO_POL_DCLK_DRIVE_NEGDGE |
   SUN4I_TCON0_IO_POL_DE_NEGATIVE,
   val);
   
diff --git a/drivers/gpu/drm/sun4i/sun4i_tcon.h b/drivers/gpu/drm/sun4i/sun4i_tcon.h

index cfbf4e6c1679..c5ac1b02482c 100644
--- a/drivers/gpu/drm/sun4i/sun4i_tcon.h
+++ b/drivers/gpu/drm/sun4i/sun4i_tcon.h
@@ -113,6 +113,7 @@
   #define SUN4I_TCON0_IO_POL_REG   0x88
   #define SUN4I_TCON0_IO_POL_DCLK_PHASE(phase) ((phase & 3) << 28)
   #define SUN4I_TCON0_IO_POL_DE_NEGATIVE   BIT(27)
+#define SUN4I_TCON0_IO_POL_DCLK_DRIVE_NEGEDGE  BIT(26)
   #define SUN4I_TCON0_IO_POL_HSYNC_POSITIVEBIT(25)
   #define SUN4I_TCON0_IO_POL_VSYNC_POSITIVEBIT(24)
   




[PATCH] drm/amd/display: Simplify bool comparison

2021-01-14 Thread Yang Li
Fix the following coccicheck warning:
./drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_cm.c:580:23-31:
WARNING: Comparison to bool

Reported-by: Abaci Robot 
Signed-off-by: Yang Li 
---
 drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_cm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_cm.c 
b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_cm.c
index 4d3f7d5..904c2d2 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_cm.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_cm.c
@@ -577,7 +577,7 @@ void dpp1_power_on_degamma_lut(
struct dcn10_dpp *dpp = TO_DCN10_DPP(dpp_base);
 
REG_SET(CM_MEM_PWR_CTRL, 0,
-   SHARED_MEM_PWR_DIS, power_on == true ? 0:1);
+   SHARED_MEM_PWR_DIS, power_on ? 0:1);
 
 }
 
-- 
1.8.3.1



Re: [PATCH v2 0/2] fix missing unplug interrupt problem

2021-01-14 Thread Stephen Boyd
Quoting khs...@codeaurora.org (2021-01-13 15:52:37)
> On 2021-01-13 12:25, Stephen Boyd wrote:
> > Quoting Kuogee Hsieh (2021-01-13 10:59:58)
> >> Both AUX_SW_RESET and DP_SW_RESET clear pending HPD interrupts.
> >> Therefore the irq_hpd handler should not issue either an aux or sw reset
> >> to avoid the following unplug interrupt being cleared accidentally.
> >> 
> >> Kuogee Hsieh (2):
> >>   drm/msm/dp: return fail when both link lane and rate are 0 at dpcd
> >> read
> >>   drm/msm/dp: unplug interrupt missed after irq_hpd handler
> > 
> > It won't apply to the drm msm tree. Please rebase and resend.
> Both V1 patches were already picked up by Rob.
> I will drop the V2 patches.

I only see the first patch, not the second one. Rob?


[PATCH v3 1/7] drm/msm/a5xx: Remove overwriting A5XX_PC_DBG_ECO_CNTL register

2021-01-14 Thread AngeloGioacchino Del Regno
The PC_DBG_ECO_CNTL register on the Adreno A5xx family gets
programmed to some different values on a per-model basis.
At least, this is what we intend to do here;

Unfortunately, though, this register is being overwritten with a
static magic number, right after applying the GPU-specific
configuration (including the GPU-specific quirks) and that is
effectively nullifying the efforts.

Let's remove the redundant and wrong write to the PC_DBG_ECO_CNTL
register in order to retain the wanted configuration for the
target GPU.

Signed-off-by: AngeloGioacchino Del Regno 

Reviewed-by: Jordan Crouse 
---
 drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c 
b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
index a5af223eaf50..81506d2539b0 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
@@ -626,8 +626,6 @@ static int a5xx_hw_init(struct msm_gpu *gpu)
if (adreno_gpu->info->quirks & ADRENO_QUIRK_TWO_PASS_USE_WFI)
gpu_rmw(gpu, REG_A5XX_PC_DBG_ECO_CNTL, 0, (1 << 8));
 
-   gpu_write(gpu, REG_A5XX_PC_DBG_ECO_CNTL, 0xc0200100);
-
/* Enable USE_RETENTION_FLOPS */
gpu_write(gpu, REG_A5XX_CP_CHICKEN_DBG, 0x0200);
 
-- 
2.29.2



Re: [PATCH -next] phy: mediatek: Mark mtk_mipi_tx_driver with static keyword

2021-01-14 Thread Chunfeng Yun
On Tue, 2021-01-12 at 09:38 +0800, Zou Wei wrote:
> Fix the following sparse warning:
> 
> drivers/phy/mediatek/phy-mtk-mipi-dsi.c:237:24: warning: symbol 
> 'mtk_mipi_tx_driver' was not declared. Should it be static?
> 
> Signed-off-by: Zou Wei 
> ---
>  drivers/phy/mediatek/phy-mtk-mipi-dsi.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/phy/mediatek/phy-mtk-mipi-dsi.c 
> b/drivers/phy/mediatek/phy-mtk-mipi-dsi.c
> index 18c4812..eeb357b 100644
> --- a/drivers/phy/mediatek/phy-mtk-mipi-dsi.c
> +++ b/drivers/phy/mediatek/phy-mtk-mipi-dsi.c
> @@ -234,7 +234,7 @@ static const struct of_device_id mtk_mipi_tx_match[] = {
>   { },
>  };
>  
> -struct platform_driver mtk_mipi_tx_driver = {
> +static struct platform_driver mtk_mipi_tx_driver = {
>   .probe = mtk_mipi_tx_probe,
>   .remove = mtk_mipi_tx_remove,
>   .driver = {

Acked-by: Chunfeng Yun 

Thanks



Re: [PATCH v5 15/21] drm/tegra: Add new UAPI to header

2021-01-14 Thread Dmitry Osipenko
11.01.2021 16:00, Mikko Perttunen wrote:
> +struct drm_tegra_submit_buf {
> + /**
> +  * @mapping_id: [in]
> +  *
> +  * Identifier of the mapping to use in the submission.
> +  */
> + __u32 mapping_id;

I'm now in process of trying out the UAPI using grate drivers and this
becomes the first obstacle.

Looks like this is not going to work well for older Tegra SoCs, in
particular for T20, which has a small GART.

Given that the usefulness of the partial mapping feature is very
questionable until it is proven with real userspace, we should
start with dynamic mappings that are done at job submission time.

DRM already should have everything necessary for creating and managing
caches of mappings, grate kernel driver has been using drm_mm_scan for a
long time now for that.

It should be fine to support the static mapping feature, but it should
be done separately with the drm_mm integration, IMO.

What do you think?


Re: [PATCH 1/2] drm/msm/dp: postpone irq_hpd event during connection pending state

2021-01-14 Thread khsieh

On 2021-01-13 12:22, Stephen Boyd wrote:

Quoting khs...@codeaurora.org (2021-01-13 09:44:24)

On 2021-01-11 11:55, Stephen Boyd wrote:
> Quoting Kuogee Hsieh (2021-01-07 12:30:24)
>> irq_hpd event can only be executed at connected state. Therefore
>> irq_hpd event should be postponed if it happened at connection
>> pending state. This patch also make sure both link rate and lane
>
> Why does it happen at connection pending state?
Plug-in needs two states to complete:
it advances to the connection-pending state once link training has
completed and a uevent notification has been sent to the framework,
then transitions to the connected state after the framework provides
the resolution timing and video transmission to the panel starts.
Therefore irq_hpd should not be handled if it occurred before the
connected state.



Sure that's what's going on in the patch but you didn't answer my
question. Why does irq_hpd happen before connected state?


I have no idea why it happens this way.
While debugging
https://partnerissuetracker.corp.google.com/issues/170598152
I saw two different scenarios:
1) irq_hpd followed by unplug with less than 20 ms in between; this one
is fixed by this patch set.
2) plug followed by irq_hpd with around 300 ms in between; it does not
cause a problem, but it should still be handled in order (after the
connected state).



Re: [PATCH 2/2] drm/msm/dp: unplug interrupt missed after irq_hpd handler

2021-01-14 Thread khsieh

On 2021-01-13 12:23, Stephen Boyd wrote:

Quoting khs...@codeaurora.org (2021-01-13 09:48:25)

On 2021-01-11 11:54, Stephen Boyd wrote:
> Quoting Kuogee Hsieh (2021-01-07 12:30:25)
>> There is HPD unplug interrupts missed at scenario of an irq_hpd
>> followed by unplug interrupts with around 10 ms in between.
>> Since both AUX_SW_RESET and DP_SW_RESET clear pending HPD interrupts,
>> irq_hpd handler should not issues either aux or sw reset to avoid
>> following unplug interrupt be cleared accidentally.
>
> So the problem is that we're resetting the DP aux phy in the middle of
> the HPD state machine transitioning states?
>
yes, after an aux reset the hw clears pending hpd interrupts
>>
>> Signed-off-by: Kuogee Hsieh 
>> ---
>> diff --git a/drivers/gpu/drm/msm/dp/dp_catalog.c
>> b/drivers/gpu/drm/msm/dp/dp_catalog.c
>> index 44f0c57..9c0ce98 100644
>> --- a/drivers/gpu/drm/msm/dp/dp_catalog.c
>> +++ b/drivers/gpu/drm/msm/dp/dp_catalog.c
>> @@ -190,6 +190,18 @@ int dp_catalog_aux_clear_hw_interrupts(struct
>> dp_catalog *dp_catalog)
>> return 0;
>>  }
>>
>> +/**
>> + * dp_catalog_aux_reset() - reset AUX controller
>> + *
>> + * @aux: DP catalog structure
>> + *
>> + * return: void
>> + *
>> + * This function reset AUX controller
>> + *
>> + * NOTE: reset AUX controller will also clear any pending HPD related
>> interrupts
>> + *
>> + */
>>  void dp_catalog_aux_reset(struct dp_catalog *dp_catalog)
>>  {
>> u32 aux_ctrl;
>> @@ -483,6 +495,18 @@ int dp_catalog_ctrl_set_pattern(struct dp_catalog
>> *dp_catalog,
>> return 0;
>>  }
>>
>> +/**
>> + * dp_catalog_ctrl_reset() - reset DP controller
>> + *
>> + * @aux: DP catalog structure
>
> It's called dp_catalog though.
register accesses are done through dp_catalog_*


Agreed. The variable is not called 'aux' though, it's called
'dp_catalog'.


>
>> + *
>> + * return: void
>> + *
>> + * This function reset DP controller
>
> resets the
>
>> + *
>> + * NOTE: reset DP controller will also clear any pending HPD related
>> interrupts
>> + *
>> + */
>>  void dp_catalog_ctrl_reset(struct dp_catalog *dp_catalog)


Right here.


fixed at v2



>>  {
>> u32 sw_reset;
>> diff --git a/drivers/gpu/drm/msm/dp/dp_ctrl.c
>> b/drivers/gpu/drm/msm/dp/dp_ctrl.c
>> index e3462f5..f96c415 100644
>> --- a/drivers/gpu/drm/msm/dp/dp_ctrl.c
>> +++ b/drivers/gpu/drm/msm/dp/dp_ctrl.c
>> @@ -1296,7 +1296,8 @@ static int dp_ctrl_setup_main_link(struct
>> dp_ctrl_private *ctrl,
>>  * transitioned to PUSH_IDLE. In order to start transmitting
>>  * a link training pattern, we have to first do soft reset.
>>  */
>> -   dp_catalog_ctrl_reset(ctrl->catalog);
>> +   if (*training_step != DP_TRAINING_NONE)
>
> Can we check for the positive value instead? i.e.
> DP_TRAINING_1/DP_TRAINING_2
>


Any answer?


fixed at v2


Re: [PATCH v3] drm/sun4i: tcon: fix inverted DCLK polarity

2021-01-14 Thread Maxime Ripard
Hi,

On Mon, Jan 11, 2021 at 06:46:16PM +0100, Giulio Benetti wrote:
> From: Giulio Benetti 
> 
> During commit 88bc4178568b ("drm: Use new
> DRM_BUS_FLAG_*_(DRIVE|SAMPLE)_(POS|NEG)EDGE flags") DRM_BUS_FLAG_*
> macros have been changed to avoid ambiguity but just because of this
> ambiguity previous DRM_BUS_FLAG_PIXDATA_(POS/NEG)EDGE were used meaning
> _SAMPLE_ not _DRIVE_. This leads to DCLK inversion and needs fixing, but
> instead of swapping phase values, let's adopt an easier approach Maxime
> suggested:
> It turned out that bit 26 of SUN4I_TCON0_IO_POL_REG is dedicated to
> invert DCLK polarity and this makes things really easier than before. So
> let's handle DCLK polarity by adding SUN4I_TCON0_IO_POL_DCLK_POSITIVE as
> bit 26 and activating according to bus_flags the same way it is done for
> all the other signals polarity.
> 
> Fixes: 88bc4178568b ("drm: Use new 
> DRM_BUS_FLAG_*_(DRIVE|SAMPLE)_(POS|NEG)EDGE flags")
> Suggested-by: Maxime Ripard 
> Signed-off-by: Giulio Benetti 
> ---
>  drivers/gpu/drm/sun4i/sun4i_tcon.c | 20 +---
>  drivers/gpu/drm/sun4i/sun4i_tcon.h |  1 +
>  2 files changed, 2 insertions(+), 19 deletions(-)
> 
> diff --git a/drivers/gpu/drm/sun4i/sun4i_tcon.c 
> b/drivers/gpu/drm/sun4i/sun4i_tcon.c
> index eaaf5d70e352..30171ccd87e5 100644
> --- a/drivers/gpu/drm/sun4i/sun4i_tcon.c
> +++ b/drivers/gpu/drm/sun4i/sun4i_tcon.c
> @@ -569,26 +569,8 @@ static void sun4i_tcon0_mode_set_rgb(struct sun4i_tcon 
> *tcon,
>   if (info->bus_flags & DRM_BUS_FLAG_DE_LOW)
>   val |= SUN4I_TCON0_IO_POL_DE_NEGATIVE;
>  
> - /*
> -  * On A20 and similar SoCs, the only way to achieve Positive Edge
> -  * (Rising Edge), is setting dclk clock phase to 2/3(240°).
> -  * By default TCON works in Negative Edge(Falling Edge),
> -  * this is why phase is set to 0 in that case.
> -  * Unfortunately there's no way to logically invert dclk through
> -  * IO_POL register.
> -  * The only acceptable way to work, triple checked with scope,
> -  * is using clock phase set to 0° for Negative Edge and set to 240°
> -  * for Positive Edge.
> -  * On A33 and similar SoCs there would be a 90° phase option,
> -  * but it divides also dclk by 2.
> -  * Following code is a way to avoid quirks all around TCON
> -  * and DOTCLOCK drivers.
> -  */
>   if (info->bus_flags & DRM_BUS_FLAG_PIXDATA_DRIVE_POSEDGE)
> - clk_set_phase(tcon->dclk, 240);
> -
> - if (info->bus_flags & DRM_BUS_FLAG_PIXDATA_DRIVE_NEGEDGE)
> - clk_set_phase(tcon->dclk, 0);
> + val |= SUN4I_TCON0_IO_POL_DCLK_POSITIVE;
>  
>   regmap_update_bits(tcon->regs, SUN4I_TCON0_IO_POL_REG,
>  SUN4I_TCON0_IO_POL_HSYNC_POSITIVE |

You need to add SUN4I_TCON0_IO_POL_DCLK_POSITIVE to the mask you're
going to change here too

Maxime




Re: [PATCH v5] drm/sun4i: tcon: fix inverted DCLK polarity

2021-01-14 Thread Marjan Pascolo

Hi Giulio,

You made a typo

Il 13/01/2021 17:05, Giulio Benetti ha scritto:

From: Giulio Benetti 

During commit 88bc4178568b ("drm: Use new
DRM_BUS_FLAG_*_(DRIVE|SAMPLE)_(POS|NEG)EDGE flags") DRM_BUS_FLAG_*
macros have been changed to avoid ambiguity but just because of this
ambiguity previous DRM_BUS_FLAG_PIXDATA_(POS/NEG)EDGE were used meaning
_SAMPLE_ not _DRIVE_. This leads to DCLK inversion and needs fixing, but
instead of swapping phase values, let's adopt an easier approach Maxime
suggested:
It turned out that bit 26 of SUN4I_TCON0_IO_POL_REG is dedicated to
invert DCLK polarity and this makes things really easier than before. So
let's handle DCLK polarity by adding SUN4I_TCON0_IO_POL_DCLK_DRIVE_NEGEDGE
as bit 26 and activating according to bus_flags the same way it is done
for all the other signals polarity.

Fixes: 88bc4178568b ("drm: Use new DRM_BUS_FLAG_*_(DRIVE|SAMPLE)_(POS|NEG)EDGE 
flags")
Suggested-by: Maxime Ripard 
Signed-off-by: Giulio Benetti 
---
V2->V3:
- squash 2 patches into 1
V3->V4:
- add SUN4I_TCON0_IO_POL_DCLK_POSITIVE to regmap_update_bits()
V4->V5:
polarity is still wrong so:
- let's use SUN4I_TCON0_IO_POL_DCLK_DRIVE_NEGEDGE macro
   instead of _DCLK_POSITIVE (which would make sense only in relation to DCLK)
- invert condition using _NEGEDGE instead of _POSEDGE and then matching with
   register bit SUN4I_TCON0_IO_POL_DCLK_DRIVE_NEGEDGE
- correct commit log according to V4->V5 changes
---
  drivers/gpu/drm/sun4i/sun4i_tcon.c | 21 ++---
  drivers/gpu/drm/sun4i/sun4i_tcon.h |  1 +
  2 files changed, 3 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/sun4i/sun4i_tcon.c 
b/drivers/gpu/drm/sun4i/sun4i_tcon.c
index eaaf5d70e352..c172ccfff7e5 100644
--- a/drivers/gpu/drm/sun4i/sun4i_tcon.c
+++ b/drivers/gpu/drm/sun4i/sun4i_tcon.c
@@ -569,30 +569,13 @@ static void sun4i_tcon0_mode_set_rgb(struct sun4i_tcon 
*tcon,
if (info->bus_flags & DRM_BUS_FLAG_DE_LOW)
val |= SUN4I_TCON0_IO_POL_DE_NEGATIVE;
  
-	/*

-* On A20 and similar SoCs, the only way to achieve Positive Edge
-* (Rising Edge), is setting dclk clock phase to 2/3(240°).
-* By default TCON works in Negative Edge(Falling Edge),
-* this is why phase is set to 0 in that case.
-* Unfortunately there's no way to logically invert dclk through
-* IO_POL register.
-* The only acceptable way to work, triple checked with scope,
-* is using clock phase set to 0° for Negative Edge and set to 240°
-* for Positive Edge.
-* On A33 and similar SoCs there would be a 90° phase option,
-* but it divides also dclk by 2.
-* Following code is a way to avoid quirks all around TCON
-* and DOTCLOCK drivers.
-*/
-   if (info->bus_flags & DRM_BUS_FLAG_PIXDATA_DRIVE_POSEDGE)
-   clk_set_phase(tcon->dclk, 240);
-
if (info->bus_flags & DRM_BUS_FLAG_PIXDATA_DRIVE_NEGEDGE)
-   clk_set_phase(tcon->dclk, 0);
+   val |= SUN4I_TCON0_IO_POL_DCLK_DRIVE_NEGEDGE;
  
  	regmap_update_bits(tcon->regs, SUN4I_TCON0_IO_POL_REG,

   SUN4I_TCON0_IO_POL_HSYNC_POSITIVE |
   SUN4I_TCON0_IO_POL_VSYNC_POSITIVE |

Here Below you missed an 'E'

+  SUN4I_TCON0_IO_POL_DCLK_DRIVE_NEGDGE |
   SUN4I_TCON0_IO_POL_DE_NEGATIVE,
   val);
  
diff --git a/drivers/gpu/drm/sun4i/sun4i_tcon.h b/drivers/gpu/drm/sun4i/sun4i_tcon.h

index cfbf4e6c1679..c5ac1b02482c 100644
--- a/drivers/gpu/drm/sun4i/sun4i_tcon.h
+++ b/drivers/gpu/drm/sun4i/sun4i_tcon.h
@@ -113,6 +113,7 @@
  #define SUN4I_TCON0_IO_POL_REG0x88
  #define SUN4I_TCON0_IO_POL_DCLK_PHASE(phase)  ((phase & 3) << 28)
  #define SUN4I_TCON0_IO_POL_DE_NEGATIVEBIT(27)
+#define SUN4I_TCON0_IO_POL_DCLK_DRIVE_NEGEDGE  BIT(26)
  #define SUN4I_TCON0_IO_POL_HSYNC_POSITIVE BIT(25)
  #define SUN4I_TCON0_IO_POL_VSYNC_POSITIVE BIT(24)
  



[PATCH v2 0/2] fix missing unplug interrupt problem

2021-01-14 Thread Kuogee Hsieh
Both AUX_SW_RESET and DP_SW_RESET clear pending HPD interrupts.
Therefore the irq_hpd handler should not issue either an aux or sw reset
to avoid the following unplug interrupt being cleared accidentally.

Kuogee Hsieh (2):
  drm/msm/dp: return fail when both link lane and rate are 0 at dpcd
read
  drm/msm/dp: unplug interrupt missed after irq_hpd handler

 drivers/gpu/drm/msm/dp/dp_aux.c |  7 ---
 drivers/gpu/drm/msm/dp/dp_catalog.c | 24 
 drivers/gpu/drm/msm/dp/dp_ctrl.c| 15 ++-
 drivers/gpu/drm/msm/dp/dp_display.c |  7 +++
 drivers/gpu/drm/msm/dp/dp_panel.c   | 12 +---
 5 files changed, 50 insertions(+), 15 deletions(-)

-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project



Re: [PATCH v3] drm/sun4i: tcon: fix inverted DCLK polarity

2021-01-14 Thread Giulio Benetti

On 1/13/21 10:42 AM, Maxime Ripard wrote:

Hi,

On Mon, Jan 11, 2021 at 06:46:16PM +0100, Giulio Benetti wrote:

From: Giulio Benetti 

During commit 88bc4178568b ("drm: Use new
DRM_BUS_FLAG_*_(DRIVE|SAMPLE)_(POS|NEG)EDGE flags") DRM_BUS_FLAG_*
macros have been changed to avoid ambiguity but just because of this
ambiguity previous DRM_BUS_FLAG_PIXDATA_(POS/NEG)EDGE were used meaning
_SAMPLE_ not _DRIVE_. This leads to DCLK inversion and needs fixing, but
instead of swapping phase values, let's adopt an easier approach Maxime
suggested:
It turned out that bit 26 of SUN4I_TCON0_IO_POL_REG is dedicated to
invert DCLK polarity and this makes things really easier than before. So
let's handle DCLK polarity by adding SUN4I_TCON0_IO_POL_DCLK_POSITIVE as
bit 26 and activating according to bus_flags the same way it is done for
all the other signals polarity.

Fixes: 88bc4178568b ("drm: Use new DRM_BUS_FLAG_*_(DRIVE|SAMPLE)_(POS|NEG)EDGE 
flags")
Suggested-by: Maxime Ripard 
Signed-off-by: Giulio Benetti 
---
  drivers/gpu/drm/sun4i/sun4i_tcon.c | 20 +---
  drivers/gpu/drm/sun4i/sun4i_tcon.h |  1 +
  2 files changed, 2 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/sun4i/sun4i_tcon.c 
b/drivers/gpu/drm/sun4i/sun4i_tcon.c
index eaaf5d70e352..30171ccd87e5 100644
--- a/drivers/gpu/drm/sun4i/sun4i_tcon.c
+++ b/drivers/gpu/drm/sun4i/sun4i_tcon.c
@@ -569,26 +569,8 @@ static void sun4i_tcon0_mode_set_rgb(struct sun4i_tcon 
*tcon,
if (info->bus_flags & DRM_BUS_FLAG_DE_LOW)
val |= SUN4I_TCON0_IO_POL_DE_NEGATIVE;
  
-	/*

-* On A20 and similar SoCs, the only way to achieve Positive Edge
-* (Rising Edge), is setting dclk clock phase to 2/3(240°).
-* By default TCON works in Negative Edge(Falling Edge),
-* this is why phase is set to 0 in that case.
-* Unfortunately there's no way to logically invert dclk through
-* IO_POL register.
-* The only acceptable way to work, triple checked with scope,
-* is using clock phase set to 0° for Negative Edge and set to 240°
-* for Positive Edge.
-* On A33 and similar SoCs there would be a 90° phase option,
-* but it divides also dclk by 2.
-* Following code is a way to avoid quirks all around TCON
-* and DOTCLOCK drivers.
-*/
if (info->bus_flags & DRM_BUS_FLAG_PIXDATA_DRIVE_POSEDGE)
-   clk_set_phase(tcon->dclk, 240);
-
-   if (info->bus_flags & DRM_BUS_FLAG_PIXDATA_DRIVE_NEGEDGE)
-   clk_set_phase(tcon->dclk, 0);
+   val |= SUN4I_TCON0_IO_POL_DCLK_POSITIVE;
  
  	regmap_update_bits(tcon->regs, SUN4I_TCON0_IO_POL_REG,

   SUN4I_TCON0_IO_POL_HSYNC_POSITIVE |


You need to add SUN4I_TCON0_IO_POL_DCLK_POSITIVE to the mask you're
going to change here too


Ah, lost it during squash, I send a v4.

Thank you
Best regards
--
Giulio Benetti
Benetti Engineering sas


Re: [Intel-gfx] [PATCH] drm-buf: Add debug option

2021-01-14 Thread Daniel Vetter
On Thu, Jan 14, 2021 at 10:23 AM Chris Wilson  wrote:
>
> Quoting Daniel Vetter (2021-01-14 09:02:57)
> > On Wed, Jan 13, 2021 at 10:08 PM Chris Wilson  
> > wrote:
> > > Quoting Daniel Vetter (2021-01-13 20:50:11)
> > > > On Wed, Jan 13, 2021 at 4:43 PM Chris Wilson  
> > > > wrote:
> > > > >
> > > > > Quoting Daniel Vetter (2021-01-13 14:06:04)
> > > > > > We have too many people abusing the struct page they can get at but
> > > > > > really shouldn't in importers. Aside from that the backing page 
> > > > > > might
> > > > > > simply not exist (for dynamic p2p mappings) looking at it and using 
> > > > > > it
> > > > > > e.g. for mmap can also wreak the page handling of the exporter
> > > > > > completely. Importers really must go through the proper interface 
> > > > > > like
> > > > > > dma_buf_mmap for everything.
> > > > >
> > > > > If the exporter doesn't want to expose the struct page, why are they
> > > > > setting it in the exported sg_table?
> > > >
> > > > You need to store it somewhere, otherwise the dma-api doesn't work.
> > > > Essentially this achieves clearing/resetting the struct page pointer,
> > > > without additional allocations somewhere, or tons of driver changes
> > > > (since presumably the driver does keep track of the struct page
> > > > somewhere too).
> > >
> > > Only for mapping, and that's before the export -- if there's even a
> > > struct page to begin with.
> > >
> > > > Also as long as we have random importers looking at struct page we
> > > > can't just remove it, or crashes everywhere. So it has to be some
> > > > debug option you can disable.
> > >
> > > Totally agreed that nothing generic can rely on pages being transported
> > > via dma-buf, and memfd is there if you do want a suitable transport. The
> > > one I don't know about is dma-buf heap, do both parties there consent to
> > > transport pages via the dma-buf? i.e. do they have special cases for
> > > import/export between heaps?
> >
> > heaps shouldn't be any different wrt the interface exposed to
> > importers. Adding John just in case I missed something.
> >
> > I think the only problem we have is that the first import for ttm
> > simply pulled out the struct page and ignored the sgtable otherwise,
> > then that copypasted to places and we're still have some of that left.
> > Although it's a lot better. So largely the problem is importers being
> > a bit silly.
> >
> > I also think I should change the defaulty y to default y if
> > DMA_API_DEBUG or something like that, to make sure it's actually
> > enabled often enough.
>
> It felt overly draconian, but other than the open question of dma-buf
> heaps (which I realise that we need some CI coverage for), I can't
> think of a good reason to argue for hiding a struct page transport
> within dma-buf.

Yeah there's the occasional idea floating around to split sgtable into
the page and the dma side completely. But aside from the bikeshed no
one volunteered for the massive amount of work rolling that out would
mean, so I'm trying to go with a cheap trick here in the meantime.

> The only other problem I see with the implementation is that there's
> nothing that says that each dmabuf->ops->map_dma_buf() returns a new
> sg_table, so we may end up undoing the xor. Or should each dma-buf
> return a fresh dma-mapping for iommu isolation?

Maybe I screwed it up, but that's why I extracted the little helpers:
We scramble when we get the sgtable from exporter, and unscramble
before we pass it back. dma-buf.c does some caching and will hand back
the same sgtable, but for that case we don't re-scramble.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


Re: [PATCH v4 4/7] dt-bindings: display: imx: hdmi: Convert binding to YAML

2021-01-14 Thread Laurent Pinchart
Hi Philipp,

On Thu, Jan 14, 2021 at 09:17:47AM +0100, Philipp Zabel wrote:
> On Thu, 2021-01-14 at 08:44 +0200, Laurent Pinchart wrote:
> > Convert the i.MX6 HDMI TX text binding to YAML.
> > 
> > Signed-off-by: Laurent Pinchart 
> > Reviewed-by: Rob Herring 
> > ---
> > Changes since v3:
> > 
> > - Use port instead of port-base
> > 
> > Changes since v1:
> > 
> > - Only specify maxItems for clocks
> > - Drop reg and interrupts as they're checked in the base schema
> > - Rebase on top of OF graph schema, dropped redundant properties
> > - Fix identation for enum entries
> > - Drop clock-names items, use maxItems only
> > ---
> >  .../bindings/display/imx/fsl,imx6-hdmi.yaml   | 126 ++
> >  .../devicetree/bindings/display/imx/hdmi.txt  |  65 -
> >  2 files changed, 126 insertions(+), 65 deletions(-)
> >  create mode 100644 
> > Documentation/devicetree/bindings/display/imx/fsl,imx6-hdmi.yaml
> >  delete mode 100644 Documentation/devicetree/bindings/display/imx/hdmi.txt
> > 
> > diff --git 
> > a/Documentation/devicetree/bindings/display/imx/fsl,imx6-hdmi.yaml 
> > b/Documentation/devicetree/bindings/display/imx/fsl,imx6-hdmi.yaml
> > new file mode 100644
> > index ..af7fe9c4d196
> > --- /dev/null
> > +++ b/Documentation/devicetree/bindings/display/imx/fsl,imx6-hdmi.yaml
> > @@ -0,0 +1,126 @@
> > +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
> > +%YAML 1.2
> > +---
> > +$id: http://devicetree.org/schemas/display/imx/fsl,imx6-hdmi.yaml#
> > +$schema: http://devicetree.org/meta-schemas/core.yaml#
> > +
> > +title: Freescale i.MX6 DWC HDMI TX Encoder
> > +
> > +maintainers:
> > +  - Philipp Zabel 
> 
> Acked-by: Philipp Zabel 

Very appreciated, thank you.

-- 
Regards,

Laurent Pinchart


Re: [PATCH] drm/amd/display: Simplify bool comparison

2021-01-14 Thread Jani Nikula
On Wed, 13 Jan 2021, Yang Li  wrote:
> Fix the following coccicheck warning:
> ./drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_cm.c:580:23-31:
> WARNING: Comparison to bool
>
> Reported-by: Abaci Robot 
> Signed-off-by: Yang Li 
> ---
>  drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_cm.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_cm.c 
> b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_cm.c
> index 4d3f7d5..904c2d2 100644
> --- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_cm.c
> +++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_cm.c
> @@ -577,7 +577,7 @@ void dpp1_power_on_degamma_lut(
>   struct dcn10_dpp *dpp = TO_DCN10_DPP(dpp_base);
>  
>   REG_SET(CM_MEM_PWR_CTRL, 0,
> - SHARED_MEM_PWR_DIS, power_on == true ? 0:1);
> + SHARED_MEM_PWR_DIS, power_on ? 0:1);

Not my driver, but this is as simple as it gets:

+   SHARED_MEM_PWR_DIS, !power_on);


>  
>  }
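For a C bool operand all three spellings are equivalent, which a standalone check confirms (the function names here are made up for illustration):

```c
#include <assert.h>
#include <stdbool.h>

/* Three spellings of the same computation on a bool. */
static int pwr_dis_verbose(bool power_on) { return power_on == true ? 0 : 1; }
static int pwr_dis_ternary(bool power_on) { return power_on ? 0 : 1; }
static int pwr_dis_negated(bool power_on) { return !power_on; }
```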

-- 
Jani Nikula, Intel Open Source Graphics Center


Re: [PATCH] drm/vc4: Initialize vc4_drm_driver with CMA helper defaults

2021-01-14 Thread Daniel Vetter
On Thu, Jan 14, 2021 at 09:49:49AM +0100, Thomas Zimmermann wrote:
> The function vc4_prime_import_sg_table() is an otherwise empty wrapper
> around CMA's drm_gem_cma_prime_import_sg_table(). Removing it in favor
> of the latter allows to initialize vc4_drm_driver with CMA's initializer
> macro.
> 
> Signed-off-by: Thomas Zimmermann 

Reviewed-by: Daniel Vetter 

> ---
>  drivers/gpu/drm/vc4/vc4_bo.c  | 14 --
>  drivers/gpu/drm/vc4/vc4_drv.c |  7 +--
>  drivers/gpu/drm/vc4/vc4_drv.h |  3 ---
>  3 files changed, 1 insertion(+), 23 deletions(-)
> 
> diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
> index 28e48ef2d295..fddaeb0b09c1 100644
> --- a/drivers/gpu/drm/vc4/vc4_bo.c
> +++ b/drivers/gpu/drm/vc4/vc4_bo.c
> @@ -738,20 +738,6 @@ static const struct drm_gem_object_funcs 
> vc4_gem_object_funcs = {
>   .vm_ops = &vc4_vm_ops,
>  };
>  
> -struct drm_gem_object *
> -vc4_prime_import_sg_table(struct drm_device *dev,
> -   struct dma_buf_attachment *attach,
> -   struct sg_table *sgt)
> -{
> - struct drm_gem_object *obj;
> -
> - obj = drm_gem_cma_prime_import_sg_table(dev, attach, sgt);
> - if (IS_ERR(obj))
> - return obj;
> -
> - return obj;
> -}
> -
>  static int vc4_grab_bin_bo(struct vc4_dev *vc4, struct vc4_file *vc4file)
>  {
>   int ret;
> diff --git a/drivers/gpu/drm/vc4/vc4_drv.c b/drivers/gpu/drm/vc4/vc4_drv.c
> index d9b3bba7f2b7..556ad0f02a0d 100644
> --- a/drivers/gpu/drm/vc4/vc4_drv.c
> +++ b/drivers/gpu/drm/vc4/vc4_drv.c
> @@ -180,12 +180,7 @@ static struct drm_driver vc4_drm_driver = {
>  
>   .gem_create_object = vc4_create_object,
>  
> - .prime_handle_to_fd = drm_gem_prime_handle_to_fd,
> - .prime_fd_to_handle = drm_gem_prime_fd_to_handle,
> - .gem_prime_import_sg_table = vc4_prime_import_sg_table,
> - .gem_prime_mmap = drm_gem_prime_mmap,
> -
> - .dumb_create = vc4_dumb_create,
> + DRM_GEM_CMA_DRIVER_OPS_WITH_DUMB_CREATE(vc4_dumb_create),
>  
>   .ioctls = vc4_drm_ioctls,
>   .num_ioctls = ARRAY_SIZE(vc4_drm_ioctls),
> diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h
> index 0d9c0ecc4769..a7500716cf3f 100644
> --- a/drivers/gpu/drm/vc4/vc4_drv.h
> +++ b/drivers/gpu/drm/vc4/vc4_drv.h
> @@ -801,9 +801,6 @@ int vc4_get_hang_state_ioctl(struct drm_device *dev, void 
> *data,
>struct drm_file *file_priv);
>  int vc4_label_bo_ioctl(struct drm_device *dev, void *data,
>  struct drm_file *file_priv);
> -struct drm_gem_object *vc4_prime_import_sg_table(struct drm_device *dev,
> -  struct dma_buf_attachment 
> *attach,
> -  struct sg_table *sgt);
>  int vc4_bo_cache_init(struct drm_device *dev);
>  int vc4_bo_inc_usecnt(struct vc4_bo *bo);
>  void vc4_bo_dec_usecnt(struct vc4_bo *bo);
> -- 
> 2.29.2
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


Re: [Intel-gfx] [PATCH] drm-buf: Add debug option

2021-01-14 Thread Chris Wilson
Quoting Daniel Vetter (2021-01-14 09:30:32)
> On Thu, Jan 14, 2021 at 10:23 AM Chris Wilson  
> wrote:
> > The only other problem I see with the implementation is that there's
> > nothing that says that each dmabuf->ops->map_dma_buf() returns a new
> > sg_table, so we may end up undoing the xor. Or should each dma-buf
> > return a fresh dma-mapping for iommu isolation?
> 
> Maybe I screwed it up, but that's why I extracted the little helpers:
> We scramble when we get the sgtable from exporter, and unscramble
> before we pass it back. dma-buf.c does some caching and will hand back
> the same sgtable, but for that case we don't re-scramble.

The attachment is only mapped once, but there can be more than one
attachment, and the backend could return the same sg_table for each
mapping. Conceivably, it could return its own private sg_table where it
wants to maintain the struct page. Seems like just adding a sentence to
@map_dma_buf to clarify that each call should return a new sg_table will
suffice.
-Chris
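The clarifying sentence suggested here might look roughly like this in the @map_dma_buf kerneldoc (hypothetical wording, not the actual dma-buf documentation):

```c
/**
 * @map_dma_buf:
 *
 * ...
 * Each call must return a freshly allocated &sg_table. Callers may
 * modify the returned table, so the same table must never be handed
 * out to more than one mapping or attachment.
 */
```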


Re: [PATCH v6 1/4] drm/i915: Keep track of pwm-related backlight hooks separately

2021-01-14 Thread Jani Nikula
On Thu, 14 Jan 2021, Jani Nikula  wrote:
> On Wed, 13 Jan 2021, Lyude Paul  wrote:
>> Currently, every different type of backlight hook that i915 supports is
>> pretty straight forward - you have a backlight, probably through PWM
>> (but maybe DPCD), with a single set of platform-specific hooks that are
>> used for controlling it.
>>
>> HDR backlights, in particular VESA and Intel's HDR backlight
>> implementations, can end up being more complicated. With Intel's
>> proprietary interface, HDR backlight controls always run through the
>> DPCD. When the backlight is in SDR backlight mode however, the driver
>> may need to bypass the TCON and control the backlight directly through
>> PWM.
>>
>> So, in order to support this we'll need to split our backlight callbacks
>> into two groups: a set of high-level backlight control callbacks in
>> intel_panel, and an additional set of pwm-specific backlight control
>> callbacks. This also implies a functional changes for how these
>> callbacks are used:
>>
>> * We now keep track of two separate backlight level ranges, one for the
>>   high-level backlight, and one for the pwm backlight range
>> * We also keep track of backlight enablement and PWM backlight
>>   enablement separately
>> * Since the currently set backlight level might not be the same as the
>>   currently programmed PWM backlight level, we stop setting
>>   panel->backlight.level with the currently programmed PWM backlight
>>   level in panel->backlight.pwm_funcs->setup(). Instead, we rely
>>   on the higher level backlight control functions to retrieve the
>>   current PWM backlight level (in this case, intel_pwm_get_backlight()).
>>   Note that there are still a few PWM backlight setup callbacks that
>>   do actually need to retrieve the current PWM backlight level, although
>>   we no longer save this value in panel->backlight.level like before.
>>
>> Additionally, we drop the call to lpt_get_backlight() in
>> lpt_setup_backlight(), and avoid unconditionally writing the PWM value that
>> we get from it and only write it back if we're in CPU mode, and switching
>> to PCH mode. The reason for this is because in the original codepath for
>> this, it was expected that the intel_panel_bl_funcs->setup() hook would be
>> responsible for fetching the initial backlight level. On lpt systems, the
>> only time we could ever be in PCH backlight mode is during the initial
>> driver load - meaning that outside of the setup() hook, lpt_get_backlight()
>> will always be the callback used for retrieving the current backlight
>> level. After this patch we still need to fetch and write-back the PCH
>> backlight value if we're switching from CPU mode to PCH, but because
>> intel_pwm_setup_backlight() will retrieve the backlight level after setup()
>> using the get() hook, which always ends up being lpt_get_backlight(). Thus
>> - an additional call to lpt_get_backlight() in lpt_setup_backlight() is
>> made redundant.
>>
>> v7:
>> * Use panel->backlight.pwm_funcs->get() to get the backlight level in
>>   intel_pwm_setup_backlight(), lest we upset lockdep
>
> I think this change is wrong, as it now bypasses
> intel_panel_invert_pwm_level(). Please explain. I don't see anything in
> there that could trigger a lockdep warning.
>
> Perhaps it's the below you're referring to, but I think the root cause
> is different?
>
>> @@ -1788,22 +1780,17 @@ static int vlv_setup_backlight(struct 
>> intel_connector *connector, enum pipe pipe
>>  panel->backlight.active_low_pwm = ctl2 & BLM_POLARITY_I965;
>>  
>>  ctl = intel_de_read(dev_priv, VLV_BLC_PWM_CTL(pipe));
>> -panel->backlight.max = ctl >> 16;
>> +panel->backlight.pwm_level_max = ctl >> 16;
>>  
>> -if (!panel->backlight.max)
>> -panel->backlight.max = get_backlight_max_vbt(connector);
>> +if (!panel->backlight.pwm_level_max)
>> +panel->backlight.pwm_level_max = 
>> get_backlight_max_vbt(connector);
>>  
>> -if (!panel->backlight.max)
>> +if (!panel->backlight.pwm_level_max)
>>  return -ENODEV;
>>  
>> -panel->backlight.min = get_backlight_min_vbt(connector);
>> +panel->backlight.pwm_level_min = get_backlight_min_vbt(connector);
>>  
>> -val = _vlv_get_backlight(dev_priv, pipe);
>
> Turns out this is a meaningful change, as the higher level
> vlv_get_backlight() function that will be called instead hits:
>
> <4>[   12.870202] i915 :00:02.0: 
> drm_WARN_ON(!drm_modeset_is_locked(&dev->mode_config.connection_mutex))
>
> in intel_connector_get_pipe(connector).
>
> It's a real problem. See this, it's obvious (in retrospect):
>
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19348/fi-bsw-kefka/igt@run...@aborted.html
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_19348/fi-bsw-kefka/boot0.txt
>
> I don't have a quick answer how this could be handled neatly. Perhaps
> the ->get call (or rather, intel_pwm_get_backlight) to set
> panel->backlight.level needs to be spread out to the end of each

Re: [Intel-gfx] [PATCH] drm-buf: Add debug option

2021-01-14 Thread Daniel Vetter
On Thu, Jan 14, 2021 at 09:45:37AM +, Chris Wilson wrote:
> Quoting Daniel Vetter (2021-01-14 09:30:32)
> > On Thu, Jan 14, 2021 at 10:23 AM Chris Wilson  
> > wrote:
> > > The only other problem I see with the implementation is that there's
> > > nothing that says that each dmabuf->ops->map_dma_buf() returns a new
> > > sg_table, so we may end up undoing the xor. Or should each dma-buf
> > > return a fresh dma-mapping for iommu isolation?
> > 
> > Maybe I screwed it up, but that's why I extracted the little helpers:
> > We scramble when we get the sgtable from exporter, and unscramble
> > before we pass it back. dma-buf.c does some caching and will hand back
> > the same sgtable, but for that case we don't re-scramble.
> 
> The attachment is only mapped once, but there can be more than one
> attachment, and the backend could return the same sg_table for each
> mapping. Conceivably, it could return its own private sg_table where it
> wants to maintain the struct page. Seems like just adding a sentence to
> @map_dma_buf to clarify that each call should return a new sg_table will
> suffice.

Ah yes good point, will augment (once CI stops being angry at me).
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


Re: [PATCH 00/31] Rid W=1 warnings from Video

2021-01-14 Thread Daniel Vetter
On Thu, Jan 14, 2021 at 10:11:01AM +0100, Daniel Vetter wrote:
> On Thu, Jan 14, 2021 at 10:04 AM Jani Nikula
>  wrote:
> >
> > On Wed, 13 Jan 2021, Lee Jones  wrote:
> > > On Wed, 13 Jan 2021, Sam Ravnborg wrote:
> > >
> > >> Hi Lee,
> > >>
> > >> On Wed, Jan 13, 2021 at 02:49:38PM +, Lee Jones wrote:
> > >> > This set is part of a larger effort attempting to clean-up W=1
> > >> > kernel builds, which are currently overwhelmingly riddled with
> > >> > niggly little warnings.
> > >> >
> > >> > This patch-set clears all of the W=1 warnings currently residing
> > >> > in drivers/video.
> > >>
> > >> I am sorry to say that I expect most of your nice patches to clash
> > >> with patches that is already present in drm-misc-next.
> > >>
> > >> drivers/video/ are warning free with W=1 in drm-misc-next today.
> > >>
> > >> I do not know why drm-misc-next is not yet pullled into linux-next.
> > >
> > > Well that kinda sucks.  What are the chances of that?
> > >
> > > Most of my patches fix issues that have been there for years!
> 
> I planned to go through them all today, let's see what's still needed.

The first bunch of patches are all not needed anymore; I think this is quicker
if you're rebasing. Unfortunate that this happened :-/
-Daniel

> 
> > We auto-update the for-linux-next and for-linux-next-fixes branches, and
> > they seem to be up-to-date [1].
> 
> It only happened last week instead of right after -rc1 due to some
> confusion, but it should have been in linux-next for a few days
> already.
> 
> > How recent are the fixes, maybe because of this: [2]?
> >
> > BR,
> > Jani.
> >
> >
> > [1] https://cgit.freedesktop.org/drm/drm-misc
> > [2] http://lore.kernel.org/r/20210114113107.62210...@canb.auug.org.au
> 
> Patch for that just got committed, so the window for drm-misc-next to
> be excluded should have been very small.
> -Daniel
> 
> >
> > --
> > Jani Nikula, Intel Open Source Graphics Center
> 
> 
> 
> -- 
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


Re: [PATCH] drm: rcar-du: Use drmm_encoder_alloc() to manage encoder

2021-01-14 Thread Geert Uytterhoeven
On Wed, Jan 13, 2021 at 6:02 PM Kieran Bingham
 wrote:
> The encoder allocation was converted to a DRM managed resource at the
> same time as the addition of a new helper drmm_encoder_alloc() which
> simplifies the same process.
>
> Convert the custom drm managed resource allocation of the encoder
> with the helper to simplify the implementation, and prevent hitting a
> WARN_ON() due to handling the drm_encoder_init() call directly
> without registering a .destroy() function op.
>
> Fixes: f5f16725edbc ("drm: rcar-du: Use DRM-managed allocation for encoders")
> Reported-by: Geert Uytterhoeven 
> Signed-off-by: Kieran Bingham 

Tested-by: Geert Uytterhoeven 
As in "the WARNING from drm_encoder_init() is gone".

Gr{oetje,eeting}s,

Geert

-- 
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- ge...@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
-- Linus Torvalds


Re: [PATCH 00/30] [Set 13] Finally rid W=1 warnings from GPU

2021-01-14 Thread Lee Jones
On Wed, 13 Jan 2021, Lee Jones wrote:

> This set is part of a larger effort attempting to clean-up W=1
> kernel builds, which are currently overwhelmingly riddled with
> niggly little warnings.
> 
> 0 out of 5000 left!
> 
> LAST SET!  You're all clean.  Can you believe it!?

Ah, fair warning for you: we're not yet done.

Arm is clean.  There are still fix-ups required for x86 and PPC.

I'll get right on it.

-- 
Lee Jones [李琼斯]
Senior Technical Lead - Developer Services
Linaro.org │ Open source software for Arm SoCs
Follow Linaro: Facebook | Twitter | Blog


Re: [PATCH v5 15/21] drm/tegra: Add new UAPI to header

2021-01-14 Thread Mikko Perttunen

On 1/14/21 10:36 AM, Dmitry Osipenko wrote:

On 13.01.2021 21:56, Mikko Perttunen wrote:

On 1/13/21 8:14 PM, Dmitry Osipenko wrote:

On 11.01.2021 16:00, Mikko Perttunen wrote:

+struct drm_tegra_submit_buf {
+    /**
+ * @mapping_id: [in]
+ *
+ * Identifier of the mapping to use in the submission.
+ */
+    __u32 mapping_id;


I'm now in the process of trying out the UAPI using the grate drivers, and this
becomes the first obstacle.

Looks like this is not going to work well for older Tegra SoCs, in
particular for T20, which has a small GART.

Given that the usefulness of the partial mapping feature is very
questionable until it is proven with a real userspace, we should
start with dynamic mappings that are done at the time of job submission.

DRM already should have everything necessary for creating and managing
caches of mappings; the grate kernel driver has been using drm_mm_scan for
that for a long time now.

It should be fine to support the static mapping feature, but it should
be done separately with the drm_mm integration, IMO.

What do you think?



Can you elaborate on the requirements to be able to use GART? Are there
any other reasons this would not work on older chips?


We have all DRM devices in a single address space on T30+, hence having
duplicated mappings for each device should be a bit wasteful.


I guess this should be pretty easy to change to only keep one mapping 
per GEM object.





I think we should keep CHANNEL_MAP and mapping_ids, but if e.g. for GART
we cannot do mapping immediately at CHANNEL_MAP time, we can just treat
it as a "registration" call for the GEM object - potentially no-op like
direct physical addressing is. We can then do whatever is needed at
submit time. This way we can have the best of both worlds.


I have some thoughts now, but nothing concrete yet. Maybe we will need
to create per-SoC ops for MM.


Yep, I think some specialized code will be needed, but hopefully it will 
be relatively minor.




I'll finish with trying what we currently have to see what else is
missing and then we will decide what to do about it.



Great :)


Note that partial mappings are already not present in this version of
the UAPI.


Oh, right :) I haven't looked closely at this part of the review yet.



Mikko


Re: [PATCH 00/35] Add HMM-based SVM memory manager to KFD

2021-01-14 Thread Daniel Vetter
On Thu, Jan 14, 2021 at 10:26 AM Daniel Vetter  wrote:
>
> On Thu, Jan 14, 2021 at 4:27 AM Jerome Glisse  wrote:
> >
> > On Wed, Jan 13, 2021 at 09:31:11PM +0100, Daniel Vetter wrote:
> > > On Wed, Jan 13, 2021 at 5:56 PM Jerome Glisse  wrote:
> > > > On Fri, Jan 08, 2021 at 03:40:07PM +0100, Daniel Vetter wrote:
> > > > > On Thu, Jan 07, 2021 at 11:25:41AM -0500, Felix Kuehling wrote:
> > > > > > On 2021-01-07 at 4:23 a.m., Daniel Vetter wrote:
> > > > > > > On Wed, Jan 06, 2021 at 10:00:52PM -0500, Felix Kuehling wrote:
> > > > > > >> This is the first version of our HMM based shared virtual memory 
> > > > > > >> manager
> > > > > > >> for KFD. There are still a number of known issues that we're 
> > > > > > >> working through
> > > > > > >> (see below). This will likely lead to some pretty significant 
> > > > > > >> changes in
> > > > > > >> MMU notifier handling and locking on the migration code paths. 
> > > > > > >> So don't
> > > > > > >> get hung up on those details yet.
> > > > > > >>
> > > > > > >> But I think this is a good time to start getting feedback. We're 
> > > > > > >> pretty
> > > > > > >> confident about the ioctl API, which is both simple and 
> > > > > > >> extensible for the
> > > > > > >> future. (see patches 4,16) The user mode side of the API can be 
> > > > > > >> found here:
> > > > > > >> https://github.com/RadeonOpenCompute/ROCT-Thunk-Interface/blob/fxkamd/hmm-wip/src/svm.c
> > > > > > >>
> > > > > > >> I'd also like another pair of eyes on how we're interfacing with 
> > > > > > >> the GPU VM
> > > > > > >> code in amdgpu_vm.c (see patches 12,13), retry page fault 
> > > > > > >> handling (24,25),
> > > > > > >> and some retry IRQ handling changes (32).
> > > > > > >>
> > > > > > >>
> > > > > > >> Known issues:
> > > > > > >> * won't work with IOMMU enabled, we need to dma_map all pages 
> > > > > > >> properly
> > > > > > >> * still working on some race conditions and random bugs
> > > > > > >> * performance is not great yet
> > > > > > > Still catching up, but I think there's another one for your list:
> > > > > > >
> > > > > > >  * hmm gpu context preempt vs page fault handling. I've had a 
> > > > > > > short
> > > > > > >discussion about this one with Christian before the holidays, 
> > > > > > > and also
> > > > > > >some private chats with Jerome. It's nasty since no easy fix, 
> > > > > > > much less
> > > > > > >a good idea what's the best approach here.
> > > > > >
> > > > > > Do you have a pointer to that discussion or any more details?
> > > > >
> > > > > Essentially if you're handling an hmm page fault from the gpu, you can
> > > > > deadlock by calling dma_fence_wait on a (chain of, possibly) other 
> > > > > command
> > > > > submissions or compute contexts with dma_fence_wait. Which deadlocks 
> > > > > if
> > > > > you can't preempt while you have that page fault pending. Two 
> > > > > solutions:
> > > > >
> > > > > - your hw can (at least for compute ctx) preempt even when a page 
> > > > > fault is
> > > > >   pending
> > > > >
> > > > > - lots of screaming in trying to come up with an alternate solution. 
> > > > > They
> > > > >   all suck.
> > > > >
> > > > > Note that the dma_fence_wait is hard requirement, because we need 
> > > > > that for
> > > > > mmu notifiers and shrinkers, disallowing that would disable dynamic 
> > > > > memory
> > > > > management. Which is the current "ttm is self-limited to 50% of system
> > > > > memory" limitation Christian is trying to lift. So that's really not
> > > > > a restriction we can lift, at least not in upstream where we need to 
> > > > > also
> > > > > support old style hardware which doesn't have page fault support and
> > > > > really has no other option to handle memory management than
> > > > > dma_fence_wait.
> > > > >
> > > > > Thread was here:
> > > > >
> > > > > https://lore.kernel.org/dri-devel/CAKMK7uGgoeF8LmFBwWh5mW1k4xWjuUh3hdSFpVH1NBM7K0=e...@mail.gmail.com/
> > > > >
> > > > > There's a few ways to resolve this (without having preempt-capable
> > > > > hardware), but they're all supremely nasty.
> > > > > -Daniel
> > > > >
> > > >
> > > > I had a new idea; I wanted to think more about it but have not yet,
> > > > anyway here it is: adding a new callback to dma fence which asks the
> > > > question "can it deadlock?". Any time a GPU driver has a pending page
> > > > fault (ie something calling into the mm) it answers yes, otherwise
> > > > no. The GPU shrinker would ask the question before waiting on any
> > > > dma-fence and back off if it gets a yes. The shrinker can still try many
> > > > dma-buf objects for which it does not get a yes on the associated fence.
> > >
> > > Having that answer on a given fence isn't enough, you still need to
> > > forward that information through the entire dependency graph, across
> > > drivers. That's the hard part, since that dependency graph is very
> > > implicit in the code, and we'd need to first roll it out across all
> > > drivers.
> >
> > Here i am sayi

Re: [PATCH 00/35] Add HMM-based SVM memory manager to KFD

2021-01-14 Thread Christian König

On 13.01.21 at 17:56, Jerome Glisse wrote:

On Fri, Jan 08, 2021 at 03:40:07PM +0100, Daniel Vetter wrote:

On Thu, Jan 07, 2021 at 11:25:41AM -0500, Felix Kuehling wrote:

On 2021-01-07 at 4:23 a.m., Daniel Vetter wrote:

On Wed, Jan 06, 2021 at 10:00:52PM -0500, Felix Kuehling wrote:

This is the first version of our HMM based shared virtual memory manager
for KFD. There are still a number of known issues that we're working through
(see below). This will likely lead to some pretty significant changes in
MMU notifier handling and locking on the migration code paths. So don't
get hung up on those details yet.

But I think this is a good time to start getting feedback. We're pretty
confident about the ioctl API, which is both simple and extensible for the
future. (see patches 4,16) The user mode side of the API can be found here:
https://github.com/RadeonOpenCompute/ROCT-Thunk-Interface/blob/fxkamd/hmm-wip/src/svm.c

I'd also like another pair of eyes on how we're interfacing with the GPU VM
code in amdgpu_vm.c (see patches 12,13), retry page fault handling (24,25),
and some retry IRQ handling changes (32).


Known issues:
* won't work with IOMMU enabled, we need to dma_map all pages properly
* still working on some race conditions and random bugs
* performance is not great yet

Still catching up, but I think there's another one for your list:

  * hmm gpu context preempt vs page fault handling. I've had a short
discussion about this one with Christian before the holidays, and also
some private chats with Jerome. It's nasty since no easy fix, much less
a good idea what's the best approach here.

Do you have a pointer to that discussion or any more details?

Essentially if you're handling an hmm page fault from the gpu, you can
deadlock by calling dma_fence_wait on a (chain of, possibly) other command
submissions or compute contexts with dma_fence_wait. Which deadlocks if
you can't preempt while you have that page fault pending. Two solutions:

- your hw can (at least for compute ctx) preempt even when a page fault is
   pending

- lots of screaming in trying to come up with an alternate solution. They
   all suck.

Note that the dma_fence_wait is hard requirement, because we need that for
mmu notifiers and shrinkers, disallowing that would disable dynamic memory
management. Which is the current "ttm is self-limited to 50% of system
memory" limitation Christian is trying to lift. So that's really not
a restriction we can lift, at least not in upstream where we need to also
support old style hardware which doesn't have page fault support and
really has no other option to handle memory management than
dma_fence_wait.

Thread was here:

https://lore.kernel.org/dri-devel/CAKMK7uGgoeF8LmFBwWh5mW1k4xWjuUh3hdSFpVH1NBM7K0=e...@mail.gmail.com/

There's a few ways to resolve this (without having preempt-capable
hardware), but they're all supremely nasty.
-Daniel


I had a new idea; I wanted to think more about it but have not yet,
anyway here it is: adding a new callback to dma fence which asks the
question "can it deadlock?". Any time a GPU driver has a pending page
fault (ie something calling into the mm) it answers yes, otherwise
no. The GPU shrinker would ask the question before waiting on any
dma-fence and back off if it gets a yes. The shrinker can still try many
dma-buf objects for which it does not get a yes on the associated fence.

This does not solve the mmu notifier case, for this you would just
invalidate the gem userptr object (with a flag but not releasing the
page refcount) but you would not wait for the GPU (ie no dma fence
wait in that code path anymore). The userptr API never really made
the contract that it will always be in sync with the mm view of the
world, so if different pages get remapped to the same virtual address
while GPU is still working with the old pages it should not be an
issue (it would not be in our usage of userptr for compositor and
what not).
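Jerome's deadlock-query idea above can be sketched in self-contained C; note that struct toy_fence and the may_deadlock hook are hypothetical, as nothing like this exists in the real struct dma_fence_ops:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy model of the proposed callback (illustration only). */
struct toy_fence {
	bool (*may_deadlock)(const struct toy_fence *f); /* proposed hook */
	bool pending_gpu_fault; /* driver state backing the answer */
};

static bool fence_may_deadlock(const struct toy_fence *f)
{
	/* No hook registered: assume the fence is safe to wait on. */
	return f->may_deadlock ? f->may_deadlock(f) : false;
}

/* Driver answers "yes" whenever it has a page fault pending, i.e. it is
 * calling into the mm and waiting on this fence could deadlock. */
static bool driver_may_deadlock(const struct toy_fence *f)
{
	return f->pending_gpu_fault;
}

/* Shrinker side: only wait on fences that answered "no", and back off
 * (try other objects) otherwise. */
static bool shrinker_can_wait_on(const struct toy_fence *f)
{
	return !fence_may_deadlock(f);
}
```

As the replies point out, the answer on a single fence is not enough by itself; the "yes" would somehow have to propagate through the whole cross-driver dependency graph.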


The current working idea in my mind goes into a similar direction.

But instead of a callback I'm adding a complete new class of HMM fences.

Waiting in the MMU notifier, scheduler, TTM, etc. is only allowed for
the dma_fences, and HMM fences are ignored in container objects.


When you handle an implicit or explicit synchronization request from 
userspace you need to block for HMM fences to complete before taking any 
resource locks.


Regards,
Christian.



Maybe i overlook something there.

Cheers,
Jérôme

___
amd-gfx mailing list
amd-...@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx




[PULL] drm-intel-fixes

2021-01-14 Thread Jani Nikula


Hi Dave & Daniel -

drm-intel-fixes-2021-01-14:
drm/i915 fixes for v5.11-rc4:
- Allow the sysadmin to override security mitigations
- Restore clear-residual mitigations for ivb/byt
- Limit VFE threads based on GT
- GVT: fix vfio edid and full display detection
- Fix DSI DSC power refcounting
- Fix LPT CPU mode backlight takeover
- Disable RPM wakeref assertions during driver shutdown
- Fix DSI sequence sleeps

BR,
Jani.

The following changes since commit 7c53f6b671f4aba70ff15e1b05148b10d58c2837:

  Linux 5.11-rc3 (2021-01-10 14:34:50 -0800)

are available in the Git repository at:

  git://anongit.freedesktop.org/drm/drm-intel tags/drm-intel-fixes-2021-01-14

for you to fetch changes up to 984cadea032b103c5824a5f29d0a36b3e9df6333:

  drm/i915: Allow the sysadmin to override security mitigations (2021-01-12 
19:03:40 +0200)


drm/i915 fixes for v5.11-rc4:
- Allow the sysadmin to override security mitigations
- Restore clear-residual mitigations for ivb/byt
- Limit VFE threads based on GT
- GVT: fix vfio edid and full display detection
- Fix DSI DSC power refcounting
- Fix LPT CPU mode backlight takeover
- Disable RPM wakeref assertions during driver shutdown
- Fix DSI sequence sleeps


Chris Wilson (4):
  drm/i915: Disable RPM wakeref assertions during driver shutdown
  drm/i915/gt: Limit VFE threads based on GT
  drm/i915/gt: Restore clear-residual mitigations for Ivybridge, Baytrail
  drm/i915: Allow the sysadmin to override security mitigations

Colin Xu (1):
  drm/i915/gvt: Fix vfio_edid issue for BXT/APL

Hans de Goede (1):
  drm/i915/dsi: Use unconditional msleep for the panel_on_delay when there 
is no reset-deassert MIPI-sequence

Imre Deak (1):
  drm/i915/icl: Fix initing the DSI DSC power refcount during HW readout

Jani Nikula (2):
  drm/i915/backlight: fix CPU mode backlight takeover on LPT
  Merge tag 'gvt-fixes-2020-01-08' of https://github.com/intel/gvt-linux 
into drm-intel-fixes

 drivers/gpu/drm/i915/Makefile   |   1 +
 drivers/gpu/drm/i915/display/icl_dsi.c  |   4 -
 drivers/gpu/drm/i915/display/intel_panel.c  |   9 +-
 drivers/gpu/drm/i915/display/vlv_dsi.c  |  16 ++-
 drivers/gpu/drm/i915/gt/gen7_renderclear.c  | 157 ++--
 drivers/gpu/drm/i915/gt/intel_ring_submission.c |   6 +-
 drivers/gpu/drm/i915/gvt/display.c  |  81 
 drivers/gpu/drm/i915/gvt/vgpu.c |   5 +-
 drivers/gpu/drm/i915/i915_drv.c |   4 +
 drivers/gpu/drm/i915/i915_mitigations.c | 146 ++
 drivers/gpu/drm/i915/i915_mitigations.h |  13 ++
 11 files changed, 339 insertions(+), 103 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_mitigations.c
 create mode 100644 drivers/gpu/drm/i915/i915_mitigations.h

-- 
Jani Nikula, Intel Open Source Graphics Center


Re: linux-next: build failure after merge of the drm-misc tree

2021-01-14 Thread Thomas Zimmermann

Hi

Am 14.01.21 um 01:31 schrieb Stephen Rothwell:

Hi all,

After merging the drm-misc tree, today's linux-next build (arm
multi_v7_defconfig) failed like this:

drivers/gpu/drm/drm_cache.c: In function 'drm_need_swiotlb':
drivers/gpu/drm/drm_cache.c:202:6: error: implicit declaration of function 
'mem_encrypt_active' [-Werror=implicit-function-declaration]
   202 |  if (mem_encrypt_active())
   |  ^~


Caused by commit

   3abc66706385 ("drm: Implement drm_need_swiotlb() in drm_cache.c")

I have used the drm-misc tree from next-20210107 again for today.


Sorry for the breakage. Fixed in drm-misc-next.

Best regards
Thomas





--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer



OpenPGP_signature
Description: OpenPGP digital signature


Re: [PATCH 00/35] Add HMM-based SVM memory manager to KFD

2021-01-14 Thread Daniel Vetter
On Thu, Jan 14, 2021 at 11:49 AM Christian König
 wrote:
>
> Am 13.01.21 um 17:56 schrieb Jerome Glisse:
> > On Fri, Jan 08, 2021 at 03:40:07PM +0100, Daniel Vetter wrote:
> >> On Thu, Jan 07, 2021 at 11:25:41AM -0500, Felix Kuehling wrote:
> >>> Am 2021-01-07 um 4:23 a.m. schrieb Daniel Vetter:
>  On Wed, Jan 06, 2021 at 10:00:52PM -0500, Felix Kuehling wrote:
> > This is the first version of our HMM based shared virtual memory manager
> > for KFD. There are still a number of known issues that we're working 
> > through
> > (see below). This will likely lead to some pretty significant changes in
> > MMU notifier handling and locking on the migration code paths. So don't
> > get hung up on those details yet.
> >
> > But I think this is a good time to start getting feedback. We're pretty
> > confident about the ioctl API, which is both simple and extensible for 
> > the
> > future. (see patches 4,16) The user mode side of the API can be found 
> > here:
> > https://github.com/RadeonOpenCompute/ROCT-Thunk-Interface/blob/fxkamd/hmm-wip/src/svm.c
> >
> > I'd also like another pair of eyes on how we're interfacing with the 
> > GPU VM
> > code in amdgpu_vm.c (see patches 12,13), retry page fault handling 
> > (24,25),
> > and some retry IRQ handling changes (32).
> >
> >
> > Known issues:
> > * won't work with IOMMU enabled, we need to dma_map all pages properly
> > * still working on some race conditions and random bugs
> > * performance is not great yet
>  Still catching up, but I think there's another one for your list:
> 
>    * hmm gpu context preempt vs page fault handling. I've had a short
>  discussion about this one with Christian before the holidays, and 
>  also
>  some private chats with Jerome. It's nasty since no easy fix, much 
>  less
>  a good idea what's the best approach here.
> >>> Do you have a pointer to that discussion or any more details?
> >> Essentially if you're handling an hmm page fault from the gpu, you can
> >> deadlock by calling dma_fence_wait on a (chain of, possibly) other command
> >> submissions or compute contexts with dma_fence_wait. Which deadlocks if
> >> you can't preempt while you have that page fault pending. Two solutions:
> >>
> >> - your hw can (at least for compute ctx) preempt even when a page fault is
> >>pending
> >>
> >> - lots of screaming in trying to come up with an alternate solution. They
> >>all suck.
> >>
> >> Note that the dma_fence_wait is hard requirement, because we need that for
> >> mmu notifiers and shrinkers, disallowing that would disable dynamic memory
> >> management. Which is the current "ttm is self-limited to 50% of system
> >> memory" limitation Christian is trying to lift. So that's really not
> >> a restriction we can lift, at least not in upstream where we need to also
> >> support old style hardware which doesn't have page fault support and
> >> really has no other option to handle memory management than
> >> dma_fence_wait.
> >>
> >> Thread was here:
> >>
> >> https://lore.kernel.org/dri-devel/CAKMK7uGgoeF8LmFBwWh5mW1k4xWjuUh3hdSFpVH1NBM7K0=e...@mail.gmail.com/
> >>
> >> There's a few ways to resolve this (without having preempt-capable
> >> hardware), but they're all supremely nasty.
> >> -Daniel
> >>
> > I had a new idea, i wanted to think more about it but have not yet,
> > anyway here it is. Adding a new callback to dma fence which ask the
> > question can it dead lock ? Any time a GPU driver has pending page
> > fault (ie something calling into the mm) it answer yes, otherwise
> > no. The GPU shrinker would ask the question before waiting on any
> > dma-fence and back of if it gets yes. Shrinker can still try many
> > dma buf object for which it does not get a yes on associated fence.
> >
> > This does not solve the mmu notifier case, for this you would just
> > invalidate the gem userptr object (with a flag but not releasing the
> > page refcount) but you would not wait for the GPU (ie no dma fence
> > wait in that code path anymore). The userptr API never really made
> > the contract that it will always be in sync with the mm view of the
> > world so if different page get remapped to same virtual address
> > while GPU is still working with the old pages it should not be an
> > issue (it would not be in our usage of userptr for compositor and
> > what not).
>
> The current working idea in my mind goes into a similar direction.
>
> But instead of a callback I'm adding a complete new class of HMM fences.
>
> >> Waiting in the MMU notifier, scheduler, TTM etc. is only allowed for
> the dma_fences and HMM fences are ignored in container objects.
>
> When you handle an implicit or explicit synchronization request from
> userspace you need to block for HMM fences to complete before taking any
> resource locks.

Isn't that what I call gang scheduling? I.e. you either run in HMM
mode

Re: [PATCH 00/35] Add HMM-based SVM memory manager to KFD

2021-01-14 Thread Christian König

Am 14.01.21 um 06:34 schrieb Felix Kuehling:

Am 2021-01-11 um 11:29 a.m. schrieb Daniel Vetter:

On Fri, Jan 08, 2021 at 12:56:24PM -0500, Felix Kuehling wrote:

Am 2021-01-08 um 11:53 a.m. schrieb Daniel Vetter:

On Fri, Jan 8, 2021 at 5:36 PM Felix Kuehling  wrote:

Am 2021-01-08 um 11:06 a.m. schrieb Daniel Vetter:

On Fri, Jan 8, 2021 at 4:58 PM Felix Kuehling  wrote:

Am 2021-01-08 um 9:40 a.m. schrieb Daniel Vetter:

On Thu, Jan 07, 2021 at 11:25:41AM -0500, Felix Kuehling wrote:

Am 2021-01-07 um 4:23 a.m. schrieb Daniel Vetter:

On Wed, Jan 06, 2021 at 10:00:52PM -0500, Felix Kuehling wrote:

This is the first version of our HMM based shared virtual memory manager
for KFD. There are still a number of known issues that we're working through
(see below). This will likely lead to some pretty significant changes in
MMU notifier handling and locking on the migration code paths. So don't
get hung up on those details yet.

But I think this is a good time to start getting feedback. We're pretty
confident about the ioctl API, which is both simple and extensible for the
future. (see patches 4,16) The user mode side of the API can be found here:
https://github.com/RadeonOpenCompute/ROCT-Thunk-Interface/blob/fxkamd/hmm-wip/src/svm.c

I'd also like another pair of eyes on how we're interfacing with the GPU VM
code in amdgpu_vm.c (see patches 12,13), retry page fault handling (24,25),
and some retry IRQ handling changes (32).


Known issues:
* won't work with IOMMU enabled, we need to dma_map all pages properly
* still working on some race conditions and random bugs
* performance is not great yet

Still catching up, but I think there's another one for your list:

  * hmm gpu context preempt vs page fault handling. I've had a short
discussion about this one with Christian before the holidays, and also
some private chats with Jerome. It's nasty since no easy fix, much less
a good idea what's the best approach here.

Do you have a pointer to that discussion or any more details?

Essentially if you're handling an hmm page fault from the gpu, you can
deadlock by calling dma_fence_wait on a (chain of, possibly) other command
submissions or compute contexts with dma_fence_wait. Which deadlocks if
you can't preempt while you have that page fault pending. Two solutions:

- your hw can (at least for compute ctx) preempt even when a page fault is
   pending

Our GFXv9 GPUs can do this. GFXv10 cannot.

Uh, why did your hw guys drop this :-/

Performance. It's the same reason why the XNACK mode selection API
exists (patch 16). When we enable recoverable page fault handling in the
compute units on GFXv9, it costs some performance even when no page
faults are happening. On GFXv10 that retry fault handling moved out of
the compute units, so they don't take the performance hit. But that
sacrificed the ability to preempt during page faults. We'll need to work
with our hardware teams to restore that capability in a future generation.

Ah yes, you need to stall in more points in the compute cores to make sure
you can recover if the page fault gets interrupted.

Maybe my knowledge is outdated, but my understanding is that nvidia can
also preempt (but only for compute jobs, since oh dear the pain this would
be for all the fixed function stuff). Since gfx10 moved page fault
handling further away from compute cores, do you know whether this now
means you can do page faults for (some?) fixed function stuff too? Or
still only for compute?

I'm not sure.



Supporting page fault for 3d would be real pain with the corner we're
stuck in right now, but better we know about this early than later :-/

I know Christian hates the idea.


Well I don't hate the idea. I just don't think that this will ever work 
correctly and with good performance.


A big part of the additional fun is that we currently have a mix of HMM 
capable engines (3D, compute, DMA) and not HMM capable engines (display, 
multimedia etc..).



We know that page faults on GPUs can be
a huge performance drain because you're stalling potentially so many
threads and the CPU can become a bottle neck dealing with all the page
faults from many GPU threads. On the compute side, applications will be
optimized to avoid them as much as possible, e.g. by pre-faulting or
pre-fetching data before it's needed.

But I think you need page faults to make overcommitted memory with user
mode command submission not suck.


Yeah, completely agree.

The only short term alternative I see is to have an IOCTL telling the 
kernel which memory is currently in use. And that is complete nonsense, 
because it defeats the reason we want user mode command submission in 
the first place.


Regards,
Christian.


I do think it can be rescued with what I call gang scheduling of
engines: I.e. when a given engine is running a context (or a group of
engines, depending how your hw works) that can cause a page fault, you
must flush out all workloads running on the same engine which could
block a dma_fence (preem

Re: [PATCH 2/2] drm/cma-helper: Implement mmap as GEM CMA object functions

2021-01-14 Thread Kieran Bingham
Hi Thomas,

On 23/11/2020 11:56, Thomas Zimmermann wrote:
> The new GEM object function drm_gem_cma_mmap() sets the VMA flags
> and offset as in the old implementation and immediately maps in the
> buffer's memory pages.
> 
> Changing CMA helpers to use the GEM object function allows for the
> removal of the special implementations for mmap and gem_prime_mmap
> callbacks. The regular functions drm_gem_mmap() and drm_gem_prime_mmap()
> are now used.

I've encountered a memory leak regression in our Renesas R-Car DU tests,
and git bisection has led me to this patch (as commit f5ca8eb6f9).

Running the tests sequentially, while grepping /proc/meminfo for Cma, it
is evident that CMA memory is not released until it is exhausted and the
allocations fail (seen in [0]), as shown by the error report:

> self.fbs.append(pykms.DumbFramebuffer(self.card, mode.hdisplay, 
> mode.vdisplay, "XR24"))
> ValueError: DRM_IOCTL_MODE_CREATE_DUMB failed: Cannot allocate memory


Failing tests at f5ca8eb6f9 can be seen at [0], while the tests pass
successfully [1] on the commit previous to that (bc2532ab7c2):

Reverting f5ca8eb6f9 also produces a successful pass [2]

 [0] https://paste.ubuntu.com/p/VjPGPgswxR/ # Failed at f5ca8eb6f9
 [1] https://paste.ubuntu.com/p/78RRp2WpNR/ # Success at bc2532ab7c2
 [2] https://paste.ubuntu.com/p/qJKjZZN2pt/ # Success with revert


I don't believe we handle mmap specially in the RCar-DU driver, so I
wonder if this issue has hit anyone else as well?

Any ideas for a fix without a revert? Or do we just need to submit a
revert?

I've yet to fully understand the implications of the patch below.

Thanks
--
Regards

Kieran



> Signed-off-by: Thomas Zimmermann 
> ---
>  drivers/gpu/drm/drm_file.c   |   3 +-
>  drivers/gpu/drm/drm_gem_cma_helper.c | 121 +--
>  drivers/gpu/drm/pl111/pl111_drv.c|   2 +-
>  drivers/gpu/drm/vc4/vc4_bo.c |   2 +-
>  include/drm/drm_gem_cma_helper.h |  10 +--
>  5 files changed, 44 insertions(+), 94 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
> index b50380fa80ce..80886d50d0f1 100644
> --- a/drivers/gpu/drm/drm_file.c
> +++ b/drivers/gpu/drm/drm_file.c
> @@ -113,8 +113,7 @@ bool drm_dev_needs_global_mutex(struct drm_device *dev)
>   * The memory mapping implementation will vary depending on how the driver
>   * manages memory. Legacy drivers will use the deprecated drm_legacy_mmap()
>   * function, modern drivers should use one of the provided memory-manager
> - * specific implementations. For GEM-based drivers this is drm_gem_mmap(), 
> and
> - * for drivers which use the CMA GEM helpers it's drm_gem_cma_mmap().
> + * specific implementations. For GEM-based drivers this is drm_gem_mmap().
>   *
>   * No other file operations are supported by the DRM userspace API. Overall 
> the
>   * following is an example &file_operations structure::
> diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c 
> b/drivers/gpu/drm/drm_gem_cma_helper.c
> index 6a4ef335ebc9..7942cf05cd93 100644
> --- a/drivers/gpu/drm/drm_gem_cma_helper.c
> +++ b/drivers/gpu/drm/drm_gem_cma_helper.c
> @@ -38,6 +38,7 @@ static const struct drm_gem_object_funcs 
> drm_gem_cma_default_funcs = {
>   .print_info = drm_gem_cma_print_info,
>   .get_sg_table = drm_gem_cma_get_sg_table,
>   .vmap = drm_gem_cma_vmap,
> + .mmap = drm_gem_cma_mmap,
>   .vm_ops = &drm_gem_cma_vm_ops,
>  };
>  
> @@ -277,62 +278,6 @@ const struct vm_operations_struct drm_gem_cma_vm_ops = {
>  };
>  EXPORT_SYMBOL_GPL(drm_gem_cma_vm_ops);
>  
> -static int drm_gem_cma_mmap_obj(struct drm_gem_cma_object *cma_obj,
> - struct vm_area_struct *vma)
> -{
> - int ret;
> -
> - /*
> -  * Clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
> -  * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
> -  * the whole buffer.
> -  */
> - vma->vm_flags &= ~VM_PFNMAP;
> - vma->vm_pgoff = 0;
> -
> - ret = dma_mmap_wc(cma_obj->base.dev->dev, vma, cma_obj->vaddr,
> -   cma_obj->paddr, vma->vm_end - vma->vm_start);
> - if (ret)
> - drm_gem_vm_close(vma);
> -
> - return ret;
> -}
> -
> -/**
> - * drm_gem_cma_mmap - memory-map a CMA GEM object
> - * @filp: file object
> - * @vma: VMA for the area to be mapped
> - *
> - * This function implements an augmented version of the GEM DRM file mmap
> - * operation for CMA objects: In addition to the usual GEM VMA setup it
> - * immediately faults in the entire object instead of using on-demaind
> - * faulting. Drivers which employ the CMA helpers should use this function
> - * as their ->mmap() handler in the DRM device file's file_operations
> - * structure.
> - *
> - * Instead of directly referencing this function, drivers should use the
> - * DEFINE_DRM_GEM_CMA_FOPS().macro.
> - *
> - * Returns:
> - * 0 on success or a negative error code on failure.
> - */
> -int drm_gem_cma_mma

Re: [PATCH 2/2] drm/cma-helper: Implement mmap as GEM CMA object functions

2021-01-14 Thread Thomas Zimmermann

Hi Kieran

Am 14.01.21 um 13:51 schrieb Kieran Bingham:

Hi Thomas,

On 23/11/2020 11:56, Thomas Zimmermann wrote:

The new GEM object function drm_gem_cma_mmap() sets the VMA flags
and offset as in the old implementation and immediately maps in the
buffer's memory pages.

Changing CMA helpers to use the GEM object function allows for the
removal of the special implementations for mmap and gem_prime_mmap
callbacks. The regular functions drm_gem_mmap() and drm_gem_prime_mmap()
are now used.


I've encountered a memory leak regression in our Renesas R-Car DU tests,
and git bisection has led me to this patch (as commit f5ca8eb6f9).

Running the tests sequentially, while grepping /proc/meminfo for Cma, it
is evident that CMA memory is not released, until exhausted and the
allocations fail (seen in [0]) shown by the error report:


 self.fbs.append(pykms.DumbFramebuffer(self.card, mode.hdisplay, mode.vdisplay, 
"XR24"))
ValueError: DRM_IOCTL_MODE_CREATE_DUMB failed: Cannot allocate memory



Failing tests at f5ca8eb6f9 can be seen at [0], while the tests pass
successfully [1] on the commit previous to that (bc2532ab7c2):

Reverting f5ca8eb6f9 also produces a successful pass [2]

  [0] https://paste.ubuntu.com/p/VjPGPgswxR/ # Failed at f5ca8eb6f9
  [1] https://paste.ubuntu.com/p/78RRp2WpNR/ # Success at bc2532ab7c2
  [2] https://paste.ubuntu.com/p/qJKjZZN2pt/ # Success with revert


I don't believe we handle mmap specially in the RCar-DU driver, so I
wonder if this issue has hit anyone else as well?

Any ideas for a fix without a revert? Or do we just need to submit a
revert?


I think we might not be setting the VMA ops and therefore not finalizing 
the BO correctly. Could you please apply the attached (quick-and-dirty) 
patch and try again?


Best regards
Thomas



I've yet to fully understand the implications of the patch below.

Thanks
--
Regards

Kieran




Signed-off-by: Thomas Zimmermann 
---
  drivers/gpu/drm/drm_file.c   |   3 +-
  drivers/gpu/drm/drm_gem_cma_helper.c | 121 +--
  drivers/gpu/drm/pl111/pl111_drv.c|   2 +-
  drivers/gpu/drm/vc4/vc4_bo.c |   2 +-
  include/drm/drm_gem_cma_helper.h |  10 +--
  5 files changed, 44 insertions(+), 94 deletions(-)

diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
index b50380fa80ce..80886d50d0f1 100644
--- a/drivers/gpu/drm/drm_file.c
+++ b/drivers/gpu/drm/drm_file.c
@@ -113,8 +113,7 @@ bool drm_dev_needs_global_mutex(struct drm_device *dev)
   * The memory mapping implementation will vary depending on how the driver
   * manages memory. Legacy drivers will use the deprecated drm_legacy_mmap()
   * function, modern drivers should use one of the provided memory-manager
- * specific implementations. For GEM-based drivers this is drm_gem_mmap(), and
- * for drivers which use the CMA GEM helpers it's drm_gem_cma_mmap().
+ * specific implementations. For GEM-based drivers this is drm_gem_mmap().
   *
   * No other file operations are supported by the DRM userspace API. Overall 
the
   * following is an example &file_operations structure::
diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c 
b/drivers/gpu/drm/drm_gem_cma_helper.c
index 6a4ef335ebc9..7942cf05cd93 100644
--- a/drivers/gpu/drm/drm_gem_cma_helper.c
+++ b/drivers/gpu/drm/drm_gem_cma_helper.c
@@ -38,6 +38,7 @@ static const struct drm_gem_object_funcs 
drm_gem_cma_default_funcs = {
.print_info = drm_gem_cma_print_info,
.get_sg_table = drm_gem_cma_get_sg_table,
.vmap = drm_gem_cma_vmap,
+   .mmap = drm_gem_cma_mmap,
.vm_ops = &drm_gem_cma_vm_ops,
  };
  
@@ -277,62 +278,6 @@ const struct vm_operations_struct drm_gem_cma_vm_ops = {

  };
  EXPORT_SYMBOL_GPL(drm_gem_cma_vm_ops);
  
-static int drm_gem_cma_mmap_obj(struct drm_gem_cma_object *cma_obj,

-   struct vm_area_struct *vma)
-{
-   int ret;
-
-   /*
-* Clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
-* vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
-* the whole buffer.
-*/
-   vma->vm_flags &= ~VM_PFNMAP;
-   vma->vm_pgoff = 0;
-
-   ret = dma_mmap_wc(cma_obj->base.dev->dev, vma, cma_obj->vaddr,
- cma_obj->paddr, vma->vm_end - vma->vm_start);
-   if (ret)
-   drm_gem_vm_close(vma);
-
-   return ret;
-}
-
-/**
- * drm_gem_cma_mmap - memory-map a CMA GEM object
- * @filp: file object
- * @vma: VMA for the area to be mapped
- *
- * This function implements an augmented version of the GEM DRM file mmap
- * operation for CMA objects: In addition to the usual GEM VMA setup it
- * immediately faults in the entire object instead of using on-demaind
- * faulting. Drivers which employ the CMA helpers should use this function
- * as their ->mmap() handler in the DRM device file's file_operations
- * structure.
- *
- * Instead of directly referencing this function, drivers should use the
- * DEF

[PATCH] drm/vmwgfx: Add 1080p to vmw_kms_connector_builtin list

2021-01-14 Thread Maxim Kochetkov
It allows setting the 1080p mode without the VMware/VirtualBox guest tools installed.

Signed-off-by: Maxim Kochetkov 
---
 drivers/gpu/drm/vmwgfx/vmwgfx_kms.c | 4 
 1 file changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c 
b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
index 312ed0881a99..b47622c6a112 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
@@ -2103,6 +2103,10 @@ static struct drm_display_mode 
vmw_kms_connector_builtin[] = {
{ DRM_MODE("1856x1392", DRM_MODE_TYPE_DRIVER, 218250, 1856, 1952,
   2176, 2528, 0, 1392, 1393, 1396, 1439, 0,
   DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_PVSYNC) },
+   /* 1920x1080@60Hz */
+   { DRM_MODE("1920x1080", DRM_MODE_TYPE_DRIVER, 148500, 1920, 2008,
+  2052, 2200, 0, 1080, 1084, 1089, 1125, 0,
+  DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC) },
/* 1920x1200@60Hz */
{ DRM_MODE("1920x1200", DRM_MODE_TYPE_DRIVER, 193250, 1920, 2056,
   2256, 2592, 0, 1200, 1203, 1209, 1245, 0,
-- 
2.29.2
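
As a quick sanity check of the new mode line, the vertical refresh rate can be derived from the pixel clock and the total timings. A minimal hypothetical helper (not part of the vmwgfx driver) makes the arithmetic explicit:

```c
#include <assert.h>

/*
 * Hypothetical helper (not part of vmwgfx): vertical refresh rate in Hz
 * from a DRM mode line's pixel clock (in kHz) and its total horizontal
 * and vertical timings (htotal, vtotal).
 */
static int mode_vrefresh(long clock_khz, int htotal, int vtotal)
{
	return (int)(clock_khz * 1000 / ((long)htotal * (long)vtotal));
}
```

For the entry above, 148500 kHz / (2200 x 1125) is exactly 60 Hz, matching the CEA-861 1920x1080@60 timings.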



HMM fence (was Re: [PATCH 00/35] Add HMM-based SVM memory manager to KFD)

2021-01-14 Thread Christian König

Am 14.01.21 um 12:52 schrieb Daniel Vetter:

[SNIP]

I had a new idea, i wanted to think more about it but have not yet,
anyway here it is. Adding a new callback to dma fence which ask the
question can it dead lock ? Any time a GPU driver has pending page
fault (ie something calling into the mm) it answer yes, otherwise
no. The GPU shrinker would ask the question before waiting on any
dma-fence and back of if it gets yes. Shrinker can still try many
dma buf object for which it does not get a yes on associated fence.

This does not solve the mmu notifier case, for this you would just
invalidate the gem userptr object (with a flag but not releasing the
page refcount) but you would not wait for the GPU (ie no dma fence
wait in that code path anymore). The userptr API never really made
the contract that it will always be in sync with the mm view of the
world so if different page get remapped to same virtual address
while GPU is still working with the old pages it should not be an
issue (it would not be in our usage of userptr for compositor and
what not).

The current working idea in my mind goes into a similar direction.

But instead of a callback I'm adding a complete new class of HMM fences.

Waiting in the MMU notifier, scheduler, TTM etc. is only allowed for
the dma_fences and HMM fences are ignored in container objects.

When you handle an implicit or explicit synchronization request from
userspace you need to block for HMM fences to complete before taking any
resource locks.

Isn't that what I call gang scheduling? I.e. you either run in HMM
mode, or in legacy fencing mode (whether implicit or explicit doesn't
really matter I think). By forcing that split we avoid the problem,
but it means occasionally full stalls on mixed workloads.

But that's not what Jerome wants (afaiui at least), I think his idea
is to track the reverse dependencies of all the fences floating
around, and then skip evicting an object if you have to wait for any
fence that is problematic for the current calling context. And I don't
think that's very feasible in practice.

So what kind of hmm fences do you have in mind here?


It's a bit more relaxed than your gang schedule.

See the requirements are as follow:

1. dma_fences never depend on hmm_fences.
2. hmm_fences can never preempt dma_fences.
3. dma_fences must be able to preempt hmm_fences or we always reserve 
enough hardware resources (CUs) to guarantee forward progress of dma_fences.


Critical sections are MMU notifiers, page faults, GPU schedulers and 
dma_reservation object locks.


4. It is valid to wait for a dma_fences in critical sections.
5. It is not valid to wait for hmm_fences in critical sections.

Fence creation either happens during command submission or by adding 
something like a barrier or signal command to your userspace queue.


6. If we have an hmm_fence as implicit or explicit dependency for 
creating a dma_fence we must wait for that before taking any locks or 
reserving resources.
7. If we have a dma_fence as implicit or explicit dependency for 
creating an hmm_fence we can wait later on. So busy waiting or special 
WAIT hardware commands are valid.


This prevents hard cuts, e.g. can mix hmm_fences and dma_fences at the 
same time on the hardware.


In other words we can have a high priority gfx queue running jobs based 
on dma_fences and a low priority compute queue running jobs based on 
hmm_fences.


Only when we switch from hmm_fence to dma_fence we need to block the 
submission until all the necessary resources (both memory as well as 
CUs) are available.


This is somewhat an extension to your gang submit idea.
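
A toy sketch of how the rules above could look in code (made-up types and names, not actual kernel API; it only models the fence classification plus the rule-5 and rule-6 checks):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model: every fence is either a dma_fence or an hmm_fence. */
enum fence_class { FENCE_DMA, FENCE_HMM };

struct toy_fence {
	enum fence_class cls;
	bool signaled;
};

/*
 * Rules 4/5: critical sections (MMU notifiers, GPU schedulers,
 * dma_reservation locks) may wait on dma_fences, never on hmm_fences.
 */
static bool can_wait_in_critical_section(const struct toy_fence *f)
{
	return f->cls == FENCE_DMA;
}

/*
 * Rule 6: before creating a dma_fence, every hmm_fence dependency must
 * already be signaled; returns false if locks must not be taken yet.
 */
static bool dma_fence_deps_resolved(const struct toy_fence *deps, int n)
{
	for (int i = 0; i < n; i++)
		if (deps[i].cls == FENCE_HMM && !deps[i].signaled)
			return false;
	return true;
}
```

Rule 7 is the asymmetry: an hmm_fence may depend on a pending dma_fence (busy waiting or a hardware WAIT command is fine), but not the other way around.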

Regards,
Christian.


-Daniel





Re: [PATCH 06/10] drm: amd: amdgpu_dm.h: fix a wrong kernel-doc markup

2021-01-14 Thread Christian König

Am 14.01.21 um 08:53 schrieb Mauro Carvalho Chehab:

A colon is missing, causing the markup to be ignored. Adding it solves
these warnings:

../drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h:340: warning: 
Incorrect use of kernel-doc format:  * @active_vblank_irq_count
../drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h:379: warning: 
Function parameter or member 'active_vblank_irq_count' not described in 
'amdgpu_display_manager'

Signed-off-by: Mauro Carvalho Chehab 


Reviewed-by: Christian König 


---
  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
index f084e2fc9569..5ee1b766884e 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
@@ -337,7 +337,7 @@ struct amdgpu_display_manager {
const struct gpu_info_soc_bounding_box_v1_0 *soc_bounding_box;
  
  	/**

-* @active_vblank_irq_count
+* @active_vblank_irq_count:
 *
 * number of currently active vblank irqs
 */
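
For reference, kernel-doc requires a trailing colon after a struct member name; a minimal well-formed example (hypothetical struct, not the real amdgpu_display_manager) looks like:

```c
/**
 * struct example_manager - toy struct showing kernel-doc member markup
 * @active_vblank_irq_count: number of currently active vblank IRQs
 *
 * Note the colon after the member name above: without it, kernel-doc
 * ignores the line and warns that the member is not described.
 */
struct example_manager {
	int active_vblank_irq_count;
};
```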




Re: [drm:dm_plane_helper_prepare_fb [amdgpu]] *ERROR* Failed to pin framebuffer with error -12

2021-01-14 Thread Christian König

Am 14.01.21 um 01:22 schrieb Mikhail Gavrilov:

On Tue, 12 Jan 2021 at 01:45, Christian König  wrote:

But what you have in your logs so far are only unrelated symptoms, the
root of the problem is that somebody is leaking memory.

What you could do as well is to try to enable kmemleak

I captured some memleaks.
Do they contain any useful information?


Unfortunately not, offhand.

I also don't see any bug reports from other people, and I can't reproduce 
the last TTM backtrace you sent here.


Do you have any local modifications or special setup in your system? 
Like bpf scripts or something like that?


Christian.



[1] https://pastebin.com/n0FE7Hsu
[2] https://pastebin.com/MUX55L1k
[3] https://pastebin.com/a3FT7DVG
[4] https://pastebin.com/1ALvJKz7

--
Best Regards,
Mike Gavrilov.




Re: HMM fence (was Re: [PATCH 00/35] Add HMM-based SVM memory manager to KFD)

2021-01-14 Thread Daniel Vetter
On Thu, Jan 14, 2021 at 2:37 PM Christian König
 wrote:
>
> Am 14.01.21 um 12:52 schrieb Daniel Vetter:
> > [SNIP]
> >>> I had a new idea, i wanted to think more about it but have not yet,
> >>> anyway here it is. Adding a new callback to dma fence which ask the
> >>> question can it dead lock ? Any time a GPU driver has pending page
> >>> fault (ie something calling into the mm) it answer yes, otherwise
> >>> no. The GPU shrinker would ask the question before waiting on any
> >>> dma-fence and back of if it gets yes. Shrinker can still try many
> >>> dma buf object for which it does not get a yes on associated fence.
> >>>
> >>> This does not solve the mmu notifier case, for this you would just
> >>> invalidate the gem userptr object (with a flag but not releasing the
> >>> page refcount) but you would not wait for the GPU (ie no dma fence
> >>> wait in that code path anymore). The userptr API never really made
> >>> the contract that it will always be in sync with the mm view of the
> >>> world so if different page get remapped to same virtual address
> >>> while GPU is still working with the old pages it should not be an
> >>> issue (it would not be in our usage of userptr for compositor and
> >>> what not).
> >> The current working idea in my mind goes into a similar direction.
> >>
> >> But instead of a callback I'm adding a complete new class of HMM fences.
> >>
> >> Waiting in the MMU notifier, scheduler, TTM etc. is only allowed for
> >> the dma_fences and HMM fences are ignored in container objects.
> >>
> >> When you handle an implicit or explicit synchronization request from
> >> userspace you need to block for HMM fences to complete before taking any
> >> resource locks.
> > Isn't that what I call gang scheduling? I.e. you either run in HMM
> > mode, or in legacy fencing mode (whether implicit or explicit doesn't
> > really matter I think). By forcing that split we avoid the problem,
> > but it means occasionally full stalls on mixed workloads.
> >
> > But that's not what Jerome wants (afaiui at least), I think his idea
> > is to track the reverse dependencies of all the fences floating
> > around, and then skip evicting an object if you have to wait for any
> > fence that is problematic for the current calling context. And I don't
> > think that's very feasible in practice.
> >
> > So what kind of hmm fences do you have in mind here?
>
> It's a bit more relaxed than your gang schedule.
>
> See the requirements are as follow:
>
> 1. dma_fences never depend on hmm_fences.
> 2. hmm_fences can never preempt dma_fences.
> 3. dma_fences must be able to preempt hmm_fences or we always reserve
> enough hardware resources (CUs) to guarantee forward progress of dma_fences.
>
> Critical sections are MMU notifiers, page faults, GPU schedulers and
> dma_reservation object locks.
>
> 4. It is valid to wait for a dma_fence in critical sections.
> 5. It is not valid to wait for hmm_fences in critical sections.
>
> Fence creation either happens during command submission or by adding
> something like a barrier or signal command to your userspace queue.
>
> 6. If we have an hmm_fence as implicit or explicit dependency for
> creating a dma_fence we must wait for that before taking any locks or
> reserving resources.
> 7. If we have a dma_fence as implicit or explicit dependency for
> creating an hmm_fence we can wait later on. So busy waiting or special
> WAIT hardware commands are valid.
>
> This prevents hard cuts, e.g. we can mix hmm_fences and dma_fences at the
> same time on the hardware.
>
> In other words we can have a high priority gfx queue running jobs based
> on dma_fences and a low priority compute queue running jobs based on
> hmm_fences.
>
> Only when we switch from hmm_fence to dma_fence we need to block the
> submission until all the necessary resources (both memory as well as
> CUs) are available.
>
> This is somewhat an extension to your gang submit idea.

Either I'm missing something, or this is just exactly what we
documented already with userspace fences in general, and how you can't
have a dma_fence depend upon a userspace fence (or hmm_fence).

My gang scheduling idea is really just an alternative for what you
have listed as item 3 above. Instead of requiring preempt or requiring
guaranteed forward progress of some other sorts we flush out any
pending dma_fence request. But _only_ those which would get stalled by
the job we're running, so high-priority sdma requests we need in the
kernel to shuffle buffers around are still all ok. This would be
needed if your hw can't preempt, and you also have shared engines
between compute and gfx, so reserving CUs won't solve the problem
either.

What I don't mean with my gang scheduling is a completely exclusive
mode between hmm_fence and dma_fence, since that would prevent us from
using copy engines and dma_fence in the kernel to shuffle memory
around for hmm jobs. And that would suck, even on compute-only
workloads. Maybe I should rename "gang scheduling" to "engine flush"
or something like that.

Re: [drm:dm_plane_helper_prepare_fb [amdgpu]] *ERROR* Failed to pin framebuffer with error -12

2021-01-14 Thread Daniel Vetter
On Thu, Jan 14, 2021 at 2:56 PM Christian König
 wrote:
>
> > On 14.01.21 at 01:22, Mikhail Gavrilov wrote:
> > On Tue, 12 Jan 2021 at 01:45, Christian König  
> > wrote:
> >> But what you have in your logs so far are only unrelated symptoms, the
> >> root of the problem is that somebody is leaking memory.
> >>
> >> What you could do as well is to try to enable kmemleak
> > I captured some memleaks.
> > Do they contain any useful information?
>
> Unfortunately not offhand.
>
> I also don't see any bug reports from other people and can't reproduce
> the last backtrace you sent out of TTM here.
>
> Do you have any local modifications or special setup in your system?
> Like bpf scripts or something like that?

There's another bug report (for rcar-du, bisected to the a switch to
use more cma helpers) about leaking mmaps, which keeps too many fb
alive, so maybe we have gained a refcount leak somewhere recently. But
could also be totally unrelated.
-Daniel



>
> Christian.
>
> >
> > [1] https://pastebin.com/n0FE7Hsu
> > [2] https://pastebin.com/MUX55L1k
> > [3] https://pastebin.com/a3FT7DVG
> > [4] https://pastebin.com/1ALvJKz7
> >
> > --
> > Best Regards,
> > Mike Gavrilov.
> > ___
> > amd-gfx mailing list
> > amd-...@lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/amd-gfx
>
> ___
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel



-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


Re: HMM fence (was Re: [PATCH 00/35] Add HMM-based SVM memory manager to KFD)

2021-01-14 Thread Christian König

On 14.01.21 at 14:57, Daniel Vetter wrote:

On Thu, Jan 14, 2021 at 2:37 PM Christian König
 wrote:

On 14.01.21 at 12:52, Daniel Vetter wrote:

[SNIP]

I had a new idea, I wanted to think more about it but have not yet,
anyway here it is. Adding a new callback to dma fence which asks the
question: can it deadlock? Any time a GPU driver has a pending page
fault (ie something calling into the mm) it answers yes, otherwise
no. The GPU shrinker would ask the question before waiting on any
dma-fence and back off if it gets yes. The shrinker can still try many
dma-buf objects for which it does not get a yes on the associated fence.

This does not solve the mmu notifier case, for this you would just
invalidate the gem userptr object (with a flag but not releasing the
page refcount) but you would not wait for the GPU (ie no dma fence
wait in that code path anymore). The userptr API never really made
the contract that it will always be in sync with the mm view of the
world, so if different pages get remapped to the same virtual address
while the GPU is still working with the old pages it should not be an
issue (it would not be in our usage of userptr for compositor and
what not).

The current working idea in my mind goes into a similar direction.

But instead of a callback I'm adding a complete new class of HMM fences.

Waiting in the MMU notifier, scheduler, TTM etc etc is only allowed for
the dma_fences and HMM fences are ignored in container objects.

When you handle an implicit or explicit synchronization request from
userspace you need to block for HMM fences to complete before taking any
resource locks.

Isn't that what I call gang scheduling? I.e. you either run in HMM
mode, or in legacy fencing mode (whether implicit or explicit doesn't
really matter I think). By forcing that split we avoid the problem,
but it means occasionally full stalls on mixed workloads.

But that's not what Jerome wants (afaiui at least), I think his idea
is to track the reverse dependencies of all the fences floating
around, and then skip evicting an object if you have to wait for any
fence that is problematic for the current calling context. And I don't
think that's very feasible in practice.

So what kind of hmm fences do you have in mind here?

It's a bit more relaxed than your gang schedule.

See the requirements are as follow:

1. dma_fences never depend on hmm_fences.
2. hmm_fences can never preempt dma_fences.
3. dma_fences must be able to preempt hmm_fences or we always reserve
enough hardware resources (CUs) to guarantee forward progress of dma_fences.

Critical sections are MMU notifiers, page faults, GPU schedulers and
dma_reservation object locks.

4. It is valid to wait for a dma_fence in critical sections.
5. It is not valid to wait for hmm_fences in critical sections.

Fence creation either happens during command submission or by adding
something like a barrier or signal command to your userspace queue.

6. If we have an hmm_fence as implicit or explicit dependency for
creating a dma_fence we must wait for that before taking any locks or
reserving resources.
7. If we have a dma_fence as implicit or explicit dependency for
creating an hmm_fence we can wait later on. So busy waiting or special
WAIT hardware commands are valid.

This prevents hard cuts, e.g. we can mix hmm_fences and dma_fences at the
same time on the hardware.

In other words we can have a high priority gfx queue running jobs based
on dma_fences and a low priority compute queue running jobs based on
hmm_fences.

Only when we switch from hmm_fence to dma_fence we need to block the
submission until all the necessary resources (both memory as well as
CUs) are available.

This is somewhat an extension to your gang submit idea.

Either I'm missing something, or this is just exactly what we
documented already with userspace fences in general, and how you can't
have a dma_fence depend upon a userspace fence (or hmm_fence).

My gang scheduling idea is really just an alternative for what you
have listed as item 3 above. Instead of requiring preempt or requiring
guaranteed forward progress of some other sorts we flush out any
pending dma_fence request. But _only_ those which would get stalled by
the job we're running, so high-priority sdma requests we need in the
kernel to shuffle buffers around are still all ok. This would be
needed if your hw can't preempt, and you also have shared engines
between compute and gfx, so reserving CUs won't solve the problem
either.

What I don't mean with my gang scheduling is a completely exclusive
mode between hmm_fence and dma_fence, since that would prevent us from
using copy engines and dma_fence in the kernel to shuffle memory
around for hmm jobs. And that would suck, even on compute-only
workloads. Maybe I should rename "gang scheduling" to "engine flush"
or something like that.


Yeah, "engine flush" makes it much more clearer.

What I wanted to emphasize is that we have to mix dma_fences and
hmm_fences running at the same time on the same hardware

Re: [PATCH v6 15/16] drm: drm_crc: fix a kernel-doc markup

2021-01-14 Thread Simon Ser
On Thursday, January 14th, 2021 at 9:06 AM, Simon Ser  
wrote:

> On Thursday, January 14th, 2021 at 9:04 AM, Mauro Carvalho Chehab 
>  wrote:
>
> > A function has a different name between its prototype
> > and its kernel-doc markup:
> >
> > ../include/drm/drm_crtc.h:1257: warning: expecting prototype for 
> > drm_crtc_alloc_with_planes(). Prototype was for 
> > drmm_crtc_alloc_with_planes() instead
> >
> > Signed-off-by: Mauro Carvalho Chehab 
>
> Acked-by: Simon Ser 

Pushed to drm-misc-next, thanks for the fix!


[PATCH] drm/vblank: Fix typo in docs

2021-01-14 Thread Sumera Priyadarsini
Fix typo in intro chapter in drm_vblank.c.
Change 'sacn' to 'scan'.

Signed-off-by: Sumera Priyadarsini 
---
 drivers/gpu/drm/drm_vblank.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
index d30e2f2b8f3c..30912d8f82a5 100644
--- a/drivers/gpu/drm/drm_vblank.c
+++ b/drivers/gpu/drm/drm_vblank.c
@@ -74,7 +74,7 @@
  *||   updates the
  *||   frame as it
  *||   travels down
- *||   ("sacn out")
+ *||   ("scan out")
  *|   Old frame|
  *||
  *||
-- 
2.25.1



Re: HMM fence (was Re: [PATCH 00/35] Add HMM-based SVM memory manager to KFD)

2021-01-14 Thread Daniel Vetter
On Thu, Jan 14, 2021 at 3:13 PM Christian König
 wrote:
>
> On 14.01.21 at 14:57, Daniel Vetter wrote:
> > On Thu, Jan 14, 2021 at 2:37 PM Christian König
> >  wrote:
> >> On 14.01.21 at 12:52, Daniel Vetter wrote:
> >>> [SNIP]
> > I had a new idea, I wanted to think more about it but have not yet,
> > anyway here it is. Adding a new callback to dma fence which asks the
> > question: can it deadlock? Any time a GPU driver has a pending page
> > fault (ie something calling into the mm) it answers yes, otherwise
> > no. The GPU shrinker would ask the question before waiting on any
> > dma-fence and back off if it gets yes. The shrinker can still try many
> > dma-buf objects for which it does not get a yes on the associated fence.
> >
> > This does not solve the mmu notifier case, for this you would just
> > invalidate the gem userptr object (with a flag but not releasing the
> > page refcount) but you would not wait for the GPU (ie no dma fence
> > wait in that code path anymore). The userptr API never really made
> > the contract that it will always be in sync with the mm view of the
> > world, so if different pages get remapped to the same virtual address
> > while the GPU is still working with the old pages it should not be an
> > issue (it would not be in our usage of userptr for compositor and
> > what not).
>  The current working idea in my mind goes into a similar direction.
> 
>  But instead of a callback I'm adding a complete new class of HMM fences.
> 
>  Waiting in the MMU notifier, scheduler, TTM etc etc is only allowed for
>  the dma_fences and HMM fences are ignored in container objects.
> 
>  When you handle an implicit or explicit synchronization request from
>  userspace you need to block for HMM fences to complete before taking any
>  resource locks.
> >>> Isn't that what I call gang scheduling? I.e. you either run in HMM
> >>> mode, or in legacy fencing mode (whether implicit or explicit doesn't
> >>> really matter I think). By forcing that split we avoid the problem,
> >>> but it means occasionally full stalls on mixed workloads.
> >>>
> >>> But that's not what Jerome wants (afaiui at least), I think his idea
> >>> is to track the reverse dependencies of all the fences floating
> >>> around, and then skip evicting an object if you have to wait for any
> >>> fence that is problematic for the current calling context. And I don't
> >>> think that's very feasible in practice.
> >>>
> >>> So what kind of hmm fences do you have in mind here?
> >> It's a bit more relaxed than your gang schedule.
> >>
> >> See the requirements are as follow:
> >>
> >> 1. dma_fences never depend on hmm_fences.
> >> 2. hmm_fences can never preempt dma_fences.
> >> 3. dma_fences must be able to preempt hmm_fences or we always reserve
> >> enough hardware resources (CUs) to guarantee forward progress of 
> >> dma_fences.
> >>
> >> Critical sections are MMU notifiers, page faults, GPU schedulers and
> >> dma_reservation object locks.
> >>
> >> 4. It is valid to wait for a dma_fence in critical sections.
> >> 5. It is not valid to wait for hmm_fences in critical sections.
> >>
> >> Fence creation either happens during command submission or by adding
> >> something like a barrier or signal command to your userspace queue.
> >>
> >> 6. If we have an hmm_fence as implicit or explicit dependency for
> >> creating a dma_fence we must wait for that before taking any locks or
> >> reserving resources.
> >> 7. If we have a dma_fence as implicit or explicit dependency for
> >> creating an hmm_fence we can wait later on. So busy waiting or special
> >> WAIT hardware commands are valid.
> >>
> >> This prevents hard cuts, e.g. we can mix hmm_fences and dma_fences at the
> >> same time on the hardware.
> >>
> >> In other words we can have a high priority gfx queue running jobs based
> >> on dma_fences and a low priority compute queue running jobs based on
> >> hmm_fences.
> >>
> >> Only when we switch from hmm_fence to dma_fence we need to block the
> >> submission until all the necessary resources (both memory as well as
> >> CUs) are available.
> >>
> >> This is somewhat an extension to your gang submit idea.
> > Either I'm missing something, or this is just exactly what we
> > documented already with userspace fences in general, and how you can't
> > have a dma_fence depend upon a userspace fence (or hmm_fence).
> >
> > My gang scheduling idea is really just an alternative for what you
> > have listed as item 3 above. Instead of requiring preempt or requiring
> > guaranteed forward progress of some other sorts we flush out any
> > pending dma_fence request. But _only_ those which would get stalled by
> > the job we're running, so high-priority sdma requests we need in the
> > kernel to shuffle buffers around are still all ok. This would be
> > needed if your hw can't preempt, and you also have shared engines
> > between compute and gfx, so reserving 

Re: [PATCH 2/2] drm/cma-helper: Implement mmap as GEM CMA object functions

2021-01-14 Thread Kieran Bingham
Hi Thomas,

On 14/01/2021 13:26, Thomas Zimmermann wrote:
> Hi Kieran
> 
> On 14.01.21 at 13:51, Kieran Bingham wrote:
>> Hi Thomas,
>>
>> On 23/11/2020 11:56, Thomas Zimmermann wrote:
>>> The new GEM object function drm_gem_cma_mmap() sets the VMA flags
>>> and offset as in the old implementation and immediately maps in the
>>> buffer's memory pages.
>>>
>>> Changing CMA helpers to use the GEM object function allows for the
>>> removal of the special implementations for mmap and gem_prime_mmap
>>> callbacks. The regular functions drm_gem_mmap() and drm_gem_prime_mmap()
>>> are now used.
>>
>> I've encountered a memory leak regression in our Renesas R-Car DU tests,
>> and git bisection has led me to this patch (as commit f5ca8eb6f9).
>>
>> Running the tests sequentially, while grepping /proc/meminfo for Cma, it
>> is evident that CMA memory is not released, until exhausted and the
>> allocations fail (seen in [0]) shown by the error report:
>>
>>>  self.fbs.append(pykms.DumbFramebuffer(self.card, mode.hdisplay,
>>> mode.vdisplay, "XR24"))
>>> ValueError: DRM_IOCTL_MODE_CREATE_DUMB failed: Cannot allocate memory
>>
>>
>> Failing tests at f5ca8eb6f9 can be seen at [0], while the tests pass
>> successfully [1] on the commit previous to that (bc2532ab7c2):
>>
>> Reverting f5ca8eb6f9 also produces a successful pass [2]
>>
>>   [0] https://paste.ubuntu.com/p/VjPGPgswxR/ # Failed at f5ca8eb6f9
>>   [1] https://paste.ubuntu.com/p/78RRp2WpNR/ # Success at bc2532ab7c2
>>   [2] https://paste.ubuntu.com/p/qJKjZZN2pt/ # Success with revert
>>
>>
>> I don't believe we handle mmap specially in the RCar-DU driver, so I
>> wonder if this issue has hit anyone else as well?
>>
>> Any ideas of a repair without a revert ? Or do we just need to submit a
>> revert?
> 
> I think we might not be setting the VMA ops and therefore not finalize
> the BO correctly. Could you please apply the attached (quick-and-dirty)
> patch and try again?

Thanks for the quick response.

I can confirm the quick-and-dirty patch resolves the issue:
  https://paste.ubuntu.com/p/sKDp3dNvwV/

You can add a
Tested-by: Kieran Bingham 

if it stays like that, but I suspect there might be a better place to
initialise the ops rather than in the mmap call itself.

Happy to do further testing as required.

--
Regards

Kieran


> Best regards
> Thomas
> 
>>
>> I've yet to fully understand the implications of the patch below.
>>
>> Thanks
>> -- 
>> Regards
>>
>> Kieran
>>
>>
>>
>>> Signed-off-by: Thomas Zimmermann 
>>> ---
>>>   drivers/gpu/drm/drm_file.c   |   3 +-
>>>   drivers/gpu/drm/drm_gem_cma_helper.c | 121 +--
>>>   drivers/gpu/drm/pl111/pl111_drv.c    |   2 +-
>>>   drivers/gpu/drm/vc4/vc4_bo.c |   2 +-
>>>   include/drm/drm_gem_cma_helper.h |  10 +--
>>>   5 files changed, 44 insertions(+), 94 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
>>> index b50380fa80ce..80886d50d0f1 100644
>>> --- a/drivers/gpu/drm/drm_file.c
>>> +++ b/drivers/gpu/drm/drm_file.c
>>> @@ -113,8 +113,7 @@ bool drm_dev_needs_global_mutex(struct drm_device
>>> *dev)
>>>    * The memory mapping implementation will vary depending on how the
>>> driver
>>>    * manages memory. Legacy drivers will use the deprecated
>>> drm_legacy_mmap()
>>>    * function, modern drivers should use one of the provided
>>> memory-manager
>>> - * specific implementations. For GEM-based drivers this is
>>> drm_gem_mmap(), and
>>> - * for drivers which use the CMA GEM helpers it's drm_gem_cma_mmap().
>>> + * specific implementations. For GEM-based drivers this is
>>> drm_gem_mmap().
>>>    *
>>>    * No other file operations are supported by the DRM userspace API.
>>> Overall the
>>>    * following is an example &file_operations structure::
>>> diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c
>>> b/drivers/gpu/drm/drm_gem_cma_helper.c
>>> index 6a4ef335ebc9..7942cf05cd93 100644
>>> --- a/drivers/gpu/drm/drm_gem_cma_helper.c
>>> +++ b/drivers/gpu/drm/drm_gem_cma_helper.c
>>> @@ -38,6 +38,7 @@ static const struct drm_gem_object_funcs
>>> drm_gem_cma_default_funcs = {
>>>   .print_info = drm_gem_cma_print_info,
>>>   .get_sg_table = drm_gem_cma_get_sg_table,
>>>   .vmap = drm_gem_cma_vmap,
>>> +    .mmap = drm_gem_cma_mmap,
>>>   .vm_ops = &drm_gem_cma_vm_ops,
>>>   };
>>>   @@ -277,62 +278,6 @@ const struct vm_operations_struct
>>> drm_gem_cma_vm_ops = {
>>>   };
>>>   EXPORT_SYMBOL_GPL(drm_gem_cma_vm_ops);
>>>   -static int drm_gem_cma_mmap_obj(struct drm_gem_cma_object *cma_obj,
>>> -    struct vm_area_struct *vma)
>>> -{
>>> -    int ret;
>>> -
>>> -    /*
>>> - * Clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and
>>> set the
>>> - * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we
>>> want to map
>>> - * the whole buffer.
>>> - */
>>> -    vma->vm_flags &= ~VM_PFNMAP;
>>> -    vma->vm_pgoff = 0;
>>> -
>>> -    ret = dma_mmap_wc(cma_obj->base.dev->dev, 

RE: [PATCH v4 01/13] dma-buf: Add vmap_local and vnumap_local operations

2021-01-14 Thread Ruhl, Michael J


>-Original Message-
>From: dri-devel  On Behalf Of
>Thomas Zimmermann
>Sent: Tuesday, January 12, 2021 2:45 AM
>To: Ruhl, Michael J ; sumit.sem...@linaro.org;
>christian.koe...@amd.com; airl...@redhat.com; dan...@ffwll.ch;
>maarten.lankho...@linux.intel.com; mrip...@kernel.org;
>kra...@redhat.com; hdego...@redhat.com; s...@poorly.run;
>e...@anholt.net; s...@ravnborg.org
>Cc: linaro-mm-...@lists.linaro.org; Daniel Vetter ;
>virtualizat...@lists.linux-foundation.org; dri-devel@lists.freedesktop.org;
>linux-me...@vger.kernel.org
>Subject: Re: [PATCH v4 01/13] dma-buf: Add vmap_local and vnumap_local
>operations
>
>Hi
>
>On 08.01.21 at 17:12, Ruhl, Michael J wrote:
>>> -Original Message-
>>> From: dri-devel  On Behalf Of
>>> Thomas Zimmermann
>>> Sent: Friday, January 8, 2021 4:43 AM
>>> To: sumit.sem...@linaro.org; christian.koe...@amd.com;
>>> airl...@redhat.com; dan...@ffwll.ch; maarten.lankho...@linux.intel.com;
>>> mrip...@kernel.org; kra...@redhat.com; hdego...@redhat.com;
>>> s...@poorly.run; e...@anholt.net; s...@ravnborg.org
>>> Cc: Daniel Vetter ; dri-
>de...@lists.freedesktop.org;
>>> virtualizat...@lists.linux-foundation.org; linaro-mm-...@lists.linaro.org;
>>> Thomas Zimmermann ; linux-
>>> me...@vger.kernel.org
>>> Subject: [PATCH v4 01/13] dma-buf: Add vmap_local and vnumap_local
>>> operations
>>>
>>> The existing dma-buf calls dma_buf_vmap() and dma_buf_vunmap() are
>>> allowed to pin the buffer or acquire the buffer's reservation object
>>> lock.
>>>
>>> This is a problem for callers that only require a short-term mapping
>>> of the buffer without the pinning, or callers that have special locking
>>> requirements. These may suffer from unnecessary overhead or interfere
>>> with regular pin operations.
>>>
>>> The new interfaces dma_buf_vmap_local(), dma_buf_vunmapo_local(),
>and
>>> their rsp callbacks in struct dma_buf_ops provide an alternative without
>>> pinning or reservation locking. Callers are responsible for these
>>> operations.
>>>
>>> v4:
>>> * update documentation (Daniel)
>>>
>>> Signed-off-by: Thomas Zimmermann 
>>> Reviewed-by: Daniel Vetter 
>>> Suggested-by: Daniel Vetter 
>>> ---
>>> drivers/dma-buf/dma-buf.c | 81
>>> +++
>>> include/linux/dma-buf.h   | 34 
>>> 2 files changed, 115 insertions(+)
>>>
>>> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
>>> index b8465243eca2..01f9c74d97fa 100644
>>> --- a/drivers/dma-buf/dma-buf.c
>>> +++ b/drivers/dma-buf/dma-buf.c
>>> @@ -1295,6 +1295,87 @@ void dma_buf_vunmap(struct dma_buf
>*dmabuf,
>>> struct dma_buf_map *map)
>>> }
>>> EXPORT_SYMBOL_GPL(dma_buf_vunmap);
>>>
>>> +/**
>>> + * dma_buf_vmap_local - Create virtual mapping for the buffer object
>into
>>> kernel
>>> + * address space.
>>> + * @dmabuf:[in]buffer to vmap
>>> + * @map:   [out]   returns the vmap pointer
>>> + *
>>> + * Unlike dma_buf_vmap() this is a short term mapping and will not pin
>>> + * the buffer. The struct dma_resv for the @dmabuf must be locked until
>>> + * dma_buf_vunmap_local() is called.
>>> + *
>>> + * Returns:
>>> + * 0 on success, or a negative errno code otherwise.
>>> + */
>>> +int dma_buf_vmap_local(struct dma_buf *dmabuf, struct dma_buf_map
>>> *map)
>>> +{
>>> +   struct dma_buf_map ptr;
>>> +   int ret = 0;
>>> +
>>> +   dma_buf_map_clear(map);
>>> +
>>> +   if (WARN_ON(!dmabuf))
>>> +   return -EINVAL;
>>> +
>>> +   dma_resv_assert_held(dmabuf->resv);
>>> +
>>> +   if (!dmabuf->ops->vmap_local)
>>> +   return -EINVAL;
>>
>> You are clearing the map, and then doing the above checks.
>>
>> Is it ok to change the map info and then exit on error?
>
>In vmap_local map argument returns the mapping's address. Callers are
>expected to check the return code. But I would expect a careless caller
>to not check, or check for map being cleared. Clearing it here first is
>the safe thing to do.

Ahh, a pre-emptive attempt to make sure the caller doesn't cause more
problems on error. 😊

Thanks,

M

>Best regards
>Thomas
>
>>
>> Mike
>>
>>> +   mutex_lock(&dmabuf->lock);
>>> +   if (dmabuf->vmapping_counter) {
>>> +   dmabuf->vmapping_counter++;
>>> +   BUG_ON(dma_buf_map_is_null(&dmabuf->vmap_ptr));
>>> +   *map = dmabuf->vmap_ptr;
>>> +   goto out_unlock;
>>> +   }
>>> +
>>> +   BUG_ON(dma_buf_map_is_set(&dmabuf->vmap_ptr));
>>> +
>>> +   ret = dmabuf->ops->vmap_local(dmabuf, &ptr);
>>> +   if (WARN_ON_ONCE(ret))
>>> +   goto out_unlock;
>>> +
>>> +   dmabuf->vmap_ptr = ptr;
>>> +   dmabuf->vmapping_counter = 1;
>>> +
>>> +   *map = dmabuf->vmap_ptr;
>>> +
>>> +out_unlock:
>>> +   mutex_unlock(&dmabuf->lock);
>>> +   return ret;
>>> +}
>>> +EXPORT_SYMBOL_GPL(dma_buf_vmap_local);
>>> +
>>> +/**
>>> + * dma_buf_vunmap_local - Unmap a vmap obtained by
>>> dma_buf_vmap_local.
>>> + * @dmabuf:[in]buffer to vunmap
>>> + * @map:   [in]vmap pointer to vunmap
>>> + *
>>> + * Release a ma

Re: [PATCH] drm/vblank: Fix typo in docs

2021-01-14 Thread Daniel Vetter
On Thu, Jan 14, 2021 at 07:52:45PM +0530, Sumera Priyadarsini wrote:
> Fix typo in intro chapter in drm_vblank.c.
> Change 'sacn' to 'scan'.
> 
> Signed-off-by: Sumera Priyadarsini 

Nice catch, applied.
-Daniel

> ---
>  drivers/gpu/drm/drm_vblank.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
> index d30e2f2b8f3c..30912d8f82a5 100644
> --- a/drivers/gpu/drm/drm_vblank.c
> +++ b/drivers/gpu/drm/drm_vblank.c
> @@ -74,7 +74,7 @@
>   *||   updates the
>   *||   frame as it
>   *||   travels down
> - *||   ("sacn out")
> + *||   ("scan out")
>   *|   Old frame|
>   *||
>   *||
> -- 
> 2.25.1
> 
