Re: [PATCH 1/2] drm/imx: imx-tve: depend on COMMON_CLK

2019-01-30 Thread Linus Walleij
On Thu, Jan 24, 2019 at 1:51 PM Philipp Zabel  wrote:

> Since the TVE provides a clock to the DI, the driver can only be
> compiled if the common clock framework is enabled. With the COMMON_CLK
> dependency in place, it will be possible to allow building the other
> parts of imx-drm under COMPILE_TEST on architectures that do not select
> the common clock framework.
>
> Signed-off-by: Philipp Zabel 

Since the clock framework has stubs, I bet it can be *compiled* without
COMMON_CLK. But it will certainly not work, so if you reword the commit:
Reviewed-by: Linus Walleij 

Yours,
Linus Walleij
___
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: [PATCH 2/2] drm/imx: allow building under COMPILE_TEST

2019-01-30 Thread Linus Walleij
On Thu, Jan 24, 2019 at 1:51 PM Philipp Zabel  wrote:

> Allow compile-testing imx-drm on other platforms.
>
> Signed-off-by: Philipp Zabel 

Reviewed-by: Linus Walleij 

Yours,
Linus Walleij


Re: [Intel-gfx] linux-next: build failure after merge of the drm-intel-fixes tree

2019-01-30 Thread Jani Nikula
On Tue, 29 Jan 2019, Lucas De Marchi  wrote:
> On Tue, Jan 29, 2019 at 2:39 PM Stephen Rothwell  
> wrote:
>>
>> Hi all,
>>
>> After merging the drm-intel-fixes tree, today's linux-next build (x86_64
>> allmodconfig) failed like this:
>>
>> drivers/gpu/drm/i915/intel_display.c: In function 'has_bogus_dpll_config':
>> drivers/gpu/drm/i915/intel_display.c:15432:27: error: macro "IS_GEN" 
>> requires 3 arguments, but only 2 given
>>   return IS_GEN(dev_priv, 6) &&
>>^
>>
>> Caused by commit
>>
>>   a49a17226feb ("drm/i915: Try to sanitize bogus DPLL state left over by 
>> broken SNB BIOSen")
>>
>> It seems this was cherry-picked incorrectly :-(
>
> While the cherry-pick was correct, the macro is different on
> drm-intel-fixes. IS_GEN(dev_priv, 6) needs to be converted to
> IS_GEN6(dev_priv).
>
> Lucas De Marchi
>
>>
>> I have reverted that commit for today.

Dropped the commit from drm-intel-fixes.

Somehow I had managed to screw up my kernel config in a way that avoided
the build failure.

BR,
Jani.

-- 
Jani Nikula, Intel Open Source Graphics Center


Re: [RFC PATCH 3/5] mm/vma: add support for peer to peer to device vma

2019-01-30 Thread Jason Gunthorpe
On Tue, Jan 29, 2019 at 02:11:23PM -0500, Jerome Glisse wrote:
> On Tue, Jan 29, 2019 at 11:36:29AM -0700, Logan Gunthorpe wrote:
> > 
> > 
> > On 2019-01-29 10:47 a.m., jgli...@redhat.com wrote:
> > 
> > > + /*
> > > +  * Optional for device drivers that want to allow peer to peer (p2p)
> > > +  * mapping of their vma (which can be backed by some device memory) to
> > > +  * another device.
> > > +  *
> > > +  * Note that the exporting device driver might not have mapped anything
> > > +  * inside the vma for the CPU but might still want to allow a peer
> > > +  * device to access the range of memory corresponding to a range in
> > > +  * that vma.
> > > +  *
> > > +  * FOR PREDICTABILITY, IF A DRIVER SUCCESSFULLY MAPS A RANGE ONCE FOR A
> > > +  * DEVICE, THEN FURTHER MAPPINGS OF THE SAME RANGE (WHILE THE VMA IS
> > > +  * STILL VALID) SHOULD ALSO BE SUCCESSFUL. Following this rule allows the
> > > +  * importing device to map once during setup and report any failure at
> > > +  * that time to userspace. Further mappings of the same range might
> > > +  * happen after mmu notifier invalidation over the range. The exporting
> > > +  * device can use this to move things around (defrag BAR space for
> > > +  * instance) or do other similar tasks.
> > > +  *
> > > +  * IMPORTERS MUST OBEY mmu_notifier NOTIFICATIONS AND CALL p2p_unmap()
> > > +  * WHEN A NOTIFIER IS CALLED FOR THE RANGE! THIS CAN HAPPEN AT ANY
> > > +  * POINT IN TIME WITH NO LOCK HELD.
> > > +  *
> > > +  * In the functions below, the device argument is the importing device;
> > > +  * the exporting device is the device to which the vma belongs.
> > > +  */
> > > + long (*p2p_map)(struct vm_area_struct *vma,
> > > + struct device *device,
> > > + unsigned long start,
> > > + unsigned long end,
> > > + dma_addr_t *pa,
> > > + bool write);
> > > + long (*p2p_unmap)(struct vm_area_struct *vma,
> > > +   struct device *device,
> > > +   unsigned long start,
> > > +   unsigned long end,
> > > +   dma_addr_t *pa);
> > 
> > I don't understand why we need new p2p_[un]map function pointers for
> > this. In subsequent patches, they never appear to be set anywhere and
> > are only called by the HMM code. I'd have expected it to be called by
> > some core VMA code and set by HMM as that's what vm_operations_struct is
> > for.
> > 
> > But the code is all very confusing, hard to follow, and seems to be
> > missing significant chunks. So I'm not really sure what is going on.
> 
> It is set by the device driver when userspace does mmap(fd), where fd
> comes from open("/dev/somedevicefile"). So it is set by the device
> driver. HMM has nothing to do with this. It must be set by the device
> driver's mmap callback (the mmap callback of struct file_operations).
> For this patch you can completely ignore all the HMM patches. Maybe
> posting this as 2 separate patchsets would make it clearer.
> 
> For instance, see [1] for how a non-HMM driver can export its memory
> by just setting those callbacks. Note that a proper implementation of
> this should also include some kind of driver policy on what to allow
> to map and what not to allow ... All this is driver specific in any
> case.

I'm imagining that the RDMA drivers would use this interface on their
per-process 'doorbell' BAR pages - we also wish to have P2P DMA to
this memory. Also the entire VFIO PCI BAR mmap would be good to cover
with this too.

Jerome, I think it would be nice to have a helper scheme - I think the
simple case would be simple remapping of PCI BAR memory, so if we
could have, say something like:

static const struct vm_operations_struct my_ops = {
  .p2p_map = p2p_ioremap_map_op,
  .p2p_unmap = p2p_ioremap_unmap_op,
};

struct ioremap_data {
  [..]
};

fops_mmap() {
   vma->private_data = &driver_priv->ioremap_data;
   return p2p_ioremap_device_memory(vma, exporting_device, [..]);
}

Which closely matches at least what the RDMA drivers do. Where
p2p_ioremap_device_memory populates p2p_map and p2p_unmap pointers
with sensible functions, etc.

It looks like vfio would be able to use this as well (though I am
unsure why vfio uses remap_pfn_range instead of io_remap_pfn range for
BAR memory..)

Do any drivers need more control than this?

Jason


Re: [RFC PATCH 3/5] mm/vma: add support for peer to peer to device vma

2019-01-30 Thread Jason Gunthorpe
On Tue, Jan 29, 2019 at 06:17:43PM -0700, Logan Gunthorpe wrote:

> This isn't answering my question at all... I specifically asked what is
> backing the VMA when we are *not* using HMM.

At least for RDMA, what backs the VMA today is non-struct-page BAR
memory filled in with io_remap_pfn_range().

And we want to expose this for P2P DMA. None of the HMM stuff applies
here and the p2p_map/unmap are a nice simple approach that covers all
the needs RDMA has, at least.

Every attempt to give BAR memory to struct page has run into major
trouble, IMHO, so I like that this approach avoids that.

And if you don't have struct page then the only kernel object left to
hang meta data off is the VMA itself.

It seems very similar to the existing P2P work between in-kernel
consumers, just that VMA is now mediating a general user space driven
discovery process instead of being hard wired into a driver.

Jason


Re: [RFC PATCH 1/5] pci/p2p: add a function to test peer to peer capability

2019-01-30 Thread Logan Gunthorpe


On 2019-01-29 12:44 p.m., Greg Kroah-Hartman wrote:
> On Tue, Jan 29, 2019 at 11:24:09AM -0700, Logan Gunthorpe wrote:
>>
>>
>> On 2019-01-29 10:47 a.m., jgli...@redhat.com wrote:
>>> +bool pci_test_p2p(struct device *devA, struct device *devB)
>>> +{
>>> +   struct pci_dev *pciA, *pciB;
>>> +   bool ret;
>>> +   int tmp;
>>> +
>>> +   /*
>>> +* For now we only support PCIE peer to peer but other inter-connect
>>> +* can be added.
>>> +*/
>>> +   pciA = find_parent_pci_dev(devA);
>>> +   pciB = find_parent_pci_dev(devB);
>>> +   if (pciA == NULL || pciB == NULL) {
>>> +   ret = false;
>>> +   goto out;
>>> +   }
>>> +
>>> +   tmp = upstream_bridge_distance(pciA, pciB, NULL);
>>> +   ret = tmp < 0 ? false : true;
>>> +
>>> +out:
>>> +   pci_dev_put(pciB);
>>> +   pci_dev_put(pciA);
>>> +   return false;
>>> +}
>>> +EXPORT_SYMBOL_GPL(pci_test_p2p);
>>
>> This function only ever returns false
> 
> I guess it was never actually tested :(
> 
> I feel really worried about passing random 'struct device' pointers into
> the PCI layer.  Are we _sure_ it can handle this properly?

Yes, there are a couple of pci_p2pdma functions that take struct devices
directly simply because it's way more convenient for the caller. That's
what find_parent_pci_dev() takes care of (it returns NULL if the device
is not a PCI device). Whether that's appropriate here is hard to say
given we haven't seen any caller code.

Logan




Re: [RFC PATCH 3/5] mm/vma: add support for peer to peer to device vma

2019-01-30 Thread Jason Gunthorpe
On Tue, Jan 29, 2019 at 02:50:55PM -0500, Jerome Glisse wrote:

> GPU drivers do want more control :) GPU drivers are moving things around
> all the time, and they have more memory than BAR space (on newer platforms
> AMD GPUs do resize the BAR, but that is not the rule for all GPUs). So
> GPU drivers do actually manage their BAR address space, and they map and
> unmap things there. They can not allow someone to just pin stuff there
> randomly or this would disrupt their regular workflow. Hence they need
> control, and they might implement a threshold: for instance, if they have
> more than N pages of BAR space mapped for peer to peer, they can decide
> to fall back to main memory for any new peer mapping.

But this API doesn't seem to offer any control - I thought that
control was all coming from the mm/hmm notifiers triggering p2p_unmaps?

I would think that the importing driver can assume the BAR page is
kept alive until it calls unmap (presumably triggered by notifiers)?

ie the exporting driver sees the BAR page as pinned until unmap.

Jason


Re: [PATCH] drm/amd/display: add -msse2 to prevent Clang from emitting libcalls to undefined SW FP routines

2019-01-30 Thread Nick Desaulniers
Suggestions:
1. revert patch
2. get me the disassembly from gcc of the translation unit in question.
3. land patch that adds clang guards or something different based on 2.

Revert first; ask questions later.

On Tue, Jan 29, 2019 at 11:01 AM Wentland, Harry  wrote:
>
> On 2019-01-29 1:56 p.m., Guenter Roeck wrote:
> > On Tue, Jan 29, 2019 at 10:30:31AM -0500, Alex Deucher wrote:
> >> On Fri, Jan 25, 2019 at 10:29 AM Wentland, Harry  
> >> wrote:
> >>>
> >>> On 2019-01-24 7:52 p.m., ndesaulni...@google.com wrote:
>  arch/x86/Makefile disables SSE and SSE2 for the whole kernel.  The
>  AMDGPU drivers modified in this patch re-enable SSE but not SSE2.  Turn
>  on SSE2 to support emitting double precision floating point instructions
>  rather than calls to non-existent (usually available from gcc_s or
>  compiler_rt) floating point helper routines.
> 
>  Link: 
>  https://gcc.gnu.org/onlinedocs/gccint/Soft-float-library-routines.html
>  Link: https://github.com/ClangBuiltLinux/linux/issues/327
>  Cc: sta...@vger.kernel.org # 4.19
>  Reported-by: S, Shirish 
>  Reported-by: Matthias Kaehlcke 
>  Suggested-by: James Y Knight 
>  Suggested-by: Nathan Chancellor 
>  Signed-off-by: Nick Desaulniers 
>  Tested-by: Guenter Roeck 
> >>>
> >>> Reviewed-by: Harry Wentland 
> >>>
> >>> and applied.
> >>>
> >>
> >> This patch causes a segfault:
> >> https://bugs.freedesktop.org/show_bug.cgi?id=109487
> >>
> >> Any ideas?
> >>
> >
> > Set -msse2 only for clang? I suspect, though, that this might only
> > solve the compile problem. If I understand correctly, the first
> > warning in the log is due to BREAK_TO_DEBUGGER(), suggesting that
> > something is seriously wrong. Maybe enabling sse2 results in different
> > results from floating point operations.
> >
> > Unfortunately I don't have a system with the affected GPU,
> > so I can't run any tests on real hardware. Unless someone can test
> > with real hardware, I think we have no choice but to revert the
> > patch.
> >
>
> Might be a good idea. Even though, best to revert for now until we understand 
> what's going on.
>
> It seems like people at AMD building with gcc 5.4 are fine, but those using 
> gcc 7.3 or 8.2 experience the crash.
>
> Harry
>
> > Guenter
> >
> >> Alex
> >>
> >>> Harry
> >>>
>  ---
>   drivers/gpu/drm/amd/display/dc/calcs/Makefile | 2 +-
>   drivers/gpu/drm/amd/display/dc/dml/Makefile   | 2 +-
>   2 files changed, 2 insertions(+), 2 deletions(-)
> 
>  diff --git a/drivers/gpu/drm/amd/display/dc/calcs/Makefile 
>  b/drivers/gpu/drm/amd/display/dc/calcs/Makefile
>  index 95f332ee3e7e..dc85a3c088af 100644
>  --- a/drivers/gpu/drm/amd/display/dc/calcs/Makefile
>  +++ b/drivers/gpu/drm/amd/display/dc/calcs/Makefile
>  @@ -30,7 +30,7 @@ else ifneq ($(call cc-option, -mstack-alignment=16),)
>    cc_stack_align := -mstack-alignment=16
>   endif
> 
>  -calcs_ccflags := -mhard-float -msse $(cc_stack_align)
>  +calcs_ccflags := -mhard-float -msse -msse2 $(cc_stack_align)
> 
>   CFLAGS_dcn_calcs.o := $(calcs_ccflags)
>   CFLAGS_dcn_calc_auto.o := $(calcs_ccflags)
>  diff --git a/drivers/gpu/drm/amd/display/dc/dml/Makefile 
>  b/drivers/gpu/drm/amd/display/dc/dml/Makefile
>  index d97ca6528f9d..33c7d7588712 100644
>  --- a/drivers/gpu/drm/amd/display/dc/dml/Makefile
>  +++ b/drivers/gpu/drm/amd/display/dc/dml/Makefile
>  @@ -30,7 +30,7 @@ else ifneq ($(call cc-option, -mstack-alignment=16),)
>    cc_stack_align := -mstack-alignment=16
>   endif
> 
>  -dml_ccflags := -mhard-float -msse $(cc_stack_align)
>  +dml_ccflags := -mhard-float -msse -msse2 $(cc_stack_align)
> 
>   CFLAGS_display_mode_lib.o := $(dml_ccflags)
>   CFLAGS_display_pipe_clocks.o := $(dml_ccflags)
> 



-- 
Thanks,
~Nick Desaulniers


Re: [PATCH] drm/nouveau: fix missing break in switch statement

2019-01-30 Thread Gustavo A. R. Silva


On 1/29/19 2:49 PM, Gustavo A. R. Silva wrote:
> 
> 
> On 10/8/18 3:47 PM, Colin King wrote:
>> From: Colin Ian King 
>>
>> The NOUVEAU_GETPARAM_PCI_DEVICE case is missing a break statement and falls
>> through to the following NOUVEAU_GETPARAM_BUS_TYPE case and may end up
>> re-assigning the getparam->value to an undesired value. Fix this by adding
>> in the missing break.
>>
>> Detected by CoverityScan, CID#1460507 ("Missing break in switch")
>>
>> Fixes: 359088d5b8ec ("drm/nouveau: remove trivial cases of nvxx_device() 
>> usage")
>> Signed-off-by: Colin Ian King 
> 
> Reviewed-by: Gustavo A. R. Silva 
> 
> Friendly ping:
> 
> Who can take this?
> 

BTW, this should be tagged for stable:

Cc: sta...@vger.kernel.org

Thanks
--
Gustavo


>> ---
>>  drivers/gpu/drm/nouveau/nouveau_abi16.c | 1 +
>>  1 file changed, 1 insertion(+)
>>
>> diff --git a/drivers/gpu/drm/nouveau/nouveau_abi16.c 
>> b/drivers/gpu/drm/nouveau/nouveau_abi16.c
>> index e67a471331b5..6ec745873bc5 100644
>> --- a/drivers/gpu/drm/nouveau/nouveau_abi16.c
>> +++ b/drivers/gpu/drm/nouveau/nouveau_abi16.c
>> @@ -214,6 +214,7 @@ nouveau_abi16_ioctl_getparam(ABI16_IOCTL_ARGS)
>>  WARN_ON(1);
>>  break;
>>  }
>> +break;
>>  case NOUVEAU_GETPARAM_FB_SIZE:
>>  getparam->value = drm->gem.vram_available;
>>  break;
>>


Re: [RFC PATCH 2/5] drivers/base: add a function to test peer to peer capability

2019-01-30 Thread Logan Gunthorpe


On 2019-01-29 10:47 a.m., jgli...@redhat.com wrote:
> From: Jérôme Glisse 
> 
> device_test_p2p() returns true if two devices can peer-to-peer with
> each other. We add a generic function, as different inter-connects
> can support peer-to-peer and we want to generically test this no
> matter what the inter-connect might be. However, this version only
> supports PCIE for now.

This doesn't appear to be used in any of the further patches, so it's
very confusing.

I'm not sure a struct device wrapper is really necessary...

Logan


[PATCH] drm/via: mark expected switch fall-throughs

2019-01-30 Thread Gustavo A. R. Silva
In preparation to enabling -Wimplicit-fallthrough, mark switch cases
where we are expecting to fall through.

This patch fixes the following warnings:

drivers/gpu/drm/via/via_dmablit.c:179:3: warning: this statement may fall 
through [-Wimplicit-fallthrough=]
drivers/gpu/drm/via/via_dmablit.c:185:3: warning: this statement may fall 
through [-Wimplicit-fallthrough=]
drivers/gpu/drm/via/via_dmablit.c:187:3: warning: this statement may fall 
through [-Wimplicit-fallthrough=]
drivers/gpu/drm/via/via_dmablit.c:195:3: warning: this statement may fall 
through [-Wimplicit-fallthrough=]

Warning level 3 was used: -Wimplicit-fallthrough=3

This patch is part of the ongoing efforts to enabling
-Wimplicit-fallthrough.

Signed-off-by: Gustavo A. R. Silva 
---
 drivers/gpu/drm/via/via_dmablit.c | 4 
 1 file changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/via/via_dmablit.c 
b/drivers/gpu/drm/via/via_dmablit.c
index 345bda4494e1..8bf3a7c23ed3 100644
--- a/drivers/gpu/drm/via/via_dmablit.c
+++ b/drivers/gpu/drm/via/via_dmablit.c
@@ -177,12 +177,14 @@ via_free_sg_info(struct pci_dev *pdev, drm_via_sg_info_t 
*vsg)
switch (vsg->state) {
case dr_via_device_mapped:
via_unmap_blit_from_device(pdev, vsg);
+   /* fall through */
case dr_via_desc_pages_alloc:
for (i = 0; i < vsg->num_desc_pages; ++i) {
if (vsg->desc_pages[i] != NULL)
free_page((unsigned long)vsg->desc_pages[i]);
}
kfree(vsg->desc_pages);
+   /* fall through */
case dr_via_pages_locked:
for (i = 0; i < vsg->num_pages; ++i) {
if (NULL != (page = vsg->pages[i])) {
@@ -191,8 +193,10 @@ via_free_sg_info(struct pci_dev *pdev, drm_via_sg_info_t 
*vsg)
put_page(page);
}
}
+   /* fall through */
case dr_via_pages_alloc:
vfree(vsg->pages);
+   /* fall through */
default:
vsg->state = dr_via_sg_init;
}
-- 
2.20.1



[PATCH] drm/nouveau: mark expected switch fall-through

2019-01-30 Thread Gustavo A. R. Silva
In preparation to enabling -Wimplicit-fallthrough, mark switch cases
where we are expecting to fall through.

This patch fixes the following warning:

drivers/gpu/drm/nouveau/nouveau_bo.c:1434:53: warning: this statement may fall 
through [-Wimplicit-fallthrough=]

Warning level 3 was used: -Wimplicit-fallthrough=3

This patch is part of the ongoing efforts to enabling
-Wimplicit-fallthrough.

Signed-off-by: Gustavo A. R. Silva 
---
 drivers/gpu/drm/nouveau/nouveau_bo.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c 
b/drivers/gpu/drm/nouveau/nouveau_bo.c
index 73eff52036d2..a72be71c45b4 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -1434,7 +1434,7 @@ nouveau_ttm_io_mem_reserve(struct ttm_bo_device *bdev, 
struct ttm_mem_reg *reg)
if (drm->client.mem->oclass < NVIF_CLASS_MEM_NV50 || !mem->kind)
/* untiled */
break;
-   /* fallthrough, tiled memory */
+   /* fall through - tiled memory */
case TTM_PL_VRAM:
reg->bus.offset = reg->start << PAGE_SHIFT;
reg->bus.base = device->func->resource_addr(device, 1);
-- 
2.20.1



Re: [PATCH] drm/amd/display: add -msse2 to prevent Clang from emitting libcalls to undefined SW FP routines

2019-01-30 Thread Guenter Roeck
On Tue, Jan 29, 2019 at 10:30:31AM -0500, Alex Deucher wrote:
> On Fri, Jan 25, 2019 at 10:29 AM Wentland, Harry  
> wrote:
> >
> > On 2019-01-24 7:52 p.m., ndesaulni...@google.com wrote:
> > > arch/x86/Makefile disables SSE and SSE2 for the whole kernel.  The
> > > AMDGPU drivers modified in this patch re-enable SSE but not SSE2.  Turn
> > > on SSE2 to support emitting double precision floating point instructions
> > > rather than calls to non-existent (usually available from gcc_s or
> > > compiler_rt) floating point helper routines.
> > >
> > > Link: 
> > > https://gcc.gnu.org/onlinedocs/gccint/Soft-float-library-routines.html
> > > Link: https://github.com/ClangBuiltLinux/linux/issues/327
> > > Cc: sta...@vger.kernel.org # 4.19
> > > Reported-by: S, Shirish 
> > > Reported-by: Matthias Kaehlcke 
> > > Suggested-by: James Y Knight 
> > > Suggested-by: Nathan Chancellor 
> > > Signed-off-by: Nick Desaulniers 
> > > Tested-by: Guenter Roeck 
> >
> > Reviewed-by: Harry Wentland 
> >
> > and applied.
> >
> 
> This patch causes a segfault:
> https://bugs.freedesktop.org/show_bug.cgi?id=109487
> 
> Any ideas?
> 

Set -msse2 only for clang? I suspect, though, that this might only
solve the compile problem. If I understand correctly, the first
warning in the log is due to BREAK_TO_DEBUGGER(), suggesting that
something is seriously wrong. Maybe enabling sse2 results in different
results from floating point operations.

Unfortunately I don't have a system with the affected GPU,
so I can't run any tests on real hardware. Unless someone can test
with real hardware, I think we have no choice but to revert the
patch.

Guenter

> Alex
> 
> > Harry
> >
> > > ---
> > >  drivers/gpu/drm/amd/display/dc/calcs/Makefile | 2 +-
> > >  drivers/gpu/drm/amd/display/dc/dml/Makefile   | 2 +-
> > >  2 files changed, 2 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/amd/display/dc/calcs/Makefile 
> > > b/drivers/gpu/drm/amd/display/dc/calcs/Makefile
> > > index 95f332ee3e7e..dc85a3c088af 100644
> > > --- a/drivers/gpu/drm/amd/display/dc/calcs/Makefile
> > > +++ b/drivers/gpu/drm/amd/display/dc/calcs/Makefile
> > > @@ -30,7 +30,7 @@ else ifneq ($(call cc-option, -mstack-alignment=16),)
> > >   cc_stack_align := -mstack-alignment=16
> > >  endif
> > >
> > > -calcs_ccflags := -mhard-float -msse $(cc_stack_align)
> > > +calcs_ccflags := -mhard-float -msse -msse2 $(cc_stack_align)
> > >
> > >  CFLAGS_dcn_calcs.o := $(calcs_ccflags)
> > >  CFLAGS_dcn_calc_auto.o := $(calcs_ccflags)
> > > diff --git a/drivers/gpu/drm/amd/display/dc/dml/Makefile 
> > > b/drivers/gpu/drm/amd/display/dc/dml/Makefile
> > > index d97ca6528f9d..33c7d7588712 100644
> > > --- a/drivers/gpu/drm/amd/display/dc/dml/Makefile
> > > +++ b/drivers/gpu/drm/amd/display/dc/dml/Makefile
> > > @@ -30,7 +30,7 @@ else ifneq ($(call cc-option, -mstack-alignment=16),)
> > >   cc_stack_align := -mstack-alignment=16
> > >  endif
> > >
> > > -dml_ccflags := -mhard-float -msse $(cc_stack_align)
> > > +dml_ccflags := -mhard-float -msse -msse2 $(cc_stack_align)
> > >
> > >  CFLAGS_display_mode_lib.o := $(dml_ccflags)
> > >  CFLAGS_display_pipe_clocks.o := $(dml_ccflags)
> > >


Re: [RFC PATCH 3/5] mm/vma: add support for peer to peer to device vma

2019-01-30 Thread Logan Gunthorpe


On 2019-01-29 1:57 p.m., Jerome Glisse wrote:
> GPU driver must be in control and must be call to. Here there is 2 cases
> in this patchset and i should have instead posted 2 separate patchset as
> it seems that it is confusing things.
> 
> For the HMM page, the physical address of the page ie the pfn does not
> correspond to anything ie there is nothing behind it. So the importing
> device has no idea how to get a valid physical address from an HMM page
> only the device driver exporting its memory with HMM device memory knows
> that.
> 
> 
> For the special vma ie mmap of a device file. GPU driver do manage their
> BAR ie the GPU have a page table that map BAR page to GPU memory and the
> driver _constantly_ update this page table, it is reflected by invalidating
> the CPU mapping. In fact most of the time the CPU mapping of GPU object are
> invalid they are valid only a small fraction of their lifetime. So you
> _must_ have some call to inform the exporting device driver that another
> device would like to map one of its vma. The exporting device can then
> try to avoid as much churn as possible for the importing device. But this
> has consequence and the exporting device driver must be allow to apply
> policy and make decission on wether or not it authorize the other device
> to peer map its memory. For GPU the userspace application have to call
> specific API that translate into specific ioctl which themself set flags
> on object (in the kernel struct tracking the user space object). The only
> way to allow program predictability is if the application can ask and know
> if it can peer export an object (ie is there enough BAR space left).

This all seems like it's an HMM problem and not related to mapping
BARs/"potential BARs" to userspace. If some code wants to DMA map HMM
pages, it calls an HMM function to map them. If HMM needs to consult
with the driver on aspects of how that's mapped, then that's between HMM
and the driver and not something I really care about. But making the
entire mapping stuff tied to userspace VMAs does not make sense to me.
What if somebody wants to map some HMM pages in the same way but from
kernel space and they therefore don't have a VMA?


>> I also figured there'd be a fault version of p2p_ioremap_device_memory()
>> for when you are mapping P2P memory and you want to assign the pages
>> lazily. Though, this can come later when someone wants to implement that.
> 
> For GPU the BAR address space is manage page by page and thus you do not
> want to map a range of BAR addresses but you want to allow mapping of
> multiple page of BAR address that are not adjacent to each other nor
> ordered in anyway. But providing helper for simpler device does make sense.

Well, this has little do with the backing device but how the memory is
mapped into userspace. With p2p_ioremap_device_memory() the entire range
is mapped into the userspace VMA immediately during the call to mmap().
With p2p_fault_device_memory(), mmap() would not actually map anything
and a page in the VMA would be mapped only when userspace accesses it
(using fault()). It seems to me like GPUs would prefer the latter but if
HMM takes care of the mapping from userspace potential pages to actual
GPU pages through the BAR then that may not be true.

Logan


Re: [RFC PATCH 3/5] mm/vma: add support for peer to peer to device vma

2019-01-30 Thread Logan Gunthorpe


On 2019-01-29 4:47 p.m., Jerome Glisse wrote:
> The whole point is to allow using device memory for a range of virtual
> addresses of a process when it makes sense to use device memory for
> that range. There are multiple cases where it does make sense:
> [1] - Only the device is accessing the range and there is no CPU access.
>   For instance the program is executing/running a big function on
>   the GPU and there is no concurrent CPU access; this is very
>   common in all the existing GPGPU code. In fact AFAICT it is the
>   most common pattern. So here you can use HMM private or public
>   memory.
> [2] - Both device and CPU access a common range of virtual addresses
>   concurrently. In that case, if you are on a platform with a cache
>   coherent inter-connect like OpenCAPI or CCIX, then you can use
>   HMM public device memory and have both access the same memory.
>   You can not use HMM private memory.
> 
> So far on x86 we only have PCIE, and thus so far on x86 we only have
> private HMM device memory that is not accessible by the CPU in any
> way.

I feel like you're just moving the rug out from under us... Before you
said ignore HMM and I was asking about the use case that wasn't using
HMM and how it works without HMM. In response, you just give me *way*
too much information describing HMM. And still, as best as I can see,
managing DMA mappings (which is different from the userspace mappings)
for GPU P2P should be handled by HMM and the userspace mappings should
*just* link VMAs to HMM pages using the standard infrastructure we
already have.

>> And what struct pages are actually going to be backing these VMAs if
>> it's not using HMM?
> 
> When you have some range of virtual addresses migrated to HMM private
> memory, then the CPU PTEs are special swap entries and behave just
> as if the memory were swapped to disk. So CPU accesses to those will
> fault and trigger a migration back to main memory.

This isn't answering my question at all... I specifically asked what is
backing the VMA when we are *not* using HMM.

Logan


Re: [RFC PATCH 3/5] mm/vma: add support for peer to peer to device vma

2019-01-30 Thread Logan Gunthorpe


On 2019-01-29 12:44 p.m., Jerome Glisse wrote:
>> I'd suggest [1] should be a part of the patchset so we can actually see
>> a user of the stuff you're adding.
> 
> I did not want to clutter the patchset with device-driver-specific usage
> of this, as the API can be reasoned about in an abstract way.

It's hard to reason about an interface when you can't see what all the
layers want to do with it. Most maintainers (I'd hope) would certainly
never merge code that has no callers, and for much the same reason, I'd
rather not review patches that don't have real use case examples.

Logan


Re: [RFC PATCH 3/5] mm/vma: add support for peer to peer to device vma

2019-01-30 Thread Jason Gunthorpe
On Tue, Jan 29, 2019 at 03:44:00PM -0500, Jerome Glisse wrote:

> > But this API doesn't seem to offer any control - I thought that
> > control was all coming from the mm/hmm notifiers triggering p2p_unmaps?
> 
> The control is within the driver implementation of those callbacks. 

Seems like what you mean by control is 'the exporter gets to choose
the physical address at the instant of map' - which seems reasonable
for GPU.


> will only allow p2p map to succeed for objects that have been tagged by the
> userspace in some way, ie the userspace application is in control of what
> can be mapped to a peer device.

I would have thought this means the VMA for the object is created
without the map/unmap ops? Or are GPU objects and VMAs unrelated?

> For moving things around after a successful p2p_map yes the exporting
> device have to call for instance zap_vma_ptes() or something
> similar.

Okay, great, RDMA needs this flow for hotplug - we zap the VMA's when
unplugging the PCI device and we can delay the PCI unplug completion
until all the p2p_unmaps are called...

But in this case a future p2p_map will have to fail as the BAR no
longer exists. How to handle this?

> > I would think that the importing driver can assume the BAR page is
> > kept alive until it calls unmap (presumably triggered by notifiers)?
> > 
> > ie the exporting driver sees the BAR page as pinned until unmap.
> 
> The intention with this patchset is that it is not pinned, ie the importer
> device _must_ abide by all mmu notifier invalidations and they can
> happen at any time. The importing device can however re-p2p_map the
> same range after an invalidation.
>
> I would like to restrict this to importers that can invalidate for
> now because I believe all the first devices to use this can support
> invalidation.

This seems reasonable (and sort of says importers not getting this
from HMM need careful checking), was this in the comment above the
ops?

Jason


Re: [Xen-devel][PATCH] drm/xen-front: Fix mmap attributes for display buffers

2019-01-30 Thread Julien Grall

Hi Oleksandr,

On 1/29/19 3:04 PM, Oleksandr Andrushchenko wrote:

From: Oleksandr Andrushchenko 

When GEM backing storage is allocated, those are normal pages,
so there is no point in using pgprot_writecombine while mmapping.
This fixes mismatch of buffer pages' memory attributes between
the frontend and backend which may cause screen artifacts.

Fixes: c575b7eeb89f ("drm/xen-front: Add support for Xen PV display frontend")

Signed-off-by: Oleksandr Andrushchenko 
Suggested-by: Julien Grall 
---
  drivers/gpu/drm/xen/xen_drm_front_gem.c | 5 ++---
  1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c 
b/drivers/gpu/drm/xen/xen_drm_front_gem.c
index d303a2e17f5e..9d5c03d7668d 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
@@ -235,8 +235,7 @@ static int gem_mmap_obj(struct xen_gem_object *xen_obj,
vma->vm_flags &= ~VM_PFNMAP;
vma->vm_flags |= VM_MIXEDMAP;
vma->vm_pgoff = 0;
-   vma->vm_page_prot =
-   pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
+   vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);


The patch looks good to me. It would be worth expanding the comment a
bit beforehand to explain that we overwrite vm_page_prot to use
cacheable attributes, as required by the Xen ABI.


With the comment updated:

Acked-by: Julien Grall 

Cheers,

--
Julien Grall


Re: [RFC PATCH 3/5] mm/vma: add support for peer to peer to device vma

2019-01-30 Thread Jason Gunthorpe
On Tue, Jan 29, 2019 at 07:08:06PM -0500, Jerome Glisse wrote:
> On Tue, Jan 29, 2019 at 11:02:25PM +, Jason Gunthorpe wrote:
> > On Tue, Jan 29, 2019 at 03:44:00PM -0500, Jerome Glisse wrote:
> > 
> > > > But this API doesn't seem to offer any control - I thought that
> > > > control was all coming from the mm/hmm notifiers triggering p2p_unmaps?
> > > 
> > > The control is within the driver implementation of those callbacks. 
> > 
> > Seems like what you mean by control is 'the exporter gets to choose
> > the physical address at the instant of map' - which seems reasonable
> > for GPU.
> > 
> > 
> > > will only allow p2p map to succeed for objects that have been tagged by 
> > > the
> > > userspace in some way ie the userspace application is in control of what
> > > can be map to peer device.
> > 
> > I would have thought this means the VMA for the object is created
> > without the map/unmap ops? Or are GPU objects and VMAs unrelated?
> 
> GPU objects and VMAs are unrelated in all the open source GPU drivers I am
> somewhat familiar with (AMD, Intel, NVidia). You can create a GPU
> object and never map it (and thus never have it associated with a
> vma), and in fact this is very common. For graphics you usually only
> have a handful of the hundreds of GPU objects your application has
> mapped.

I mean the other way: does every VMA with p2p_map/unmap point to
exactly one GPU object?

ie I'm surprised you say that p2p_map needs to have policy; I would
have thought the policy is applied when the VMA is created (ie objects
that are not for p2p do not have p2p_map set), and even for GPU,
p2p_map should really only have to do with window allocation and pure
'can I even do p2p' type functionality.

> The idea is that we can only ask the exporter to be predictable and still
> allow it to fail if things are really going bad.

I think hot unplug / PCI error recovery is one of the 'really going
bad' cases..

> I think I put it in the comment above the ops but in any case I should
> write something in the documentation with examples and thorough guidelines.
> Note that there won't be any mmu notifier to mmap of a device file
> unless the device driver calls for it or there is a syscall like munmap
> or mremap or mprotect, well any syscall that works on the vma.

This is something we might need to explore, does calling
zap_vma_ptes() invoke enough notifiers that a MMU notifiers or HMM
mirror consumer will release any p2p maps on that VMA?

> If we ever want to support full pinning then we might have to add a
> flag so that the GPU driver can refuse an importer that wants things
> pinned forever.

This would become interesting for VFIO and RDMA at least - I don't
think VFIO has anything like SVA so it would want to import a p2p_map
and indicate that it will not respond to MMU notifiers.

GPU can refuse, but maybe RDMA would allow it...

Jason


Re: [Xen-devel] [PATCH v2] drm/xen-front: Make shmem backed display buffer coherent

2019-01-30 Thread Oleksandr Andrushchenko
On 1/24/19 5:02 PM, Julien Grall wrote:
>
>
> On 24/01/2019 14:34, Oleksandr Andrushchenko wrote:
>> Hello, Julien!
>
> Hi,
>
>> On 1/22/19 1:44 PM, Julien Grall wrote:
>>>
>>>
>>> On 1/22/19 10:28 AM, Oleksandr Andrushchenko wrote:
 Hello, Julien!
>>>
>>> Hi,
>>>
 On 1/21/19 7:09 PM, Julien Grall wrote:
 Well, I didn't get the attributes of pages at the backend side, but 
 IMO
 those
 do not matter in my use-case (for simplicity I am not using
 zero-copying at
 backend side):
>>>
>>> They are actually important no matter what is your use case. If you
>>> access the same physical page with different attributes, then you are
>>> asking for trouble.
>> So, we have:
>>
>> DomU: frontend side
>> 
>> !PTE_RDONLY + PTE_PXN + PTE_SHARED + PTE_AF + PTE_UXN +
>> PTE_ATTRINDX(MT_NORMAL)
>
> I still don't understand how you came up with MT_NORMAL when you seem 
> to confirm...
>
>>
>> DomD: backend side
>> 
>> PTE_USER + !PTE_RDONLY + PTE_PXN + PTE_NG + PTE_CONT + PTE_TABLE_BIT +
>> PTE_UXN + PTE_ATTRINDX(MT_NORMAL)
>>
>>   From the above it seems that I don't violate cached/non-cached
>> agreement here
>>>
>>> This is why Xen imposes all the pages shared to have their memory
>>> attributes following some rules. Actually, speaking with Mark R., we
>>> may want to tight a bit more the attributes.
>>>

 1. Frontend device allocates display buffer pages which come from 
 shmem
 and have these attributes:
 !PTE_RDONLY + PTE_PXN + PTE_SHARED + PTE_AF + PTE_UXN +
 PTE_ATTRINDX(MT_NORMAL)
>>>
>>> My knowledge of Xen DRM is inexistent. However, looking at the code in
>>> 5.0-rc2, I don't seem to find the same attributes. For instance
>>> xen_drm_front_gem_prime_vmap and gem_mmap_obj are using
>>> pgprot_writecombine. So it looks like, the mapping will be
>>> non-cacheable on Arm64.
>>>
>>> Can you explain how you came up to these attributes?
>> pgprot_writecombine is PTE_ATTRINDX(MT_NORMAL_NC), so it seems to be
>> applicable here? [1]
>
> ... that MT_NORMAL_NC is used for the frontend pages.
>
> MT_NORMAL_NC is different from MT_NORMAL. The use of the former will 
> result to non-cacheable memory while the latter will result to 
> cacheable memory.
>
> To me, this looks like the exact reason why you see artifact on the 
> display buffer. As the author of this code, can you explain why you 
> decided to use pgprot_writecombine here instead of relying on the 
> default VMA prot?
>
> [...]
Sorry for the late reply, it took me quite some time to re-check all
the use-cases that we have with PV DRM.
First of all I found a bug in my debug code which reported page
attributes, and now I can confirm that all the use-cases use MT_NORMAL,
both backend and frontend.
You are right: there is no reason to have pgprot_writecombine and
indeed I have to rely on the almost-default VMA prot. I came up with
the following (used by omap drm, for example):

diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c 
b/drivers/gpu/drm/xen/xen_drm_front_gem.c
index ae28ad4b4254..867800a2ed42 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
@@ -238,8 +238,7 @@ static int gem_mmap_obj(struct xen_gem_object *xen_obj,
     vma->vm_flags &= ~VM_PFNMAP;
     vma->vm_flags |= VM_MIXEDMAP;
     vma->vm_pgoff = 0;
-   vma->vm_page_prot =
- pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
+   vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);

     {
     int cnt = xen_obj->num_pages > 5 ? 5 : xen_obj->num_pages;
@@ -295,7 +294,7 @@ void *xen_drm_front_gem_prime_vmap(struct 
drm_gem_object *gem_obj)
     return NULL;

     return vmap(xen_obj->pages, xen_obj->num_pages,
-   VM_MAP, pgprot_writecombine(PAGE_KERNEL));
+   VM_MAP, PAGE_KERNEL);
  }

With the above all the artifacts are gone now, as page attributes are
the same across all domains. So I consider this a fix and will either
send it as v3 or, better, drop this patch and have a new one.

THANK YOU for helping figuring this out!
>
>>> We actually never required to use cache flush in other PV protocol, so
>>> I still don't understand why the PV DRM should be different here.
>> Well, you are right. But at the same time not flushing the buffer makes
>> troubles,
>> so this is why I am trying to figure out what is wrong here.
>
> The cache flush is likely hiding the real problem rather than solving it.
>
It does hide the real issue. And I have confirmed this.
>>>
>>> To me, it looks like that you are either missing some barriers
>> Barriers for the buffer? Not sure what you mean here.
>
> If you share information between two entities, you may need some 
> ordering so the information are seen consistently by the consumer 
> side. This can be achieved by using barriers.
>
>> Even more, we have
>> a use case
>> when the buffer is not touched by CPU in DomD and is solely owned by 
>> the HW.
>

Re: [Intel-gfx] linux-next: build failure after merge of the drm-intel-fixes tree

2019-01-30 Thread Lucas De Marchi
On Tue, Jan 29, 2019 at 2:39 PM Stephen Rothwell  wrote:
>
> Hi all,
>
> After merging the drm-intel-fixes tree, today's linux-next build (x86_64
> allmodconfig) failed like this:
>
> drivers/gpu/drm/i915/intel_display.c: In function 'has_bogus_dpll_config':
> drivers/gpu/drm/i915/intel_display.c:15432:27: error: macro "IS_GEN" requires 
> 3 arguments, but only 2 given
>   return IS_GEN(dev_priv, 6) &&
>^
>
> Caused by commit
>
>   a49a17226feb ("drm/i915: Try to sanitize bogus DPLL state left over by 
> broken SNB BIOSen")
>
> It seems this was cherry-picked incorrectly :-(

while the cherry-pick was correct, the macro is different on
drm-intel-fixes. IS_GEN(dev_priv, 6) needs to be converted to
IS_GEN6(dev_priv).

Lucas De Marchi

>
> I have reverted that commit for today.
>
> --
> Cheers,
> Stephen Rothwell
> ___
> Intel-gfx mailing list
> intel-...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx



-- 
Lucas De Marchi


[PATCH] drm/savage: mark expected switch fall-throughs

2019-01-30 Thread Gustavo A. R. Silva
In preparation to enabling -Wimplicit-fallthrough, mark switch cases
where we are expecting to fall through.

This patch fixes the following warnings:

drivers/gpu/drm/savage/savage_state.c:301:8: warning: this statement may fall 
through [-Wimplicit-fallthrough=]
drivers/gpu/drm/savage/savage_state.c:438:8: warning: this statement may fall 
through [-Wimplicit-fallthrough=]
drivers/gpu/drm/savage/savage_state.c:559:8: warning: this statement may fall 
through [-Wimplicit-fallthrough=]
drivers/gpu/drm/savage/savage_state.c:697:8: warning: this statement may fall 
through [-Wimplicit-fallthrough=]

Warning level 3 was used: -Wimplicit-fallthrough=3

This patch is part of the ongoing efforts to enabling
-Wimplicit-fallthrough.

Signed-off-by: Gustavo A. R. Silva 
---
 drivers/gpu/drm/savage/savage_state.c | 4 
 1 file changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/savage/savage_state.c 
b/drivers/gpu/drm/savage/savage_state.c
index 7559a820bd43..ebb8b7d32b33 100644
--- a/drivers/gpu/drm/savage/savage_state.c
+++ b/drivers/gpu/drm/savage/savage_state.c
@@ -299,6 +299,7 @@ static int savage_dispatch_dma_prim(drm_savage_private_t * 
dev_priv,
case SAVAGE_PRIM_TRILIST_201:
reorder = 1;
prim = SAVAGE_PRIM_TRILIST;
+   /* fall through */
case SAVAGE_PRIM_TRILIST:
if (n % 3 != 0) {
DRM_ERROR("wrong number of vertices %u in TRILIST\n",
@@ -436,6 +437,7 @@ static int savage_dispatch_vb_prim(drm_savage_private_t * 
dev_priv,
case SAVAGE_PRIM_TRILIST_201:
reorder = 1;
prim = SAVAGE_PRIM_TRILIST;
+   /* fall through */
case SAVAGE_PRIM_TRILIST:
if (n % 3 != 0) {
DRM_ERROR("wrong number of vertices %u in TRILIST\n",
@@ -557,6 +559,7 @@ static int savage_dispatch_dma_idx(drm_savage_private_t * 
dev_priv,
case SAVAGE_PRIM_TRILIST_201:
reorder = 1;
prim = SAVAGE_PRIM_TRILIST;
+   /* fall through */
case SAVAGE_PRIM_TRILIST:
if (n % 3 != 0) {
DRM_ERROR("wrong number of indices %u in TRILIST\n", n);
@@ -695,6 +698,7 @@ static int savage_dispatch_vb_idx(drm_savage_private_t * 
dev_priv,
case SAVAGE_PRIM_TRILIST_201:
reorder = 1;
prim = SAVAGE_PRIM_TRILIST;
+   /* fall through */
case SAVAGE_PRIM_TRILIST:
if (n % 3 != 0) {
DRM_ERROR("wrong number of indices %u in TRILIST\n", n);
-- 
2.20.1



Re: [RFC PATCH 3/5] mm/vma: add support for peer to peer to device vma

2019-01-30 Thread Logan Gunthorpe


On 2019-01-29 10:47 a.m., jgli...@redhat.com wrote:

> + /*
> +  * Optional for device driver that want to allow peer to peer (p2p)
> +  * mapping of their vma (which can be back by some device memory) to
> +  * another device.
> +  *
> +  * Note that the exporting device driver might not have map anything
> +  * inside the vma for the CPU but might still want to allow a peer
> +  * device to access the range of memory corresponding to a range in
> +  * that vma.
> +  *
> +  * FOR PREDICTABILITY IF DRIVER SUCCESSFULY MAP A RANGE ONCE FOR A
> +  * DEVICE THEN FURTHER MAPPING OF THE SAME IF THE VMA IS STILL VALID
> +  * SHOULD ALSO BE SUCCESSFUL. Following this rule allow the importing
> +  * device to map once during setup and report any failure at that time
> +  * to the userspace. Further mapping of the same range might happen
> +  * after mmu notifier invalidation over the range. The exporting device
> +  * can use this to move things around (defrag BAR space for instance)
> +  * or do other similar task.
> +  *
> +  * IMPORTER MUST OBEY mmu_notifier NOTIFICATION AND CALL p2p_unmap()
> +  * WHEN A NOTIFIER IS CALL FOR THE RANGE ! THIS CAN HAPPEN AT ANY
> +  * POINT IN TIME WITH NO LOCK HELD.
> +  *
> +  * In below function, the device argument is the importing device,
> +  * the exporting device is the device to which the vma belongs.
> +  */
> + long (*p2p_map)(struct vm_area_struct *vma,
> + struct device *device,
> + unsigned long start,
> + unsigned long end,
> + dma_addr_t *pa,
> + bool write);
> + long (*p2p_unmap)(struct vm_area_struct *vma,
> +   struct device *device,
> +   unsigned long start,
> +   unsigned long end,
> +   dma_addr_t *pa);

I don't understand why we need new p2p_[un]map function pointers for
this. In subsequent patches, they never appear to be set anywhere and
are only called by the HMM code. I'd have expected it to be called by
some core VMA code and set by HMM as that's what vm_operations_struct is
for.

But the code is all very confusing, hard to follow and seems to be
missing significant chunks. So I'm not really sure what is going on.

Logan


Re: [PATCH] drm/nouveau: fix missing break in switch statement

2019-01-30 Thread Gustavo A. R. Silva


On 10/8/18 3:47 PM, Colin King wrote:
> From: Colin Ian King 
> 
> The NOUVEAU_GETPARAM_PCI_DEVICE case is missing a break statement and falls
> through to the following NOUVEAU_GETPARAM_BUS_TYPE case and may end up
> re-assigning the getparam->value to an undesired value. Fix this by adding
> in the missing break.
> 
> Detected by CoverityScan, CID#1460507 ("Missing break in switch")
> 
> Fixes: 359088d5b8ec ("drm/nouveau: remove trivial cases of nvxx_device() 
> usage")
> Signed-off-by: Colin Ian King 

Reviewed-by: Gustavo A. R. Silva 

Friendly ping:

Who can take this?

Thanks
--
Gustavo

> ---
>  drivers/gpu/drm/nouveau/nouveau_abi16.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/drivers/gpu/drm/nouveau/nouveau_abi16.c 
> b/drivers/gpu/drm/nouveau/nouveau_abi16.c
> index e67a471331b5..6ec745873bc5 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_abi16.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_abi16.c
> @@ -214,6 +214,7 @@ nouveau_abi16_ioctl_getparam(ABI16_IOCTL_ARGS)
>   WARN_ON(1);
>   break;
>   }
> + break;
>   case NOUVEAU_GETPARAM_FB_SIZE:
>   getparam->value = drm->gem.vram_available;
>   break;
> 


Re: [RFC PATCH 3/5] mm/vma: add support for peer to peer to device vma

2019-01-30 Thread Logan Gunthorpe


On 2019-01-29 12:11 p.m., Jerome Glisse wrote:
> On Tue, Jan 29, 2019 at 11:36:29AM -0700, Logan Gunthorpe wrote:
>>
>>
>> On 2019-01-29 10:47 a.m., jgli...@redhat.com wrote:
>>
>>> +   /*
>>> +* Optional for device driver that want to allow peer to peer (p2p)
>>> +* mapping of their vma (which can be back by some device memory) to
>>> +* another device.
>>> +*
>>> +* Note that the exporting device driver might not have map anything
>>> +* inside the vma for the CPU but might still want to allow a peer
>>> +* device to access the range of memory corresponding to a range in
>>> +* that vma.
>>> +*
>>> +* FOR PREDICTABILITY IF DRIVER SUCCESSFULY MAP A RANGE ONCE FOR A
>>> +* DEVICE THEN FURTHER MAPPING OF THE SAME IF THE VMA IS STILL VALID
>>> +* SHOULD ALSO BE SUCCESSFUL. Following this rule allow the importing
>>> +* device to map once during setup and report any failure at that time
>>> +* to the userspace. Further mapping of the same range might happen
>>> +* after mmu notifier invalidation over the range. The exporting device
>>> +* can use this to move things around (defrag BAR space for instance)
>>> +* or do other similar task.
>>> +*
>>> +* IMPORTER MUST OBEY mmu_notifier NOTIFICATION AND CALL p2p_unmap()
>>> +* WHEN A NOTIFIER IS CALL FOR THE RANGE ! THIS CAN HAPPEN AT ANY
>>> +* POINT IN TIME WITH NO LOCK HELD.
>>> +*
>>> +* In below function, the device argument is the importing device,
>>> +* the exporting device is the device to which the vma belongs.
>>> +*/
>>> +   long (*p2p_map)(struct vm_area_struct *vma,
>>> +   struct device *device,
>>> +   unsigned long start,
>>> +   unsigned long end,
>>> +   dma_addr_t *pa,
>>> +   bool write);
>>> +   long (*p2p_unmap)(struct vm_area_struct *vma,
>>> + struct device *device,
>>> + unsigned long start,
>>> + unsigned long end,
>>> + dma_addr_t *pa);
>>
>> I don't understand why we need new p2p_[un]map function pointers for
>> this. In subsequent patches, they never appear to be set anywhere and
>> are only called by the HMM code. I'd have expected it to be called by
>> some core VMA code and set by HMM as that's what vm_operations_struct is
>> for.
>>
>> But the code is all very confusing, hard to follow and seems to be
>> missing significant chunks. So I'm not really sure what is going on.
> 
> It is set by the device driver when userspace does mmap(fd) where fd comes
> from open("/dev/somedevicefile"). So it is set by the device driver. HMM
> has nothing to do with this. It must be set by the device driver's mmap
> callback (the mmap callback of struct file_operations). For this patch
> you can completely ignore all the HMM patches. Maybe posting these as
> 2 separate patchsets would make it clearer.
> 
> For instance see [1] for how a non-HMM driver can export its memory
> by just setting those callbacks. Note that a proper implementation of
> this should also include some kind of driver policy on what to allow
> to map and what not to allow ... All this is driver specific in any
> case.

I'd suggest [1] should be a part of the patchset so we can actually see
a user of the stuff you're adding.

But it still doesn't explain everything: without the HMM code nothing
calls the new vm_ops. And there are still no callers for the p2p_test
functions you added. And I still don't understand why we need the new
vm_ops, or who calls them and when. Why can't drivers use the existing
'fault' vm_op and call a new helper function to map p2p when appropriate,
or a different helper function to map a large range in their mmap
operation? Just like regular mmap code...

Logan


Re: [PATCH 2/4] staging: android: ion: Restrict cache maintenance to dma mapped memory

2019-01-30 Thread Liam Mark
On Fri, 18 Jan 2019, Liam Mark wrote:

> On Fri, 18 Jan 2019, Andrew F. Davis wrote:
> 
> > On 1/18/19 12:37 PM, Liam Mark wrote:
> > > The ION begin_cpu_access and end_cpu_access functions use the
> > > dma_sync_sg_for_cpu and dma_sync_sg_for_device APIs to perform cache
> > > maintenance.
> > > 
> > > Currently it is possible to apply cache maintenance, via the
> > > begin_cpu_access and end_cpu_access APIs, to ION buffers which are not
> > > dma mapped.
> > > 
> > > The dma sync sg APIs should not be called on sg lists which have not been
> > > dma mapped as this can result in cache maintenance being applied to the
> > > wrong address. If an sg list has not been dma mapped then its dma_address
> > > field has not been populated, some dma ops such as the swiotlb_dma_ops ops
> > > use the dma_address field to calculate the address onto which to apply
> > > cache maintenance.
> > > 
> > > Also I don’t think we want CMOs to be applied to a buffer which is not
> > > dma mapped as the memory should already be coherent for access from the
> > > CPU. Any CMOs required for device access taken care of in the
> > > dma_buf_map_attachment and dma_buf_unmap_attachment calls.
> > > So really it only makes sense for begin_cpu_access and end_cpu_access to
> > > apply CMOs if the buffer is dma mapped.
> > > 
> > > Fix the ION begin_cpu_access and end_cpu_access functions to only apply
> > > cache maintenance to buffers which are dma mapped.
> > > 
> > > Fixes: 2a55e7b5e544 ("staging: android: ion: Call dma_map_sg for syncing 
> > > and mapping")
> > > Signed-off-by: Liam Mark 
> > > ---
> > >  drivers/staging/android/ion/ion.c | 26 +-
> > >  1 file changed, 21 insertions(+), 5 deletions(-)
> > > 
> > > diff --git a/drivers/staging/android/ion/ion.c 
> > > b/drivers/staging/android/ion/ion.c
> > > index 6f5afab7c1a1..1fe633a7fdba 100644
> > > --- a/drivers/staging/android/ion/ion.c
> > > +++ b/drivers/staging/android/ion/ion.c
> > > @@ -210,6 +210,7 @@ struct ion_dma_buf_attachment {
> > >   struct device *dev;
> > >   struct sg_table *table;
> > >   struct list_head list;
> > > + bool dma_mapped;
> > >  };
> > >  
> > >  static int ion_dma_buf_attach(struct dma_buf *dmabuf,
> > > @@ -231,6 +232,7 @@ static int ion_dma_buf_attach(struct dma_buf *dmabuf,
> > >  
> > >   a->table = table;
> > >   a->dev = attachment->dev;
> > > + a->dma_mapped = false;
> > >   INIT_LIST_HEAD(&a->list);
> > >  
> > >   attachment->priv = a;
> > > @@ -261,12 +263,18 @@ static struct sg_table *ion_map_dma_buf(struct 
> > > dma_buf_attachment *attachment,
> > >  {
> > >   struct ion_dma_buf_attachment *a = attachment->priv;
> > >   struct sg_table *table;
> > > + struct ion_buffer *buffer = attachment->dmabuf->priv;
> > >  
> > >   table = a->table;
> > >  
> > > + mutex_lock(&buffer->lock);
> > >   if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
> > > - direction))
> > > + direction)) {
> > > + mutex_unlock(&buffer->lock);
> > >   return ERR_PTR(-ENOMEM);
> > > + }
> > > + a->dma_mapped = true;
> > > + mutex_unlock(&buffer->lock);
> > >  
> > >   return table;
> > >  }
> > > @@ -275,7 +283,13 @@ static void ion_unmap_dma_buf(struct 
> > > dma_buf_attachment *attachment,
> > > struct sg_table *table,
> > > enum dma_data_direction direction)
> > >  {
> > > + struct ion_dma_buf_attachment *a = attachment->priv;
> > > + struct ion_buffer *buffer = attachment->dmabuf->priv;
> > > +
> > > + mutex_lock(&buffer->lock);
> > >   dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
> > > + a->dma_mapped = false;
> > > + mutex_unlock(&buffer->lock);
> > >  }
> > >  
> > >  static int ion_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
> > > @@ -346,8 +360,9 @@ static int ion_dma_buf_begin_cpu_access(struct 
> > > dma_buf *dmabuf,
> > >  
> > >   mutex_lock(&buffer->lock);
> > >   list_for_each_entry(a, &buffer->attachments, list) {
> > 
> > When no devices are attached then buffer->attachments is empty and the
> > below does not run, so if I understand this patch correctly then what
> > you are protecting against is CPU access in the window after
> > dma_buf_attach but before dma_buf_map.
> > 
> 
> Yes
> 
> > This is the kind of thing that again makes me think a couple more
> > ordering requirements on DMA-BUF ops are needed. DMA-BUFs do not require
> > the backing memory to be allocated until map time, this is why the
> > dma_address field would still be null as you note in the commit message.
> > So why should the CPU be performing accesses on a buffer that is not
> > actually backed yet?
> > 
> > I can think of two solutions:
> > 
> > 1) Only allow CPU access (mmap, kmap, {begin,end}_cpu_access) while at
> > least one device is mapped.
> > 
> 
> Would be quite limiting to clients.
> 
> > 2) Treat the CPU access request like the a device map request and
> > trigger the allocation of backing memory ju

Re: [RFC PATCH 3/5] mm/vma: add support for peer to peer to device vma

2019-01-30 Thread Logan Gunthorpe


On 2019-01-29 12:32 p.m., Jason Gunthorpe wrote:
> Jerome, I think it would be nice to have a helper scheme - I think the
> simple case would be simple remapping of PCI BAR memory, so if we
> could have, say something like:
> 
> static const struct vm_operations_struct my_ops {
>   .p2p_map = p2p_ioremap_map_op,
>   .p2p_unmap = p2p_ioremap_unmap_op,
> }
> 
> struct ioremap_data {
>   [..]
> }
> 
> fops_mmap() {
>vma->private_data = &driver_priv->ioremap_data;
>return p2p_ioremap_device_memory(vma, exporting_device, [..]);
> }

This is roughly what I was expecting, except I don't see exactly what
the p2p_map and p2p_unmap callbacks are for. The importing driver should
see p2pdma/hmm struct pages and use the appropriate function to map
them. It shouldn't be the responsibility of the exporting driver to
implement the mapping. And I don't think we should have 'special' vma's
for this (though we may need something to ensure we don't get mapping
requests mixed with different types of pages...).

I also figured there'd be a fault version of p2p_ioremap_device_memory()
for when you are mapping P2P memory and you want to assign the pages
lazily. Though, this can come later when someone wants to implement that.

Logan


Re: [RFC PATCH 1/5] pci/p2p: add a function to test peer to peer capability

2019-01-30 Thread Logan Gunthorpe


On 2019-01-29 10:47 a.m., jgli...@redhat.com wrote:
> +bool pci_test_p2p(struct device *devA, struct device *devB)
> +{
> + struct pci_dev *pciA, *pciB;
> + bool ret;
> + int tmp;
> +
> + /*
> +  * For now we only support PCIE peer to peer but other inter-connect
> +  * can be added.
> +  */
> + pciA = find_parent_pci_dev(devA);
> + pciB = find_parent_pci_dev(devB);
> + if (pciA == NULL || pciB == NULL) {
> + ret = false;
> + goto out;
> + }
> +
> + tmp = upstream_bridge_distance(pciA, pciB, NULL);
> + ret = tmp < 0 ? false : true;
> +
> +out:
> + pci_dev_put(pciB);
> + pci_dev_put(pciA);
> + return false;
> +}
> +EXPORT_SYMBOL_GPL(pci_test_p2p);

This function only ever returns false

Logan


Re: [RFC PATCH 1/5] pci/p2p: add a function to test peer to peer capability

2019-01-30 Thread Logan Gunthorpe


On 2019-01-29 12:56 p.m., Alex Deucher wrote:
> On Tue, Jan 29, 2019 at 12:47 PM  wrote:
>>
>> From: Jérôme Glisse 
>>
>> device_test_p2p() returns true if two devices can peer-to-peer to
>> each other. We add a generic function as different inter-connects
>> can support peer-to-peer and we want to generically test this no
>> matter what the inter-connect might be. However this version only
>> supports PCIe for now.
>>
> 
> What about something like these patches:
> https://cgit.freedesktop.org/~deathsimple/linux/commit/?h=p2p&id=4fab9ff69cb968183f717551441b475fabce6c1c
> https://cgit.freedesktop.org/~deathsimple/linux/commit/?h=p2p&id=f90b12d41c277335d08c9dab62433f27c0fadbe5
> They are a bit more thorough.

Those new functions seem to have a lot of overlap with the code that is
already upstream in p2pdma. Perhaps you should be improving the
p2pdma functions if they aren't suitable for what you want already,
instead of creating new ones.

Logan


[PATCH v2 2/2] drm/vkms: Modify memset() in compute_crc function

2019-01-30 Thread Mamta Shukla
Replace memset(vaddr_out + src_offset + 24, 0,  8) with
memset(vaddr_out + src_offset + 3, 0, 1) because memset fills
memory in bytes and not in bits.

Signed-off-by: Mamta Shukla 
---
No changes in v2.

 drivers/gpu/drm/vkms/vkms_crc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/vkms/vkms_crc.c b/drivers/gpu/drm/vkms/vkms_crc.c
index dc6cb4c2cced..5135642fb204 100644
--- a/drivers/gpu/drm/vkms/vkms_crc.c
+++ b/drivers/gpu/drm/vkms/vkms_crc.c
@@ -31,7 +31,7 @@ static uint32_t compute_crc(void *vaddr_out, struct 
vkms_crc_data *crc_out)
 + (i * crc_out->pitch)
 + (j * crc_out->cpp);
/* XRGB format ignores Alpha channel */
-   memset(vaddr_out + src_offset + 24, 0,  8);
+   memset(vaddr_out + src_offset + 3, 0, 1);
crc = crc32_le(crc, vaddr_out + src_offset,
   sizeof(u32));
}
-- 
2.17.1



Re: [RFC PATCH 3/5] mm/vma: add support for peer to peer to device vma

2019-01-30 Thread Jason Gunthorpe
On Tue, Jan 29, 2019 at 01:39:49PM -0700, Logan Gunthorpe wrote:

> implement the mapping. And I don't think we should have 'special' vma's
> for this (though we may need something to ensure we don't get mapping
> requests mixed with different types of pages...).

I think Jerome explained the point here is to have a 'special vma'
rather than a 'special struct page' as, really, we don't need a
struct page at all to make this work.

If I recall your earlier attempts at adding struct page for BAR
memory, it ran aground on issues related to O_DIRECT/sgls, etc, etc.

This does seem to avoid that pitfall entirely as we can never
accidently get into the SGL system with this kind of memory or VMA?

Jason


Re: [Nouveau] [PATCH] drm/nouveau: mark expected switch fall-through

2019-01-30 Thread Gustavo A. R. Silva
Ben,

I wonder if you can take this too:

https://lore.kernel.org/patchwork/patch/1001180/

Thanks
--
Gustavo

On 1/29/19 8:51 PM, Ben Skeggs wrote:
> Got it, thanks.
> 
> On Wed, 30 Jan 2019 at 06:55, Gustavo A. R. Silva
>  wrote:
>>
>> In preparation to enabling -Wimplicit-fallthrough, mark switch cases
>> where we are expecting to fall through.
>>
>> This patch fixes the following warning:
>>
>> drivers/gpu/drm/nouveau/nouveau_bo.c:1434:53: warning: this statement may 
>> fall through [-Wimplicit-fallthrough=]
>>
>> Warning level 3 was used: -Wimplicit-fallthrough=3
>>
>> This patch is part of the ongoing efforts to enabling
>> -Wimplicit-fallthrough.
>>
>> Signed-off-by: Gustavo A. R. Silva 
>> ---
>>  drivers/gpu/drm/nouveau/nouveau_bo.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c 
>> b/drivers/gpu/drm/nouveau/nouveau_bo.c
>> index 73eff52036d2..a72be71c45b4 100644
>> --- a/drivers/gpu/drm/nouveau/nouveau_bo.c
>> +++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
>> @@ -1434,7 +1434,7 @@ nouveau_ttm_io_mem_reserve(struct ttm_bo_device *bdev, 
>> struct ttm_mem_reg *reg)
>> if (drm->client.mem->oclass < NVIF_CLASS_MEM_NV50 || 
>> !mem->kind)
>> /* untiled */
>> break;
>> -   /* fallthrough, tiled memory */
>> +   /* fall through - tiled memory */
>> case TTM_PL_VRAM:
>> reg->bus.offset = reg->start << PAGE_SHIFT;
>> reg->bus.base = device->func->resource_addr(device, 1);
>> --
>> 2.20.1
>>
>> ___
>> Nouveau mailing list
>> nouv...@lists.freedesktop.org
>> https://lists.freedesktop.org/mailman/listinfo/nouveau


Re: [RFC PATCH 3/5] mm/vma: add support for peer to peer to device vma

2019-01-30 Thread Logan Gunthorpe


On 2019-01-29 2:50 p.m., Jerome Glisse wrote:
> No this is the non HMM case i am talking about here. Fully ignore HMM
> in this frame. A GPU driver that do not support or use HMM in anyway
> has all the properties and requirement i do list above. So all the points
> i was making are without HMM in the picture whatsoever. I should have
> posted this a separate patches to avoid this confusion.
> 
> Regarding your HMM question. You can not map HMM pages, all code path
> that would try that would trigger a migration back to regular memory
> and will use the regular memory for CPU access.
> 

I thought this was the whole point of HMM... And eventually it would
support being able to map the pages through the BAR in cooperation with
the driver. If not, what's that whole layer for? Why not just have HMM
handle this situation?

And what struct pages are actually going to be backing these VMAs if
it's not using HMM?


> Again HMM has nothing to do here, ignore HMM it does not play any role
> and it is not involve in anyway here. GPU want to control what object
> they allow other device to access and object they do not allow. GPU driver
> _constantly_ invalidate the CPU page table and in fact the CPU page table
> do not have any valid pte for a vma that is an mmap of GPU device file
> for most of the vma lifetime. Changing that would highly disrupt and
> break GPU drivers. They need to control that, they need to control what
> to do if another device tries to peer map some of their memory. Hence
> why they need to implement the callback and decide on wether or not they
> allow the peer mapping or use device memory for it (they can decide to
> fallback to main memory).

But mapping is an operation of the memory/struct pages behind the VMA;
not of the VMA itself and I think that's evident by the code in that the
only way the VMA layer is involved is the fact that you're abusing
vm_ops by adding new ops there and calling it by other layers.

Logan


[PATCH v2,1/2] drm/vkms: Use alpha for blending in blend() function

2019-01-30 Thread Mamta Shukla
Use the alpha value to blend vaddr_src with vaddr_dst instead
of overwriting it in blend().

Signed-off-by: Mamta Shukla 
---
changes in v2:
-Use macro to avoid code duplication
-Add spaces around '/' and '-'
-Remove spaces at the end of the line

 drivers/gpu/drm/vkms/vkms_crc.c | 25 +++--
 1 file changed, 23 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/vkms/vkms_crc.c b/drivers/gpu/drm/vkms/vkms_crc.c
index 9d9e8146db90..dc6cb4c2cced 100644
--- a/drivers/gpu/drm/vkms/vkms_crc.c
+++ b/drivers/gpu/drm/vkms/vkms_crc.c
@@ -5,6 +5,8 @@
 #include 
 #include 
 
+#define BITSHIFT(val,i)  (u8)((*(u32 *)(val)) >> i & 0xff)
+
 /**
  * compute_crc - Compute CRC value on output frame
  *
@@ -71,6 +73,9 @@ static void blend(void *vaddr_dst, void *vaddr_src,
int y_limit = y_src + h_dst;
int x_limit = x_src + w_dst;
 
+   u8 alpha, r_src, r_dst, g_src, g_dst, b_src, b_dst;
+   u8 r_alpha, g_alpha, b_alpha;
+
for (i = y_src, i_dst = y_dst; i < y_limit; ++i) {
for (j = x_src, j_dst = x_dst; j < x_limit; ++j) {
offset_dst = crc_dst->offset
@@ -79,9 +84,25 @@ static void blend(void *vaddr_dst, void *vaddr_src,
offset_src = crc_src->offset
 + (i * crc_src->pitch)
 + (j * crc_src->cpp);
+   
+   /*Currently handles alpha values for fully opaque or 
fully transparent*/
+   alpha = (u8)((*(u32 *)vaddr_src + offset_src) >> 24);
+   alpha = alpha / 255;
+   r_src = BITSHIFT(vaddr_src + offset_src, 16);
+   g_src = BITSHIFT(vaddr_src + offset_src, 8);
+   b_src = BITSHIFT(vaddr_src + offset_src, 0);
+   r_dst = BITSHIFT(vaddr_dst + offset_dst, 16);
+   g_dst = BITSHIFT(vaddr_dst + offset_dst, 8);
+   b_dst = BITSHIFT(vaddr_dst + offset_dst, 0);
+
+   /*Pre-multiplied alpha for blending */
+   r_alpha = (r_src) + (r_dst * (1 - alpha));
+   g_alpha = (g_src) + (g_dst * (1 - alpha));
+   b_alpha = (b_src) + (b_dst * (1 - alpha));
+   memset(vaddr_dst + offset_dst, b_alpha, sizeof(u8));
+   memset(vaddr_dst + offset_dst + 1, g_alpha, sizeof(u8));
+   memset(vaddr_dst + offset_dst + 2, r_alpha, sizeof(u8));
 
-   memcpy(vaddr_dst + offset_dst,
-  vaddr_src + offset_src, sizeof(u32));
}
i_dst++;
}
-- 
2.17.1



Re: [PATCH 2/2] drm/vmwgfx: Use ttm_dma_page_alloc_enabled

2019-01-30 Thread Thomas Hellstrom

Hi, Sam,

On 01/25/2019 07:22 PM, Sam Ravnborg wrote:

Hi Thomas.

One little nit, and an improvement proposal (for another patch/day).

On Fri, Jan 25, 2019 at 02:05:48PM +0100, Thomas Hellstrom wrote:

Instead of guessing whether TTM has the dma page allocator enabled,
ask TTM.

Signed-off-by: Thomas Hellstrom 
---
  drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 4 ++--
  1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c 
b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
index 77f422cd18ab..125a2b423847 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
@@ -35,6 +35,7 @@
  #include 
  #include 
  #include 
+#include 
  
  #define VMWGFX_DRIVER_DESC "Linux drm driver for VMware graphics devices"

  #define VMWGFX_CHIP_SVGAII 0
@@ -594,8 +595,7 @@ static int vmw_dma_select_mode(struct vmw_private *dev_priv)
if (dev_priv->map_mode == vmw_dma_map_populate && vmw_restrict_iommu)
dev_priv->map_mode = vmw_dma_map_bind;
  
-	/* No TTM coherent page pool? FIXME: Ask TTM instead! */

-if (!(IS_ENABLED(CONFIG_SWIOTLB) || IS_ENABLED(CONFIG_INTEL_IOMMU)) &&
+if (!ttm_dma_page_alloc_enabled() &&

The line uses spaces for indent, tabs?


Ugh, I thought I ran this through checkpatch. Anyway that will get 
corrected. Thanks.






(dev_priv->map_mode == vmw_dma_alloc_coherent))

The code could benefit from a:
static bool is_map_coherent(const struct vmw_private *dev_priv)
{
return dev_priv->map_mode == vmw_dma_alloc_coherent;
}

Then the above would become:

if (!ttm_dma_page_alloc_enabled() && is_map_coherent(dev_priv))

And the other places that test for vmw_dma_alloc_coherent
would be a bit simpler too.
Same goes for the other map types obviously.
Sure. I'll add that to the backlog for consideration, although if we get 
to it we'll use other names because all map modes are presumed to be 
coherent...


Thanks,
/Thomas



Re: [PATCH 1/2] drm/imx: imx-tve: depend on COMMON_CLK

2019-01-30 Thread Philipp Zabel
Hi Linus,

On Wed, 2019-01-30 at 09:16 +0100, Linus Walleij wrote:
> On Thu, Jan 24, 2019 at 1:51 PM Philipp Zabel  wrote:
> 
> > Since the TVE provides a clock to the DI, the driver can only be
> > compiled if the common clock framework is enabled. With the COMMON_CLK
> > dependency in place, it will be possible to allow building the other
> > parts of imx-drm under COMPILE_TEST on architectures that do not select
> > the common clock framework.
> > 
> > Signed-off-by: Philipp Zabel 
> 
> Since the clock framework has stubs I bet it can be *compiled* without
> COMMON_CLK. But it will certainly not work so if you reword the commit:
> Reviewed-by: Linus Walleij 

thank you for the review. It indeed does not compile because struct
imx_tve embeds a struct clk_hw, which is defined in linux/clk-provider.h 
only if COMMON_CLK is enabled.

regards
Philipp


Re: [PATCH] drm/doc: Task to rename CMA helpers

2019-01-30 Thread Laurent Pinchart
Hi Daniel,

On Tue, Jan 29, 2019 at 02:21:53PM +0100, Daniel Vetter wrote:
> I'm kinda fed up explaining why they have a confusing name :-)

Fully agreed, it's confusing and should definitely be fixed.

Acked-by: Laurent Pinchart 

> Cc: Noralf Trønnes 
> Cc: Laurent Pinchart 
> Signed-off-by: Daniel Vetter 
> ---
>  Documentation/gpu/todo.rst | 10 ++
>  1 file changed, 10 insertions(+)
> 
> diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> index 38360ede1221..17f1cde6adab 100644
> --- a/Documentation/gpu/todo.rst
> +++ b/Documentation/gpu/todo.rst
> @@ -262,6 +262,16 @@ As a reference, take a look at the conversions already 
> completed in drm core.
>  
>  Contact: Sean Paul, respective driver maintainers
>  
> +Rename CMA helpers to DMA helpers
> +-
> +
> +CMA (standing for contiguous memory allocator) is really a bit an accident of
> +what these were used for first, a much better name would be DMA helpers. In 
> the
> +text these should even be called coherent DMA memory helpers (so maybd CDM, 
> but
> +no one knows what that means) since underneath they just use 
> dma_alloc_coherent.
> +
> +Contact: Laurent Pinchart, Daniel Vetter
> +
>  Core refactorings
>  =
>  

-- 
Regards,

Laurent Pinchart


Re: [PATCH 2/2] phy: Add driver for mixel dphy

2019-01-30 Thread Guido Günther
Hi,
On Mon, Jan 28, 2019 at 01:10:45PM -0200, Fabio Estevam wrote:
> Hi Guido,
> 
> On Fri, Jan 25, 2019 at 8:15 AM Guido Günther  wrote:
> 
> > +config PHY_MIXEL_MIPI_DPHY
> > +   bool
> > +   depends on OF
> > +   select GENERIC_PHY
> > +   select GENERIC_PHY_MIPI_DPHY
> > +   default ARCH_MXC && ARM64
> 
> No need to force this selection for all i.MX8M boards, as not everyone
> is interested in using Mixel DSI.
> 
> > --- /dev/null
> > +++ b/drivers/phy/phy-mixel-mipi-dphy.c
> > @@ -0,0 +1,449 @@
> > +/*
> > + * Copyright 2017 NXP
> > + * Copyright 2019 Purism SPC
> > + *
> > + * SPDX-License-Identifier: GPL-2.0
> 
> SPDX line should be in the first line and start with //
> 
> > + */
> > +
> > +/* #define DEBUG 1 */
> 
> Please remove it.
> 
> > +/* DPHY registers */
> > +#define DPHY_PD_DPHY   0x00
> > +#define DPHY_M_PRG_HS_PREPARE  0x04
> > +#define DPHY_MC_PRG_HS_PREPARE 0x08
> > +#define DPHY_M_PRG_HS_ZERO 0x0c
> > +#define DPHY_MC_PRG_HS_ZERO0x10
> > +#define DPHY_M_PRG_HS_TRAIL0x14
> > +#define DPHY_MC_PRG_HS_TRAIL   0x18
> > +#define DPHY_PD_PLL0x1c
> > +#define DPHY_TST   0x20
> > +#define DPHY_CN0x24
> > +#define DPHY_CM0x28
> > +#define DPHY_CO0x2c
> > +#define DPHY_LOCK  0x30
> > +#define DPHY_LOCK_BYP  0x34
> > +#define DPHY_TX_RCAL   0x38
> > +#define DPHY_AUTO_PD_EN0x3c
> > +#define DPHY_RXLPRP0x40
> > +#define DPHY_RXCDRP0x44
> 
> In the NXP vendor tree we have these additional offsets for imx8m that
> are missing here:
> 
> .reg_rxhs_settle = 0x48,
> .reg_bypass_pll = 0x4c,
> 
> > +#define MBPS(x) ((x) * 100)
> > +
> > +#define DATA_RATE_MAX_SPEED MBPS(1500)
> > +#define DATA_RATE_MIN_SPEED MBPS(80)
> > +
> > +#define CN_BUF 0xcb7a89c0
> > +#define CO_BUF 0x63
> > +#define CM(x)  (   \
> > +   ((x) <  32)?0xe0|((x)-16) : \
> > +   ((x) <  64)?0xc0|((x)-32) : \
> > +   ((x) < 128)?0x80|((x)-64) : \
> > +   ((x) - 128))
> > +#define CN(x)  (((x) == 1)?0x1f : (((CN_BUF)>>((x)-1))&0x1f))
> > +#define CO(x)  ((CO_BUF)>>(8-(x))&0x3)
> > +
> > +static inline u32 phy_read(struct phy *phy, unsigned int reg)
> > +{
> > +   struct mixel_dphy_priv *priv = phy_get_drvdata(phy);
> > +
> > +   return readl(priv->regs + reg);
> 
> Maybe it's worth using regmap here. It makes it very easy to dump all
> the MIXEL registers at once.
> 
> > +static int mixel_dphy_config_from_opts(struct phy *phy,
> > +  struct phy_configure_opts_mipi_dphy *dphy_opts,
> > +  struct mixel_dphy_cfg *cfg)
> > +{
> > +   struct mixel_dphy_priv *priv = dev_get_drvdata(phy->dev.parent);
> > +   unsigned long ref_clk = clk_get_rate(priv->phy_ref_clk);
> > +   int i;
> > +   unsigned long numerator, denominator, frequency;
> > +   unsigned step;
> > +
> > +   if (dphy_opts->hs_clk_rate > DATA_RATE_MAX_SPEED ||
> > +   dphy_opts->hs_clk_rate < DATA_RATE_MIN_SPEED)
> > +   return -EINVAL;
> > +   cfg->hs_clk_rate = dphy_opts->hs_clk_rate;
> > +
> > +   numerator = dphy_opts->hs_clk_rate;
> > +   denominator = ref_clk;
> > +   get_best_ratio(&numerator, &denominator, 255, 256);
> > +   if (!numerator || !denominator) {
> > +   dev_dbg(&phy->dev, "Invalid %ld/%ld for %ld/%ld\n",
> 
> dev_err should be more appropriate here.
> 
> > +static int mixel_dphy_ref_power_on(struct phy *phy)
> > +{
> > +   struct mixel_dphy_priv *priv = phy_get_drvdata(phy);
> > +   u32 lock, timeout;
> > +   int ret = 0;
> > +
> > +   mutex_lock(&priv->lock);
> > +   clk_prepare_enable(priv->phy_ref_clk);
> > +
> > +   phy_write(phy, PWR_ON, DPHY_PD_DPHY);
> > +   phy_write(phy, PWR_ON, DPHY_PD_PLL);
> > +
> > +   timeout = 100;
> > +   while (!(lock = phy_read(phy, DPHY_LOCK))) {
> > +   udelay(10);
> > +   if (--timeout == 0) {
> > +   dev_err(&phy->dev, "Could not get DPHY lock!\n");
> > +   mutex_unlock(&priv->lock);
> > +   return -EINVAL;
> 
> Could you use readl_poll_timeout() instead?
> 
> > +static int mixel_dphy_ref_power_off(struct phy *phy)
> > +{
> > +   struct mixel_dphy_priv *priv = phy_get_drvdata(phy);
> > +   int ret = 0;
> 
> This variable is not needed.
> 
> > +
> > +   mutex_lock(&priv->lock);
> > +
> > +   phy_write(phy, PWR_OFF, DPHY_PD_PLL);
> > +   phy_write(phy, PWR_OFF, DPHY_PD_DPHY);
> > +
> > +   clk_disable_unprepare(priv->phy_ref_clk);
> > +   mutex_unlock(&priv->lock);
> > +
> > +   return ret;
> 
> and you could simply do a 'return 0' instea

[PATCH v2 1/6] drm/virtio: move virtio_gpu_object_{attach, detach} calls.

2019-01-30 Thread Gerd Hoffmann
Drop the dummy ttm backend implementation, add a real one for
TTM_PL_FLAG_TT objects.  The bind/unbind callbacks will call
virtio_gpu_object_{attach,detach}, to update the object state
on the host side, instead of invoking those calls from the
move_notify() callback.

With that in place the move and move_notify callbacks are not
needed any more, so drop them.

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/virtio/virtgpu_ttm.c | 92 ++--
 1 file changed, 24 insertions(+), 68 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_ttm.c 
b/drivers/gpu/drm/virtio/virtgpu_ttm.c
index 4bfbf25fab..77407976c7 100644
--- a/drivers/gpu/drm/virtio/virtgpu_ttm.c
+++ b/drivers/gpu/drm/virtio/virtgpu_ttm.c
@@ -194,42 +194,45 @@ static void virtio_gpu_ttm_io_mem_free(struct 
ttm_bo_device *bdev,
  */
 struct virtio_gpu_ttm_tt {
struct ttm_dma_tt   ttm;
-   struct virtio_gpu_device*vgdev;
-   u64 offset;
+   struct virtio_gpu_object*obj;
 };
 
-static int virtio_gpu_ttm_backend_bind(struct ttm_tt *ttm,
-  struct ttm_mem_reg *bo_mem)
+static int virtio_gpu_ttm_tt_bind(struct ttm_tt *ttm,
+ struct ttm_mem_reg *bo_mem)
 {
-   struct virtio_gpu_ttm_tt *gtt = (void *)ttm;
+   struct virtio_gpu_ttm_tt *gtt =
+   container_of(ttm, struct virtio_gpu_ttm_tt, ttm.ttm);
+   struct virtio_gpu_device *vgdev =
+   virtio_gpu_get_vgdev(gtt->obj->tbo.bdev);
 
-   gtt->offset = (unsigned long)(bo_mem->start << PAGE_SHIFT);
-   if (!ttm->num_pages)
-   WARN(1, "nothing to bind %lu pages for mreg %p back %p!\n",
-ttm->num_pages, bo_mem, ttm);
-
-   /* Not implemented */
+   virtio_gpu_object_attach(vgdev, gtt->obj, NULL);
return 0;
 }
 
-static int virtio_gpu_ttm_backend_unbind(struct ttm_tt *ttm)
+static int virtio_gpu_ttm_tt_unbind(struct ttm_tt *ttm)
 {
-   /* Not implemented */
+   struct virtio_gpu_ttm_tt *gtt =
+   container_of(ttm, struct virtio_gpu_ttm_tt, ttm.ttm);
+   struct virtio_gpu_device *vgdev =
+   virtio_gpu_get_vgdev(gtt->obj->tbo.bdev);
+
+   virtio_gpu_object_detach(vgdev, gtt->obj);
return 0;
 }
 
-static void virtio_gpu_ttm_backend_destroy(struct ttm_tt *ttm)
+static void virtio_gpu_ttm_tt_destroy(struct ttm_tt *ttm)
 {
-   struct virtio_gpu_ttm_tt *gtt = (void *)ttm;
+   struct virtio_gpu_ttm_tt *gtt =
+   container_of(ttm, struct virtio_gpu_ttm_tt, ttm.ttm);
 
	ttm_dma_tt_fini(&gtt->ttm);
kfree(gtt);
 }
 
-static struct ttm_backend_func virtio_gpu_backend_func = {
-   .bind = &virtio_gpu_ttm_backend_bind,
-   .unbind = &virtio_gpu_ttm_backend_unbind,
-   .destroy = &virtio_gpu_ttm_backend_destroy,
+static struct ttm_backend_func virtio_gpu_tt_func = {
+   .bind = &virtio_gpu_ttm_tt_bind,
+   .unbind = &virtio_gpu_ttm_tt_unbind,
+   .destroy = &virtio_gpu_ttm_tt_destroy,
 };
 
 static struct ttm_tt *virtio_gpu_ttm_tt_create(struct ttm_buffer_object *bo,
@@ -242,8 +245,8 @@ static struct ttm_tt *virtio_gpu_ttm_tt_create(struct 
ttm_buffer_object *bo,
gtt = kzalloc(sizeof(struct virtio_gpu_ttm_tt), GFP_KERNEL);
if (gtt == NULL)
return NULL;
-   gtt->ttm.ttm.func = &virtio_gpu_backend_func;
-   gtt->vgdev = vgdev;
+   gtt->ttm.ttm.func = &virtio_gpu_tt_func;
+   gtt->obj = container_of(bo, struct virtio_gpu_object, tbo);
	if (ttm_dma_tt_init(&gtt->ttm, bo, page_flags)) {
kfree(gtt);
return NULL;
@@ -251,51 +254,6 @@ static struct ttm_tt *virtio_gpu_ttm_tt_create(struct 
ttm_buffer_object *bo,
	return &gtt->ttm.ttm;
 }
 
-static void virtio_gpu_move_null(struct ttm_buffer_object *bo,
-struct ttm_mem_reg *new_mem)
-{
-   struct ttm_mem_reg *old_mem = &bo->mem;
-
-   BUG_ON(old_mem->mm_node != NULL);
-   *old_mem = *new_mem;
-   new_mem->mm_node = NULL;
-}
-
-static int virtio_gpu_bo_move(struct ttm_buffer_object *bo, bool evict,
- struct ttm_operation_ctx *ctx,
- struct ttm_mem_reg *new_mem)
-{
-   int ret;
-
-   ret = ttm_bo_wait(bo, ctx->interruptible, ctx->no_wait_gpu);
-   if (ret)
-   return ret;
-
-   virtio_gpu_move_null(bo, new_mem);
-   return 0;
-}
-
-static void virtio_gpu_bo_move_notify(struct ttm_buffer_object *tbo,
- bool evict,
- struct ttm_mem_reg *new_mem)
-{
-   struct virtio_gpu_object *bo;
-   struct virtio_gpu_device *vgdev;
-
-   bo = container_of(tbo, struct virtio_gpu_object, tbo);
-   vgdev = (struct virtio_gpu_device *)bo->gem_base.dev->dev_private;
-
-   if (!new_mem || (new_mem->placement & TTM_PL_FLAG_SYSTEM))

[PATCH v2 2/6] drm/virtio: use struct to pass params to virtio_gpu_object_create()

2019-01-30 Thread Gerd Hoffmann
Create virtio_gpu_object_params, use that to pass object parameters to
virtio_gpu_object_create.  This is just the first step, followup patches
will add more parameters to the struct.  The plan is to use the struct
for all object parameters.

Also drop the unused "kernel" parameter of virtio_gpu_alloc_object(); it
is always false.

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/virtio/virtgpu_drv.h| 15 ++-
 drivers/gpu/drm/virtio/virtgpu_gem.c| 17 ++---
 drivers/gpu/drm/virtio/virtgpu_ioctl.c  | 11 ++-
 drivers/gpu/drm/virtio/virtgpu_object.c | 22 +-
 4 files changed, 35 insertions(+), 30 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h 
b/drivers/gpu/drm/virtio/virtgpu_drv.h
index d577cb76f5..40928980a2 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -50,6 +50,11 @@
 #define DRIVER_MINOR 1
 #define DRIVER_PATCHLEVEL 0
 
+struct virtio_gpu_object_params {
+   unsigned long size;
+   bool pinned;
+};
+
 struct virtio_gpu_object {
struct drm_gem_object gem_base;
uint32_t hw_res_handle;
@@ -217,16 +222,16 @@ int virtio_gpu_gem_init(struct virtio_gpu_device *vgdev);
 void virtio_gpu_gem_fini(struct virtio_gpu_device *vgdev);
 int virtio_gpu_gem_create(struct drm_file *file,
  struct drm_device *dev,
- uint64_t size,
+ struct virtio_gpu_object_params *params,
  struct drm_gem_object **obj_p,
  uint32_t *handle_p);
 int virtio_gpu_gem_object_open(struct drm_gem_object *obj,
   struct drm_file *file);
 void virtio_gpu_gem_object_close(struct drm_gem_object *obj,
 struct drm_file *file);
-struct virtio_gpu_object *virtio_gpu_alloc_object(struct drm_device *dev,
- size_t size, bool kernel,
- bool pinned);
+struct virtio_gpu_object*
+virtio_gpu_alloc_object(struct drm_device *dev,
+   struct virtio_gpu_object_params *params);
 int virtio_gpu_mode_dumb_create(struct drm_file *file_priv,
struct drm_device *dev,
struct drm_mode_create_dumb *args);
@@ -342,7 +347,7 @@ void virtio_gpu_fence_event_process(struct 
virtio_gpu_device *vdev,
 
 /* virtio_gpu_object */
 int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
-unsigned long size, bool kernel, bool pinned,
+struct virtio_gpu_object_params *params,
 struct virtio_gpu_object **bo_ptr);
 void virtio_gpu_object_kunmap(struct virtio_gpu_object *bo);
 int virtio_gpu_object_kmap(struct virtio_gpu_object *bo);
diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c 
b/drivers/gpu/drm/virtio/virtgpu_gem.c
index f065863939..b5f2d94ce5 100644
--- a/drivers/gpu/drm/virtio/virtgpu_gem.c
+++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
@@ -34,15 +34,15 @@ void virtio_gpu_gem_free_object(struct drm_gem_object 
*gem_obj)
virtio_gpu_object_unref(&obj);
 }
 
-struct virtio_gpu_object *virtio_gpu_alloc_object(struct drm_device *dev,
- size_t size, bool kernel,
- bool pinned)
+struct virtio_gpu_object*
+virtio_gpu_alloc_object(struct drm_device *dev,
+   struct virtio_gpu_object_params *params)
 {
struct virtio_gpu_device *vgdev = dev->dev_private;
struct virtio_gpu_object *obj;
int ret;
 
-   ret = virtio_gpu_object_create(vgdev, size, kernel, pinned, &obj);
+   ret = virtio_gpu_object_create(vgdev, params, &obj);
if (ret)
return ERR_PTR(ret);
 
@@ -51,7 +51,7 @@ struct virtio_gpu_object *virtio_gpu_alloc_object(struct 
drm_device *dev,
 
 int virtio_gpu_gem_create(struct drm_file *file,
  struct drm_device *dev,
- uint64_t size,
+ struct virtio_gpu_object_params *params,
  struct drm_gem_object **obj_p,
  uint32_t *handle_p)
 {
@@ -59,7 +59,7 @@ int virtio_gpu_gem_create(struct drm_file *file,
int ret;
u32 handle;
 
-   obj = virtio_gpu_alloc_object(dev, size, false, false);
+   obj = virtio_gpu_alloc_object(dev, params);
if (IS_ERR(obj))
return PTR_ERR(obj);
 
@@ -85,6 +85,7 @@ int virtio_gpu_mode_dumb_create(struct drm_file *file_priv,
struct virtio_gpu_device *vgdev = dev->dev_private;
struct drm_gem_object *gobj;
struct virtio_gpu_object *obj;
+   struct virtio_gpu_object_params params = { 0 };
int ret;
uint32_t pitch;
uint32_t format;
@@ -96,7 +97,9 @@ int virtio_gpu_mode_dumb_create(struc

[PATCH v2 0/6] drm/virtio: ttm improvements

2019-01-30 Thread Gerd Hoffmann
changes in v2:
 - some cleanup patches are reviewed and merged already, dropped them.
 - more verbose commit messages.

Gerd Hoffmann (6):
  drm/virtio: move virtio_gpu_object_{attach,detach} calls.
  drm/virtio: use struct to pass params to virtio_gpu_object_create()
  drm/virtio: params struct for virtio_gpu_cmd_create_resource()
  drm/virtio: params struct for virtio_gpu_cmd_create_resource_3d()
  drm/virtio: drop fencing in virtio_gpu_resource_create_ioctl
  drm/virtio: move virtio_gpu_cmd_create_resource call into
virtio_gpu_object_create

 drivers/gpu/drm/virtio/virtgpu_drv.h| 34 +---
 drivers/gpu/drm/virtio/virtgpu_gem.c| 35 +---
 drivers/gpu/drm/virtio/virtgpu_ioctl.c  | 97 +++--
 drivers/gpu/drm/virtio/virtgpu_object.c | 30 +-
 drivers/gpu/drm/virtio/virtgpu_ttm.c| 92 ---
 drivers/gpu/drm/virtio/virtgpu_vq.c | 30 ++
 6 files changed, 117 insertions(+), 201 deletions(-)

-- 
2.9.3



[PATCH v2 3/6] drm/virtio: params struct for virtio_gpu_cmd_create_resource()

2019-01-30 Thread Gerd Hoffmann
Add format, width and height fields to the virtio_gpu_object_params
struct.  With that in place we can use the parameter struct for
virtio_gpu_cmd_create_resource() calls too.

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/virtio/virtgpu_drv.h   |  7 ---
 drivers/gpu/drm/virtio/virtgpu_gem.c   |  8 
 drivers/gpu/drm/virtio/virtgpu_ioctl.c |  6 --
 drivers/gpu/drm/virtio/virtgpu_vq.c| 10 --
 4 files changed, 16 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h 
b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 40928980a2..a40215c10e 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -51,6 +51,9 @@
 #define DRIVER_PATCHLEVEL 0
 
 struct virtio_gpu_object_params {
+   uint32_t format;
+   uint32_t width;
+   uint32_t height;
unsigned long size;
bool pinned;
 };
@@ -248,9 +251,7 @@ int virtio_gpu_alloc_vbufs(struct virtio_gpu_device *vgdev);
 void virtio_gpu_free_vbufs(struct virtio_gpu_device *vgdev);
 void virtio_gpu_cmd_create_resource(struct virtio_gpu_device *vgdev,
struct virtio_gpu_object *bo,
-   uint32_t format,
-   uint32_t width,
-   uint32_t height);
+   struct virtio_gpu_object_params *params);
 void virtio_gpu_cmd_unref_resource(struct virtio_gpu_device *vgdev,
   uint32_t resource_id);
 void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev,
diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c 
b/drivers/gpu/drm/virtio/virtgpu_gem.c
index b5f2d94ce5..3a63ffcd4b 100644
--- a/drivers/gpu/drm/virtio/virtgpu_gem.c
+++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
@@ -88,7 +88,6 @@ int virtio_gpu_mode_dumb_create(struct drm_file *file_priv,
struct virtio_gpu_object_params params = { 0 };
int ret;
uint32_t pitch;
-   uint32_t format;
 
if (args->bpp != 32)
return -EINVAL;
@@ -98,16 +97,17 @@ int virtio_gpu_mode_dumb_create(struct drm_file *file_priv,
args->size = ALIGN(args->size, PAGE_SIZE);
 
params.pinned = false,
+   params.format = virtio_gpu_translate_format(DRM_FORMAT_HOST_XRGB);
+   params.width = args->width;
+   params.height = args->height;
params.size = args->size;
ret = virtio_gpu_gem_create(file_priv, dev, ¶ms, &gobj,
&args->handle);
if (ret)
goto fail;
 
-   format = virtio_gpu_translate_format(DRM_FORMAT_HOST_XRGB);
obj = gem_to_virtio_gpu_obj(gobj);
-   virtio_gpu_cmd_create_resource(vgdev, obj, format,
-  args->width, args->height);
+   virtio_gpu_cmd_create_resource(vgdev, obj, ¶ms);
 
/* attach the object to the resource */
ret = virtio_gpu_object_attach(vgdev, obj, NULL);
diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c 
b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
index fa7b958ca2..84c2216fd4 100644
--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
@@ -303,6 +303,9 @@ static int virtio_gpu_resource_create_ioctl(struct 
drm_device *dev, void *data,
memset(&mainbuf, 0, sizeof(struct ttm_validate_buffer));
 
params.pinned = false,
+   params.format = rc->format;
+   params.width = rc->width;
+   params.height = rc->height;
params.size = rc->size;
 
/* allocate a single page size object */
@@ -315,8 +318,7 @@ static int virtio_gpu_resource_create_ioctl(struct 
drm_device *dev, void *data,
obj = &qobj->gem_base;
 
if (!vgdev->has_virgl_3d) {
-   virtio_gpu_cmd_create_resource(vgdev, qobj, rc->format,
-  rc->width, rc->height);
+   virtio_gpu_cmd_create_resource(vgdev, qobj, ¶ms);
 
ret = virtio_gpu_object_attach(vgdev, qobj, NULL);
} else {
diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c 
b/drivers/gpu/drm/virtio/virtgpu_vq.c
index 6bc2008b0d..363b8b8577 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -376,9 +376,7 @@ static int virtio_gpu_queue_cursor(struct virtio_gpu_device 
*vgdev,
 /* create a basic resource */
 void virtio_gpu_cmd_create_resource(struct virtio_gpu_device *vgdev,
struct virtio_gpu_object *bo,
-   uint32_t format,
-   uint32_t width,
-   uint32_t height)
+   struct virtio_gpu_object_params *params)
 {
struct virtio_gpu_resource_create_2d *cmd_p;
struct virtio_gpu_vbuffer *vbuf;
@@ -388,9 +386,9 @@ void virtio_gpu_cmd_create_resource(struct 
virtio_gpu_device *vgdev,
 
cmd_p->hdr.type =

[PATCH v2 4/6] drm/virtio: params struct for virtio_gpu_cmd_create_resource_3d()

2019-01-30 Thread Gerd Hoffmann
Add 3d resource parameters to virtio_gpu_object_params struct.  With
that in place we can use it for virtio_gpu_cmd_resource_create_3d()
calls.

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/virtio/virtgpu_drv.h   | 10 +-
 drivers/gpu/drm/virtio/virtgpu_ioctl.c | 25 ++---
 drivers/gpu/drm/virtio/virtgpu_vq.c| 16 +---
 3 files changed, 32 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h 
b/drivers/gpu/drm/virtio/virtgpu_drv.h
index a40215c10e..3265e62725 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -56,6 +56,14 @@ struct virtio_gpu_object_params {
uint32_t height;
unsigned long size;
bool pinned;
+   /* 3d */
+   uint32_t target;
+   uint32_t bind;
+   uint32_t depth;
+   uint32_t array_size;
+   uint32_t last_level;
+   uint32_t nr_samples;
+   uint32_t flags;
 };
 
 struct virtio_gpu_object {
@@ -310,7 +318,7 @@ void virtio_gpu_cmd_transfer_to_host_3d(struct 
virtio_gpu_device *vgdev,
 void
 virtio_gpu_cmd_resource_create_3d(struct virtio_gpu_device *vgdev,
  struct virtio_gpu_object *bo,
- struct virtio_gpu_resource_create_3d *rc_3d);
+ struct virtio_gpu_object_params *params);
 void virtio_gpu_ctrl_ack(struct virtqueue *vq);
 void virtio_gpu_cursor_ack(struct virtqueue *vq);
 void virtio_gpu_fence_ack(struct virtqueue *vq);
diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c 
b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
index 84c2216fd4..431e5d767e 100644
--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
@@ -283,7 +283,6 @@ static int virtio_gpu_resource_create_ioctl(struct 
drm_device *dev, void *data,
struct ttm_validate_buffer mainbuf;
struct virtio_gpu_fence *fence = NULL;
struct ww_acquire_ctx ticket;
-   struct virtio_gpu_resource_create_3d rc_3d;
struct virtio_gpu_object_params params = { 0 };
 
if (vgdev->has_virgl_3d == false) {
@@ -307,7 +306,15 @@ static int virtio_gpu_resource_create_ioctl(struct 
drm_device *dev, void *data,
params.width = rc->width;
params.height = rc->height;
params.size = rc->size;
-
+   if (vgdev->has_virgl_3d) {
+   params.target = rc->target;
+   params.bind = rc->bind;
+   params.depth = rc->depth;
+   params.array_size = rc->array_size;
+   params.last_level = rc->last_level;
+   params.nr_samples = rc->nr_samples;
+   params.flags = rc->flags;
+   }
/* allocate a single page size object */
if (params.size == 0)
params.size = PAGE_SIZE;
@@ -333,25 +340,13 @@ static int virtio_gpu_resource_create_ioctl(struct 
drm_device *dev, void *data,
goto fail_unref;
}
 
-   rc_3d.resource_id = cpu_to_le32(qobj->hw_res_handle);
-   rc_3d.target = cpu_to_le32(rc->target);
-   rc_3d.format = cpu_to_le32(rc->format);
-   rc_3d.bind = cpu_to_le32(rc->bind);
-   rc_3d.width = cpu_to_le32(rc->width);
-   rc_3d.height = cpu_to_le32(rc->height);
-   rc_3d.depth = cpu_to_le32(rc->depth);
-   rc_3d.array_size = cpu_to_le32(rc->array_size);
-   rc_3d.last_level = cpu_to_le32(rc->last_level);
-   rc_3d.nr_samples = cpu_to_le32(rc->nr_samples);
-   rc_3d.flags = cpu_to_le32(rc->flags);
-
fence = virtio_gpu_fence_alloc(vgdev);
if (!fence) {
ret = -ENOMEM;
goto fail_backoff;
}
 
-   virtio_gpu_cmd_resource_create_3d(vgdev, qobj, &rc_3d);
+   virtio_gpu_cmd_resource_create_3d(vgdev, qobj, &params);
ret = virtio_gpu_object_attach(vgdev, qobj, fence);
if (ret) {
dma_fence_put(&fence->f);
diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c 
b/drivers/gpu/drm/virtio/virtgpu_vq.c
index 363b8b8577..ca93ec6ca3 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -826,7 +826,7 @@ void virtio_gpu_cmd_context_detach_resource(struct 
virtio_gpu_device *vgdev,
 void
 virtio_gpu_cmd_resource_create_3d(struct virtio_gpu_device *vgdev,
  struct virtio_gpu_object *bo,
- struct virtio_gpu_resource_create_3d *rc_3d)
+ struct virtio_gpu_object_params *params)
 {
struct virtio_gpu_resource_create_3d *cmd_p;
struct virtio_gpu_vbuffer *vbuf;
@@ -834,9 +834,19 @@ virtio_gpu_cmd_resource_create_3d(struct virtio_gpu_device 
*vgdev,
cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p));
memset(cmd_p, 0, sizeof(*

[PATCH v2 5/6] drm/virtio: drop fencing in virtio_gpu_resource_create_ioctl

2019-01-30 Thread Gerd Hoffmann
There is no need to wait for completion here.

The host will process commands in submit order, so commands can
reference the new resource just fine even when queued up before
completion.

On the guest side there is no need to wait for completion either.  Note
that this is different from resource destroy, where we have to make sure
the host has seen the destroy and thus no longer uses the resource
before releasing the pages backing it.

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/virtio/virtgpu_ioctl.c | 51 +-
 1 file changed, 1 insertion(+), 50 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c 
b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
index 431e5d767e..da06ebbb3a 100644
--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
@@ -279,10 +279,6 @@ static int virtio_gpu_resource_create_ioctl(struct 
drm_device *dev, void *data,
struct virtio_gpu_object *qobj;
struct drm_gem_object *obj;
uint32_t handle = 0;
-   struct list_head validate_list;
-   struct ttm_validate_buffer mainbuf;
-   struct virtio_gpu_fence *fence = NULL;
-   struct ww_acquire_ctx ticket;
struct virtio_gpu_object_params params = { 0 };
 
if (vgdev->has_virgl_3d == false) {
@@ -298,9 +294,6 @@ static int virtio_gpu_resource_create_ioctl(struct 
drm_device *dev, void *data,
return -EINVAL;
}
 
-   INIT_LIST_HEAD(&validate_list);
-   memset(&mainbuf, 0, sizeof(struct ttm_validate_buffer));
-
params.pinned = false,
params.format = rc->format;
params.width = rc->width;
@@ -329,62 +322,20 @@ static int virtio_gpu_resource_create_ioctl(struct 
drm_device *dev, void *data,
 
ret = virtio_gpu_object_attach(vgdev, qobj, NULL);
} else {
-   /* use a gem reference since unref list undoes them */
-   drm_gem_object_get(&qobj->gem_base);
-   mainbuf.bo = &qobj->tbo;
-   list_add(&mainbuf.head, &validate_list);
-
-   ret = virtio_gpu_object_list_validate(&ticket, &validate_list);
-   if (ret) {
-   DRM_DEBUG("failed to validate\n");
-   goto fail_unref;
-   }
-
-   fence = virtio_gpu_fence_alloc(vgdev);
-   if (!fence) {
-   ret = -ENOMEM;
-   goto fail_backoff;
-   }
-
virtio_gpu_cmd_resource_create_3d(vgdev, qobj, &params);
-   ret = virtio_gpu_object_attach(vgdev, qobj, fence);
-   if (ret) {
-   dma_fence_put(&fence->f);
-   goto fail_backoff;
-   }
-   ttm_eu_fence_buffer_objects(&ticket, &validate_list, &fence->f);
+   ret = virtio_gpu_object_attach(vgdev, qobj, NULL);
}
 
ret = drm_gem_handle_create(file_priv, obj, &handle);
if (ret) {
-
drm_gem_object_release(obj);
-   if (vgdev->has_virgl_3d) {
-   virtio_gpu_unref_list(&validate_list);
-   dma_fence_put(&fence->f);
-   }
return ret;
}
drm_gem_object_put_unlocked(obj);
 
rc->res_handle = qobj->hw_res_handle; /* similiar to a VM address */
rc->bo_handle = handle;
-
-   if (vgdev->has_virgl_3d) {
-   virtio_gpu_unref_list(&validate_list);
-   dma_fence_put(&fence->f);
-   }
return 0;
-fail_backoff:
-   ttm_eu_backoff_reservation(&ticket, &validate_list);
-fail_unref:
-   if (vgdev->has_virgl_3d) {
-   virtio_gpu_unref_list(&validate_list);
-   dma_fence_put(&fence->f);
-   }
-//fail_obj:
-// drm_gem_object_handle_unreference_unlocked(obj);
-   return ret;
 }
 
 static int virtio_gpu_resource_info_ioctl(struct drm_device *dev, void *data,
-- 
2.9.3

___
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel


[PATCH v2 6/6] drm/virtio: move virtio_gpu_cmd_create_resource call into virtio_gpu_object_create

2019-01-30 Thread Gerd Hoffmann
Specifically call virtio_gpu_object_create() before ttm_bo_init(), so
the object is already created when ttm calls the
virtio_gpu_ttm_tt_bind() callback (which in turn calls
virtio_gpu_object_attach()).

With that in place virtio_gpu_object_attach() will never be called with
an object which is not yet created, so the extra
virtio_gpu_object_attach() calls done after
virtio_gpu_cmd_create_resource() is not needed any more.

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/virtio/virtgpu_drv.h|  2 ++
 drivers/gpu/drm/virtio/virtgpu_gem.c| 12 +---
 drivers/gpu/drm/virtio/virtgpu_ioctl.c  | 10 +-
 drivers/gpu/drm/virtio/virtgpu_object.c | 10 --
 drivers/gpu/drm/virtio/virtgpu_vq.c |  4 ++--
 5 files changed, 14 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h 
b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 3265e62725..52f3950f82 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -56,7 +56,9 @@ struct virtio_gpu_object_params {
uint32_t height;
unsigned long size;
bool pinned;
+   bool dumb;
/* 3d */
+   bool virgl;
uint32_t target;
uint32_t bind;
uint32_t depth;
diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c 
b/drivers/gpu/drm/virtio/virtgpu_gem.c
index 3a63ffcd4b..b5d7df17ac 100644
--- a/drivers/gpu/drm/virtio/virtgpu_gem.c
+++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
@@ -82,9 +82,7 @@ int virtio_gpu_mode_dumb_create(struct drm_file *file_priv,
struct drm_device *dev,
struct drm_mode_create_dumb *args)
 {
-   struct virtio_gpu_device *vgdev = dev->dev_private;
struct drm_gem_object *gobj;
-   struct virtio_gpu_object *obj;
struct virtio_gpu_object_params params = { 0 };
int ret;
uint32_t pitch;
@@ -101,20 +99,12 @@ int virtio_gpu_mode_dumb_create(struct drm_file *file_priv,
params.width = args->width;
params.height = args->height;
params.size = args->size;
+   params.dumb = true;
ret = virtio_gpu_gem_create(file_priv, dev, &params, &gobj,
&args->handle);
if (ret)
goto fail;
 
-   obj = gem_to_virtio_gpu_obj(gobj);
-   virtio_gpu_cmd_create_resource(vgdev, obj, &params);
-
-   /* attach the object to the resource */
-   ret = virtio_gpu_object_attach(vgdev, obj, NULL);
-   if (ret)
-   goto fail;
-
-   obj->dumb = true;
args->pitch = pitch;
return ret;
 
diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c 
b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
index da06ebbb3a..3a1c447098 100644
--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
@@ -300,6 +300,7 @@ static int virtio_gpu_resource_create_ioctl(struct 
drm_device *dev, void *data,
params.height = rc->height;
params.size = rc->size;
if (vgdev->has_virgl_3d) {
+   params.virgl = true;
params.target = rc->target;
params.bind = rc->bind;
params.depth = rc->depth;
@@ -317,15 +318,6 @@ static int virtio_gpu_resource_create_ioctl(struct 
drm_device *dev, void *data,
return PTR_ERR(qobj);
obj = &qobj->gem_base;
 
-   if (!vgdev->has_virgl_3d) {
-   virtio_gpu_cmd_create_resource(vgdev, qobj, &params);
-
-   ret = virtio_gpu_object_attach(vgdev, qobj, NULL);
-   } else {
-   virtio_gpu_cmd_resource_create_3d(vgdev, qobj, &params);
-   ret = virtio_gpu_object_attach(vgdev, qobj, NULL);
-   }
-
ret = drm_gem_handle_create(file_priv, obj, &handle);
if (ret) {
drm_gem_object_release(obj);
diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c 
b/drivers/gpu/drm/virtio/virtgpu_object.c
index 62367e3f80..94da9e68d2 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -106,9 +106,15 @@ int virtio_gpu_object_create(struct virtio_gpu_device 
*vgdev,
kfree(bo);
return ret;
}
-   bo->dumb = false;
+   bo->dumb = params->dumb;
+
+   if (params->virgl) {
+   virtio_gpu_cmd_resource_create_3d(vgdev, bo, params);
+   } else {
+   virtio_gpu_cmd_create_resource(vgdev, bo, params);
+   }
+
virtio_gpu_init_ttm_placement(bo, params->pinned);
-
ret = ttm_bo_init(&vgdev->mman.bdev, &bo->tbo, params->size,
  ttm_bo_type_device, &bo->placement, 0,
  true, acc_size, NULL, NULL,
diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c 
b/drivers/gpu/drm/virtio/virtgpu_vq.c
index ca93ec6ca3..292663c192 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -932,8 +932,8 @@ int virtio_gpu_object_attach(struct virtio_gpu_device 
*vgd

Re: [PATCH 1/2] drm/imx: imx-tve: depend on COMMON_CLK

2019-01-30 Thread Linus Walleij
On Wed, Jan 30, 2019 at 10:01 AM Philipp Zabel  wrote:
> On Wed, 2019-01-30 at 09:16 +0100, Linus Walleij wrote:
> > On Thu, Jan 24, 2019 at 1:51 PM Philipp Zabel  
> > wrote:
> >
> > > Since the TVE provides a clock to the DI, the driver can only be
> > > compiled if the common clock framework is enabled. With the COMMON_CLK
> > > dependency in place, it will be possible to allow building the other
> > > parts of imx-drm under COMPILE_TEST on architectures that do not select
> > > the common clock framework.
> > >
> > > Signed-off-by: Philipp Zabel 
> >
> > Since the clock framework has stubs I bet it can be *compiled* without
> > COMMON_CLK. But it will certainly not work so if you reword the commit:
> > Reviewed-by: Linus Walleij 
>
> thank you for the review. It indeed does not compile because struct
> imx_tve embeds a struct clk_hw, which is defined in linux/clk-provider.h
> only if COMMON_CLK is enabled.

Oh that one, I see it is a clock driver.
Sorry for my ignorance.
Reviewed-by: Linus Walleij 

Yours,
Linus Walleij


Re: [v9 3/3] drm/i915: Attach colorspace property and enable modeset

2019-01-30 Thread Maarten Lankhorst
On 29-01-2019 at 19:50, Uma Shankar wrote:
> This patch attaches the colorspace connector property to the
> hdmi connector. Based on colorspace change, modeset will be
> triggered to switch to new colorspace.
>
> Based on colorspace property value create an infoframe
> with appropriate colorspace. This can be used to send an
> infoframe packet with proper colorspace value set which
> will help to enable wider color gamut like BT2020 on sink.
>
> This patch attaches and enables HDMI colorspace, DP will be
> taken care separately.
>
> v2: Merged the changes of creating infoframe as well to this
> patch as per Maarten's suggestion.
>
> v3: Addressed review comments from Shashank. Separated HDMI
> and DP colorspaces as suggested by Ville and Maarten.
>
> v4: Addressed Chris and Ville's review comments, and created a
> common colorspace property for DP and HDMI, filtered the list
> based on the colorspaces supported by the respective protocol
> standard. Handle the default case properly.
>
> v5: Merged the DP handling along with platform colorspace
> handling as per Shashank's comments.
>
> v6: Addressed Maarten's review comment and limited this currently
> to non lspcon based devices.
>
> v7: Reverted to old design of exposing all colorspaces to
> userspace as per Ville's review comment
>
> Signed-off-by: Uma Shankar 
> ---
>  drivers/gpu/drm/i915/intel_atomic.c|  1 +
>  drivers/gpu/drm/i915/intel_connector.c |  8 
>  drivers/gpu/drm/i915/intel_drv.h   |  1 +
>  drivers/gpu/drm/i915/intel_hdmi.c  | 25 +
>  4 files changed, 35 insertions(+)
>
> diff --git a/drivers/gpu/drm/i915/intel_atomic.c 
> b/drivers/gpu/drm/i915/intel_atomic.c
> index 16263ad..76b7114 100644
> --- a/drivers/gpu/drm/i915/intel_atomic.c
> +++ b/drivers/gpu/drm/i915/intel_atomic.c
> @@ -124,6 +124,7 @@ int intel_digital_connector_atomic_check(struct 
> drm_connector *conn,
>*/
>   if (new_conn_state->force_audio != old_conn_state->force_audio ||
>   new_conn_state->broadcast_rgb != old_conn_state->broadcast_rgb ||
> + new_state->colorspace != old_state->colorspace ||

I only care if there's a new version, but can you use xxx_conn_state->base?

Can fixup if patch is ready otherwise.


Otherwise series looks good, so

Reviewed-by: Maarten Lankhorst  for the 
whole series. :)

>   new_conn_state->base.picture_aspect_ratio != 
> old_conn_state->base.picture_aspect_ratio ||
>   new_conn_state->base.content_type != 
> old_conn_state->base.content_type ||
>   new_conn_state->base.scaling_mode != 
> old_conn_state->base.scaling_mode)
> diff --git a/drivers/gpu/drm/i915/intel_connector.c 
> b/drivers/gpu/drm/i915/intel_connector.c
> index ee16758..ac2aed7 100644
> --- a/drivers/gpu/drm/i915/intel_connector.c
> +++ b/drivers/gpu/drm/i915/intel_connector.c
> @@ -265,3 +265,11 @@ int intel_ddc_get_modes(struct drm_connector *connector,
>   connector->dev->mode_config.aspect_ratio_property,
>   DRM_MODE_PICTURE_ASPECT_NONE);
>  }
> +
> +void
> +intel_attach_colorspace_property(struct drm_connector *connector)
> +{
> + if (!drm_mode_create_colorspace_property(connector))
> + drm_object_attach_property(&connector->base,
> + connector->colorspace_property, 0);
> +}
> diff --git a/drivers/gpu/drm/i915/intel_drv.h 
> b/drivers/gpu/drm/i915/intel_drv.h
> index 85b913e..5178a9a 100644
> --- a/drivers/gpu/drm/i915/intel_drv.h
> +++ b/drivers/gpu/drm/i915/intel_drv.h
> @@ -1783,6 +1783,7 @@ int intel_connector_update_modes(struct drm_connector 
> *connector,
>  void intel_attach_force_audio_property(struct drm_connector *connector);
>  void intel_attach_broadcast_rgb_property(struct drm_connector *connector);
>  void intel_attach_aspect_ratio_property(struct drm_connector *connector);
> +void intel_attach_colorspace_property(struct drm_connector *connector);
>  
>  /* intel_csr.c */
>  void intel_csr_ucode_init(struct drm_i915_private *);
> diff --git a/drivers/gpu/drm/i915/intel_hdmi.c 
> b/drivers/gpu/drm/i915/intel_hdmi.c
> index 97a98e1..5c5009d 100644
> --- a/drivers/gpu/drm/i915/intel_hdmi.c
> +++ b/drivers/gpu/drm/i915/intel_hdmi.c
> @@ -498,6 +498,24 @@ static void intel_hdmi_set_avi_infoframe(struct 
> intel_encoder *encoder,
>   else
>   frame.avi.colorspace = HDMI_COLORSPACE_RGB;
>  
> + if (conn_state->colorspace == DRM_MODE_COLORIMETRY_DEFAULT) {
> + /* Set ITU 709 as default for HDMI */
> + frame.avi.colorimetry = DRM_MODE_COLORIMETRY_ITU_709;
> + } else if (conn_state->colorspace < DRM_MODE_COLORIMETRY_XV_YCC_601) {
> + frame.avi.colorimetry = conn_state->colorspace;
> + } else {
> + frame.avi.colorimetry = HDMI_COLORIMETRY_EXTENDED;
> + /*
> +  * Starting from extended list where COLORIMETRY_XV_YCC_601
> +  * is the first extended mode and its value is 0 as per HDMI
> + 

[Bug 108720] Possible GPU hangs with Cemu Emulator

2019-01-30 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=108720

Samuel Pitoiset  changed:

     Status: NEW -> NEEDINFO
  Component: Drivers/Vulkan/radeon -> Drivers/Gallium/radeonsi
   Assignee: mesa-dev@lists.freedesktop.org -> dri-devel@lists.freedesktop.org
    Summary: System crash vulkan cemu -> Possible GPU hangs with Cemu Emulator
 QA Contact: mesa-dev@lists.freedesktop.org -> dri-devel@lists.freedesktop.org

--- Comment #3 from Samuel Pitoiset  ---
Moving to RadeonSI because Cemu is GL only.
By the way, it would be nice if you could explain how to reproduce the problem
(hardware setup, dmesg logs, what game you tried, etc).

-- 
You are receiving this mail because:
You are the assignee for the bug.


Re: [RFC PATCH 1/5] pci/p2p: add a function to test peer to peer capability

2019-01-30 Thread Christian König

On 29.01.19 at 21:24, Logan Gunthorpe wrote:


On 2019-01-29 12:56 p.m., Alex Deucher wrote:

On Tue, Jan 29, 2019 at 12:47 PM  wrote:

From: Jérôme Glisse 

device_test_p2p() return true if two devices can peer to peer to
each other. We add a generic function as different inter-connect
can support peer to peer and we want to generically test this no
matter what the inter-connect might be. However this version only
support PCIE for now.


What about something like these patches:
https://cgit.freedesktop.org/~deathsimple/linux/commit/?h=p2p&id=4fab9ff69cb968183f717551441b475fabce6c1c
https://cgit.freedesktop.org/~deathsimple/linux/commit/?h=p2p&id=f90b12d41c277335d08c9dab62433f27c0fadbe5
They are a bit more thorough.

Those new functions seem to have a lot of overlap with the code that is
already upstream in p2pdma. Perhaps you should be improving the
p2pdma functions if they aren't suitable for what you want already
instead of creating new ones.


Yeah, well that's what I was suggesting for the very beginning :)

But completely agree the existing functions should be improved instead 
of adding new ones,

Christian.



Logan


Re: [RFC PATCH 3/5] mm/vma: add support for peer to peer to device vma

2019-01-30 Thread Koenig, Christian
On 30.01.19 at 09:02, Christoph Hellwig wrote:
> On Tue, Jan 29, 2019 at 08:58:35PM +, Jason Gunthorpe wrote:
>> On Tue, Jan 29, 2019 at 01:39:49PM -0700, Logan Gunthorpe wrote:
>>
>>> implement the mapping. And I don't think we should have 'special' vma's
>>> for this (though we may need something to ensure we don't get mapping
>>> requests mixed with different types of pages...).
>> I think Jerome explained the point here is to have a 'special vma'
>> rather than a 'special struct page' as, really, we don't need a
>> struct page at all to make this work.
>>
>> If I recall your earlier attempts at adding struct page for BAR
>> memory, it ran aground on issues related to O_DIRECT/sgls, etc, etc.
> Struct page is what makes O_DIRECT work, using sgls or biovecs, etc on
> it work.  Without struct page none of the above can work at all.  That
> is why we use struct page for backing BARs in the existing P2P code.
> Not that I'm a particular fan of creating struct page for this device
> memory, but without major invasive surgery to large parts of the kernel
> it is the only way to make it work.

The problem seems to be that struct page does two things:

1. Memory management for system memory.
2. The object to work with in the I/O layer.

This was done because a good part of that stuff overlaps, like reference 
counting how often a page is used.  The problem now is that this doesn't 
work very well for device memory in some cases.

For example on GPUs you usually have a large amount of memory which is 
not even accessible by the CPU. In other words you can't easily create a 
struct page for it because you can't reference it with a physical CPU 
address.

Maybe struct page should be split up into smaller structures? I mean 
it's really overloaded with data.

Christian.




Re: [PATCH v2 2/4] drm/amdgpu: Only add rqs for initialized rings.

2019-01-30 Thread Christian König

On 30.01.19 at 02:53, Bas Nieuwenhuizen wrote:

I don't see another way to figure out if a ring is initialized if
the hardware block might not be initialized.

Entities have been fixed up to handle num_rqs = 0.

Signed-off-by: Bas Nieuwenhuizen 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 11 ---
  1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
index d85184b5b35c..30407e55593b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
@@ -124,6 +124,7 @@ static int amdgpu_ctx_init(struct amdgpu_device *adev,
struct amdgpu_ring *rings[AMDGPU_MAX_RINGS];
struct drm_sched_rq *rqs[AMDGPU_MAX_RINGS];
unsigned num_rings;
+   unsigned num_rqs = 0;
  
  		switch (i) {

case AMDGPU_HW_IP_GFX:
@@ -166,12 +167,16 @@ static int amdgpu_ctx_init(struct amdgpu_device *adev,
break;
}
  
-		for (j = 0; j < num_rings; ++j)

-   rqs[j] = &rings[j]->sched.sched_rq[priority];
+   for (j = 0; j < num_rings; ++j) {
+   if (rings[j]->adev) {


Better do "if (!ring[j]->adev) continue;".

With that done the patch is Reviewed-by: Christian König 
.


Regards,
Christian.


+   rqs[num_rqs++] =
+   &rings[j]->sched.sched_rq[priority];
+   }
+   }
  
  		for (j = 0; j < amdgpu_ctx_num_entities[i]; ++j)

r = drm_sched_entity_init(&ctx->entities[i][j].entity,
- rqs, num_rings, &ctx->guilty);
+ rqs, num_rqs, &ctx->guilty);
if (r)
goto error_cleanup_entities;
}




Re: [PATCH v2 1/4] drm/sched: Fix entities with 0 rqs.

2019-01-30 Thread Christian König

On 30.01.19 at 02:53, Bas Nieuwenhuizen wrote:

Some blocks in amdgpu can have 0 rqs.

Job creation already fails with -ENOENT when entity->rq is NULL,
so jobs cannot be pushed. Without a rq there is no scheduler to
pop jobs, and rq selection already does the right thing with a
list of length 0.

So the operations we need to fix are:
   - Creation, do not set rq to rq_list[0] if the list can have length 0.
   - Do not flush any jobs when there is no rq.
   - On entity destruction handle the rq = NULL case.
   - on set_priority, do not try to change the rq if it is NULL.

Signed-off-by: Bas Nieuwenhuizen 


One minor comment on patch #2; apart from that, the series is 
Reviewed-by: Christian König .


I'm going to make the change on #2 and pick them up for inclusion in 
amd-staging-drm-next.


Thanks for the help,
Christian.


---
  drivers/gpu/drm/scheduler/sched_entity.c | 39 
  1 file changed, 26 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_entity.c 
b/drivers/gpu/drm/scheduler/sched_entity.c
index 4463d3826ecb..8e31b6628d09 100644
--- a/drivers/gpu/drm/scheduler/sched_entity.c
+++ b/drivers/gpu/drm/scheduler/sched_entity.c
@@ -52,12 +52,12 @@ int drm_sched_entity_init(struct drm_sched_entity *entity,
  {
int i;
  
-	if (!(entity && rq_list && num_rq_list > 0 && rq_list[0]))

+   if (!(entity && rq_list && (num_rq_list == 0 || rq_list[0])))
return -EINVAL;
  
  	memset(entity, 0, sizeof(struct drm_sched_entity));

INIT_LIST_HEAD(&entity->list);
-   entity->rq = rq_list[0];
+   entity->rq = NULL;
entity->guilty = guilty;
entity->num_rq_list = num_rq_list;
entity->rq_list = kcalloc(num_rq_list, sizeof(struct drm_sched_rq *),
@@ -67,6 +67,10 @@ int drm_sched_entity_init(struct drm_sched_entity *entity,
  
  	for (i = 0; i < num_rq_list; ++i)

entity->rq_list[i] = rq_list[i];
+
+   if (num_rq_list)
+   entity->rq = rq_list[0];
+
entity->last_scheduled = NULL;
  
  	spin_lock_init(&entity->rq_lock);

@@ -165,6 +169,9 @@ long drm_sched_entity_flush(struct drm_sched_entity 
*entity, long timeout)
struct task_struct *last_user;
long ret = timeout;
  
+	if (!entity->rq)

+   return 0;
+
sched = entity->rq->sched;
/**
 * The client will not queue more IBs during this fini, consume existing
@@ -264,20 +271,24 @@ static void drm_sched_entity_kill_jobs(struct 
drm_sched_entity *entity)
   */
  void drm_sched_entity_fini(struct drm_sched_entity *entity)
  {
-   struct drm_gpu_scheduler *sched;
+   struct drm_gpu_scheduler *sched = NULL;
  
-	sched = entity->rq->sched;

-   drm_sched_rq_remove_entity(entity->rq, entity);
+   if (entity->rq) {
+   sched = entity->rq->sched;
+   drm_sched_rq_remove_entity(entity->rq, entity);
+   }
  
  	/* Consumption of existing IBs wasn't completed. Forcefully

 * remove them here.
 */
if (spsc_queue_peek(&entity->job_queue)) {
-   /* Park the kernel for a moment to make sure it isn't processing
-* our enity.
-*/
-   kthread_park(sched->thread);
-   kthread_unpark(sched->thread);
+   if (sched) {
+   /* Park the kernel for a moment to make sure it isn't 
processing
+* our enity.
+*/
+   kthread_park(sched->thread);
+   kthread_unpark(sched->thread);
+   }
if (entity->dependency) {
dma_fence_remove_callback(entity->dependency,
  &entity->cb);
@@ -362,9 +373,11 @@ void drm_sched_entity_set_priority(struct drm_sched_entity 
*entity,
for (i = 0; i < entity->num_rq_list; ++i)
drm_sched_entity_set_rq_priority(&entity->rq_list[i], priority);
  
-	drm_sched_rq_remove_entity(entity->rq, entity);

-   drm_sched_entity_set_rq_priority(&entity->rq, priority);
-   drm_sched_rq_add_entity(entity->rq, entity);
+   if (entity->rq) {
+   drm_sched_rq_remove_entity(entity->rq, entity);
+   drm_sched_entity_set_rq_priority(&entity->rq, priority);
+   drm_sched_rq_add_entity(entity->rq, entity);
+   }
  
  	spin_unlock(&entity->rq_lock);

  }




Re: [PATCH v7 2/5] drm: Add vrr_enabled property to drm CRTC

2019-01-30 Thread Daniel Vetter
On Thu, Nov 8, 2018 at 3:44 PM Nicholas Kazlauskas
 wrote:
>
> This patch introduces the 'vrr_enabled' CRTC property to allow
> dynamic control over variable refresh rate support for a CRTC.
>
> This property should be treated like a content hint to the driver -
> if the hardware or driver is not capable of driving variable refresh
> timings then this is not considered an error.
>
> Capability for variable refresh rate support should be determined
> by querying the vrr_capable drm connector property.
>
> It is worth noting that while the property is intended for atomic use
> it isn't filtered from legacy userspace queries. This allows for Xorg
> userspace drivers to implement support.
>
> Signed-off-by: Nicholas Kazlauskas 
> Reviewed-by: Harry Wentland 
> Cc: Manasi Navare 
> ---
>  drivers/gpu/drm/drm_atomic_uapi.c | 4 
>  drivers/gpu/drm/drm_crtc.c| 2 ++
>  drivers/gpu/drm/drm_mode_config.c | 6 ++
>  include/drm/drm_crtc.h| 9 +
>  include/drm/drm_mode_config.h | 5 +
>  5 files changed, 26 insertions(+)
>
> diff --git a/drivers/gpu/drm/drm_atomic_uapi.c 
> b/drivers/gpu/drm/drm_atomic_uapi.c
> index d5b7f315098c..eec396a57b88 100644
> --- a/drivers/gpu/drm/drm_atomic_uapi.c
> +++ b/drivers/gpu/drm/drm_atomic_uapi.c
> @@ -433,6 +433,8 @@ static int drm_atomic_crtc_set_property(struct drm_crtc 
> *crtc,
> ret = drm_atomic_set_mode_prop_for_crtc(state, mode);
> drm_property_blob_put(mode);
> return ret;
> +   } else if (property == config->prop_vrr_enabled) {
> +   state->vrr_enabled = val;
> } else if (property == config->degamma_lut_property) {
> ret = drm_atomic_replace_property_blob_from_id(dev,
> &state->degamma_lut,
> @@ -491,6 +493,8 @@ drm_atomic_crtc_get_property(struct drm_crtc *crtc,
> *val = state->active;
> else if (property == config->prop_mode_id)
> *val = (state->mode_blob) ? state->mode_blob->base.id : 0;
> +   else if (property == config->prop_vrr_enabled)
> +   *val = state->vrr_enabled;
> else if (property == config->degamma_lut_property)
> *val = (state->degamma_lut) ? state->degamma_lut->base.id : 0;
> else if (property == config->ctm_property)
> diff --git a/drivers/gpu/drm/drm_crtc.c b/drivers/gpu/drm/drm_crtc.c
> index 268a182ae189..6f8ddfcfaba5 100644
> --- a/drivers/gpu/drm/drm_crtc.c
> +++ b/drivers/gpu/drm/drm_crtc.c
> @@ -340,6 +340,8 @@ int drm_crtc_init_with_planes(struct drm_device *dev, 
> struct drm_crtc *crtc,
> drm_object_attach_property(&crtc->base, config->prop_mode_id, 
> 0);
> drm_object_attach_property(&crtc->base,
>config->prop_out_fence_ptr, 0);
> +   drm_object_attach_property(&crtc->base,
> +  config->prop_vrr_enabled, 0);
> }
>
> return 0;
> diff --git a/drivers/gpu/drm/drm_mode_config.c 
> b/drivers/gpu/drm/drm_mode_config.c
> index ee80788f2c40..5670c67f28d4 100644
> --- a/drivers/gpu/drm/drm_mode_config.c
> +++ b/drivers/gpu/drm/drm_mode_config.c
> @@ -310,6 +310,12 @@ static int drm_mode_create_standard_properties(struct 
> drm_device *dev)
> return -ENOMEM;
> dev->mode_config.prop_mode_id = prop;
>
> +   prop = drm_property_create_bool(dev, 0,
> +   "VRR_ENABLED");

VRR_ENABLED doesn't match vrr_enabled in the docs. But the connector
one does indeed seem to be lowercase. And casing matters for properties.

Can you pls figure out who's right here and fix this confusion up?

Thanks, Daniel

> +   if (!prop)
> +   return -ENOMEM;
> +   dev->mode_config.prop_vrr_enabled = prop;
> +
> prop = drm_property_create(dev,
> DRM_MODE_PROP_BLOB,
> "DEGAMMA_LUT", 0);
> diff --git a/include/drm/drm_crtc.h b/include/drm/drm_crtc.h
> index b21437bc95bf..39c3900aab3c 100644
> --- a/include/drm/drm_crtc.h
> +++ b/include/drm/drm_crtc.h
> @@ -290,6 +290,15 @@ struct drm_crtc_state {
>  */
> u32 pageflip_flags;
>
> +   /**
> +* @vrr_enabled:
> +*
> +* Indicates if variable refresh rate should be enabled for the CRTC.
> +* Support for the requested vrr state will depend on driver and
> +* hardware capabiltiy - lacking support is not treated as failure.
> +*/
> +   bool vrr_enabled;
> +
> /**
>  * @event:
>  *
> diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
> index 928e4172a0bb..49f2fcfdb5fc 100644
> --- a/include/drm/drm_mode_config.h
> +++ b/include/drm/drm_mode_config.h
> @@ -639,6 +639,11 @@ struct drm_mode_config {
>  * connectors must be of and active must be set to disabled, too.
>  */
> struct drm_property *prop

[PATCH] drm/amdgpu: Transfer fences to dmabuf importer

2019-01-30 Thread Chris Wilson
amdgpu only uses shared-fences internally, but dmabuf importers rely on
implicit write hazard tracking via the reservation_object.fence_excl.
For example, the importer uses the write hazard for timing a page flip to
only occur after the exporter has finished flushing its write into the
surface. As such, on exporting a dmabuf, we must either flush all
outstanding fences (for we do not know which are writes and should have
been exclusive) or alternatively create a new exclusive fence that is
the composite of all the existing shared fences, and so will only be
signaled when all earlier fences are signaled (ensuring that we can not
be signaled before the completion of any earlier write).

v2: reservation_object is already locked by amdgpu_bo_reserve()
v3: Replace looping with get_fences_rcu and special case the promotion
of a single shared fence directly to an exclusive fence, bypassing the
fence array.
v4: Drop the fence array ref after assigning to reservation_object

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107341
Testcase: igt/amd_prime/amd-to-i915
References: 8e94a46c1770 ("drm/amdgpu: Attach exclusive fence to prime exported 
bo's. (v5)")
Signed-off-by: Chris Wilson 
Cc: Alex Deucher 
Cc: "Christian König" 
Reviewed-by: "Christian König" 
---
We may disagree on the best long term strategy for fence semantics, but
I think this is still a nice short term solution to the blocking
behaviour on exporting amdgpu to prime.
-Chris
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c | 59 ---
 1 file changed, 51 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
index 71913a18d142..a38e0fb4a6fe 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
@@ -38,6 +38,7 @@
 #include "amdgpu_gem.h"
 #include 
 #include 
+#include 
 
 /**
  * amdgpu_gem_prime_get_sg_table - &drm_driver.gem_prime_get_sg_table
@@ -187,6 +188,48 @@ amdgpu_gem_prime_import_sg_table(struct drm_device *dev,
return ERR_PTR(ret);
 }
 
+static int
+__reservation_object_make_exclusive(struct reservation_object *obj)
+{
+   struct dma_fence **fences;
+   unsigned int count;
+   int r;
+
+   if (!reservation_object_get_list(obj)) /* no shared fences to convert */
+   return 0;
+
+   r = reservation_object_get_fences_rcu(obj, NULL, &count, &fences);
+   if (r)
+   return r;
+
+   if (count == 0) {
+   /* Now that was unexpected. */
+   } else if (count == 1) {
+   reservation_object_add_excl_fence(obj, fences[0]);
+   dma_fence_put(fences[0]);
+   kfree(fences);
+   } else {
+   struct dma_fence_array *array;
+
+   array = dma_fence_array_create(count, fences,
+  dma_fence_context_alloc(1), 0,
+  false);
+   if (!array)
+   goto err_fences_put;
+
+   reservation_object_add_excl_fence(obj, &array->base);
+   dma_fence_put(&array->base);
+   }
+
+   return 0;
+
+err_fences_put:
+   while (count--)
+   dma_fence_put(fences[count]);
+   kfree(fences);
+   return -ENOMEM;
+}
+
 /**
  * amdgpu_gem_map_attach - &dma_buf_ops.attach implementation
  * @dma_buf: Shared DMA buffer
@@ -218,16 +261,16 @@ static int amdgpu_gem_map_attach(struct dma_buf *dma_buf,
 
if (attach->dev->driver != adev->dev->driver) {
/*
-* Wait for all shared fences to complete before we switch to 
future
-* use of exclusive fence on this prime shared bo.
+* We only create shared fences for internal use, but importers
+* of the dmabuf rely on exclusive fences for implicitly
+* tracking write hazards. As any of the current fences may
+* correspond to a write, we need to convert all existing
+* fences on the reservation object into a single exclusive
+* fence.
 */
-   r = reservation_object_wait_timeout_rcu(bo->tbo.resv,
-   true, false,
-   MAX_SCHEDULE_TIMEOUT);
-   if (unlikely(r < 0)) {
-   DRM_DEBUG_PRIME("Fence wait failed: %li\n", r);
+   r = __reservation_object_make_exclusive(bo->tbo.resv);
+   if (r)
goto error_unreserve;
-   }
}
 
/* pin buffer into GTT */
-- 
2.20.1

___
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel


[git pull] vmwgfx-fixes-5.0-190130

2019-01-30 Thread Thomas Hellstrom
Dave, Daniel

A fix for a DMA API change from Christoph Hellwig for vmwgfx, and
two stable fixes for regressions in the vmwgfx driver.

The following changes since commit f0e7ce1eef5854584dfb59b3824a67edee37580f:

  Merge tag 'drm-msm-fixes-2019-01-24' of 
git://people.freedesktop.org/~robclark/linux into drm-fixes (2019-01-25 
07:45:00 +1000)

are available in the Git repository at:

  git://people.freedesktop.org/~thomash/linux tags/vmwgfx-fixes-5.0-190130

for you to fetch changes up to 46984feb928ade4af744eb4e4cc8b242ffaf14e3:

  drm/vmwgfx: Also check for crtc status while checking for DU active 
(2019-01-29 13:57:56 +0100)


Pull request of 190130


Christoph Hellwig (4):
  drm/vmwgfx: remove CONFIG_X86 ifdefs
  drm/vmwgfx: remove CONFIG_INTEL_IOMMU ifdefs v2
  drm/vmwgfx: fix the check when to use dma_alloc_coherent
  drm/vmwgfx: unwind spaghetti code in vmw_dma_select_mode

Deepak Rawat (1):
  drm/vmwgfx: Also check for crtc status while checking for DU active

Thomas Hellstrom (1):
  drm/vmwgfx: Fix an uninitialized fence handle value

 drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 55 +++--
 drivers/gpu/drm/vmwgfx/vmwgfx_kms.c |  6 ++--
 2 files changed, 13 insertions(+), 48 deletions(-)


Re: [PATCH v7 2/5] drm: Add vrr_enabled property to drm CRTC

2019-01-30 Thread Daniel Vetter
On Wed, Jan 30, 2019 at 11:42 AM Daniel Vetter  wrote:
>
> On Thu, Nov 8, 2018 at 3:44 PM Nicholas Kazlauskas
>  wrote:
> >
> > This patch introduces the 'vrr_enabled' CRTC property to allow
> > dynamic control over variable refresh rate support for a CRTC.
> >
> > This property should be treated like a content hint to the driver -
> > if the hardware or driver is not capable of driving variable refresh
> > timings then this is not considered an error.
> >
> > Capability for variable refresh rate support should be determined
> > by querying the vrr_capable drm connector property.
> >
> > It is worth noting that while the property is intended for atomic use
> > it isn't filtered from legacy userspace queries. This allows for Xorg
> > userspace drivers to implement support.
> >
> > Signed-off-by: Nicholas Kazlauskas 
> > Reviewed-by: Harry Wentland 
> > Cc: Manasi Navare 
> > ---
> >  drivers/gpu/drm/drm_atomic_uapi.c | 4 
> >  drivers/gpu/drm/drm_crtc.c| 2 ++
> >  drivers/gpu/drm/drm_mode_config.c | 6 ++
> >  include/drm/drm_crtc.h| 9 +
> >  include/drm/drm_mode_config.h | 5 +
> >  5 files changed, 26 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/drm_atomic_uapi.c 
> > b/drivers/gpu/drm/drm_atomic_uapi.c
> > index d5b7f315098c..eec396a57b88 100644
> > --- a/drivers/gpu/drm/drm_atomic_uapi.c
> > +++ b/drivers/gpu/drm/drm_atomic_uapi.c
> > @@ -433,6 +433,8 @@ static int drm_atomic_crtc_set_property(struct drm_crtc 
> > *crtc,
> > ret = drm_atomic_set_mode_prop_for_crtc(state, mode);
> > drm_property_blob_put(mode);
> > return ret;
> > +   } else if (property == config->prop_vrr_enabled) {
> > +   state->vrr_enabled = val;
> > } else if (property == config->degamma_lut_property) {
> > ret = drm_atomic_replace_property_blob_from_id(dev,
> > &state->degamma_lut,
> > @@ -491,6 +493,8 @@ drm_atomic_crtc_get_property(struct drm_crtc *crtc,
> > *val = state->active;
> > else if (property == config->prop_mode_id)
> > *val = (state->mode_blob) ? state->mode_blob->base.id : 0;
> > +   else if (property == config->prop_vrr_enabled)
> > +   *val = state->vrr_enabled;
> > else if (property == config->degamma_lut_property)
> > *val = (state->degamma_lut) ? state->degamma_lut->base.id : 
> > 0;
> > else if (property == config->ctm_property)
> > diff --git a/drivers/gpu/drm/drm_crtc.c b/drivers/gpu/drm/drm_crtc.c
> > index 268a182ae189..6f8ddfcfaba5 100644
> > --- a/drivers/gpu/drm/drm_crtc.c
> > +++ b/drivers/gpu/drm/drm_crtc.c
> > @@ -340,6 +340,8 @@ int drm_crtc_init_with_planes(struct drm_device *dev, 
> > struct drm_crtc *crtc,
> > drm_object_attach_property(&crtc->base, 
> > config->prop_mode_id, 0);
> > drm_object_attach_property(&crtc->base,
> >config->prop_out_fence_ptr, 0);
> > +   drm_object_attach_property(&crtc->base,
> > +  config->prop_vrr_enabled, 0);
> > }
> >
> > return 0;
> > diff --git a/drivers/gpu/drm/drm_mode_config.c 
> > b/drivers/gpu/drm/drm_mode_config.c
> > index ee80788f2c40..5670c67f28d4 100644
> > --- a/drivers/gpu/drm/drm_mode_config.c
> > +++ b/drivers/gpu/drm/drm_mode_config.c
> > @@ -310,6 +310,12 @@ static int drm_mode_create_standard_properties(struct 
> > drm_device *dev)
> > return -ENOMEM;
> > dev->mode_config.prop_mode_id = prop;
> >
> > +   prop = drm_property_create_bool(dev, 0,
> > +   "VRR_ENABLED");
>
> VRR_ENABLED doesn't match vrr_enabled in the docs. But the connector
> one seems to indeed be lowercase. And casing matters for properties
>
> Can you pls figure out who's right here and fix this confusion up?

I checked the igt quickly, patch is typed, will send out today or so.
-Daniel

>
> Thanks, Daniel
>
> > +   if (!prop)
> > +   return -ENOMEM;
> > +   dev->mode_config.prop_vrr_enabled = prop;
> > +
> > prop = drm_property_create(dev,
> > DRM_MODE_PROP_BLOB,
> > "DEGAMMA_LUT", 0);
> > diff --git a/include/drm/drm_crtc.h b/include/drm/drm_crtc.h
> > index b21437bc95bf..39c3900aab3c 100644
> > --- a/include/drm/drm_crtc.h
> > +++ b/include/drm/drm_crtc.h
> > @@ -290,6 +290,15 @@ struct drm_crtc_state {
> >  */
> > u32 pageflip_flags;
> >
> > +   /**
> > +* @vrr_enabled:
> > +*
> > +* Indicates if variable refresh rate should be enabled for the 
> > CRTC.
> > +* Support for the requested vrr state will depend on driver and
> > > +* hardware capability - lacking support is not treated as failure.
> > +*/
> > +   bool vrr_enabled;
> > +
> > /**
> >  * @event:
> > 

Re: [Intel-gfx] [PATCH v6 04/28] drm/dp: DRM DP helper/macros to get DP sink DSC parameters

2019-01-30 Thread Daniel Vetter
On Wed, Dec 19, 2018 at 7:54 PM Daniel Vetter  wrote:
>
> On Wed, Oct 24, 2018 at 03:28:16PM -0700, Manasi Navare wrote:
> > This patch adds inline functions and helpers for obtaining
> > DP sink's supported DSC parameters like DSC sink support,
> > eDP compressed BPP supported, maximum slice count supported
> > by the sink devices, DSC line buffer bit depth supported on DP sink,
> > DSC sink maximum color depth by parsing corresponding DPCD registers.
> >
> > v4:
> > * Add helper to give line buf bit depth (Manasi)
> > * Correct the bit masking in color depth helper (manasi)
> > v3:
> > * Use SLICE_CAP_2 for DP (Anusha)
> > v2:
> > * Add DSC sink support macro (Jani N)
> >
> > Cc: Gaurav K Singh 
> > Cc: dri-devel@lists.freedesktop.org
> > Cc: Jani Nikula 
> > Cc: Ville Syrjala 
> > Cc: Anusha Srivatsa 
> > Signed-off-by: Manasi Navare 
> > Reviewed-by: Anusha Srivatsa 
> > Reviewed-by: Gaurav K Singh 
> > ---
> >  drivers/gpu/drm/drm_dp_helper.c | 90 +
> >  include/drm/drm_dp_helper.h | 30 +++
> >  2 files changed, 120 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/drm_dp_helper.c 
> > b/drivers/gpu/drm/drm_dp_helper.c
> > index 37c01b6076ec..6d483487f2b4 100644
> > --- a/drivers/gpu/drm/drm_dp_helper.c
> > +++ b/drivers/gpu/drm/drm_dp_helper.c
> > @@ -1352,3 +1352,93 @@ int drm_dp_read_desc(struct drm_dp_aux *aux, struct 
> > drm_dp_desc *desc,
> >   return 0;
> >  }
> >  EXPORT_SYMBOL(drm_dp_read_desc);
> > +
> > +/**
> > + * DRM DP Helpers for DSC
> > + */
>
> This isn't really kerneldoc. Can you pls fix it, including all the other
> new exported functions?

Ping. Still not fixed, and it's well over one month now. Would be nice
to get this sorted. I'd type the docs myself, but I don't really have
an idea about DSC.

Thanks, Daniel

>
> Thanks, Daniel
>
> > +u8 drm_dp_dsc_sink_max_slice_count(const u8 
> > dsc_dpcd[DP_DSC_RECEIVER_CAP_SIZE],
> > +bool is_edp)
> > +{
> > + u8 slice_cap1 = dsc_dpcd[DP_DSC_SLICE_CAP_1 - DP_DSC_SUPPORT];
> > +
> > + if (is_edp) {
> > + /* For eDP, register DSC_SLICE_CAPABILITIES_1 gives slice 
> > count */
> > + if (slice_cap1 & DP_DSC_4_PER_DP_DSC_SINK)
> > + return 4;
> > + if (slice_cap1 & DP_DSC_2_PER_DP_DSC_SINK)
> > + return 2;
> > + if (slice_cap1 & DP_DSC_1_PER_DP_DSC_SINK)
> > + return 1;
> > + } else {
> > + /* For DP, use values from DSC_SLICE_CAP_1 and DSC_SLICE_CAP2 
> > */
> > + u8 slice_cap2 = dsc_dpcd[DP_DSC_SLICE_CAP_2 - DP_DSC_SUPPORT];
> > +
> > + if (slice_cap2 & DP_DSC_24_PER_DP_DSC_SINK)
> > + return 24;
> > + if (slice_cap2 & DP_DSC_20_PER_DP_DSC_SINK)
> > + return 20;
> > + if (slice_cap2 & DP_DSC_16_PER_DP_DSC_SINK)
> > + return 16;
> > + if (slice_cap1 & DP_DSC_12_PER_DP_DSC_SINK)
> > + return 12;
> > + if (slice_cap1 & DP_DSC_10_PER_DP_DSC_SINK)
> > + return 10;
> > + if (slice_cap1 & DP_DSC_8_PER_DP_DSC_SINK)
> > + return 8;
> > + if (slice_cap1 & DP_DSC_6_PER_DP_DSC_SINK)
> > + return 6;
> > + if (slice_cap1 & DP_DSC_4_PER_DP_DSC_SINK)
> > + return 4;
> > + if (slice_cap1 & DP_DSC_2_PER_DP_DSC_SINK)
> > + return 2;
> > + if (slice_cap1 & DP_DSC_1_PER_DP_DSC_SINK)
> > + return 1;
> > + }
> > +
> > + return 0;
> > +}
> > +EXPORT_SYMBOL(drm_dp_dsc_sink_max_slice_count);
> > +
> > +u8 drm_dp_dsc_sink_line_buf_depth(const u8 
> > dsc_dpcd[DP_DSC_RECEIVER_CAP_SIZE])
> > +{
> > + u8 line_buf_depth = dsc_dpcd[DP_DSC_LINE_BUF_BIT_DEPTH - 
> > DP_DSC_SUPPORT];
> > +
> > + switch (line_buf_depth & DP_DSC_LINE_BUF_BIT_DEPTH_MASK) {
> > + case DP_DSC_LINE_BUF_BIT_DEPTH_9:
> > + return 9;
> > + case DP_DSC_LINE_BUF_BIT_DEPTH_10:
> > + return 10;
> > + case DP_DSC_LINE_BUF_BIT_DEPTH_11:
> > + return 11;
> > + case DP_DSC_LINE_BUF_BIT_DEPTH_12:
> > + return 12;
> > + case DP_DSC_LINE_BUF_BIT_DEPTH_13:
> > + return 13;
> > + case DP_DSC_LINE_BUF_BIT_DEPTH_14:
> > + return 14;
> > + case DP_DSC_LINE_BUF_BIT_DEPTH_15:
> > + return 15;
> > + case DP_DSC_LINE_BUF_BIT_DEPTH_16:
> > + return 16;
> > + case DP_DSC_LINE_BUF_BIT_DEPTH_8:
> > + return 8;
> > + }
> > +
> > + return 0;
> > +}
> > +EXPORT_SYMBOL(drm_dp_dsc_sink_line_buf_depth);
> > +
> > +u8 drm_dp_dsc_sink_max_color_depth(const u8 
> > dsc_dpcd[DP_DSC_RECEIVER_CAP_SIZE])
> > +{
> > + u8 color_depth = dsc_dpcd[DP_DSC_DEC_COLOR_DEPTH_CAP - 
> > DP_DSC_SUPPORT];
> > +
> > + if (color_depth & DP_DSC_

[PATCH -next] video: fbdev: Fix potential NULL pointer dereference

2019-01-30 Thread YueHaibing
There is a potential NULL pointer dereference in case
fb_create_modedb() fails and returns NULL.

Signed-off-by: YueHaibing 
---
 drivers/video/fbdev/core/fbmon.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/video/fbdev/core/fbmon.c b/drivers/video/fbdev/core/fbmon.c
index dd31289..3558a70 100644
--- a/drivers/video/fbdev/core/fbmon.c
+++ b/drivers/video/fbdev/core/fbmon.c
@@ -978,6 +978,8 @@ void fb_edid_to_monspecs(unsigned char *edid, struct 
fb_monspecs *specs)
get_monspecs(edid, specs);
 
specs->modedb = fb_create_modedb(edid, &specs->modedb_len, specs);
+   if (!specs->modedb)
+   return;
 
/*
 * Workaround for buggy EDIDs that sets that the first
-- 
2.7.0




Re: [Xen-devel] [PATCH v2] drm/xen-front: Make shmem backed display buffer coherent

2019-01-30 Thread Oleksandr Andrushchenko
Dropped in favor of https://lkml.org/lkml/2019/1/29/910

On 1/24/19 5:02 PM, Julien Grall wrote:
>
>
> On 24/01/2019 14:34, Oleksandr Andrushchenko wrote:
>> Hello, Julien!
>
> Hi,
>
>> On 1/22/19 1:44 PM, Julien Grall wrote:
>>>
>>>
>>> On 1/22/19 10:28 AM, Oleksandr Andrushchenko wrote:
 Hello, Julien!
>>>
>>> Hi,
>>>
 On 1/21/19 7:09 PM, Julien Grall wrote:
 Well, I didn't get the attributes of pages at the backend side, but 
 IMO
 those
 do not matter in my use-case (for simplicity I am not using
 zero-copying at
 backend side):
>>>
>>> They are actually important no matter what is your use case. If you
>>> access the same physical page with different attributes, then you are
>>> asking for trouble.
>> So, we have:
>>
>> DomU: frontend side
>> 
>> !PTE_RDONLY + PTE_PXN + PTE_SHARED + PTE_AF + PTE_UXN +
>> PTE_ATTRINDX(MT_NORMAL)
>
> I still don't understand how you came up with MT_NORMAL when you seem 
> to confirm...
>
>>
>> DomD: backend side
>> 
>> PTE_USER + !PTE_RDONLY + PTE_PXN + PTE_NG + PTE_CONT + PTE_TABLE_BIT +
>> PTE_UXN + PTE_ATTRINDX(MT_NORMAL)
>>
>>   From the above it seems that I don't violate cached/non-cached
>> agreement here
>>>
>>> This is why Xen imposes all the pages shared to have their memory
>>> attributes following some rules. Actually, speaking with Mark R., we
>>> may want to tight a bit more the attributes.
>>>

 1. Frontend device allocates display buffer pages which come from 
 shmem
 and have these attributes:
 !PTE_RDONLY + PTE_PXN + PTE_SHARED + PTE_AF + PTE_UXN +
 PTE_ATTRINDX(MT_NORMAL)
>>>
>>> My knowledge of Xen DRM is inexistent. However, looking at the code in
>>> 5.0-rc2, I don't seem to find the same attributes. For instance
>>> xen_drm_front_gem_prime_vmap and gem_mmap_obj are using
>>> pgprot_writecombine. So it looks like, the mapping will be
>>> non-cacheable on Arm64.
>>>
>>> Can you explain how you came up to these attributes?
>> pgprot_writecombine is PTE_ATTRINDX(MT_NORMAL_NC), so it seems to be
>> applicable here? [1]
>
> ... that MT_NORMAL_NC is used for the frontend pages.
>
> MT_NORMAL_NC is different from MT_NORMAL. The use of the former will 
> result to non-cacheable memory while the latter will result to 
> cacheable memory.
>
> To me, this looks like the exact reason why you see artifact on the 
> display buffer. As the author of this code, can you explain why you 
> decided to use pgprot_writecombine here instead of relying on the 
> default VMA prot?
>
> [...]
>
>>> We actually never required to use cache flush in other PV protocol, so
>>> I still don't understand why the PV DRM should be different here.
>> Well, you are right. But at the same time not flushing the buffer makes
>> troubles,
>> so this is why I am trying to figure out what is wrong here.
>
> The cache flush is likely hiding the real problem rather than solving it.
>
>>>
>>> To me, it looks like that you are either missing some barriers
>> Barriers for the buffer? Not sure what you mean here.
>
> If you share information between two entities, you may need some 
> ordering so the information is seen consistently by the consumer 
> side. This can be achieved by using barriers.
>
>> Even more, we have
>> a use case
>> when the buffer is not touched by CPU in DomD and is solely owned by 
>> the HW.
>
> Memory ordering issues are subtle. The fact that one of your use-case 
> works does not imply that barriers are not necessary. I am not saying 
> there are a missing barriers, I am only pointed out potential reasons.
>
> Anyway, I don't think your problem is a missing barriers here. It is 
> more likely because of mismatch memory attributes (see above).
>
> Cheers,
>


Re: [Xen-devel][PATCH] drm/xen-front: Fix mmap attributes for display buffers

2019-01-30 Thread Oleksandr Andrushchenko
On 1/29/19 9:07 PM, Julien Grall wrote:
> Hi Oleksandr,
>
> On 1/29/19 3:04 PM, Oleksandr Andrushchenko wrote:
>> From: Oleksandr Andrushchenko 
>>
>> When GEM backing storage is allocated, those are normal pages,
>> so there is no point using pgprot_writecombine while mmapping.
>> This fixes mismatch of buffer pages' memory attributes between
>> the frontend and backend which may cause screen artifacts.
>>
>> Fixes: c575b7eeb89f ("drm/xen-front: Add support for Xen PV display frontend")
>>
>> Signed-off-by: Oleksandr Andrushchenko 
>> 
>> Suggested-by: Julien Grall 
>> ---
>>   drivers/gpu/drm/xen/xen_drm_front_gem.c | 5 ++---
>>   1 file changed, 2 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c 
>> b/drivers/gpu/drm/xen/xen_drm_front_gem.c
>> index d303a2e17f5e..9d5c03d7668d 100644
>> --- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
>> +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
>> @@ -235,8 +235,7 @@ static int gem_mmap_obj(struct xen_gem_object 
>> *xen_obj,
>>   vma->vm_flags &= ~VM_PFNMAP;
>>   vma->vm_flags |= VM_MIXEDMAP;
>>   vma->vm_pgoff = 0;
>> -    vma->vm_page_prot =
>> - pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
>> +    vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
>
> The patch looks good to me. It would be worth expanding the comment a 
> bit before to explain that we overwrite vm_page_prot to use cacheable 
> attribute as required by the Xen ABI.
>
Ok, then I'll put:

+   /*
+    * According to Xen on ARM ABI (xen/include/public/arch-arm.h):
+    * all memory which is shared with other entities in the system
+    * (including the hypervisor and other guests) must reside in memory
+    * which is mapped as Normal Inner Write-Back Outer Write-Back
+    * Inner-Shareable.
+    */
     vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
Please let me know if this is not what you want
> With the comment updated:
>
> Acked-by: Julien Grall 
>
> Cheers,
>
Thank you,
Oleksandr


[Bug 108720] Possible GPU hangs with Cemu Emulator

2019-01-30 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=108720

CheatCodesOfLife  changed:

   What|Removed |Added

 Resolution|--- |DUPLICATE
 Status|NEEDINFO|RESOLVED

--- Comment #4 from CheatCodesOfLife  ---
If you want to reproduce this with a 100% success rate, try running Mario Kart
8 with any Vega based card (Vega56,64,2200G).

In fact, I'm guessing he means "Vega 64" when he said "Vulkan 64"

I think this is a duplicate of my existing bug:
https://bugs.freedesktop.org/show_bug.cgi?id=107612

*** This bug has been marked as a duplicate of bug 107612 ***

-- 
You are receiving this mail because:
You are the assignee for the bug.


[Bug 107612] [Vega10] Hard Lock [gfxhub] VMC page fault when opening Mario Kart 8 in Cemu under wine

2019-01-30 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=107612

CheatCodesOfLife  changed:

   What|Removed |Added

 CC||zacharybin...@hotmail.com

--- Comment #3 from CheatCodesOfLife  ---
*** Bug 108720 has been marked as a duplicate of this bug. ***



[Bug 109493] [CI][DRMTIP] igt@gem_cpu_reloc@forked - fail - igt_skip: Assertion `!test_child' failed.

2019-01-30 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=109493

--- Comment #3 from Chris Wilson  ---
Hmm, there are no skips here, so the test failed and we got no error message
from the child. That's quite a nasty igt bug.



Re: [PATCH 2/4] staging: android: ion: Restrict cache maintenance to dma mapped memory

2019-01-30 Thread Brian Starkey
Hi Liam,

On Tue, Jan 29, 2019 at 03:44:53PM -0800, Liam Mark wrote:
> On Fri, 18 Jan 2019, Liam Mark wrote:
> 
> > On Fri, 18 Jan 2019, Andrew F. Davis wrote:
> > 
> > > On 1/18/19 12:37 PM, Liam Mark wrote:
> > > > The ION begin_cpu_access and end_cpu_access functions use the
> > > > dma_sync_sg_for_cpu and dma_sync_sg_for_device APIs to perform cache
> > > > maintenance.
> > > > 
> > > > Currently it is possible to apply cache maintenance, via the
> > > > begin_cpu_access and end_cpu_access APIs, to ION buffers which are not
> > > > dma mapped.
> > > > 
> > > > The dma sync sg APIs should not be called on sg lists which have not 
> > > > been
> > > > dma mapped as this can result in cache maintenance being applied to the
> > > > wrong address. If an sg list has not been dma mapped then its 
> > > > dma_address
> > > > field has not been populated, some dma ops such as the swiotlb_dma_ops 
> > > > ops
> > > > use the dma_address field to calculate the address onto which to apply
> > > > cache maintenance.
> > > > 
> > > > Also I don’t think we want CMOs to be applied to a buffer which is not
> > > > dma mapped as the memory should already be coherent for access from the
> > > > CPU. Any CMOs required for device access taken care of in the
> > > > dma_buf_map_attachment and dma_buf_unmap_attachment calls.
> > > > So really it only makes sense for begin_cpu_access and end_cpu_access to
> > > > apply CMOs if the buffer is dma mapped.
> > > > 
> > > > Fix the ION begin_cpu_access and end_cpu_access functions to only apply
> > > > cache maintenance to buffers which are dma mapped.
> > > > 
> > > > Fixes: 2a55e7b5e544 ("staging: android: ion: Call dma_map_sg for syncing and mapping")
> > > > Signed-off-by: Liam Mark 
> > > > ---
> > > >  drivers/staging/android/ion/ion.c | 26 +-
> > > >  1 file changed, 21 insertions(+), 5 deletions(-)
> > > > 
> > > > diff --git a/drivers/staging/android/ion/ion.c 
> > > > b/drivers/staging/android/ion/ion.c
> > > > index 6f5afab7c1a1..1fe633a7fdba 100644
> > > > --- a/drivers/staging/android/ion/ion.c
> > > > +++ b/drivers/staging/android/ion/ion.c
> > > > @@ -210,6 +210,7 @@ struct ion_dma_buf_attachment {
> > > > struct device *dev;
> > > > struct sg_table *table;
> > > > struct list_head list;
> > > > +   bool dma_mapped;
> > > >  };
> > > >  
> > > >  static int ion_dma_buf_attach(struct dma_buf *dmabuf,
> > > > @@ -231,6 +232,7 @@ static int ion_dma_buf_attach(struct dma_buf 
> > > > *dmabuf,
> > > >  
> > > > a->table = table;
> > > > a->dev = attachment->dev;
> > > > +   a->dma_mapped = false;
> > > > INIT_LIST_HEAD(&a->list);
> > > >  
> > > > attachment->priv = a;
> > > > @@ -261,12 +263,18 @@ static struct sg_table *ion_map_dma_buf(struct 
> > > > dma_buf_attachment *attachment,
> > > >  {
> > > > struct ion_dma_buf_attachment *a = attachment->priv;
> > > > struct sg_table *table;
> > > > +   struct ion_buffer *buffer = attachment->dmabuf->priv;
> > > >  
> > > > table = a->table;
> > > >  
> > > > +   mutex_lock(&buffer->lock);
> > > > if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
> > > > -   direction))
> > > > +   direction)) {
> > > > +   mutex_unlock(&buffer->lock);
> > > > return ERR_PTR(-ENOMEM);
> > > > +   }
> > > > +   a->dma_mapped = true;
> > > > +   mutex_unlock(&buffer->lock);
> > > >  
> > > > return table;
> > > >  }
> > > > @@ -275,7 +283,13 @@ static void ion_unmap_dma_buf(struct 
> > > > dma_buf_attachment *attachment,
> > > >   struct sg_table *table,
> > > >   enum dma_data_direction direction)
> > > >  {
> > > > +   struct ion_dma_buf_attachment *a = attachment->priv;
> > > > +   struct ion_buffer *buffer = attachment->dmabuf->priv;
> > > > +
> > > > +   mutex_lock(&buffer->lock);
> > > > dma_unmap_sg(attachment->dev, table->sgl, table->nents, 
> > > > direction);
> > > > +   a->dma_mapped = false;
> > > > +   mutex_unlock(&buffer->lock);
> > > >  }
> > > >  
> > > >  static int ion_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
> > > > @@ -346,8 +360,9 @@ static int ion_dma_buf_begin_cpu_access(struct 
> > > > dma_buf *dmabuf,
> > > >  
> > > > mutex_lock(&buffer->lock);
> > > > list_for_each_entry(a, &buffer->attachments, list) {
> > > 
> > > When no devices are attached then buffer->attachments is empty and the
> > > below does not run, so if I understand this patch correctly then what
> > > you are protecting against is CPU access in the window after
> > > dma_buf_attach but before dma_buf_map.
> > > 
> > 
> > Yes
> > 
> > > This is the kind of thing that again makes me think a couple more
> > > ordering requirements on DMA-BUF ops are needed. DMA-BUFs do not require
> > > the backing memory to be 

Re: [PATCH] drm/amdgpu: Transfer fences to dmabuf importer

2019-01-30 Thread Christian König

Am 30.01.19 um 11:55 schrieb Chris Wilson:

amdgpu only uses shared-fences internally, but dmabuf importers rely on
implicit write hazard tracking via the reservation_object.fence_excl.
For example, the importer uses the write hazard for timing a page flip to
only occur after the exporter has finished flushing its write into the
surface. As such, on exporting a dmabuf, we must either flush all
outstanding fences (for we do not know which are writes and should have
been exclusive) or alternatively create a new exclusive fence that is
the composite of all the existing shared fences, and so will only be
signaled when all earlier fences are signaled (ensuring that we can not
be signaled before the completion of any earlier write).

v2: reservation_object is already locked by amdgpu_bo_reserve()
v3: Replace looping with get_fences_rcu and special case the promotion
of a single shared fence directly to an exclusive fence, bypassing the
fence array.
v4: Drop the fence array ref after assigning to reservation_object

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107341
Testcase: igt/amd_prime/amd-to-i915
References: 8e94a46c1770 ("drm/amdgpu: Attach exclusive fence to prime exported bo's. (v5)")
Signed-off-by: Chris Wilson 
Cc: Alex Deucher 
Cc: "Christian König" 
Reviewed-by: "Christian König" 
---
We may disagree on the best long term strategy for fence semantics, but
I think this is still a nice short term solution to the blocking
behaviour on exporting amdgpu to prime.


Yeah, I can agree on that. And just pushed the patch to 
amd-staging-drm-next.


Christian.


-Chris
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c | 59 ---
  1 file changed, 51 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
index 71913a18d142..a38e0fb4a6fe 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
@@ -38,6 +38,7 @@
  #include "amdgpu_gem.h"
  #include 
  #include 
+#include 
  
  /**
   * amdgpu_gem_prime_get_sg_table - &drm_driver.gem_prime_get_sg_table
@@ -187,6 +188,48 @@ amdgpu_gem_prime_import_sg_table(struct drm_device *dev,
return ERR_PTR(ret);
  }
  
+static int
+__reservation_object_make_exclusive(struct reservation_object *obj)
+{
+   struct dma_fence **fences;
+   unsigned int count;
+   int r;
+
+   if (!reservation_object_get_list(obj)) /* no shared fences to convert */
+   return 0;
+
+   r = reservation_object_get_fences_rcu(obj, NULL, &count, &fences);
+   if (r)
+   return r;
+
+   if (count == 0) {
+   /* Now that was unexpected. */
+   } else if (count == 1) {
+   reservation_object_add_excl_fence(obj, fences[0]);
+   dma_fence_put(fences[0]);
+   kfree(fences);
+   } else {
+   struct dma_fence_array *array;
+
+   array = dma_fence_array_create(count, fences,
+  dma_fence_context_alloc(1), 0,
+  false);
+   if (!array)
+   goto err_fences_put;
+
+   reservation_object_add_excl_fence(obj, &array->base);
+   dma_fence_put(&array->base);
+   }
+
+   return 0;
+
+err_fences_put:
+   while (count--)
+   dma_fence_put(fences[count]);
+   kfree(fences);
+   return -ENOMEM;
+}
+
  /**
   * amdgpu_gem_map_attach - &dma_buf_ops.attach implementation
   * @dma_buf: Shared DMA buffer
@@ -218,16 +261,16 @@ static int amdgpu_gem_map_attach(struct dma_buf *dma_buf,
  
  	if (attach->dev->driver != adev->dev->driver) {

/*
-* Wait for all shared fences to complete before we switch to 
future
-* use of exclusive fence on this prime shared bo.
+* We only create shared fences for internal use, but importers
+* of the dmabuf rely on exclusive fences for implicitly
+* tracking write hazards. As any of the current fences may
+* correspond to a write, we need to convert all existing
+* fences on the reservation object into a single exclusive
+* fence.
 */
-   r = reservation_object_wait_timeout_rcu(bo->tbo.resv,
-   true, false,
-   MAX_SCHEDULE_TIMEOUT);
-   if (unlikely(r < 0)) {
-   DRM_DEBUG_PRIME("Fence wait failed: %li\n", r);
+   r = __reservation_object_make_exclusive(bo->tbo.resv);
+   if (r)
goto error_unreserve;
-   }
}
  
  	/* pin buffer into GTT */



RE: [v9 3/3] drm/i915: Attach colorspace property and enable modeset

2019-01-30 Thread Shankar, Uma


>-Original Message-
>From: dri-devel [mailto:dri-devel-boun...@lists.freedesktop.org] On Behalf Of
>Maarten Lankhorst
>Sent: Wednesday, January 30, 2019 3:27 PM
>To: Shankar, Uma ; intel-...@lists.freedesktop.org;
>dri-devel@lists.freedesktop.org
>Cc: emil.l.veli...@gmail.com; Syrjala, Ville ; 
>Lankhorst,
>Maarten 
>Subject: Re: [v9 3/3] drm/i915: Attach colorspace property and enable modeset
>
>Op 29-01-2019 om 19:50 schreef Uma Shankar:
>> This patch attaches the colorspace connector property to the hdmi
>> connector. Based on colorspace change, modeset will be triggered to
>> switch to new colorspace.
>>
>> Based on colorspace property value create an infoframe with
>> appropriate colorspace. This can be used to send an infoframe packet
>> with proper colorspace value set which will help to enable wider color
>> gamut like BT2020 on sink.
>>
>> This patch attaches and enables HDMI colorspace, DP will be taken care
>> separately.
>>
>> v2: Merged the changes of creating infoframe as well to this patch as
>> per Maarten's suggestion.
>>
>> v3: Addressed review comments from Shashank. Separated HDMI and DP
>> colorspaces as suggested by Ville and Maarten.
>>
>> v4: Addressed Chris and Ville's review comments, and created a common
>> colorspace property for DP and HDMI, filtered the list based on the
>> colorspaces supported by the respective protocol standard. Handle the
>> default case properly.
>>
>> v5: Merged the DP handling along with platform colorspace handling as
>> per Shashank's comments.
>>
>> v6: Addressed Maarten's review comment and limited this currently to
>> non lspcon based devices.
>>
>> v7: Reverted to old design of exposing all colorspaces to userspace as
>> per Ville's review comment
>>
>> Signed-off-by: Uma Shankar 
>> ---
>>  drivers/gpu/drm/i915/intel_atomic.c|  1 +
>>  drivers/gpu/drm/i915/intel_connector.c |  8 
>>  drivers/gpu/drm/i915/intel_drv.h   |  1 +
>>  drivers/gpu/drm/i915/intel_hdmi.c  | 25 +
>>  4 files changed, 35 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/i915/intel_atomic.c
>> b/drivers/gpu/drm/i915/intel_atomic.c
>> index 16263ad..76b7114 100644
>> --- a/drivers/gpu/drm/i915/intel_atomic.c
>> +++ b/drivers/gpu/drm/i915/intel_atomic.c
>> @@ -124,6 +124,7 @@ int intel_digital_connector_atomic_check(struct
>drm_connector *conn,
>>   */
>>  if (new_conn_state->force_audio != old_conn_state->force_audio ||
>>  new_conn_state->broadcast_rgb != old_conn_state->broadcast_rgb ||
>> +new_state->colorspace != old_state->colorspace ||
>
>I only care if there's a new version, but can you use xxx_conn_state->base ?
>
>Can fixup if patch is ready otherwise.

Sure, I can fix this. Will send out the next version along with your RB and 
Jani's ack.

Thanks for the review.

Regards,
Uma Shankar

>
>Otherwise series looks good, so
>
>Reviewed-by: Maarten Lankhorst  for the
>whole series. :)
>
>>  new_conn_state->base.picture_aspect_ratio != old_conn_state->base.picture_aspect_ratio ||
>>  new_conn_state->base.content_type != old_conn_state->base.content_type ||
>>  new_conn_state->base.scaling_mode != old_conn_state->base.scaling_mode)
>> diff --git a/drivers/gpu/drm/i915/intel_connector.c
>> b/drivers/gpu/drm/i915/intel_connector.c
>> index ee16758..ac2aed7 100644
>> --- a/drivers/gpu/drm/i915/intel_connector.c
>> +++ b/drivers/gpu/drm/i915/intel_connector.c
>> @@ -265,3 +265,11 @@ int intel_ddc_get_modes(struct drm_connector
>*connector,
>>  connector->dev->mode_config.aspect_ratio_property,
>>  DRM_MODE_PICTURE_ASPECT_NONE);
>>  }
>> +
>> +void
>> +intel_attach_colorspace_property(struct drm_connector *connector) {
>> +if (!drm_mode_create_colorspace_property(connector))
>> +drm_object_attach_property(&connector->base,
>> +connector->colorspace_property, 0); }
>> diff --git a/drivers/gpu/drm/i915/intel_drv.h
>> b/drivers/gpu/drm/i915/intel_drv.h
>> index 85b913e..5178a9a 100644
>> --- a/drivers/gpu/drm/i915/intel_drv.h
>> +++ b/drivers/gpu/drm/i915/intel_drv.h
>> @@ -1783,6 +1783,7 @@ int intel_connector_update_modes(struct
>> drm_connector *connector,  void
>> intel_attach_force_audio_property(struct drm_connector *connector);
>> void intel_attach_broadcast_rgb_property(struct drm_connector
>> *connector);  void intel_attach_aspect_ratio_property(struct
>> drm_connector *connector);
>> +void intel_attach_colorspace_property(struct drm_connector
>> +*connector);
>>
>>  /* intel_csr.c */
>>  void intel_csr_ucode_init(struct drm_i915_private *); diff --git
>> a/drivers/gpu/drm/i915/intel_hdmi.c
>> b/drivers/gpu/drm/i915/intel_hdmi.c
>> index 97a98e1..5c5009d 100644
>> --- a/drivers/gpu/drm/i915/intel_hdmi.c
>> +++ b/drivers/gpu/drm/i915/intel_hdmi.c
>> @@ -498,6 +498,24 @@ static void intel_hdmi_set_avi_infoframe(struct
>intel_encoder *encoder,
>>  else
>>  frame.avi.colorspace = HDMI

[v10 1/3] drm: Add HDMI colorspace property

2019-01-30 Thread Uma Shankar
Create a new connector property to program colorspace to sink
devices. Modern sink devices support more than 1 type of
colorspace like 601, 709, BT2020 etc. This helps to switch
based on content type which is to be displayed. The decision
lies with compositors as to in which scenarios, a particular
colorspace will be picked.

This will be helpful mostly to switch to higher gamut colorspaces
like BT2020 when the media content is encoded as BT2020, thereby
giving a good visual experience to users.

The expectation from userspace is that it should parse the EDID
and get supported colorspaces. Use this property and switch to the
one supported. Sink supported colorspaces should be retrieved by
userspace from EDID and driver will not explicitly expose them.

Basically the expectation from userspace is:
 - Set up CRTC DEGAMMA/CTM/GAMMA to convert to some sink
   colorspace
 - Set this new property to let the sink know what it
   converted the CRTC output to.

v2: Addressed Maarten and Ville's review comments. Enhanced
the colorspace enum to incorporate both HDMI and DP supported
colorspaces. Also, added a default option for colorspace.

v3: Removed Adobe references from enum definitions as per
Ville, Hans Verkuil and Jonas Karlman suggestions. Changed
Default to an unset state where driver will assign the colorspace
if not chosen by user, suggested by Ville and Maarten. Addressed
other misc review comments from Maarten. Split the changes to
have separate colorspace property for DP and HDMI.

v4: Addressed Chris and Ville's review comments, and created a
common colorspace property for DP and HDMI, filtered the list
based on the colorspaces supported by the respective protocol
standard.

v5: Made the property creation helper accept enum list based on
platform capabilities as suggested by Shashank. Consolidated HDMI
and DP property creation in the common helper.

v6: Addressed Shashank's review comments.

v7: Added defines instead of enum in uapi as per Brian Starkey's
suggestion in order to go with string matching at userspace. Updated
the commit message to add more details as well kernel docs.

v8: Addressed Maarten's review comments.

v9: Removed macro defines from uapi as per Brian Starkey and Daniel
Stone's comments and moved to drm include file. Moved back to older
design with exposing all HDMI colorspaces to userspace since infoframe
capability is there even on legacy platforms, as per Ville's review
comments.

v10: Fixed sparse warnings, added Maarten's Reviewed-by and Jani's Ack.

Signed-off-by: Uma Shankar 
Acked-by: Jani Nikula 
Reviewed-by: Shashank Sharma 
Reviewed-by: Maarten Lankhorst 
---
 drivers/gpu/drm/drm_atomic_uapi.c |  4 +++
 drivers/gpu/drm/drm_connector.c   | 75 +++
 include/drm/drm_connector.h   | 46 
 3 files changed, 125 insertions(+)

diff --git a/drivers/gpu/drm/drm_atomic_uapi.c b/drivers/gpu/drm/drm_atomic_uapi.c
index 9a1f41a..9b5d44f 100644
--- a/drivers/gpu/drm/drm_atomic_uapi.c
+++ b/drivers/gpu/drm/drm_atomic_uapi.c
@@ -746,6 +746,8 @@ static int drm_atomic_connector_set_property(struct drm_connector *connector,
 			return -EINVAL;
 		}
 		state->content_protection = val;
+	} else if (property == connector->colorspace_property) {
+		state->colorspace = val;
 	} else if (property == config->writeback_fb_id_property) {
 		struct drm_framebuffer *fb = drm_framebuffer_lookup(dev, NULL, val);
 		int ret = drm_atomic_set_writeback_fb_for_connector(state, fb);
@@ -814,6 +816,8 @@ static int drm_atomic_connector_get_property(struct drm_connector *connector,
 		*val = state->picture_aspect_ratio;
 	} else if (property == config->content_type_property) {
 		*val = state->content_type;
+	} else if (property == connector->colorspace_property) {
+		*val = state->colorspace;
 	} else if (property == connector->scaling_mode_property) {
 		*val = state->scaling_mode;
 	} else if (property == connector->content_protection_property) {
diff --git a/drivers/gpu/drm/drm_connector.c b/drivers/gpu/drm/drm_connector.c
index 8475396..ed10dd9 100644
--- a/drivers/gpu/drm/drm_connector.c
+++ b/drivers/gpu/drm/drm_connector.c
@@ -826,6 +826,30 @@ int drm_display_info_set_bus_formats(struct drm_display_info *info,
 };
 DRM_ENUM_NAME_FN(drm_get_content_protection_name, drm_cp_enum_list)
 
+static const struct drm_prop_enum_list hdmi_colorspaces[] = {
+   /* For Default case, driver will set the colorspace */
+   { DRM_MODE_COLORIMETRY_DEFAULT, "Default" },
+   /* Standard Definition Colorimetry based on CEA 861 */
+   { DRM_MODE_COLORIMETRY_ITU_601, "ITU_601" },
+   { DRM_MODE_COLORIMETRY_ITU_709, "ITU_709" },
+   /* Standard Definition Colorimetry based on IEC 61966-2-4 */
+   { DRM_MODE_COLORIMETRY_XV_YCC_601, "XV_YCC_601" },
+   /* High D

[v10 0/3] Add Colorspace connector property interface

2019-01-30 Thread Uma Shankar
This patch series creates a new connector property to program
colorspace to sink devices. Modern sink devices support more
than 1 type of colorspace like 601, 709, BT2020 etc. This helps
to switch based on content type which is to be displayed. The
decision lies with compositors as to in which scenarios, a
particular colorspace will be picked.

This will be helpful mostly to switch to higher gamut colorspaces
like BT2020 when the media content is encoded as BT2020, thereby
giving a good visual experience to users.

The expectation from userspace is that it should parse the EDID
and get supported colorspaces. Use this property and switch to the
one supported. Sink supported colorspaces should be retrieved by
userspace from EDID and driver will not explicitly expose them.

Basically the expectation from userspace is:
 - Set up CRTC DEGAMMA/CTM/GAMMA to convert to some sink
   colorspace
 - Set this new property to let the sink know what it
   converted the CRTC output to.
 - This property is just to inform sink what colorspace
   source is trying to drive.

This has been tested using xrandr with the below command:
xrandr --output HDMI2 --set "Colorspace" "BT2020_rgb"

v2: Addressed Ville and Maarten's review comments. Merged the 2nd
and 3rd patch into one common logical patch.

v3: Removed Adobe references from enum definitions as per
Ville, Hans Verkuil and Jonas Karlman suggestions. Changed
default to an unset state where driver will assign the colorspace
when not chosen by user, suggested by Ville and Maarten. Addressed
other misc review comments from Maarten. Split the changes to
have separate colorspace property for DP and HDMI.

v4: Addressed Chris and Ville's review comments, and created a
common colorspace property for DP and HDMI, filtered the list
based on the colorspaces supported by the respective protocol
standard.

v5: Modified the colorspace property creation helper to take
platform specific enum list based on the capabilities of the
platform as suggested by Shashank. With this there is no need
for segregation between DP and HDMI.

v6: Addressed Shashank's review comments.

v7: Added defines instead of enum in uapi as per Brian Starkey's
suggestion in order to go with string matching at userspace. Updated
the kernel doc as well with more details.

v8: Addressed Maarten's review comments.

v9: Removed macro defines from uapi as per Brian Starkey and Daniel
Stone's comments and moved to drm include file. Moved back to older
design with exposing all HDMI colorspaces to userspace since infoframe
capability is there even on legacy platforms, as per Ville's review
comments.

v10: Addressed Maarten's review comment, added Maarten's Reviewed-by
and Jani Nikula's Ack. Also fixed sparse warnings and checkpatch
complaints.

Uma Shankar (3):
  drm: Add HDMI colorspace property
  drm: Add DP colorspace property
  drm/i915: Attach colorspace property and enable modeset

 drivers/gpu/drm/drm_atomic_uapi.c  |   4 ++
 drivers/gpu/drm/drm_connector.c| 104 +
 drivers/gpu/drm/i915/intel_atomic.c|   1 +
 drivers/gpu/drm/i915/intel_connector.c |   8 +++
 drivers/gpu/drm/i915/intel_drv.h   |   1 +
 drivers/gpu/drm/i915/intel_hdmi.c  |  25 
 include/drm/drm_connector.h|  46 +++
 7 files changed, 189 insertions(+)

-- 
1.9.1



[v10 2/3] drm: Add DP colorspace property

2019-01-30 Thread Uma Shankar
This patch adds a DP colorspace property, enabling
userspace to switch to various supported colorspaces.
This will help enable BT2020 along with other colorspaces.

v2: Addressed Maarten and Ville's review comments. Enhanced
the colorspace enum to incorporate both HDMI and DP supported
colorspaces. Also, added a default option for colorspace.

v3: Split the changes to have separate colorspace property for
DP and HDMI.

v4: Addressed Chris and Ville's review comments, and created a
common colorspace property for DP and HDMI, filtered the list
based on the colorspaces supported by the respective protocol
standard.

v5: Merged the DP handling along with platform colorspace
handling as per Shashank's comments.

v6: Reverted to old design of exposing all colorspaces to
userspace as per Ville's review comment.

v7: Fixed sparse warnings, added Maarten's Reviewed-by and Jani's Ack.

Signed-off-by: Uma Shankar 
Acked-by: Jani Nikula 
Reviewed-by: Maarten Lankhorst 
---
 drivers/gpu/drm/drm_connector.c | 31 +++
 1 file changed, 31 insertions(+)

diff --git a/drivers/gpu/drm/drm_connector.c b/drivers/gpu/drm/drm_connector.c
index ed10dd9..b331be8 100644
--- a/drivers/gpu/drm/drm_connector.c
+++ b/drivers/gpu/drm/drm_connector.c
@@ -850,6 +850,29 @@ int drm_display_info_set_bus_formats(struct drm_display_info *info,
{ DRM_MODE_COLORIMETRY_BT2020_CYCC, "BT2020_CYCC" },
 };
 
+static const struct drm_prop_enum_list dp_colorspaces[] = {
+   /* For Default case, driver will set the colorspace */
+   { DRM_MODE_COLORIMETRY_DEFAULT, "Default" },
+   /* Standard Definition Colorimetry based on CEA 861 */
+   { DRM_MODE_COLORIMETRY_ITU_601, "ITU_601" },
+   { DRM_MODE_COLORIMETRY_ITU_709, "ITU_709" },
+   /* Standard Definition Colorimetry based on IEC 61966-2-4 */
+   { DRM_MODE_COLORIMETRY_XV_YCC_601, "XV_YCC_601" },
+   /* High Definition Colorimetry based on IEC 61966-2-4 */
+   { DRM_MODE_COLORIMETRY_XV_YCC_709, "XV_YCC_709" },
+   /* Colorimetry based on IEC 61966-2-5 */
+   { DRM_MODE_COLORIMETRY_OPRGB, "opRGB" },
+   /* DP MSA Colorimetry */
+   { DRM_MODE_DP_COLORIMETRY_Y_CBCR_ITU_601, "YCBCR_ITU_601" },
+   { DRM_MODE_DP_COLORIMETRY_Y_CBCR_ITU_709, "YCBCR_ITU_709" },
+   { DRM_MODE_DP_COLORIMETRY_SRGB, "sRGB" },
+   { DRM_MODE_DP_COLORIMETRY_RGB_WIDE_GAMUT, "RGB Wide Gamut" },
+   { DRM_MODE_DP_COLORIMETRY_SCRGB, "scRGB" },
+   { DRM_MODE_DP_COLORIMETRY_DCI_P3, "DCI-P3" },
+   { DRM_MODE_DP_COLORIMETRY_CUSTOM_COLOR_PROFILE, "Custom Profile" },
+};
+
+
 /**
  * DOC: standard connector properties
  *
@@ -1611,6 +1634,14 @@ int drm_mode_create_colorspace_property(struct drm_connector *connector)
ARRAY_SIZE(hdmi_colorspaces));
if (!prop)
return -ENOMEM;
+   } else if (connector->connector_type == DRM_MODE_CONNECTOR_eDP ||
+		   connector->connector_type == DRM_MODE_CONNECTOR_DisplayPort) {
+   prop = drm_property_create_enum(dev, DRM_MODE_PROP_ENUM,
+   "Colorspace", dp_colorspaces,
+   ARRAY_SIZE(dp_colorspaces));
+
+   if (!prop)
+   return -ENOMEM;
} else {
DRM_DEBUG_KMS("Colorspace property not supported\n");
return 0;
-- 
1.9.1



[v10 3/3] drm/i915: Attach colorspace property and enable modeset

2019-01-30 Thread Uma Shankar
This patch attaches the colorspace connector property to the
hdmi connector. Based on colorspace change, modeset will be
triggered to switch to new colorspace.

Based on colorspace property value create an infoframe
with appropriate colorspace. This can be used to send an
infoframe packet with proper colorspace value set which
will help to enable wider color gamut like BT2020 on sink.

This patch attaches and enables HDMI colorspace, DP will be
taken care separately.

v2: Merged the changes of creating infoframe as well to this
patch as per Maarten's suggestion.

v3: Addressed review comments from Shashank. Separated HDMI
and DP colorspaces as suggested by Ville and Maarten.

v4: Addressed Chris and Ville's review comments, and created a
common colorspace property for DP and HDMI, filtered the list
based on the colorspaces supported by the respective protocol
standard. Handle the default case properly.

v5: Merged the DP handling along with platform colorspace
handling as per Shashank's comments.

v6: Reverted to old design of exposing all colorspaces to
userspace as per Ville's review comment.

v7: Fixed a checkpatch complaint, addressed Maarten's review
comment, and added Maarten's Reviewed-by and Jani's Ack.

Signed-off-by: Uma Shankar 
Acked-by: Jani Nikula 
Reviewed-by: Maarten Lankhorst 
---
 drivers/gpu/drm/i915/intel_atomic.c|  1 +
 drivers/gpu/drm/i915/intel_connector.c |  8 
 drivers/gpu/drm/i915/intel_drv.h   |  1 +
 drivers/gpu/drm/i915/intel_hdmi.c  | 25 +
 4 files changed, 35 insertions(+)

diff --git a/drivers/gpu/drm/i915/intel_atomic.c b/drivers/gpu/drm/i915/intel_atomic.c
index 16263ad..a4bb906 100644
--- a/drivers/gpu/drm/i915/intel_atomic.c
+++ b/drivers/gpu/drm/i915/intel_atomic.c
@@ -124,6 +124,7 @@ int intel_digital_connector_atomic_check(struct drm_connector *conn,
 	 */
 	if (new_conn_state->force_audio != old_conn_state->force_audio ||
 	    new_conn_state->broadcast_rgb != old_conn_state->broadcast_rgb ||
+	    new_conn_state->base.colorspace != old_conn_state->base.colorspace ||
 	    new_conn_state->base.picture_aspect_ratio != old_conn_state->base.picture_aspect_ratio ||
 	    new_conn_state->base.content_type != old_conn_state->base.content_type ||
 	    new_conn_state->base.scaling_mode != old_conn_state->base.scaling_mode)
diff --git a/drivers/gpu/drm/i915/intel_connector.c b/drivers/gpu/drm/i915/intel_connector.c
index ee16758..8352d0b 100644
--- a/drivers/gpu/drm/i915/intel_connector.c
+++ b/drivers/gpu/drm/i915/intel_connector.c
@@ -265,3 +265,11 @@ int intel_ddc_get_modes(struct drm_connector *connector,
connector->dev->mode_config.aspect_ratio_property,
DRM_MODE_PICTURE_ASPECT_NONE);
 }
+
+void
+intel_attach_colorspace_property(struct drm_connector *connector)
+{
+   if (!drm_mode_create_colorspace_property(connector))
+   drm_object_attach_property(&connector->base,
+  connector->colorspace_property, 0);
+}
diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
index 85b913e..5178a9a 100644
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -1783,6 +1783,7 @@ int intel_connector_update_modes(struct drm_connector *connector,
 void intel_attach_force_audio_property(struct drm_connector *connector);
 void intel_attach_broadcast_rgb_property(struct drm_connector *connector);
 void intel_attach_aspect_ratio_property(struct drm_connector *connector);
+void intel_attach_colorspace_property(struct drm_connector *connector);
 
 /* intel_csr.c */
 void intel_csr_ucode_init(struct drm_i915_private *);
diff --git a/drivers/gpu/drm/i915/intel_hdmi.c b/drivers/gpu/drm/i915/intel_hdmi.c
index 97a98e1..5c5009d 100644
--- a/drivers/gpu/drm/i915/intel_hdmi.c
+++ b/drivers/gpu/drm/i915/intel_hdmi.c
@@ -498,6 +498,24 @@ static void intel_hdmi_set_avi_infoframe(struct intel_encoder *encoder,
else
frame.avi.colorspace = HDMI_COLORSPACE_RGB;
 
+   if (conn_state->colorspace == DRM_MODE_COLORIMETRY_DEFAULT) {
+   /* Set ITU 709 as default for HDMI */
+   frame.avi.colorimetry = DRM_MODE_COLORIMETRY_ITU_709;
+   } else if (conn_state->colorspace < DRM_MODE_COLORIMETRY_XV_YCC_601) {
+   frame.avi.colorimetry = conn_state->colorspace;
+   } else {
+   frame.avi.colorimetry = HDMI_COLORIMETRY_EXTENDED;
+   /*
+* Starting from extended list where COLORIMETRY_XV_YCC_601
+* is the first extended mode and its value is 0 as per HDMI
+* specification.
+* TODO: This needs to be extended for LSPCON implementation
+* as well. Will be implemented separately.
+*/
+   frame.avi.extended_colorimetry = conn_state->colorspace -
+   DRM_MODE_COL

Re: [PATCH v5 2/2] drm/panel: Add Feiyang FY07024DI26A30-D MIPI-DSI LCD panel

2019-01-30 Thread Jagan Teki
On Tue, Jan 29, 2019 at 8:49 PM Sam Ravnborg  wrote:
>
> Hi Jagan.
>
> > >
> > > I see DRM_MODE_ARG as mode argument, that print all mode timings but
> > > here we need only 3 timings out of it. do we really need? if yes
> > > please suggest an example.
> >
> > fyi: sent v6 for this except this change. Let me know if you have any
> > comments on this.
>
> Drivers look fine, the above was just a quick suggestion to use some
> existing plumbing.
>
> You have done a nice job following up on all the feedback.

Thanks, hope v6 will be applied soon.


Re: [PATCH] backlight: pwm_bl: Use gpiod_get_value_cansleep() to get initial state

2019-01-30 Thread Lee Jones
On Sun, 27 Jan 2019, Chen-Yu Tsai wrote:

> gpiod_get_value() gives out a warning if access to the underlying gpiochip
> requires sleeping, which is common for I2C based chips:
> 
> WARNING: CPU: 0 PID: 77 at drivers/gpio/gpiolib.c:2500 
> gpiod_get_value+0xd0/0x100
> Modules linked in:
> CPU: 0 PID: 77 Comm: kworker/0:2 Not tainted 
> 4.14.0-rc3-00589-gf32897915d48-dirty #90
> Hardware name: Allwinner sun4i/sun5i Families
> Workqueue: events deferred_probe_work_func
> [] (unwind_backtrace) from [] (show_stack+0x10/0x14)
> [] (show_stack) from [] (dump_stack+0x88/0x9c)
> [] (dump_stack) from [] (__warn+0xe8/0x100)
> [] (__warn) from [] (warn_slowpath_null+0x20/0x28)
> [] (warn_slowpath_null) from [] 
> (gpiod_get_value+0xd0/0x100)
> [] (gpiod_get_value) from [] 
> (pwm_backlight_probe+0x238/0x508)
> [] (pwm_backlight_probe) from [] 
> (platform_drv_probe+0x50/0xac)
> [] (platform_drv_probe) from [] 
> (driver_probe_device+0x238/0x2e8)
> [] (driver_probe_device) from [] 
> (bus_for_each_drv+0x44/0x94)
> [] (bus_for_each_drv) from [] 
> (__device_attach+0xb0/0x114)
> [] (__device_attach) from [] 
> (bus_probe_device+0x84/0x8c)
> [] (bus_probe_device) from [] 
> (deferred_probe_work_func+0x50/0x14c)
> [] (deferred_probe_work_func) from [] 
> (process_one_work+0x1ec/0x414)
> [] (process_one_work) from [] 
> (worker_thread+0x2b0/0x5a0)
> [] (worker_thread) from [] (kthread+0x14c/0x154)
> [] (kthread) from [] (ret_from_fork+0x14/0x24)
> 
> This was missed in commit 0c9501f823a4 ("backlight: pwm_bl: Handle gpio
> that can sleep"). The code was then moved to a separate function in
> commit 7613c922315e ("backlight: pwm_bl: Move the checks for initial power
> state to a separate function").
> 
> The only usage of gpiod_get_value() is during the probe stage, which is
> safe to sleep in. Switch to gpiod_get_value_cansleep().
> 
> Fixes: 0c9501f823a4 ("backlight: pwm_bl: Handle gpio that can sleep")
> Signed-off-by: Chen-Yu Tsai 
> ---
>  drivers/video/backlight/pwm_bl.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

Applied, thanks.

-- 
Lee Jones [李琼斯]
Linaro Services Technical Lead
Linaro.org │ Open source software for ARM SoCs
Follow Linaro: Facebook | Twitter | Blog


Re: [PATCH v7 2/5] drm: Add vrr_enabled property to drm CRTC

2019-01-30 Thread Kazlauskas, Nicholas
On 1/30/19 6:02 AM, Daniel Vetter wrote:
> On Wed, Jan 30, 2019 at 11:42 AM Daniel Vetter  wrote:
>>
>> On Thu, Nov 8, 2018 at 3:44 PM Nicholas Kazlauskas
>>  wrote:
>>>
>>> This patch introduces the 'vrr_enabled' CRTC property to allow
>>> dynamic control over variable refresh rate support for a CRTC.
>>>
>>> This property should be treated like a content hint to the driver -
>>> if the hardware or driver is not capable of driving variable refresh
>>> timings then this is not considered an error.
>>>
>>> Capability for variable refresh rate support should be determined
>>> by querying the vrr_capable drm connector property.
>>>
>>> It is worth noting that while the property is intended for atomic use
>>> it isn't filtered from legacy userspace queries. This allows for Xorg
>>> userspace drivers to implement support.
>>>
>>> Signed-off-by: Nicholas Kazlauskas 
>>> Reviewed-by: Harry Wentland 
>>> Cc: Manasi Navare 
>>> ---
>>>   drivers/gpu/drm/drm_atomic_uapi.c | 4 
>>>   drivers/gpu/drm/drm_crtc.c| 2 ++
>>>   drivers/gpu/drm/drm_mode_config.c | 6 ++
>>>   include/drm/drm_crtc.h| 9 +
>>>   include/drm/drm_mode_config.h | 5 +
>>>   5 files changed, 26 insertions(+)
>>>
>>> diff --git a/drivers/gpu/drm/drm_atomic_uapi.c 
>>> b/drivers/gpu/drm/drm_atomic_uapi.c
>>> index d5b7f315098c..eec396a57b88 100644
>>> --- a/drivers/gpu/drm/drm_atomic_uapi.c
>>> +++ b/drivers/gpu/drm/drm_atomic_uapi.c
>>> @@ -433,6 +433,8 @@ static int drm_atomic_crtc_set_property(struct drm_crtc 
>>> *crtc,
>>>  ret = drm_atomic_set_mode_prop_for_crtc(state, mode);
>>>  drm_property_blob_put(mode);
>>>  return ret;
>>> +   } else if (property == config->prop_vrr_enabled) {
>>> +   state->vrr_enabled = val;
>>>  } else if (property == config->degamma_lut_property) {
>>>  ret = drm_atomic_replace_property_blob_from_id(dev,
>>>  &state->degamma_lut,
>>> @@ -491,6 +493,8 @@ drm_atomic_crtc_get_property(struct drm_crtc *crtc,
>>>  *val = state->active;
>>>  else if (property == config->prop_mode_id)
>>>  *val = (state->mode_blob) ? state->mode_blob->base.id : 0;
>>> +   else if (property == config->prop_vrr_enabled)
>>> +   *val = state->vrr_enabled;
>>>  else if (property == config->degamma_lut_property)
>>>  *val = (state->degamma_lut) ? state->degamma_lut->base.id 
>>> : 0;
>>>  else if (property == config->ctm_property)
>>> diff --git a/drivers/gpu/drm/drm_crtc.c b/drivers/gpu/drm/drm_crtc.c
>>> index 268a182ae189..6f8ddfcfaba5 100644
>>> --- a/drivers/gpu/drm/drm_crtc.c
>>> +++ b/drivers/gpu/drm/drm_crtc.c
>>> @@ -340,6 +340,8 @@ int drm_crtc_init_with_planes(struct drm_device *dev, 
>>> struct drm_crtc *crtc,
>>>  drm_object_attach_property(&crtc->base, 
>>> config->prop_mode_id, 0);
>>>  drm_object_attach_property(&crtc->base,
>>> config->prop_out_fence_ptr, 0);
>>> +   drm_object_attach_property(&crtc->base,
>>> +  config->prop_vrr_enabled, 0);
>>>  }
>>>
>>>  return 0;
>>> diff --git a/drivers/gpu/drm/drm_mode_config.c 
>>> b/drivers/gpu/drm/drm_mode_config.c
>>> index ee80788f2c40..5670c67f28d4 100644
>>> --- a/drivers/gpu/drm/drm_mode_config.c
>>> +++ b/drivers/gpu/drm/drm_mode_config.c
>>> @@ -310,6 +310,12 @@ static int drm_mode_create_standard_properties(struct 
>>> drm_device *dev)
>>>  return -ENOMEM;
>>>  dev->mode_config.prop_mode_id = prop;
>>>
>>> +   prop = drm_property_create_bool(dev, 0,
>>> +   "VRR_ENABLED");
>>
>> VRR_ENABLED doesn't match vrr_enabled in the docs. But the connector
>> one seems to indeed be lowercase. And casing matters for properties
>>
>> Can you pls figure out who's right here and fix this confusion up?
> 
> I checked the igt quickly, patch is typed, will send out today or so.
> -Daniel

Yeah, it should be "VRR_ENABLED" in the docs (like the other CRTC
default properties, e.g. the explicit fencing "IN_FENCE_FD" and "OUT_FENCE_PTR").

It looks a bit weird next to "vrr_capable" but it makes sense in the 
usage context for CRTC vs connector at least.

This should be fixed in the docs and it sounds like you have a patch so 
I'll drop my R-B when it's up.

Thanks!

Nicholas Kazlauskas

> 
>>
>> Thanks, Daniel
>>
>>> +   if (!prop)
>>> +   return -ENOMEM;
>>> +   dev->mode_config.prop_vrr_enabled = prop;
>>> +
>>>  prop = drm_property_create(dev,
>>>  DRM_MODE_PROP_BLOB,
>>>  "DEGAMMA_LUT", 0);
>>> diff --git a/include/drm/drm_crtc.h b/include/drm/drm_crtc.h
>>> index b21437bc95bf..39c3900aab3c 100644
>>> --- a/include/drm/drm_crtc.h
>>> +++ b/include/drm/drm_crtc.h
>>> @@ -290,6 +290,15

[Bug 202449] vrr_capable not showing up in xrandr with eDP display.

2019-01-30 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=202449

--- Comment #5 from Nicholas Kazlauskas (nicholas.kazlaus...@amd.com) ---
Created attachment 280873
  --> https://bugzilla.kernel.org/attachment.cgi?id=280873&action=edit
0001-drm-amd-display-Attach-VRR-properties-for-eDP-connec.patch

I'm not sure if this will actually let you use variable refresh rate features
but this should attach the properties for your eDP panel at least.

-- 
You are receiving this mail because:
You are watching the assignee of the bug.


[Bug 202445] amdgpu/dc: framerate dropping below adaptive sync range causes screen flickering

2019-01-30 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=202445

--- Comment #4 from Nicholas Kazlauskas (nicholas.kazlaus...@amd.com) ---
Below-the-range support didn't make it into 5.0. The patches for this are in
amd-staging-drm-next:

https://cgit.freedesktop.org/~agd5f/linux/log/?h=amd-staging-drm-next

They should show up in the next release I think.

Nicholas Kazlauskas

-- 
You are receiving this mail because:
You are watching the assignee of the bug.


Re: [PATCH 2/4] staging: android: ion: Restrict cache maintenance to dma mapped memory

2019-01-30 Thread Andrew F. Davis
On 1/29/19 5:44 PM, Liam Mark wrote:
> On Fri, 18 Jan 2019, Liam Mark wrote:
> 
>> On Fri, 18 Jan 2019, Andrew F. Davis wrote:
>>
>>> On 1/18/19 12:37 PM, Liam Mark wrote:
 The ION begin_cpu_access and end_cpu_access functions use the
 dma_sync_sg_for_cpu and dma_sync_sg_for_device APIs to perform cache
 maintenance.

 Currently it is possible to apply cache maintenance, via the
 begin_cpu_access and end_cpu_access APIs, to ION buffers which are not
 dma mapped.

 The dma sync sg APIs should not be called on sg lists which have not been
 dma mapped as this can result in cache maintenance being applied to the
 wrong address. If an sg list has not been dma mapped then its dma_address
 field has not been populated, some dma ops such as the swiotlb_dma_ops ops
 use the dma_address field to calculate the address onto which to apply
 cache maintenance.

 Also I don’t think we want CMOs to be applied to a buffer which is not
 dma mapped as the memory should already be coherent for access from the
 CPU. Any CMOs required for device access are taken care of in the
 dma_buf_map_attachment and dma_buf_unmap_attachment calls.
 So really it only makes sense for begin_cpu_access and end_cpu_access to
 apply CMOs if the buffer is dma mapped.

 Fix the ION begin_cpu_access and end_cpu_access functions to only apply
 cache maintenance to buffers which are dma mapped.

 Fixes: 2a55e7b5e544 ("staging: android: ion: Call dma_map_sg for syncing 
 and mapping")
 Signed-off-by: Liam Mark 
 ---
  drivers/staging/android/ion/ion.c | 26 +-
  1 file changed, 21 insertions(+), 5 deletions(-)

 diff --git a/drivers/staging/android/ion/ion.c 
 b/drivers/staging/android/ion/ion.c
 index 6f5afab7c1a1..1fe633a7fdba 100644
 --- a/drivers/staging/android/ion/ion.c
 +++ b/drivers/staging/android/ion/ion.c
 @@ -210,6 +210,7 @@ struct ion_dma_buf_attachment {
struct device *dev;
struct sg_table *table;
struct list_head list;
 +  bool dma_mapped;
  };
  
  static int ion_dma_buf_attach(struct dma_buf *dmabuf,
 @@ -231,6 +232,7 @@ static int ion_dma_buf_attach(struct dma_buf *dmabuf,
  
a->table = table;
a->dev = attachment->dev;
 +  a->dma_mapped = false;
INIT_LIST_HEAD(&a->list);
  
attachment->priv = a;
 @@ -261,12 +263,18 @@ static struct sg_table *ion_map_dma_buf(struct 
 dma_buf_attachment *attachment,
  {
struct ion_dma_buf_attachment *a = attachment->priv;
struct sg_table *table;
 +  struct ion_buffer *buffer = attachment->dmabuf->priv;
  
table = a->table;
  
 +  mutex_lock(&buffer->lock);
if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
 -  direction))
 +  direction)) {
 +  mutex_unlock(&buffer->lock);
return ERR_PTR(-ENOMEM);
 +  }
 +  a->dma_mapped = true;
 +  mutex_unlock(&buffer->lock);
  
return table;
  }
 @@ -275,7 +283,13 @@ static void ion_unmap_dma_buf(struct 
 dma_buf_attachment *attachment,
  struct sg_table *table,
  enum dma_data_direction direction)
  {
 +  struct ion_dma_buf_attachment *a = attachment->priv;
 +  struct ion_buffer *buffer = attachment->dmabuf->priv;
 +
 +  mutex_lock(&buffer->lock);
dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
 +  a->dma_mapped = false;
 +  mutex_unlock(&buffer->lock);
  }
  
  static int ion_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
 @@ -346,8 +360,9 @@ static int ion_dma_buf_begin_cpu_access(struct dma_buf 
 *dmabuf,
  
mutex_lock(&buffer->lock);
list_for_each_entry(a, &buffer->attachments, list) {
>>>
>>> When no devices are attached then buffer->attachments is empty and the
>>> below does not run, so if I understand this patch correctly then what
>>> you are protecting against is CPU access in the window after
>>> dma_buf_attach but before dma_buf_map.
>>>
>>
>> Yes
>>
>>> This is the kind of thing that again makes me think a couple more
>>> ordering requirements on DMA-BUF ops are needed. DMA-BUFs do not require
>>> the backing memory to be allocated until map time, this is why the
>>> dma_address field would still be null as you note in the commit message.
>>> So why should the CPU be performing accesses on a buffer that is not
>>> actually backed yet?
>>>
>>> I can think of two solutions:
>>>
>>> 1) Only allow CPU access (mmap, kmap, {begin,end}_cpu_access) while at
>>> least one device is mapped.
>>>
>>
>> Would be quite limiting to clients.
>>

I can agree with that, option two seems more reasonable.

>>> 2) Treat the CPU access request like a device map request and
>>> trigger the 
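The map-state gating that the patch above introduces can be seen in a small
standalone C model — all names here are illustrative stand-ins, not the ion
API, and the CMO is reduced to a counter:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical model of the fix: cache maintenance (CMOs) on
 * begin/end_cpu_access is applied only while the buffer is dma mapped.
 * Illustrative names only; this is not the ion API.
 */
struct model_attachment {
	bool dma_mapped;	/* mirrors ion_dma_buf_attachment.dma_mapped */
	int cmos_applied;	/* counts cache maintenance operations */
};

static void model_map(struct model_attachment *a)
{
	a->dma_mapped = true;
}

static void model_unmap(struct model_attachment *a)
{
	a->dma_mapped = false;
}

static void model_begin_cpu_access(struct model_attachment *a)
{
	/* dma_address is unpopulated before mapping: skip CMOs */
	if (a->dma_mapped)
		a->cmos_applied++;
}
```

CPU access in the window between attach and map is then a no-op for cache
maintenance, which is the behaviour the patch aims for.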

Re: [PATCH] drm/amd/powerplay: Remove duplicate header

2019-01-30 Thread Deucher, Alexander
It was already fixed a while ago:

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=7e07834c12b96214e95a473f7b14fc03b20e2e7a


Alex


From: Brajeswar Ghosh 
Sent: Wednesday, January 30, 2019 8:58:52 AM
To: Souptick Joarder
Cc: Zhu, Rex; Quan, Evan; Deucher, Alexander; Koenig, Christian; Zhou, 
David(ChunMing); airl...@linux.ie; Zhang, Hawking; Huang, Ray; 
amd-...@lists.freedesktop.org; dri-devel@lists.freedesktop.org; 
linux-ker...@vger.kernel.org; Sabyasachi Gupta
Subject: Re: [PATCH] drm/amd/powerplay: Remove duplicate header

On Fri, Dec 21, 2018 at 6:06 PM Souptick Joarder  wrote:
>
> On Fri, Dec 21, 2018 at 2:49 PM Brajeswar Ghosh
>  wrote:
> >
> > Remove hwmgr_ppt.h which is included more than once
> >
> > Signed-off-by: Brajeswar Ghosh 
> > ---
> Acked-by: Souptick Joarder 

If no further comment, can we get this patch in queue for 5.1 ?

>
> >  drivers/gpu/drm/amd/powerplay/inc/hwmgr.h | 1 -
> >  1 file changed, 1 deletion(-)
> >
> > diff --git a/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h 
> > b/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
> > index e5a60aa44b5d..07d180ce4d18 100644
> > --- a/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
> > +++ b/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
> > @@ -28,7 +28,6 @@
> >  #include "hardwaremanager.h"
> >  #include "hwmgr_ppt.h"
> >  #include "ppatomctrl.h"
> > -#include "hwmgr_ppt.h"
> >  #include "power_state.h"
> >  #include "smu_helper.h"
> >
> > --
> > 2.17.1
> >
___
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: [PATCH 3/5] drm/i2c: tda998x: add support for writing SPD

2019-01-30 Thread Brian Starkey
Hi Russell,

These did eventually reach me on Saturday evening.

On Fri, Jan 25, 2019 at 09:43:19AM +, Russell King wrote:
> Add support for writing the SPD infoframe to the TDA998x.  Identify us
> as "Generic" vendor "PC" product, and as "PC general" source device
> information.
> 
> Signed-off-by: Russell King 
> ---

As this infoframe is optional, and is intended to provide a "useful"
name to the user, I wonder if there's really much value in just
sending "Generic"/"PC"? It seems that it might be better to just not
send the SPD infoframe until we have a way to put something more
useful there (e.g. specified by the host driver).

Thanks,
-Brian

>  drivers/gpu/drm/i2c/tda998x_drv.c | 18 ++
>  1 file changed, 18 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i2c/tda998x_drv.c 
> b/drivers/gpu/drm/i2c/tda998x_drv.c
> index c399a7b73e2b..dad7396ebe2b 100644
> --- a/drivers/gpu/drm/i2c/tda998x_drv.c
> +++ b/drivers/gpu/drm/i2c/tda998x_drv.c
> @@ -845,6 +845,23 @@ static int tda998x_write_aif(struct tda998x_priv *priv,
>   return 0;
>  }
>  
> +static void tda998x_write_spd(struct tda998x_priv *priv)
> +{
> + union hdmi_infoframe frame;
> + int ret;
> +
> + ret = hdmi_spd_infoframe_init(&frame.spd, "Generic", "PC");
> + if (ret < 0) {
> + dev_err(&priv->hdmi->dev, "failed to fill SPD infoframe: %d\n",
> + ret);
> + return;
> + }
> +
> + frame.spd.sdi = HDMI_SPD_SDI_PC;
> +
> + tda998x_write_if(priv, DIP_IF_FLAGS_IF3, REG_IF3_HB0, &frame);
> +}
> +
>  static void
>  tda998x_write_avi(struct tda998x_priv *priv, const struct drm_display_mode 
> *mode)
>  {
> @@ -1554,6 +1571,7 @@ static void tda998x_bridge_mode_set(struct drm_bridge 
> *bridge,
>   reg_set(priv, REG_TX33, TX33_HDMI);
>  
>   tda998x_write_avi(priv, adjusted_mode);
> + tda998x_write_spd(priv);
>  
>   if (priv->audio_params.format != AFMT_UNUSED &&
>   priv->sink_has_audio)
> -- 
> 2.7.4
> 


Re: [RFC PATCH 3/5] mm/vma: add support for peer to peer to device vma

2019-01-30 Thread Jerome Glisse
On Wed, Jan 30, 2019 at 04:30:27AM +, Jason Gunthorpe wrote:
> On Tue, Jan 29, 2019 at 07:08:06PM -0500, Jerome Glisse wrote:
> > On Tue, Jan 29, 2019 at 11:02:25PM +, Jason Gunthorpe wrote:
> > > On Tue, Jan 29, 2019 at 03:44:00PM -0500, Jerome Glisse wrote:
> > > 
> > > > > But this API doesn't seem to offer any control - I thought that
> > > > > control was all coming from the mm/hmm notifiers triggering 
> > > > > p2p_unmaps?
> > > > 
> > > > The control is within the driver implementation of those callbacks. 
> > > 
> > > Seems like what you mean by control is 'the exporter gets to choose
> > > the physical address at the instant of map' - which seems reasonable
> > > for GPU.
> > > 
> > > 
> > > > will only allow p2p map to succeed for objects that have been tagged by 
> > > > the
> > > > userspace in some way ie the userspace application is in control of what
> > > > can be map to peer device.
> > > 
> > > I would have thought this means the VMA for the object is created
> > > without the map/unmap ops? Or are GPU objects and VMAs unrelated?
> > 
> > GPU object and VMA are unrelated in all open source GPU driver i am
> > somewhat familiar with (AMD, Intel, NVidia). You can create a GPU
> > object and never map it (and thus never have it associated with a
> > vma) and in fact this is very common. For graphic you usualy only
> > have hand full of the hundreds of GPU object your application have
> > mapped.
> 
> I mean the other way does every VMA with a p2p_map/unmap point to
> exactly one GPU object?
> 
> ie I'm surprised you say that p2p_map needs to have policy, I would
> have though the policy is applied when the VMA is created (ie objects
> that are not for p2p do not have p2p_map set), and even for GPU
> p2p_map should really only have to do with window allocation and pure
> 'can I even do p2p' type functionality.

All userspace APIs to enable p2p happen after object creation, and in
some cases they are mutable, ie you can decide to no longer share the
object (userspace application decision). The BAR address space is a
resource from the GPU driver point of view and thus from the userspace
point of view. As such, decisions that affect how it is used and what
objects can use it can change over the application lifetime.

This is why I would like to allow the kernel driver to apply any such
access policy, decided by the application, on its objects (on top of
which the kernel GPU driver can apply its own policy for GPU resource
sharing by forcing some objects to main memory).


> 
> > Idea is that we can only ask exporter to be predictable and still allow
> > them to fail if things are really going bad.
> 
> I think hot unplug / PCI error recovery is one of the 'really going
> bad' cases..

A GPU can hang and all data becomes _undefined_, it can also be suspended
to save power (think laptop with discrete GPU for instance). GPU threads
can be killed ... So there are a few cases I can think of where either you
want to kill the p2p mapping and make sure the importer is aware and
might have a chance to report back through its own userspace API, or
at very least fall back to dummy pages. In some of the above cases, for
instance suspend, you just want to move things around to allow shutting
down device memory.


> > I think i put it in the comment above the ops but in any cases i should
> > write something in documentation with example and thorough guideline.
> > Note that there won't be any mmu notifier to mmap of a device file
> > unless the device driver calls for it or there is a syscall like munmap
> > or mremap or mprotect well any syscall that work on vma.
> 
> This is something we might need to explore, does calling
> zap_vma_ptes() invoke enough notifiers that a MMU notifiers or HMM
> mirror consumer will release any p2p maps on that VMA?

Yes it does.

> 
> > If we ever want to support full pin then we might have to add a
> > flag so that GPU driver can refuse an importer that wants things
> > pin forever.
> 
> This would become interesting for VFIO and RDMA at least - I don't
> think VFIO has anything like SVA so it would want to import a p2p_map
> and indicate that it will not respond to MMU notifiers.
> 
> GPU can refuse, but maybe RDMA would allow it...

Ok I will add a flag field in the next post. GPUs could allow pin but they
would most likely use main memory for any such object, hence it is no
longer really p2p, but at least both devices look at the same data.

Cheers,
Jérôme


Re: [v7 3/3] dt-bindings: msm/disp: Introduce interconnect bindings for MDSS on SDM845

2019-01-30 Thread Rob Herring
On Thu, 24 Jan 2019 10:39:52 +0530, Jayant Shekhar wrote:
> Add interconnect properties such as the interconnect provider specifier,
> the edge source and destination ports which are required by the
> interconnect API to configure interconnect path for MDSS.
> 
> Changes in v2:
>   - None
> 
> Changes in v3:
>   - Remove common property definitions (Rob Herring)
> 
> Changes in v4:
>   - Use port macros and change port string names (Georgi Djakov)
> 
> Changes in v5-v7:
>   - None
> 
> Signed-off-by: Sravanthi Kollukuduru 
> Signed-off-by: Jayant Shekhar 
> ---
>  Documentation/devicetree/bindings/display/msm/dpu.txt | 10 ++
>  1 file changed, 10 insertions(+)
> 

Please add Acked-by/Reviewed-by tags when posting new versions. However,
there's no need to repost patches *only* to add the tags. The upstream
maintainer will do that for acks received on the version they apply.

If a tag was not added on purpose, please state why and what changed.


Re: [RFC PATCH 3/5] mm/vma: add support for peer to peer to device vma

2019-01-30 Thread Jerome Glisse
On Wed, Jan 30, 2019 at 09:00:06AM +0100, Christoph Hellwig wrote:
> On Wed, Jan 30, 2019 at 04:18:48AM +, Jason Gunthorpe wrote:
> > Every attempt to give BAR memory to struct page has run into major
> > trouble, IMHO, so I like that this approach avoids that.
> 
> Way less problems than not having struct page for doing anything
> non-trivial.  If you map the BAR to userspace with remap_pfn_range
> and friends the mapping is indeed very simple.  But any operation
> that expects a page structure, which is at least everything using
> get_user_pages won't work.
> 
> So you can't do direct I/O to your remapped BAR, you can't create MRs
> on it, etc, etc.

We do not want direct I/O; in fact at least for GPUs we want to seldom
allow access to object vmas, so the fewer things that can access them the
happier we are :) All the GPU userspace driver APIs (OpenGL, OpenCL,
Vulkan, ...) that expose any such mapping to the application are very
clear on the limitation, which is often worded: the only valid thing is
direct CPU access (no syscall can be used with those pointers).

So application developers already have low expectations about what is
valid and allowed to do.

Cheers,
Jérôme


Re: [PATCH 5/5] drm/i2c: tda998x: improve correctness of quantisation range

2019-01-30 Thread Brian Starkey
Hi Russell,

On Fri, Jan 25, 2019 at 09:43:29AM +, Russell King wrote:
> CEA-861 says: "A Source shall not send a non-zero Q value that does
> not correspond to the default RGB Quantization Range for the
> transmitted Picture unless the Sink indicates support for the Q bit
> in a Video Capabilities Data Block."
> 
> Make TDA998x compliant by using the helper to set the quantisation
> range in the infoframe, and using the TDA998x's colour scaling to
> appropriately adjust the RGB values sent to the monitor.
> 
> This ensures that monitors that do not support the Q bit are sent
> RGB values that are within the expected range.  Monitors with
> support for the Q bit will be sent full-range RGB.
> 
> Signed-off-by: Russell King 
> ---
>  drivers/gpu/drm/i2c/tda998x_drv.c | 39 
> ++-
>  1 file changed, 34 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i2c/tda998x_drv.c 
> b/drivers/gpu/drm/i2c/tda998x_drv.c
> index b0ed2ef49c62..7d9aea79aff2 100644
> --- a/drivers/gpu/drm/i2c/tda998x_drv.c
> +++ b/drivers/gpu/drm/i2c/tda998x_drv.c
> @@ -50,6 +50,8 @@ struct tda998x_priv {
>   bool is_on;
>   bool supports_infoframes;
>   bool sink_has_audio;
> + bool has_rgb_quant;
> + enum hdmi_quantization_range rgb_quant_range;
>   u8 vip_cntrl_0;
>   u8 vip_cntrl_1;
>   u8 vip_cntrl_2;
> @@ -869,7 +871,9 @@ tda998x_write_avi(struct tda998x_priv *priv, const struct 
> drm_display_mode *mode
>  
>   drm_hdmi_avi_infoframe_from_display_mode(&frame.avi,
>&priv->connector, mode);
> - frame.avi.quantization_range = HDMI_QUANTIZATION_RANGE_FULL;
> + drm_hdmi_avi_infoframe_quant_range(&frame.avi, mode,
> +priv->rgb_quant_range,
> +priv->has_rgb_quant, false);
>  
>   tda998x_write_if(priv, DIP_IF_FLAGS_IF2, REG_IF2_HB0, &frame);
>  }
> @@ -1259,6 +1263,7 @@ static int tda998x_connector_get_modes(struct 
> drm_connector *connector)
>   mutex_lock(&priv->audio_mutex);
>   n = drm_add_edid_modes(connector, edid);
>   priv->sink_has_audio = drm_detect_monitor_audio(edid);
> + priv->has_rgb_quant = drm_rgb_quant_range_selectable(edid);
>   mutex_unlock(&priv->audio_mutex);
>  
>   kfree(edid);
> @@ -1385,6 +1390,15 @@ static void tda998x_bridge_mode_set(struct drm_bridge 
> *bridge,
>   u8 reg, div, rep, sel_clk;
>  
>   /*
> +  * Since we are "computer" like, our source invariably produces
> +  * full-range RGB.  If the monitor supports full-range, then use
> +  * it, otherwise reduce to limited-range.
> +  */
> + priv->rgb_quant_range = priv->has_rgb_quant ?
> + HDMI_QUANTIZATION_RANGE_FULL :
> + drm_default_rgb_quant_range(adjusted_mode);
> +
> + /*
>* Internally TDA998x is using ITU-R BT.656 style sync but
>* we get VESA style sync. TDA998x is using a reference pixel
>* relative to ITU to sync to the input frame and for output
> @@ -1499,10 +1513,25 @@ static void tda998x_bridge_mode_set(struct drm_bridge 
> *bridge,
>   reg_write(priv, REG_PLL_SERIAL_2, PLL_SERIAL_2_SRL_NOSC(div) |
>   PLL_SERIAL_2_SRL_PR(rep));
>  
> - /* set color matrix bypass flag: */
> - reg_write(priv, REG_MAT_CONTRL, MAT_CONTRL_MAT_BP |
> - MAT_CONTRL_MAT_SC(1));
> - reg_set(priv, REG_FEAT_POWERDOWN, FEAT_POWERDOWN_CSC);
> + /* set color matrix according to output rgb quant range */
> + if (priv->rgb_quant_range == HDMI_QUANTIZATION_RANGE_LIMITED) {
> + static u8 tda998x_full_to_limited_range[] = {
> + MAT_CONTRL_MAT_SC(2),
> + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
> + 0x03, 0x6f, 0x00, 0x00, 0x00, 0x00,
> + 0x00, 0x00, 0x03, 0x6f, 0x00, 0x00,
> + 0x00, 0x00, 0x00, 0x00, 0x03, 0x6f,
> + 0x00, 0x40, 0x00, 0x40, 0x00, 0x40
> + };

I couldn't figure out from the datasheet I have what the expected data
bit-depth is here, but I assume from this offset that it's 12-bit?

Should you also set HVF_CNTRL_1_VQR to 0b01 when using limited range?

Cheers,
-Brian

> + reg_clear(priv, REG_FEAT_POWERDOWN, FEAT_POWERDOWN_CSC);
> + reg_write_range(priv, REG_MAT_CONTRL,
> + tda998x_full_to_limited_range,
> + sizeof(tda998x_full_to_limited_range));
> + } else {
> + reg_write(priv, REG_MAT_CONTRL, MAT_CONTRL_MAT_BP |
> + MAT_CONTRL_MAT_SC(1));
> + reg_set(priv, REG_FEAT_POWERDOWN, FEAT_POWERDOWN_CSC);
> + }
>  
>   /* set BIAS tmds value: */
>   reg_write(priv, REG_ANA_GENERAL, 0x09);
> -- 
> 2.7.4
> 
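For reference, the colour-scaling matrix in the patch above implements the
standard CEA-861 full-to-limited quantization mapping, which for 8-bit RGB
maps [0, 255] onto [16, 235]. A standalone sketch of that transfer — the
rounding choice here is illustrative, not necessarily what the TDA998x
hardware does:

```c
#include <assert.h>

/*
 * CEA-861 limited ("video") quantization range for 8-bit RGB maps
 * full-range [0, 255] onto [16, 235]: y = 16 + x * 219 / 255.
 * Round-to-nearest is an illustrative choice, not necessarily what
 * the TDA998x colour matrix implements.
 */
static unsigned int full_to_limited_8bit(unsigned int x)
{
	return 16 + (x * 219 + 127) / 255;
}
```

The endpoints land exactly on the limited-range black and white levels,
which is the property a sink without Q-bit support expects.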

Re: [RFC PATCH 3/5] mm/vma: add support for peer to peer to device vma

2019-01-30 Thread Jerome Glisse
On Wed, Jan 30, 2019 at 10:33:39AM +, Koenig, Christian wrote:
> Am 30.01.19 um 09:02 schrieb Christoph Hellwig:
> > On Tue, Jan 29, 2019 at 08:58:35PM +, Jason Gunthorpe wrote:
> >> On Tue, Jan 29, 2019 at 01:39:49PM -0700, Logan Gunthorpe wrote:
> >>
> >>> implement the mapping. And I don't think we should have 'special' vma's
> >>> for this (though we may need something to ensure we don't get mapping
> >>> requests mixed with different types of pages...).
> >> I think Jerome explained the point here is to have a 'special vma'
> >> rather than a 'special struct page' as, really, we don't need a
> >> struct page at all to make this work.
> >>
> >> If I recall your earlier attempts at adding struct page for BAR
> >> memory, it ran aground on issues related to O_DIRECT/sgls, etc, etc.
> > Struct page is what makes O_DIRECT work, using sgls or biovecs, etc on
> > it work.  Without struct page none of the above can work at all.  That
> > is why we use struct page for backing BARs in the existing P2P code.
> > Not that I'm a particular fan of creating struct page for this device
> > memory, but without major invasive surgery to large parts of the kernel
> > it is the only way to make it work.
> 
> The problem seems to be that struct page does two things:
> 
> 1. Memory management for system memory.
> 2. The object to work with in the I/O layer.
> 
> This was done because a good part of that stuff overlaps, like reference 
> counting how often a page is used.  The problem now is that this doesn't 
> work very well for device memory in some cases.
> 
> For example on GPUs you usually have a large amount of memory which is 
> not even accessible by the CPU. In other words you can't easily create a 
> struct page for it because you can't reference it with a physical CPU 
> address.
> 
> Maybe struct page should be split up into smaller structures? I mean 
> it's really overloaded with data.

I think the simpler answer is that we do not want to allow GUP or
anything similar to pin BAR or device memory. Doing so can only hurt us
long term by fragmenting the GPU memory and forbidding us to move things
around. For transparent use of device memory within a process, pinning is
definitely forbidden.

I do not see any good reason we would like to pin device memory for
the existing GPU GEM objects. Userspace always had very low expectations
on what it can do with mmap of those objects and I believe it is better
to keep expectations low here and say nothing will work with those
pointers. I just do not see a valid and compelling use case to change
that :)

Even outside GPU drivers, device drivers like RDMA just want to share
their doorbells with other devices, and they do not want to see those
doorbell pages used in direct I/O or anything similar AFAICT.

Cheers,
Jérôme


Re: [PATCH 4/5] drm/i2c: tda998x: add vendor specific infoframe support

2019-01-30 Thread Brian Starkey
On Fri, Jan 25, 2019 at 09:43:24AM +, Russell King wrote:
> Add support for the vendor specific infoframe.
> 
> Signed-off-by: Russell King 

LGTM.

Reviewed-by: Brian Starkey 

> ---
>  drivers/gpu/drm/i2c/tda998x_drv.c | 14 ++
>  1 file changed, 14 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i2c/tda998x_drv.c 
> b/drivers/gpu/drm/i2c/tda998x_drv.c
> index dad7396ebe2b..b0ed2ef49c62 100644
> --- a/drivers/gpu/drm/i2c/tda998x_drv.c
> +++ b/drivers/gpu/drm/i2c/tda998x_drv.c
> @@ -874,6 +874,19 @@ tda998x_write_avi(struct tda998x_priv *priv, const 
> struct drm_display_mode *mode
>   tda998x_write_if(priv, DIP_IF_FLAGS_IF2, REG_IF2_HB0, &frame);
>  }
>  
> +static void tda998x_write_vsi(struct tda998x_priv *priv,
> +   struct drm_display_mode *mode)
> +{
> + union hdmi_infoframe frame;
> +
> + if (drm_hdmi_vendor_infoframe_from_display_mode(&frame.vendor.hdmi,
> + &priv->connector,
> + mode))
> + reg_clear(priv, REG_DIP_IF_FLAGS, DIP_IF_FLAGS_IF1);
> + else
> + tda998x_write_if(priv, DIP_IF_FLAGS_IF1, REG_IF1_HB0, &frame);
> +}
> +
>  /* Audio support */
>  
>  static void tda998x_audio_mute(struct tda998x_priv *priv, bool on)
> @@ -1572,6 +1585,7 @@ static void tda998x_bridge_mode_set(struct drm_bridge 
> *bridge,
>  
>   tda998x_write_avi(priv, adjusted_mode);
>   tda998x_write_spd(priv);
> + tda998x_write_vsi(priv, adjusted_mode);
>  
>   if (priv->audio_params.format != AFMT_UNUSED &&
>   priv->sink_has_audio)
> -- 
> 2.7.4
> 


[PATCH 0/2] drm/vkms: Bugfix for igt-tests

2019-01-30 Thread Shayenne Moura
This patchset contains patches to fix the extra frame bug in the
kms_flip igt-test. The first patch solves the extra vblank frame that
breaks many tests in kms_flip, and the second patch solves the race
condition caused by the solution added in the first one.

Shayenne Moura (2):
  drm/vkms: Bugfix extra vblank frame
  drm/vkms: Bugfix racing hrtimer vblank handle

 drivers/gpu/drm/vkms/vkms_crtc.c | 21 +
 1 file changed, 9 insertions(+), 12 deletions(-)

-- 
2.17.1



[PATCH 2/2] drm: vkms: Bugfix racing hrtimer vblank handle

2019-01-30 Thread Shayenne Moura
When the vblank irq happens, the kernel time subsystem executes
`vkms_vblank_simulate`. In parallel or not, it prepares everything
necessary for the next vblank with the arm, and it must flush this
before the next vblank irq. However, the vblank counter is ahead when
the arm is executed in parallel with the vblank handler.

CPU 0:  CPU 1:
 |   |
atomic_commit_tail is ongoing|
 |   |
 |  hrtimer: vkms_vblank_simulate()
 |   |
 |  drm_crtc_handle_vblank()
 |   |
drm_crtc_arm_vblank()|
 |   |
->get_vblank_timestamp() |
 |   |
 |  hrtimer_forward_now()

Then, we should guarantee that the vblank interval time is correct
(not changed) before finishing the vblank handler.

Fix the bug by including the call to `hrtimer_forward_now()` under the
same lock as `drm_crtc_handle_vblank()` to ensure that the timestamp
update is correct when the vblank handler finishes.
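One way to see the invariant the lock protects: a timestamp reader must
always observe a (sequence, expiry) pair satisfying
expires == start + seq * period. A toy standalone model of the two steps
(illustrative only, not the vkms code):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Toy model of the race (not the vkms code). handle() publishes a new
 * vblank sequence number; forward() advances the hrtimer expiry by one
 * period. A timestamp reader relies on the invariant:
 *     expires == start + seq * period
 * If forward() runs outside the lock, a reader between handle() and
 * forward() sees the new seq paired with the stale expiry.
 */
struct vblank_model {
	uint64_t start;		/* initial expiry, ns */
	uint64_t period;	/* frame period, ns */
	uint64_t seq;		/* vblank sequence count */
	uint64_t expires;	/* current hrtimer expiry, ns */
};

static void model_handle(struct vblank_model *m)
{
	m->seq++;
}

static void model_forward(struct vblank_model *m)
{
	m->expires += m->period;
}

static int model_consistent(const struct vblank_model *m)
{
	return m->expires == m->start + m->seq * m->period;
}
```

Running both steps under one lock means no reader can observe the
inconsistent intermediate state.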

Signed-off-by: Shayenne Moura 
Signed-off-by: Daniel Vetter 
---
 drivers/gpu/drm/vkms/vkms_crtc.c | 18 ++
 1 file changed, 6 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/vkms/vkms_crtc.c b/drivers/gpu/drm/vkms/vkms_crtc.c
index 23146ff2a25b..5a095610726b 100644
--- a/drivers/gpu/drm/vkms/vkms_crtc.c
+++ b/drivers/gpu/drm/vkms/vkms_crtc.c
@@ -10,13 +10,17 @@
 #include 
 #include 
 
-static void _vblank_handle(struct vkms_output *output)
+static enum hrtimer_restart vkms_vblank_simulate(struct hrtimer *timer)
 {
+   struct vkms_output *output = container_of(timer, struct vkms_output,
+ vblank_hrtimer);
struct drm_crtc *crtc = &output->crtc;
struct vkms_crtc_state *state = to_vkms_crtc_state(crtc->state);
+   int ret_overrun;
bool ret;
 
spin_lock(&output->lock);
+
ret = drm_crtc_handle_vblank(crtc);
if (!ret)
DRM_ERROR("vkms failure on handling vblank");
@@ -37,19 +41,9 @@ static void _vblank_handle(struct vkms_output *output)
DRM_WARN("failed to queue vkms_crc_work_handle");
}
 
-   spin_unlock(&output->lock);
-}
-
-static enum hrtimer_restart vkms_vblank_simulate(struct hrtimer *timer)
-{
-   struct vkms_output *output = container_of(timer, struct vkms_output,
- vblank_hrtimer);
-   int ret_overrun;
-
-   _vblank_handle(output);
-
ret_overrun = hrtimer_forward_now(&output->vblank_hrtimer,
  output->period_ns);
+   spin_unlock(&output->lock);
 
return HRTIMER_RESTART;
 }
-- 
2.17.1



[PATCH 1/2] drm/vkms: Bugfix extra vblank frame

2019-01-30 Thread Shayenne Moura
kms_flip tests are breaking on vkms when simulating vblank because the
vblank event sequence count returns one extra frame after arming the
vblank event to make a page flip.

When the vblank interrupt happens, userspace processes the vblank event
and issues the next page flip command. The kernel calls queue_work to
call commit_planes and arm the new page flip. The next vblank picks up
the newly armed vblank event and the vblank interrupt happens again.

The arm and vblank event are asynchronous; then, on the next vblank, we
receive x+2 from `get_vblank_timestamp` instead of x+1, although the
timestamp and vblank seqno match.

Function `get_vblank_timestamp` is reached by 2 ways:

  - from `drm_mode_page_flip_ioctl`: driver is doing one atomic operation
to synchronize planes in the same output. There is no vblank simulation,
the `drm_crtc_arm_vblank_event` function adds 1 on vblank count, and the
variable in_vblank_irq is false
  - from `vkms_vblank_simulate`: since the driver is doing a vblank simulation,
the variable in_vblank_irq is true.

Fix this problem by subtracting one vblank period from vblank_time when
`get_vblank_timestamp` is called via `drm_mode_page_flip_ioctl`,
i.e., when it is not a real vblank interrupt, and by using the timestamp
and vblank seqno as-is when it is a real vblank interrupt.

The reason for all this is that get_vblank_timestamp always supplies the
timestamp for the next vblank event. The hrtimer is the vblank simulator,
and it needs the correct previous value to present the next vblank, since
this is how hw timestamp registers work and what the vblank core expects.
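That adjustment can be modeled in isolation — the hrtimer node holds the
next expiry, and a query that does not come from a real vblank irq steps
back one period (illustrative sketch, not the driver code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Toy model of vkms_get_vblank_timestamp(): the hrtimer always holds
 * the expiry of the NEXT vblank, so a query that is not from a real
 * vblank irq must step back one period to report the vblank userspace
 * is actually synchronized to. Illustrative only.
 */
static uint64_t model_vblank_timestamp(uint64_t next_expiry_ns,
				       uint64_t period_ns,
				       bool in_vblank_irq)
{
	return in_vblank_irq ? next_expiry_ns
			     : next_expiry_ns - period_ns;
}
```

With a ~60 Hz period, an in-irq query reports the expiry unchanged while
an ioctl-path query reports the previous vblank.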

Signed-off-by: Shayenne Moura 
Signed-off-by: Daniel Vetter 

---
 drivers/gpu/drm/vkms/vkms_crtc.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/vkms/vkms_crtc.c b/drivers/gpu/drm/vkms/vkms_crtc.c
index d44bfc392491..23146ff2a25b 100644
--- a/drivers/gpu/drm/vkms/vkms_crtc.c
+++ b/drivers/gpu/drm/vkms/vkms_crtc.c
@@ -87,6 +87,9 @@ bool vkms_get_vblank_timestamp(struct drm_device *dev, 
unsigned int pipe,
 
*vblank_time = output->vblank_hrtimer.node.expires;
 
+   if (!in_vblank_irq)
+   *vblank_time -= output->period_ns;
+
return true;
 }
 
-- 
2.17.1



[Bug 109493] [CI][DRMTIP] igt@gem_cpu_reloc@forked - fail - igt_skip: Assertion `!test_child' failed.

2019-01-30 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=109493

--- Comment #4 from Chris Wilson  ---
Found the skip, patch sent.

-- 
You are receiving this mail because:
You are the assignee for the bug.


[Bug 108889] [CI][SHARDS] igt@sw_sync@sync_busy_fork_unixsocket - incomplete - __igt_fork_helper: Assertion `!proc->running' failed.

2019-01-30 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=108889

--- Comment #6 from CI Bug Log  ---
The CI Bug Log issue associated to this bug has been updated.

### New filters associated

* BLB BWR HSW BSW: Incomplete - igt_stop_helper: Assertion
`helper_was_alive(proc, status)' failed
  -
https://intel-gfx-ci.01.org/tree/drm-tip/drmtip_195/fi-blb-e6850/igt@sw_sync@sync_busy_fork.html
  -
https://intel-gfx-ci.01.org/tree/drm-tip/drmtip_195/fi-bsw-n3050/igt@sw_sync@sync_busy_fork.html
  -
https://intel-gfx-ci.01.org/tree/drm-tip/drmtip_189/fi-hsw-peppy/igt@sw_sync@sync_busy_fork.html
  -
https://intel-gfx-ci.01.org/tree/drm-tip/drmtip_194/fi-bwr-2160/igt@sw_sync@sync_busy_fork.html

-- 
You are receiving this mail because:
You are the assignee for the bug.


[PATCH 2/3] drm/doc: Drop chapter "KMS Initialization and Cleanup"

2019-01-30 Thread Daniel Vetter
It only talks about CRTCs, brings up intel as an example, and I think it
is more misleading than useful really. Plus we have lots of discussion
about how your standard kms driver should be initialized/cleaned up,
so maybe it is better to document this when we have a better idea.

Signed-off-by: Daniel Vetter 
---
 Documentation/gpu/drm-kms.rst | 96 ---
 1 file changed, 96 deletions(-)

diff --git a/Documentation/gpu/drm-kms.rst b/Documentation/gpu/drm-kms.rst
index 75c882e09fee..23a3c986ef6d 100644
--- a/Documentation/gpu/drm-kms.rst
+++ b/Documentation/gpu/drm-kms.rst
@@ -410,102 +410,6 @@ Encoder Functions Reference
 .. kernel-doc:: drivers/gpu/drm/drm_encoder.c
:export:
 
-KMS Initialization and Cleanup
-==
-
-A KMS device is abstracted and exposed as a set of planes, CRTCs,
-encoders and connectors. KMS drivers must thus create and initialize all
-those objects at load time after initializing mode setting.
-
-CRTCs (:c:type:`struct drm_crtc `)
-
-
-A CRTC is an abstraction representing a part of the chip that contains a
-pointer to a scanout buffer. Therefore, the number of CRTCs available
-determines how many independent scanout buffers can be active at any
-given time. The CRTC structure contains several fields to support this:
-a pointer to some video memory (abstracted as a frame buffer object), a
-display mode, and an (x, y) offset into the video memory to support
-panning or configurations where one piece of video memory spans multiple
-CRTCs.
-
-CRTC Initialization
-~~~
-
-A KMS device must create and register at least one struct
-:c:type:`struct drm_crtc ` instance. The instance is
-allocated and zeroed by the driver, possibly as part of a larger
-structure, and registered with a call to :c:func:`drm_crtc_init()`
-with a pointer to CRTC functions.
-
-
-Cleanup

-
-The DRM core manages its objects' lifetime. When an object is not needed
-anymore the core calls its destroy function, which must clean up and
-free every resource allocated for the object. Every
-:c:func:`drm_\*_init()` call must be matched with a corresponding
-:c:func:`drm_\*_cleanup()` call to cleanup CRTCs
-(:c:func:`drm_crtc_cleanup()`), planes
-(:c:func:`drm_plane_cleanup()`), encoders
-(:c:func:`drm_encoder_cleanup()`) and connectors
-(:c:func:`drm_connector_cleanup()`). Furthermore, connectors that
-have been added to sysfs must be removed by a call to
-:c:func:`drm_connector_unregister()` before calling
-:c:func:`drm_connector_cleanup()`.
-
-Connectors state change detection must be cleanup up with a call to
-:c:func:`drm_kms_helper_poll_fini()`.
-
-Output discovery and initialization example

-
-.. code-block:: c
-
-void intel_crt_init(struct drm_device *dev)
-{
-struct drm_connector *connector;
-struct intel_output *intel_output;
-
-intel_output = kzalloc(sizeof(struct intel_output), GFP_KERNEL);
-if (!intel_output)
-return;
-
-connector = &intel_output->base;
-drm_connector_init(dev, &intel_output->base,
-   &intel_crt_connector_funcs, DRM_MODE_CONNECTOR_VGA);
-
-drm_encoder_init(dev, &intel_output->enc, &intel_crt_enc_funcs,
- DRM_MODE_ENCODER_DAC);
-
-drm_connector_attach_encoder(&intel_output->base,
-  &intel_output->enc);
-
-/* Set up the DDC bus. */
-intel_output->ddc_bus = intel_i2c_create(dev, GPIOA, "CRTDDC_A");
-if (!intel_output->ddc_bus) {
-dev_printk(KERN_ERR, &dev->pdev->dev, "DDC bus registration "
-   "failed.\n");
-return;
-}
-
-intel_output->type = INTEL_OUTPUT_ANALOG;
-connector->interlace_allowed = 0;
-connector->doublescan_allowed = 0;
-
-drm_encoder_helper_add(&intel_output->enc, &intel_crt_helper_funcs);
-drm_connector_helper_add(connector, &intel_crt_connector_helper_funcs);
-
-drm_connector_register(connector);
-}
-
-In the example above (taken from the i915 driver), a CRTC, connector and
-encoder combination is created. A device-specific i2c bus is also
-created for fetching EDID data and performing monitor detection. Once
-the process is complete, the new connector is registered with sysfs to
-make its properties available to applications.
-
 KMS Locking
 ===
 
-- 
2.20.1

___
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel


[PATCH 3/3] drm/doc: fix VRR_ENABLED casing

2019-01-30 Thread Daniel Vetter
Yes it's inconsistent with vrr_capable, but this is the actual uapi as
exercised by igt.

Fixes: ab7a664f7a2d ("drm: Document variable refresh properties")
Cc: Nicholas Kazlauskas 
Cc: Harry Wentland 
Cc: Pekka Paalanen 
Cc: Alex Deucher 
Signed-off-by: Daniel Vetter 
---
 drivers/gpu/drm/drm_connector.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/drm_connector.c b/drivers/gpu/drm/drm_connector.c
index 847539645558..e3ff73695c32 100644
--- a/drivers/gpu/drm/drm_connector.c
+++ b/drivers/gpu/drm/drm_connector.c
@@ -1367,7 +1367,7 @@ EXPORT_SYMBOL(drm_mode_create_scaling_mode_property);
  *
  * Absence of the property should indicate absence of support.
  *
- * "vrr_enabled":
+ * "VRR_ENABLED":
  * Default &drm_crtc boolean property that notifies the driver that the
  * content on the CRTC is suitable for variable refresh rate presentation.
  * The driver will take this property as a hint to enable variable
-- 
2.20.1



[PATCH 1/3] drm/doc: Move hdmi infoframe docs

2019-01-30 Thread Daniel Vetter
.. next to all the other sink helpers. The rect library is used more
for handling plane clipping, so it belongs with those, imo.

Signed-off-by: Daniel Vetter 
---
 Documentation/gpu/drm-kms-helpers.rst | 24 
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/Documentation/gpu/drm-kms-helpers.rst 
b/Documentation/gpu/drm-kms-helpers.rst
index fbd11b2fe5b5..17ca7f8bf3d3 100644
--- a/Documentation/gpu/drm-kms-helpers.rst
+++ b/Documentation/gpu/drm-kms-helpers.rst
@@ -296,18 +296,6 @@ SCDC Helper Functions Reference
 .. kernel-doc:: drivers/gpu/drm/drm_scdc_helper.c
:export:
 
-Rectangle Utilities Reference
-=
-
-.. kernel-doc:: include/drm/drm_rect.h
-   :doc: rect utils
-
-.. kernel-doc:: include/drm/drm_rect.h
-   :internal:
-
-.. kernel-doc:: drivers/gpu/drm/drm_rect.c
-   :export:
-
 HDMI Infoframes Helper Reference
 
 
@@ -322,6 +310,18 @@ libraries and hence is also included here.
 .. kernel-doc:: drivers/video/hdmi.c
:export:
 
+Rectangle Utilities Reference
+=
+
+.. kernel-doc:: include/drm/drm_rect.h
+   :doc: rect utils
+
+.. kernel-doc:: include/drm/drm_rect.h
+   :internal:
+
+.. kernel-doc:: drivers/gpu/drm/drm_rect.c
+   :export:
+
 Flip-work Helper Reference
 ==
 
-- 
2.20.1



[PATCH] drm/msm: dpu: Don't set frame_busy_mask for async updates

2019-01-30 Thread Sean Paul
From: Sean Paul 

The frame_busy mask is used in frame_done event handling, which is not
invoked for async commits. So an async commit will leave the
frame_busy mask populated after it completes and future commits will start
with the busy mask incorrect.

This showed up on disable after cursor move. I was hitting the "this should
not happen" comment in the frame event worker since frame_busy was set,
we queued the event, but there were no frames pending (since async
also doesn't set that).
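The bookkeeping the patch fixes can be modelled in isolation. The sketch
below is hedged and hypothetical — `kickoff()` and `frame_done()` stand in
for `_dpu_encoder_kickoff_phys()` and the frame_done worker, and the names
and shapes are illustrative, not the actual driver code:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of the frame_busy bookkeeping: a bit is set per
 * physical encoder at kickoff and cleared by the frame_done worker. */
static unsigned long frame_busy_mask;

/* Only track commits whose completion will actually be observed:
 * async commits never reach the frame_done worker, so setting the bit
 * for them would leave it stale for the next commit. */
static void kickoff(int phys_idx, bool async)
{
	if (!async)
		frame_busy_mask |= 1UL << phys_idx;
}

static void frame_done(int phys_idx)
{
	frame_busy_mask &= ~(1UL << phys_idx);
}
```

With the `!async` check in place, a cursor move (async) leaves the mask
untouched, so a later disable no longer finds a busy bit with no frame
pending behind it.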

Signed-off-by: Sean Paul 
---
 drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
index 36158b7d99cd..1a81c4daabc9 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
@@ -1558,8 +1558,14 @@ static void _dpu_encoder_kickoff_phys(struct 
dpu_encoder_virt *dpu_enc,
if (!ctl)
continue;
 
-   if (phys->split_role != ENC_ROLE_SLAVE)
+   /*
+* This is cleared in frame_done worker, which isn't invoked
+* for async commits. So don't set this for async, since it'll
+* roll over to the next commit.
+*/
+   if (!async && phys->split_role != ENC_ROLE_SLAVE)
set_bit(i, dpu_enc->frame_busy_mask);
+
if (!phys->ops.needs_single_flush ||
!phys->ops.needs_single_flush(phys))
_dpu_encoder_trigger_flush(&dpu_enc->base, phys, 0x0,
-- 
Sean Paul, Software Engineer, Google / Chromium OS



Re: [PATCH] drm/savage: mark expected switch fall-throughs

2019-01-30 Thread Daniel Vetter
On Tue, Jan 29, 2019 at 02:20:06PM -0600, Gustavo A. R. Silva wrote:
> In preparation to enabling -Wimplicit-fallthrough, mark switch cases
> where we are expecting to fall through.
> 
> This patch fixes the following warnings:
> 
> drivers/gpu/drm/savage/savage_state.c:301:8: warning: this statement may fall 
> through [-Wimplicit-fallthrough=]
> drivers/gpu/drm/savage/savage_state.c:438:8: warning: this statement may fall 
> through [-Wimplicit-fallthrough=]
> drivers/gpu/drm/savage/savage_state.c:559:8: warning: this statement may fall 
> through [-Wimplicit-fallthrough=]
> drivers/gpu/drm/savage/savage_state.c:697:8: warning: this statement may fall 
> through [-Wimplicit-fallthrough=]
> 
> Warning level 3 was used: -Wimplicit-fallthrough=3
> 
> This patch is part of the ongoing efforts to enabling
> -Wimplicit-fallthrough.
> 
> Signed-off-by: Gustavo A. R. Silva 

This and the via patch merged to drm-misc-next, thanks.
-Daniel
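For readers unfamiliar with the warning being silenced, here is a
standalone sketch of the pattern — names and values are illustrative,
not the savage driver's code. GCC's `-Wimplicit-fallthrough=3` warns on
the unannotated edge between the two cases; the `/* fall through */`
comment documents it as intentional and suppresses the warning:

```c
#include <assert.h>

/* A "reorder" primitive deliberately falls into the plain case after
 * fixing up its state, mirroring the SAVAGE_PRIM_TRILIST_201 handling. */
static int classify(int prim)
{
	int reorder = 0;

	switch (prim) {
	case 201:
		reorder = 1;
		/* fall through */
	case 200:
		return 200 + reorder;
	default:
		return -1;
	}
}
```

Compiled with `gcc -Wimplicit-fallthrough=3`, removing the comment
reintroduces the warning; keeping it makes the intent explicit to both
the compiler and future readers.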

> ---
>  drivers/gpu/drm/savage/savage_state.c | 4 
>  1 file changed, 4 insertions(+)
> 
> diff --git a/drivers/gpu/drm/savage/savage_state.c 
> b/drivers/gpu/drm/savage/savage_state.c
> index 7559a820bd43..ebb8b7d32b33 100644
> --- a/drivers/gpu/drm/savage/savage_state.c
> +++ b/drivers/gpu/drm/savage/savage_state.c
> @@ -299,6 +299,7 @@ static int savage_dispatch_dma_prim(drm_savage_private_t 
> * dev_priv,
>   case SAVAGE_PRIM_TRILIST_201:
>   reorder = 1;
>   prim = SAVAGE_PRIM_TRILIST;
> + /* fall through */
>   case SAVAGE_PRIM_TRILIST:
>   if (n % 3 != 0) {
>   DRM_ERROR("wrong number of vertices %u in TRILIST\n",
> @@ -436,6 +437,7 @@ static int savage_dispatch_vb_prim(drm_savage_private_t * 
> dev_priv,
>   case SAVAGE_PRIM_TRILIST_201:
>   reorder = 1;
>   prim = SAVAGE_PRIM_TRILIST;
> + /* fall through */
>   case SAVAGE_PRIM_TRILIST:
>   if (n % 3 != 0) {
>   DRM_ERROR("wrong number of vertices %u in TRILIST\n",
> @@ -557,6 +559,7 @@ static int savage_dispatch_dma_idx(drm_savage_private_t * 
> dev_priv,
>   case SAVAGE_PRIM_TRILIST_201:
>   reorder = 1;
>   prim = SAVAGE_PRIM_TRILIST;
> + /* fall through */
>   case SAVAGE_PRIM_TRILIST:
>   if (n % 3 != 0) {
>   DRM_ERROR("wrong number of indices %u in TRILIST\n", n);
> @@ -695,6 +698,7 @@ static int savage_dispatch_vb_idx(drm_savage_private_t * 
> dev_priv,
>   case SAVAGE_PRIM_TRILIST_201:
>   reorder = 1;
>   prim = SAVAGE_PRIM_TRILIST;
> + /* fall through */
>   case SAVAGE_PRIM_TRILIST:
>   if (n % 3 != 0) {
>   DRM_ERROR("wrong number of indices %u in TRILIST\n", n);
> -- 
> 2.20.1
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


Re: [PATCH v3 2/5] dt-bindings: display: renesas: lvds: Document r8a7744 bindings

2019-01-30 Thread Rob Herring
On Tue, Jan 22, 2019 at 05:44:28PM +0200, Laurent Pinchart wrote:
> Hi Biju,
> 
> Thank you for the patch.
> 
> On Tue, Jan 22, 2019 at 03:25:46PM +, Biju Das wrote:
> > Document the RZ/G1N (R8A7744) LVDS bindings.
> > 
> > Signed-off-by: Biju Das 
> 
> Reviewed-by: Laurent Pinchart 
> 
> and taken in my tree.

But not in linux-next? Or you did some $subject fixups which break my
detecting that. :(

Rob


Re: [PATCH v3 2/5] dt-bindings: display: renesas: lvds: Document r8a7744 bindings

2019-01-30 Thread Rob Herring
On Tue, 22 Jan 2019 15:25:46 +, Biju Das wrote:
> Document the RZ/G1N (R8A7744) LVDS bindings.
> 
> Signed-off-by: Biju Das 
> ---
>  Documentation/devicetree/bindings/display/bridge/renesas,lvds.txt | 1 +
>  1 file changed, 1 insertion(+)
> 

Reviewed-by: Rob Herring 


[Bug 202445] amdgpu/dc: framerate dropping below adaptive sync range causes screen flickering

2019-01-30 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=202445

--- Comment #5 from Clément Guérin (li...@protonmail.com) ---
Can you point me to the commits introducing this feature? Thanks.

-- 
You are receiving this mail because:
You are watching the assignee of the bug.


[Bug 202445] amdgpu/dc: framerate dropping below adaptive sync range causes screen flickering

2019-01-30 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=202445

--- Comment #6 from Nicholas Kazlauskas (nicholas.kazlaus...@amd.com) ---
Should be:

https://patchwork.freedesktop.org/series/53559/

"drm/amd/display: Add below the range support for FreeSync"



Re: [PATCH 3/3] drm/doc: fix VRR_ENABLED casing

2019-01-30 Thread Kazlauskas, Nicholas
On 1/30/19 11:30 AM, Daniel Vetter wrote:
> Yes it's inconsistent with vrr_capable, but this is the actual uapi as
> exercised by igt.
> 
> Fixes: ab7a664f7a2d ("drm: Document variable refresh properties")
> Cc: Nicholas Kazlauskas 
> Cc: Harry Wentland 
> Cc: Pekka Paalanen 
> Cc: Alex Deucher 
> Signed-off-by: Daniel Vetter 

Reviewed-by: Nicholas Kazlauskas 

It's what xf86-video-amdgpu expects too - the casing is intentional to 
match up with the other default CRTC properties.

Thanks!

Nicholas Kazlauskas
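The reason the casing is uapi-relevant is that userspace locates KMS
properties by exact name comparison. A hedged, device-free sketch — the
array below stands in for the names a client would read back from the
kernel, it is not queried from a real DRM node:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Illustrative CRTC property names as reported to userspace. */
static const char *const crtc_prop_names[] = {
	"ACTIVE", "MODE_ID", "OUT_FENCE_PTR", "VRR_ENABLED",
};

/* Clients like xf86-video-amdgpu match names with an exact strcmp(),
 * so "vrr_enabled" would never match "VRR_ENABLED". */
static int find_prop(const char *name)
{
	for (size_t i = 0; i < sizeof(crtc_prop_names) / sizeof(crtc_prop_names[0]); i++)
		if (strcmp(crtc_prop_names[i], name) == 0)
			return (int)i;
	return -1;	/* lookup is case-sensitive */
}
```

This is why the documentation has to follow the string the kernel
actually registers, not the other way around.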

> ---
>   drivers/gpu/drm/drm_connector.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/drm_connector.c b/drivers/gpu/drm/drm_connector.c
> index 847539645558..e3ff73695c32 100644
> --- a/drivers/gpu/drm/drm_connector.c
> +++ b/drivers/gpu/drm/drm_connector.c
> @@ -1367,7 +1367,7 @@ EXPORT_SYMBOL(drm_mode_create_scaling_mode_property);
>*
>*  Absence of the property should indicate absence of support.
>*
> - * "vrr_enabled":
> + * "VRR_ENABLED":
>*  Default &drm_crtc boolean property that notifies the driver that the
>*  content on the CRTC is suitable for variable refresh rate presentation.
>*  The driver will take this property as a hint to enable variable
> 



Re: [PATCH 3/3] drm/doc: fix VRR_ENABLED casing

2019-01-30 Thread Daniel Vetter
On Wed, Jan 30, 2019 at 04:57:48PM +, Kazlauskas, Nicholas wrote:
> On 1/30/19 11:30 AM, Daniel Vetter wrote:
> > Yes it's inconsistent with vrr_capable, but this is the actual uapi as
> > exercised by igt.
> > 
> > Fixes: ab7a664f7a2d ("drm: Document variable refresh properties")
> > Cc: Nicholas Kazlauskas 
> > Cc: Harry Wentland 
> > Cc: Pekka Paalanen 
> > Cc: Alex Deucher 
> > Signed-off-by: Daniel Vetter 
> 
> Reviewed-by: Nicholas Kazlauskas 
> 
> It's what xf86-video-amdgpu expects too - the casing is intentional to 
> match up with the other default CRTC properties.
> 
> Thanks!

Applied, thanks for your review. Can I motivate you to take a look at the
other patches in this series too?

Thanks, Daniel

> 
> Nicholas Kazlauskas
> 
> > ---
> >   drivers/gpu/drm/drm_connector.c | 2 +-
> >   1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/drivers/gpu/drm/drm_connector.c 
> > b/drivers/gpu/drm/drm_connector.c
> > index 847539645558..e3ff73695c32 100644
> > --- a/drivers/gpu/drm/drm_connector.c
> > +++ b/drivers/gpu/drm/drm_connector.c
> > @@ -1367,7 +1367,7 @@ EXPORT_SYMBOL(drm_mode_create_scaling_mode_property);
> >*
> >*Absence of the property should indicate absence of support.
> >*
> > - * "vrr_enabled":
> > + * "VRR_ENABLED":
> >*Default &drm_crtc boolean property that notifies the driver 
> > that the
> >*content on the CRTC is suitable for variable refresh rate 
> > presentation.
> >*The driver will take this property as a hint to enable variable
> > 
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

