Hi,
On Fri, 4 Feb 2022 15:33:37 +0100, Neil Armstrong wrote:
> When the dw-hdmi bridge is the first bridge in the chain, there is no
> way to select an input format for the dw-hdmi HW component.
>
> Since the introduction of display-connector, negotiation has been
> broken since the dw-hd
On Fri, Feb 11, 2022 at 11:19:03AM +0530, Ramalingam C wrote:
> From: Matt Roper
>
> DG2 is the first platform that supports TC but not TBT. The
> interrupt code is updated to avoid trying to process
> TBT-specific bits and registers.
Is that a real concern?
>
> Cc: Swathi Dhanavanthri
> Signed-
Hi Sam
On 10.02.22 at 22:16, Sam Ravnborg wrote:
Hi Thomas,
On Thu, Feb 10, 2022 at 03:11:13PM +0100, Thomas Zimmermann wrote:
Fbdev's deferred I/O sorts all dirty pages by default, which incurs a
significant overhead. Make the sorting step optional and update the few
drivers that require it.
Hi
On 11.02.22 at 08:58, Dan Carpenter wrote:
On Thu, Feb 10, 2022 at 10:16:45PM +0100, Sam Ravnborg wrote:
diff --git a/drivers/video/fbdev/core/fb_defio.c b/drivers/video/fbdev/core/fb_defio.c
index 3727b1ca87b1..1f672cf253b2 100644
--- a/drivers/video/fbdev/core/fb_defio.c
+++ b/drivers/vi
On 2022-01-27 00:29:37 [+0100], Mario Kleiner wrote:
> Hi, first thank you for implementing these preempt disables according to
Hi Mario,
> the markers I left long ago. And sorry for the rather late reply.
>
> I had a look at the code, as of Linux 5.16, and also did a little test run
> (of a stan
On 2/9/22 13:37, Javier Martinez Canillas wrote:
[snip]
>
>> There is still an issue with the cursor, though.
>> After doing "echo hello > /dev/tty0", the text appears, but the cursor
>> is gone. "clear > /dev/tty0" brings it back.
>>
>
> Hmm, I was able to reproduce this too. Thanks for pointi
Hi,
On 2/10/22 23:43, Mario Limonciello wrote:
> Currently `pci_is_thunderbolt_attached` is used to indicate a device
> is connected externally.
>
> The PCI core now marks such devices as removable and downstream drivers
> can use this instead.
>
> Signed-off-by: Mario Limonciello
Thanks, this
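For context, the replacement the series points at presumably looks like the
sketch below; dev_is_removable() is the generic device-core check, while the
function names and exact call sites here are illustrative, not taken from the
patch:

#include <linux/pci.h>
#include <linux/device.h>

/* Old check, Thunderbolt-specific: */
static bool dev_is_external_old(struct pci_dev *pdev)
{
        return pci_is_thunderbolt_attached(pdev);
}

/* New check, relying on the PCI core marking externally connected
 * devices as removable: */
static bool dev_is_external_new(struct pci_dev *pdev)
{
        return dev_is_removable(&pdev->dev);
}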
On 2/9/22 17:26, Javier Martinez Canillas wrote:
> On 2/9/22 17:08, Andy Shevchenko wrote:
>
> [snip]
>
>>> Agreed, as mentioned I'll give it a try and send all the data as a
>>> bulk write with regmap.
>>
>> Ah, it might be that it should be noinc bulk op. Need to be checked anyway.
>>
>
> Ye
On 10/02/2022 21:47, john.c.harri...@intel.com wrote:
From: John Harrison
It is possible for reset notifications to arrive for a context that is
in the process of being banned. So don't flag these as an error, just
report them as informational (because it is still useful to know that
resets are
On Fri, Feb 11, 2022 at 12:43 AM Mario Limonciello
wrote:
>
> Currently `pci_is_thunderbolt_attached` is used to indicate a device
> is connected externally.
>
> The PCI core now marks such devices as removable and downstream drivers
> can use this instead.
>
> Signed-off-by: Mario Limonciello
>
Hi Sam
On 10.02.22 at 22:00, Sam Ravnborg wrote:
Hi Thomas,
On Thu, Feb 10, 2022 at 03:11:11PM +0100, Thomas Zimmermann wrote:
Return early if a page is already in the list of dirty pages for
deferred I/O. This can be detected if the page's list head is not
empty. Keep the list head initializ
On 10/02/22 9:40 pm, Matthew Auld wrote:
> On 27/01/2022 14:11, Arunpravin wrote:
>> - Make drm_buddy_alloc a single function to handle
>>   range allocation and non-range allocation demands
>>
>> - Implemented a new function alloc_range() which allocates
>>   the requested power-of-two block
This patch series adds a DRM driver for the Solomon OLED SSD1305, SSD1306,
SSD1307 and SSD1309 displays. It is a port of the ssd1307fb fbdev driver.
Using the DRM fbdev emulation, all the tests from Geert Uytterhoeven's repo
(https://git.kernel.org/pub/scm/linux/kernel/git/geert/fbtest.git) pass.
Pull the per-line conversion logic into a separate helper function.
This will allow to do line-by-line conversion in other helpers that
convert to a gray8 format.
Suggested-by: Thomas Zimmermann
Signed-off-by: Javier Martinez Canillas
---
(no changes since v3)
Changes in v3:
- Add a drm_fb_xr
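As a sketch (not the patch itself), a per-line XR24-to-gray8 helper can look
like the following, using the ITU BT.601 weights that the existing full-buffer
helper already uses; the exact name and signature in the patch may differ:

#include <linux/types.h>

static void xrgb8888_to_gray8_line(u8 *dst, const u32 *src, unsigned int pixels)
{
        unsigned int x;

        for (x = 0; x < pixels; x++) {
                u8 r = (src[x] & 0x00ff0000) >> 16;
                u8 g = (src[x] & 0x0000ff00) >> 8;
                u8 b =  src[x] & 0x000000ff;

                /* ITU BT.601: Y = 0.299 R + 0.587 G + 0.114 B, integer approximation */
                dst[x] = (3 * r + 6 * g + b) / 10;
        }
}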
Add support to convert from XR24 to reversed monochrome for drivers that
control monochromatic display panels, which only have 1 bit per pixel.
The function does a line-by-line conversion with an intermediate step:
first from XR24 to 8-bit grayscale and then to reversed monochrome.
The drm_fb_gray
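A sketch of the second step, packing gray8 pixels into 1 bpp; the threshold
and the bit ordering chosen here are assumptions for illustration, not
necessarily what the patch implements:

#include <linux/types.h>
#include <linux/string.h>
#include <linux/bits.h>
#include <linux/kernel.h>

static void gray8_to_mono_line(u8 *dst, const u8 *src, unsigned int pixels)
{
        unsigned int x;

        memset(dst, 0, DIV_ROUND_UP(pixels, 8));
        for (x = 0; x < pixels; x++) {
                /* Assumed threshold of 128 and LSB-first bit order. */
                if (src[x] >= 128)
                        dst[x / 8] |= BIT(x % 8);
        }
}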
The ssd130x driver only provides the core support for these devices but it
does not have any bus transport logic. Add a driver to interface over I2C.
Signed-off-by: Javier Martinez Canillas
---
Changes in v4:
- Remove unnecessary casting (Geert Uytterhoeven)
- Remove redundant blank lines (Andy
This adds a DRM driver for SSD1305, SSD1306, SSD1307 and SSD1309 Solomon
OLED display controllers.
It's only the core part of the driver and a bus specific driver is needed
for each transport interface supported by the display controllers.
Signed-off-by: Javier Martinez Canillas
---
Changes in
On 2/10/22 13:13, Matthew Auld wrote:
On devices with non-mappable LMEM ensure we always allocate the pages
within the mappable portion. For now we assume that all LMEM buffers
will require CPU access, which is also in line with pretty much all
current kernel internal users. In the next patch we
To make sure that tools like the get_maintainer.pl script will suggest
to Cc me if patches are posted for this driver.
Also include the Device Tree binding for the old ssd1307fb fbdev driver
since the new DRM driver was made compatible with the existing binding.
Signed-off-by: Javier Martinez Can
Hi,
On 2/11/22 10:00, Yehezkel Bernat wrote:
> On Fri, Feb 11, 2022 at 12:43 AM Mario Limonciello
> wrote:
>>
>> Currently `pci_is_thunderbolt_attached` is used to indicate a device
>> is connected externally.
>>
>> The PCI core now marks such devices as removable and downstream drivers
>> can us
The ssd130x DRM driver also makes use of this Device Tree binding to allow
existing users of the fbdev driver to migrate without the need to change
their Device Trees.
Add myself as another maintainer of the binding, to make sure that I will
be on Cc when patches are proposed for it.
Suggested-by
On 2/10/22 13:13, Matthew Auld wrote:
If the user doesn't require CPU access for the buffer, then
ALLOC_GPU_ONLY should be used, in order to prioritise allocating in the
non-mappable portion of LMEM, on devices with small BAR.
v2(Thomas):
- The BO_ALLOC_TOPDOWN naming here is poor, since th
On 2/10/22 13:13, Matthew Auld wrote:
Track the total amount of available visible memory, and also track
per-resource the amount of used visible memory. For now this is useful
for our debug output, and deciding if it is even worth calling into the
buddy allocator. In the future tracking the per
On 11.02.22 at 10:19, Javier Martinez Canillas wrote:
Pull the per-line conversion logic into a separate helper function.
This will allow to do line-by-line conversion in other helpers that
convert to a gray8 format.
Suggested-by: Thomas Zimmermann
Signed-off-by: Javier Martinez Canillas
On 2/10/22 13:13, Matthew Auld wrote:
Exercise each of the migration scenarios, verifying that the final
placement and buffer contents match our expectations.
v2(Thomas): Replace for_i915_gem_ww() block with simpler object_lock()
Signed-off-by: Matthew Auld
Cc: Thomas Hellström
Reviewed-b
On 2/10/22 13:13, Matthew Auld wrote:
If we have to contend with non-mappable LMEM, then we need to ensure the
object fits within the mappable portion, like in the selftests, where we
later try to CPU access the pages. However if it can't then we need to
gracefully handle this, without throwing
On 2/11/22 04:27, zhaoxiao wrote:
> platform_get_resource(pdev, IORESOURCE_IRQ, ..) relies on static
> allocation of IRQ resources in DT core code; this causes an issue
> when using hierarchical interrupt domains using the "interrupts" property
> in the node, as this bypasses the hierarchical setup and
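The usual fix is to resolve the interrupt with platform_get_irq() instead; a
minimal sketch (the driver context is illustrative, not taken from the patch):

#include <linux/platform_device.h>

static int example_probe_irq(struct platform_device *pdev)
{
        /* Resolve the interrupt through the IRQ core rather than reading a
         * statically created IORESOURCE_IRQ resource. */
        int irq = platform_get_irq(pdev, 0);

        if (irq < 0)
                return irq;     /* also covers -EPROBE_DEFER */

        return irq;
}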
Remove the bubble sort from fbdev deferred-I/O page tracking. Most
drivers only want to know which pages have been written to. The exact
order is not important.
Tested on simpledrm.
v2:
* make sorted page lists the special case (Sam)
* improve several comments (Sam)
Thomas Zimmerm
Return early if a page is already in the list of dirty pages for
deferred I/O. This can be detected if the page's list head is not
empty. Keep the list head initialized while the page is not enlisted
to make this work reliably.
v2:
* update comment and fix spelling (Sam)
Signed-off-by: Th
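A minimal sketch of the early-return idea, assuming the dirty pages are
tracked through page->lru on fbdefio->pagelist (as fb_defio did at the time);
the helper name here is hypothetical:

#include <linux/fb.h>
#include <linux/list.h>
#include <linux/mm.h>

static void fb_defio_track_page(struct fb_deferred_io *fbdefio, struct page *page)
{
        /* A non-empty list head means the page is already enlisted; keeping
         * the list head initialized while the page is unlisted makes this
         * check reliable. */
        if (!list_empty(&page->lru))
                return;

        list_add_tail(&page->lru, &fbdefio->pagelist);
}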
Fbdev's deferred I/O sorts all dirty pages by default, which incurs a
significant overhead. Make the sorting step optional and update the few
drivers that require it. Use a FIFO list by default.
Most fbdev drivers with deferred I/O build a bounding rectangle around
the dirty pages or simply flush
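A rough sketch of the two behaviours, assuming a per-driver flag on
struct fb_deferred_io selects sorted insertion (the flag name follows the
cover letter but is an assumption, not quoted from the patch):

#include <linux/fb.h>
#include <linux/list.h>
#include <linux/mm.h>

static void fb_defio_enlist_page(struct fb_deferred_io *fbdefio, struct page *page)
{
        struct page *cur;

        if (!fbdefio->sort_pagelist) {
                /* Default: FIFO; the write order does not matter to most drivers. */
                list_add_tail(&page->lru, &fbdefio->pagelist);
                return;
        }

        /* Opt-in: keep the list sorted by page index for drivers that need it. */
        list_for_each_entry(cur, &fbdefio->pagelist, lru) {
                if (cur->index > page->index)
                        break;
        }
        list_add_tail(&page->lru, &cur->lru);
}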
On 2/10/22 13:13, Matthew Auld wrote:
Starting from DG2+, when dealing with LMEM, we assume that by default
all userspace allocations should be placed in the non-mappable portion
of LMEM. Note that dumb buffers are not included here, since these are
not "GPU accelerated" and likely need CPU ac
On Fri, 11 Feb 2022 at 09:49, Thomas Hellström
wrote:
>
>
> On 2/10/22 13:13, Matthew Auld wrote:
> > Starting from DG2+, when dealing with LMEM, we assume that by default
> > all userspace allocations should be placed in the non-mappable portion
> > of LMEM. Note that dumb buffers are not includ
On 2/11/22 10:52, Matthew Auld wrote:
On Fri, 11 Feb 2022 at 09:49, Thomas Hellström
wrote:
On 2/10/22 13:13, Matthew Auld wrote:
Starting from DG2+, when dealing with LMEM, we assume that by default
all userspace allocations should be placed in the non-mappable portion
of LMEM. Note that
On 2/10/22 13:13, Matthew Auld wrote:
If set, force the allocation to be placed in the mappable portion of
LMEM. One big restriction here is that system memory must be given as a
potential placement for the object, that way we can always spill the
object into system memory if we can't make spac
On Fri, 11 Feb 2022 at 09:56, Thomas Hellström
wrote:
>
>
> On 2/11/22 10:52, Matthew Auld wrote:
> > On Fri, 11 Feb 2022 at 09:49, Thomas Hellström
> > wrote:
> >>
> >> On 2/10/22 13:13, Matthew Auld wrote:
> >>> Starting from DG2+, when dealing with LMEM, we assume that by default
> >>> all use
On 2/10/22 13:13, Matthew Auld wrote:
On platforms where there might be non-mappable LMEM, force userspace to
mark the buffers with the correct hint. When dumping the BO contents
during capture we need CPU access. Note this only applies to buffers
that can be placed in LMEM, and also doesn't im
On 2/10/22 13:13, Matthew Auld wrote:
Just pass along the probed io_size. The backend should be able to
utilize the entire range here, even if some of it is non-mappable.
It does leave open what to do with stolen local-memory.
Signed-off-by: Matthew Auld
Cc: Thomas Hellström
Reviewed
On Wed, 09 Feb 2022, Ville Syrjälä wrote:
> On Wed, Feb 09, 2022 at 11:09:41AM +0200, Jani Nikula wrote:
>> On Tue, 08 Feb 2022, Ville Syrjälä wrote:
>> > On Thu, Feb 03, 2022 at 11:03:55AM +0200, Jani Nikula wrote:
>> >> Abstract link status check to a function that takes 128b/132b and 8b/10b
>>
On 2/11/22 11:00, Matthew Auld wrote:
On Fri, 11 Feb 2022 at 09:56, Thomas Hellström
wrote:
On 2/11/22 10:52, Matthew Auld wrote:
On Fri, 11 Feb 2022 at 09:49, Thomas Hellström
wrote:
On 2/10/22 13:13, Matthew Auld wrote:
Starting from DG2+, when dealing with LMEM, we assume that by defa
On Fri, Feb 11, 2022 at 10:19:22AM +0100, Javier Martinez Canillas wrote:
> Pull the per-line conversion logic into a separate helper function.
>
> This will allow to do line-by-line conversion in other helpers that
> convert to a gray8 format.
...
> +static void drm_fb_xrgb_to_gray8_line(u8
Hello Andy,
On 2/11/22 11:28, Andy Shevchenko wrote:
> On Fri, Feb 11, 2022 at 10:19:22AM +0100, Javier Martinez Canillas wrote:
>> Pull the per-line conversion logic into a separate helper function.
>>
>> This will allow to do line-by-line conversion in other helpers that
>> convert to a gray8 fo
From: Raphael Gallais-Pou
This patch adds the CRC hashing feature supported by some recent hardware
versions of the LTDC. This is useful for test suites such as IGT-GPU-tools
[1] where a CRTC output frame can be compared to a test reference frame
thanks to their respective CRC hash.
[1] https://c
On Tue, Feb 08, 2022 at 11:44:32AM -0800, Abhinav Kumar wrote:
> There are cases where, depending on the size of the devcoredump and the
> speed at which usermode reads the dump, it can take longer than the
> current 5 minute timeout.
>
> This can lead to incomplete dumps as the device is dele
On Tue, Feb 08, 2022 at 05:55:18PM -0800, Abhinav Kumar wrote:
> Hi Johannes
>
> On 2/8/2022 1:54 PM, Johannes Berg wrote:
> > On Tue, 2022-02-08 at 13:40 -0800, Abhinav Kumar wrote:
> > > >
> > > I am checking what usermode sees and will get back (I didn't see an
> > > error so most likely it wa
On Fri, Feb 11, 2022 at 10:19:23AM +0100, Javier Martinez Canillas wrote:
> Add support to convert from XR24 to reversed monochrome for drivers that
> control monochromatic display panels, that only have 1 bit per pixel.
>
> The function does a line-by-line conversion doing an intermediate step
>
On Fri, Feb 11, 2022 at 11:40:13AM +0100, Javier Martinez Canillas wrote:
> On 2/11/22 11:28, Andy Shevchenko wrote:
> > On Fri, Feb 11, 2022 at 10:19:22AM +0100, Javier Martinez Canillas wrote:
...
> >> +static void drm_fb_xrgb_to_gray8_line(u8 *dst, const u32 *src, unsigned int pixels
On Fri, Feb 11, 2022 at 10:19:25AM +0100, Javier Martinez Canillas wrote:
> The ssd130x driver only provides the core support for these devices but it
> does not have any bus transport logic. Add a driver to interface over I2C.
Reviewed-by: Andy Shevchenko
> Signed-off-by: Javier Martinez Canill
Starting from DG2 we will have resizable BAR support for device local-memory,
but in some cases the final BAR size might still be smaller than the total
local-memory size. In such cases only part of local-memory will be CPU
accessible, while the remainder is only accessible via the GPU. This series
With small LMEM-BAR we need to be able to differentiate between the
total size of LMEM, and how much of it is CPU mappable. The end goal is
to be able to utilize the entire range, even if part of it is not CPU
accessible.
v2: also update intelfb_create
Signed-off-by: Matthew Auld
Cc: Thomas Hell
On devices with non-mappable LMEM ensure we always allocate the pages
within the mappable portion. For now we assume that all LMEM buffers
will require CPU access, which is also in line with pretty much all
current kernel internal users. In the next patch we will introduce a new
flag to override thi
Track the total amount of available visible memory, and also track
per-resource the amount of used visible memory. For now this is useful
for our debug output, and deciding if it is even worth calling into the
buddy allocator. In the future tracking the per-resource visible usage
will be useful for
Otherwise we get -EINVAL, instead of the more useful -E2BIG if the
allocation doesn't fit within the pfn range, like with mappable lmem.
The hugepages selftest, for example, needs this to know if a smaller
size is needed.
Signed-off-by: Matthew Auld
Cc: Thomas Hellström
Reviewed-by: Thomas Hells
If the user doesn't require CPU access for the buffer, then
ALLOC_GPU_ONLY should be used, in order to prioritise allocating in the
non-mappable portion of LMEM, on devices with small BAR.
v2(Thomas):
- The BO_ALLOC_TOPDOWN naming here is poor, since this is pure lies on
systems that don't e
Differentiate between mappable and non-mappable resources; also, if this
is an actual range allocation, ensure we set res->start as the starting
pfn. Later, when we need to do non-mappable -> mappable moves, we
want TTM to see that the current placement is not compatible, which
should result in an
Check that mappable vs non-mappable matches our expectations.
Signed-off-by: Matthew Auld
Cc: Thomas Hellström
Reviewed-by: Thomas Hellström
---
.../drm/i915/selftests/intel_memory_region.c | 143 ++
1 file changed, 143 insertions(+)
diff --git a/drivers/gpu/drm/i915/selftest
Just pass along the probed io_size. The backend should be able to
utilize the entire range here, even if some of it is non-mappable.
It does leave open what to do with stolen local-memory.
Signed-off-by: Matthew Auld
Cc: Thomas Hellström
Reviewed-by: Thomas Hellström
---
drivers/gpu/drm/
If we have to contend with non-mappable LMEM, then we need to ensure the
object fits within the mappable portion, like in the selftests, where we
later try to CPU access the pages. However if it can't then we need to
gracefully handle this, without throwing an error.
Also it looks like TTM will re
Exercise each of the migration scenarios, verifying that the final
placement and buffer contents match our expectations.
v2(Thomas): Replace for_i915_gem_ww() block with simpler object_lock()
Signed-off-by: Matthew Auld
Cc: Thomas Hellström
Reviewed-by: Thomas Hellström
---
.../drm/i915/gem/s
If we need to make room for some mappable object, then we should
only victimize objects that have one or more pages occupying the visible
portion of LMEM. Let's also create a new priority hint for objects that
are placed in mappable memory, where we know that CPU access was
requested, that way we hope
On Fri, Feb 11, 2022 at 10:19:24AM +0100, Javier Martinez Canillas wrote:
> This adds a DRM driver for SSD1305, SSD1306, SSD1307 and SSD1309 Solomon
> OLED display controllers.
>
> It's only the core part of the driver and a bus specific driver is needed
> for each transport interface supported by
The end goal is to have userspace tell the kernel what buffers will
require CPU access, however if we ever reach the CPU fault handler, and
the current resource is not mappable, then we should attempt to migrate
the buffer to the mappable portion of LMEM, or even system memory, if the
allowable pla
Starting from DG2+, when dealing with LMEM, we assume that by default
all userspace allocations should be placed in the non-mappable portion
of LMEM. Note that dumb buffers are not included here, since these are
not "GPU accelerated" and likely need CPU access. We choose to just
always set GPU_ONL
On platforms where there might be non-mappable LMEM, force userspace to
mark the buffers with the correct hint. When dumping the BO contents
during capture we need CPU access. Note this only applies to buffers
that can be placed in LMEM, and also doesn't impact DG1.
v2(Reported-by: kernel test rob
If set, force the allocation to be placed in the mappable portion of
LMEM. One big restriction here is that system memory must be given as a
potential placement for the object, that way we can always spill the
object into system memory if we can't make space.
XXX: Still needs IGTs. Including now j
On Fri, Feb 11, 2022 at 10:21:57AM +0100, Javier Martinez Canillas wrote:
> To make sure that tools like the get_maintainer.pl script will suggest
> to Cc me if patches are posted for this driver.
>
> Also include the Device Tree binding for the old ssd1307fb fbdev driver
> since the new DRM drive
On Fri, Feb 11, 2022 at 10:22:53AM +0100, Javier Martinez Canillas wrote:
> The ssd130x DRM driver also makes use of this Device Tree binding to allow
> existing users of the fbdev driver to migrate without the need to change
> their Device Trees.
>
> Add myself as another maintainer of the bindin
Hello Andy,
Thanks for your feedback.
On 2/11/22 12:10, Andy Shevchenko wrote:
[snip]
>> +static void drm_fb_gray8_to_mono_reversed_line(u8 *dst, const u8 *src, unsigned int pixels,
>> +                                                unsigned int start_offset, unsigned int end_len)
>> +{
>>
Hi
On 11.02.22 at 12:12, Andy Shevchenko wrote:
On Fri, Feb 11, 2022 at 11:40:13AM +0100, Javier Martinez Canillas wrote:
On 2/11/22 11:28, Andy Shevchenko wrote:
On Fri, Feb 11, 2022 at 10:19:22AM +0100, Javier Martinez Canillas wrote:
...
+static void drm_fb_xrgb_to_gray8_line(u8 *d
Hi
On 11.02.22 at 12:10, Andy Shevchenko wrote:
On Fri, Feb 11, 2022 at 10:19:23AM +0100, Javier Martinez Canillas wrote:
Add support to convert from XR24 to reversed monochrome for drivers that
control monochromatic display panels, that only have 1 bit per pixel.
The function does a line-by-
On Fri, 11 Feb 2022, Thomas Zimmermann wrote:
> Hi
>
> On 11.02.22 at 12:12, Andy Shevchenko wrote:
>> On Fri, Feb 11, 2022 at 11:40:13AM +0100, Javier Martinez Canillas wrote:
>>> On 2/11/22 11:28, Andy Shevchenko wrote:
On Fri, Feb 11, 2022 at 10:19:22AM +0100, Javier Martinez Canillas wro
On 2/11/22 12:33, Andy Shevchenko wrote:
> On Fri, Feb 11, 2022 at 10:19:24AM +0100, Javier Martinez Canillas wrote:
>> This adds a DRM driver for SSD1305, SSD1306, SSD1307 and SSD1309 Solomon
>> OLED display controllers.
>>
>> It's only the core part of the driver and a bus specific driver is need
Hello Jani,
On 2/11/22 13:05, Jani Nikula wrote:
[snip]
I don't see why a while loop would be an improvement here TBH.
>>>
>>> Less letters to parse when reading the code.
>>
>> It's a simple refactoring of code that has worked well so far. Let's
>> leave it as-is for now.
>
> IMO *always
https://bugzilla.kernel.org/show_bug.cgi?id=201957
Ilia (infer...@gmail.com) changed:
           What    |Removed    |Added
                 CC|           |infer...@gmail.com
--- Commen
Hi Javier,
On Fri, Feb 11, 2022 at 1:06 PM Javier Martinez Canillas
wrote:
> On 2/11/22 12:33, Andy Shevchenko wrote:
> > On Fri, Feb 11, 2022 at 10:19:24AM +0100, Javier Martinez Canillas wrote:
> >> This adds a DRM driver for SSD1305, SSD1306, SSD1307 and SSD1309 Solomon
> >> OLED display contr
Hello Geert,
On 2/11/22 13:23, Geert Uytterhoeven wrote:
[snip]
+if (IS_ERR(bl)) {
>>>
+ret = PTR_ERR(bl);
+dev_err_probe(dev, ret, "Unable to register backlight device\n");
+return ERR_PTR(ret);
>>>
>>> dev_err_prob
Hi Jani,
On Fri, Feb 11, 2022 at 1:06 PM Jani Nikula wrote:
> On Fri, 11 Feb 2022, Thomas Zimmermann wrote:
> > On 11.02.22 at 12:12, Andy Shevchenko wrote:
> >> On Fri, Feb 11, 2022 at 11:40:13AM +0100, Javier Martinez Canillas wrote:
> >>> On 2/11/22 11:28, Andy Shevchenko wrote:
> On Fr
Hi
On 11.02.22 at 10:19, Javier Martinez Canillas wrote:
...
+
+static void ssd130x_display_pipe_enable(struct drm_simple_display_pipe *pipe,
+                                        struct drm_crtc_state *crtc_state,
+                                        struct drm_plane_state *plane_state)
+
Hi
On 11.02.22 at 10:19, Javier Martinez Canillas wrote:
Add support to convert from XR24 to reversed monochrome for drivers that
control monochromatic display panels, that only have 1 bit per pixel.
The function does a line-by-line conversion doing an intermediate step
first from XR24 to 8-bi
On 2/11/22 12:34, Matthew Auld wrote:
Starting from DG2+, when dealing with LMEM, we assume that by default
all userspace allocations should be placed in the non-mappable portion
of LMEM. Note that dumb buffers are not included here, since these are
not "GPU accelerated" and likely need CPU ac
Hi guys,
by now that should be a rather well known set of changes.
The basic idea is that instead of the fixed exclusive/shared classes we now
attach a usage to each fence in the dma_resv object describing how the
operation represented by the fence is using the resources protected by
the dma_res
This function allows replacing fences in the shared fence list when we can
guarantee that the operation represented by the original fence has finished,
or that there are no more accesses to the resources protected by the dma_resv
object once the new fence finishes.
Then use this function in the amdkfd code
Drivers should never touch this directly.
v2: drop kerneldoc for the now-internal handling
Signed-off-by: Christian König
Reviewed-by: Daniel Vetter
---
drivers/dma-buf/dma-resv.c | 11 +++
include/linux/dma-resv.h | 26 +-
2 files changed, 12 insertions(+), 25 de
Instead use the new dma_resv_get_singleton function.
Signed-off-by: Christian König
Reviewed-by: Daniel Vetter
Cc: VMware Graphics
Cc: Zack Rusin
---
drivers/gpu/drm/vmwgfx/vmwgfx_resource.c | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/vmwgfx/vmwgf
Instead use the new dma_resv_get_singleton function.
Signed-off-by: Christian König
Reviewed-by: Daniel Vetter
Cc: Ben Skeggs
Cc: Karol Herbst
Cc: Lyude Paul
Cc: nouv...@lists.freedesktop.org
---
drivers/gpu/drm/nouveau/nouveau_bo.c | 9 -
1 file changed, 8 insertions(+), 1 deletion(
Instead use the new dma_resv_get_singleton function.
Signed-off-by: Christian König
Reviewed-by: Daniel Vetter
Cc: amd-...@lists.freedesktop.org
---
drivers/gpu/drm/radeon/radeon_display.c | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/radeon/radeon_di
Add a function to simplify getting a single fence for all the fences in
the dma_resv object.
v2: fix ref leak in error handling
Signed-off-by: Christian König
---
drivers/dma-buf/dma-resv.c | 52 ++
include/linux/dma-resv.h | 2 ++
2 files changed, 54 inse
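A hedged usage sketch; the signature shown is the one that eventually landed
upstream (taking a dma_resv_usage), while this revision of the series may
still take a plain bool for write access, and the caller name is hypothetical:

#include <linux/dma-resv.h>
#include <linux/dma-fence.h>

static int wait_for_all_fences(struct dma_resv *resv)
{
        struct dma_fence *fence;
        long timeout;
        int ret;

        /* Collapse all relevant fences into one; internally this builds a
         * dma_fence_array when more than one fence is found. */
        ret = dma_resv_get_singleton(resv, DMA_RESV_USAGE_READ, &fence);
        if (ret)
                return ret;

        if (!fence)
                return 0;       /* nothing to wait for */

        timeout = dma_fence_wait(fence, true);
        dma_fence_put(fence);
        return timeout < 0 ? timeout : 0;
}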
Use dma_resv_wait() instead of extracting the exclusive fence and
waiting on it manually.
Signed-off-by: Christian König
Reviewed-by: Daniel Vetter
Cc: Jason Gunthorpe
Cc: Leon Romanovsky
Cc: Maor Gottlieb
Cc: Gal Pressman
Cc: linux-me...@vger.kernel.org
Cc: linaro-mm-...@lists.linaro.org
--
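The exported helper is dma_resv_wait_timeout(); a sketch with the bool-based
signature of that kernel version (it later switched to a dma_resv_usage
argument), with a hypothetical caller name:

#include <linux/dma-resv.h>
#include <linux/sched.h>

static int wait_for_bo_idle(struct dma_resv *resv)
{
        /* wait_all = false: only the exclusive (write) fence matters here. */
        long ret = dma_resv_wait_timeout(resv, false, true, MAX_SCHEDULE_TIMEOUT);

        if (ret < 0)
                return ret;
        return 0;
}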
Audit all the users of dma_resv_add_excl_fence() and make sure they
reserve a shared slot also when only trying to add an exclusive fence.
This is the next step towards handling the exclusive fence like a
shared one.
v2: fix missed case in amdgpu
v3: and two more radeon, rename function
Signed-o
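The pattern the audit enforces, sketched with the helper names of that time
(dma_resv_reserve_shared() was later renamed dma_resv_reserve_fences()); the
wrapper function itself is hypothetical:

#include <linux/dma-resv.h>

/* Caller must hold the dma_resv lock. Reserve a shared slot even when only
 * an exclusive fence will be added, so the fence can later be treated like
 * a shared one. */
static int add_excl_fence_checked(struct dma_resv *resv, struct dma_fence *fence)
{
        int ret;

        ret = dma_resv_reserve_shared(resv, 1);
        if (ret)
                return ret;

        dma_resv_add_excl_fence(resv, fence);
        return 0;
}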
Instead of distinguishing between shared and exclusive fences, specify
the fence usage while adding fences.
Rework all drivers to use this interface instead and deprecate the old one.
v2: some kerneldoc comments suggested by Daniel
v3: fix a missing case in radeon
v4: rebase on nouveau changes, fix l
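A sketch of old versus new call sites, using the names as they ended up
upstream; intermediate revisions of this series may use different usage
values, and the wrapper function is hypothetical:

#include <linux/dma-resv.h>

/* Caller must hold the dma_resv lock and have reserved a fence slot. */
static void add_fence_example(struct dma_resv *resv, struct dma_fence *fence, bool write)
{
        /* Old interface: the slot type is encoded in the function name,
         *   dma_resv_add_excl_fence(resv, fence);
         *   dma_resv_add_shared_fence(resv, fence);
         * New interface: one entry point, the caller states how the fence
         * uses the protected resources. */
        dma_resv_add_fence(resv, fence,
                           write ? DMA_RESV_USAGE_WRITE : DMA_RESV_USAGE_READ);
}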
Drivers should never touch this directly.
v2: fix rebase clash
Signed-off-by: Christian König
---
drivers/dma-buf/dma-resv.c | 6 ++
include/linux/dma-resv.h | 17 -
2 files changed, 6 insertions(+), 17 deletions(-)
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-b
On 2/11/22 12:34, Matthew Auld wrote:
On platforms where there might be non-mappable LMEM, force userspace to
mark the buffers with the correct hint. When dumping the BO contents
during capture we need CPU access. Note this only applies to buffers
that can be placed in LMEM, and also doesn't im
Use dma_resv_get_singleton() here to eventually get more than one write
fence combined into a single fence.
Signed-off-by: Christian König
Reviewed-by: Daniel Vetter
---
drivers/gpu/drm/drm_gem_atomic_helper.c | 18 +++---
1 file changed, 7 insertions(+), 11 deletions(-)
diff --git a/drivers/gp
This change adds the dma_resv_usage enum and allows us to specify why a
dma_resv object is queried for its containing fences.
Additional to that a dma_resv_usage_rw() helper function is added to aid
retrieving the fences for a read or write userspace submission.
This is then deployed to the diffe
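For orientation, this is roughly the shape the enum and the read/write helper
took upstream; the exact set of usage values in this revision of the series
may differ:

enum dma_resv_usage {
        DMA_RESV_USAGE_KERNEL,          /* kernel memory management work, e.g. copies and clears */
        DMA_RESV_USAGE_WRITE,           /* implicitly synchronized writes */
        DMA_RESV_USAGE_READ,            /* implicitly synchronized reads */
        DMA_RESV_USAGE_BOOKKEEP,        /* not part of implicit sync at all */
};

/* A new writer must wait for existing readers and writers; a new reader
 * only needs to wait for existing writers. */
static inline enum dma_resv_usage dma_resv_usage_rw(bool write)
{
        return write ? DMA_RESV_USAGE_READ : DMA_RESV_USAGE_WRITE;
}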
So far we had the approach of using a directed acyclic
graph with the dma_resv obj.
This turned out to have many downsides; in particular, it means
that every single driver and user of this interface needs
to be aware of this restriction when adding fences. If the
rules for the DAG are not followed th
That should now be handled by the common dma_resv framework.
Signed-off-by: Christian König
Cc: intel-...@lists.freedesktop.org
---
drivers/gpu/drm/i915/gem/i915_gem_object.c | 29 ++--
drivers/gpu/drm/i915/gem/i915_gem_object.h | 5 ++--
drivers/gpu/drm/i915/gem/i915_gem_tt
Use dma_resv_get_singleton() here to eventually get more than one write
fence combined into a single fence.
Signed-off-by: Christian König
Reviewed-by: Daniel Vetter
Cc: Thomas Zimmermann
Cc: Laurent Pinchart
Cc: Maxime Ripard
Cc: Lyude Paul
Cc: nouv...@lists.freedesktop.org
---
drivers/gpu/drm/nouveau/
This is now handled by the DMA-buf framework in the dma_resv obj.
Signed-off-by: Christian König
---
.../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c | 13 ---
drivers/gpu/drm/amd/amdgpu/amdgpu_object.c| 7 ++--
drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c| 11 +++---
drivers/gpu/drm/amd
We can get the excl fence together with the shared ones as well.
Signed-off-by: Christian König
Reviewed-by: Daniel Vetter
Cc: Lucas Stach
Cc: Russell King
Cc: Christian Gmeiner
Cc: etna...@lists.freedesktop.org
---
drivers/gpu/drm/etnaviv/etnaviv_gem.h| 1 -
drivers/gpu/drm/etnaviv
Makes the code a bit simpler.
Signed-off-by: Christian König
Reviewed-by: Daniel Vetter
Cc: amd-...@lists.freedesktop.org
---
drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c | 23 +++
1 file changed, 3 insertions(+), 20 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdg
Add a usage for kernel submissions. Waiting for those
is mandatory for dynamic DMA-bufs.
Signed-off-by: Christian König
---
drivers/dma-buf/st-dma-resv.c| 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c | 2 +-
drivers/