On Wed, Mar 31, 2021 at 3:58 AM Dmitry Baryshkov
wrote:
>
> The 7nm, 10nm and 14nm drivers would store interim data used during
> VCO/PLL rate setting in the global dsi_pll_Nnm structure. Move these data
> structures to on-stack storage. While we are at it, drop
> unused/static 'config' data, un
On 10.06.21 at 23:09, Jason Ekstrand wrote:
Add a helper function to get a single fence representing
all fences in a dma_resv object.
This fence is either the only one in the object or all the not yet
signaled fences of the object, flattened out into a dma_fence_array.
v2 (Jason Ekstrand):
- Take referen
On Thu, 10 Jun 2021 17:38:24 -0300
Leandro Ribeiro wrote:
> Add a small description and document struct fields of
> drm_mode_get_plane.
>
> Signed-off-by: Leandro Ribeiro
> ---
> include/uapi/drm/drm_mode.h | 35 +++
> 1 file changed, 35 insertions(+)
>
> diff
On Fri, Jun 11, 2021 at 8:55 AM Christian König
wrote:
>
> On 10.06.21 at 22:42, Daniel Vetter wrote:
> > On Thu, Jun 10, 2021 at 10:10 PM Jason Ekstrand
> > wrote:
> >> On Thu, Jun 10, 2021 at 8:35 AM Jason Ekstrand
> >> wrote:
> >>> On Thu, Jun 10, 2021 at 6:30 AM Daniel Vetter
> >>> wrot
On 10.06.21 at 23:09, Jason Ekstrand wrote:
This adds a new "DMA Buffer ioctls" section to the dma-buf docs and adds
documentation for DMA_BUF_IOCTL_SYNC.
v2 (Daniel Vetter):
- Fix a couple typos
- Add commentary about synchronization with other devices
- Use item list format for describi
On Fri, Jun 11, 2021 at 4:18 AM Desmond Cheong Zhi Xi
wrote:
On 11/6/21 12:48 am, Daniel Vetter wrote:
> > On Thu, Jun 10, 2021 at 11:21:39PM +0800, Desmond Cheong Zhi Xi wrote:
> >> On 10/6/21 6:10 pm, Daniel Vetter wrote:
> >>> On Wed, Jun 09, 2021 at 05:21:19PM +0800, Desmond Cheong Zhi Xi wro
https://bugzilla.kernel.org/show_bug.cgi?id=213391
--- Comment #9 from Michel Dänzer (mic...@daenzer.net) ---
If you can, reverting to an older version of the files under
/lib/firmware/amdgpu/ may avoid the hangs.
On Fri, Jun 11, 2021 at 9:20 AM Pekka Paalanen wrote:
>
> On Thu, 10 Jun 2021 17:38:24 -0300
> Leandro Ribeiro wrote:
>
> > Add a small description and document struct fields of
> > drm_mode_get_plane.
> >
> > Signed-off-by: Leandro Ribeiro
> > ---
> > include/uapi/drm/drm_mode.h | 35 +
On 11.06.21 at 09:20, Daniel Vetter wrote:
On Fri, Jun 11, 2021 at 8:55 AM Christian König
wrote:
On 10.06.21 at 22:42, Daniel Vetter wrote:
On Thu, Jun 10, 2021 at 10:10 PM Jason Ekstrand wrote:
On Thu, Jun 10, 2021 at 8:35 AM Jason Ekstrand wrote:
On Thu, Jun 10, 2021 at 6:30 AM Daniel
On 10.06.21 at 23:09, Jason Ekstrand wrote:
For dma-buf sync_file import, we want to get all the fences on a
dma_resv plus one more. We could wrap the fence we get back in an array
fence or we could make dma_resv_get_singleton_unlocked take "one more"
to make this case easier.
Signed-off-by: J
On Thu, Jun 10, 2021 at 11:17:54AM +0200, Christian König wrote:
> The callback and the irq work are never used at the same
> time. Putting them into a union saves us 24 bytes and
> makes the structure only 120 bytes in size.
Yeah pushing below 128 bytes makes sense.
>
> Signed-off-by: Christia
On Thu, Jun 10, 2021 at 11:17:55AM +0200, Christian König wrote:
> Add a common allocation helper. Cleaning up the mix of kzalloc/kmalloc
> and some unused code in the selftest.
>
> Signed-off-by: Christian König
> ---
> drivers/dma-buf/st-dma-fence-chain.c | 16 --
> driver
On Thu, Jun 10, 2021 at 11:17:56AM +0200, Christian König wrote:
> Exercise the newly added functions.
>
> Signed-off-by: Christian König
I have honestly no idea what this checks. Spawning a few threads to
validate kmalloc/kfree feels a bit silly. Now testing whether we correctly
rcu-delay the f
Hi Daniel,
On Tue, Jun 1, 2021 at 17:48:12 +0200, Daniel Vetter
wrote:
On Fri, May 28, 2021 at 12:20:58AM +0100, Paul Cercueil wrote:
This information is carried from the ".atomic_check" to the
".atomic_commit_tail"; as such it is state-specific, and should be
moved
to the private st
On Thursday, June 10th, 2021 at 23:00, Daniel Vetter
wrote:
> If there's a strong consensus that we really need this then I'm not
> going to nack this, but this really needs a pile of acks from
> compositor folks that they're willing to live with the resulting
> fallout this will likely bring. Y
On Thu, Jun 10, 2021 at 03:44:37PM -0700, Daniele Ceraolo Spurio wrote:
>
>
> On 6/2/2021 11:14 AM, Rodrigo Vivi wrote:
> > On Mon, May 24, 2021 at 10:47:58PM -0700, Daniele Ceraolo Spurio wrote:
> > > Now that we can handle destruction and re-creation of the arb session,
> > > we can postpone th
On Thu, Jun 10, 2021 at 03:58:13PM -0700, Daniele Ceraolo Spurio wrote:
>
>
> On 6/2/2021 9:20 AM, Rodrigo Vivi wrote:
> > On Mon, May 24, 2021 at 10:47:59PM -0700, Daniele Ceraolo Spurio wrote:
> > > From: "Huang, Sean Z"
> > >
> > > During the power event S3+ sleep/resume, hardware will lose
On Fri, Jun 11, 2021 at 08:09:00AM +0200, Zbigniew Kempczyński wrote:
> On Thu, Jun 10, 2021 at 10:36:12AM -0400, Rodrigo Vivi wrote:
> > On Thu, Jun 10, 2021 at 12:39:55PM +0200, Zbigniew Kempczyński wrote:
> > > We have established previously that we stop using relocations starting
> > > from gen12 pl
On Thu, Jun 10, 2021 at 11:17:57AM +0200, Christian König wrote:
> Add some rather sophisticated lockless garbage collection
> for dma_fence_chain objects.
>
> For this, keep all initialized dma_fence_chain nodes on a
> queue and trigger garbage collection before a new one is
> allocated.
>
> Sign
Hi,
On Fri, 11 Jun 2021 at 10:07, John Stultz wrote:
>
> On Wed, Mar 31, 2021 at 3:58 AM Dmitry Baryshkov
> wrote:
> >
> > The 7nm, 10nm and 14nm drivers would store interim data used during
> > VCO/PLL rate setting in the global dsi_pll_Nnm structure. Move these data
> > structures to the onstac
On Thu, Jun 10, 2021 at 04:09:19PM +0100, Paul Cercueil wrote:
> Hi Daniel,
>
> On Tue, Jun 1, 2021 at 17:48:12 +0200, Daniel Vetter
> wrote:
> > On Fri, May 28, 2021 at 12:20:58AM +0100, Paul Cercueil wrote:
> > > This information is carried from the ".atomic_check" to the
> > > ".atomic_c
On Thu, Jun 10, 2021 at 11:17:59AM +0200, Christian König wrote:
> Unwrap the explicit fence if it is a dma_fence_chain and
> sync to the first fence not matching the owner rules.
>
> Signed-off-by: Christian König
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c | 118 +--
Hi all, this patch series implements the MIPI rx DPI feature. Please help to review.
This is the v7 version, rebased DT on the latest code,
removed the HDCP patch (I'll upload the HDCP feature in a new patch).
If there are any mistakes, please let me know and I'll fix them in the next series.
Change history:
v7:
- Rebase DT on
Add 'bus-type' and 'data-lanes' defines for port0. Define the DP tx lane0
and lane1 swing register arrays, and an audio enable flag.
Signed-off-by: Xin Ji
---
.../display/bridge/analogix,anx7625.yaml | 57 ++-
1 file changed, 56 insertions(+), 1 deletion(-)
diff --git
a/Documen
In some cases, the original code may return a non-zero value; force a
return of 0 if the operation finished.
Reviewed-by: Robert Foss
Signed-off-by: Xin Ji
---
drivers/gpu/drm/bridge/analogix/anx7625.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/bridge/analogix/a
Add MIPI rx DPI input feature support.
Reviewed-by: Robert Foss
Signed-off-by: Xin Ji
---
drivers/gpu/drm/bridge/analogix/anx7625.c | 245 --
drivers/gpu/drm/bridge/analogix/anx7625.h | 18 +-
2 files changed, 203 insertions(+), 60 deletions(-)
diff --git a/drivers/gpu/drm
Add HDMI audio codec function support; enable it through the device tree
flag "analogix,audio-enable".
Reviewed-by: Robert Foss
Signed-off-by: Xin Ji
---
drivers/gpu/drm/bridge/analogix/anx7625.c | 227 ++
drivers/gpu/drm/bridge/analogix/anx7625.h | 5 +
2 files changed, 232 i
On Thu, Jun 10, 2021 at 11:18:00AM +0200, Christian König wrote:
> Drop the workaround and instead implement a better solution.
>
> Basically we are now chaining all submissions using a dma_fence_chain
> container and adding them as exclusive fence to the dma_resv object.
>
> This way other drive
Pull request for drm-misc-next and drm-intel-gt-next.
topic/i915-ttm-2021-06-11:
drm-misc and drm-intel pull request for topic/i915-ttm:
- Convert i915 lmem handling to ttm.
- Add a patch to temporarily add a driver_private member to vma_node.
- Use this to allow mixed object mmap handling for i91
On Fri, Jun 11, 2021 at 09:42:07AM +0200, Christian König wrote:
> On 11.06.21 at 09:20, Daniel Vetter wrote:
> > On Fri, Jun 11, 2021 at 8:55 AM Christian König
> > wrote:
> > > On 10.06.21 at 22:42, Daniel Vetter wrote:
> > > > On Thu, Jun 10, 2021 at 10:10 PM Jason Ekstrand
> > > > wrote:
>
On Thu, Jun 10, 2021 at 02:36:59PM -0700, Dongwon Kim wrote:
> Render clients should be able to create/destroy dumb object to import
> and use it as render buffer in case the default DRM device is different
> from the render device (i.e. kmsro).
>
> Signed-off-by: Dongwon Kim
Uh no.
Well I know
Remove unnecessary SIGNAL_TYPE_HDMI_TYPE_A check that was performed in the
drm_mode_is_420_only() case, but not in the drm_mode_is_420_also() &&
force_yuv420_output case.
Without knowing whether YCbCr 4:2:0 is supported outside of HDMI, there is
no reason to use RGB when the display reports d
On 11.06.21 at 11:33, Daniel Vetter wrote:
On Fri, Jun 11, 2021 at 09:42:07AM +0200, Christian König wrote:
On 11.06.21 at 09:20, Daniel Vetter wrote:
On Fri, Jun 11, 2021 at 8:55 AM Christian König
wrote:
On 10.06.21 at 22:42, Daniel Vetter wrote:
On Thu, Jun 10, 2021 at 10:10 PM Jason E
On 11.06.21 at 09:58, Daniel Vetter wrote:
On Thu, Jun 10, 2021 at 11:17:56AM +0200, Christian König wrote:
Exercise the newly added functions.
Signed-off-by: Christian König
I have honestly no idea what this checks. Spawning a few threads to
validate kmalloc/kfree feels a bit silly. Now tes
On 11.06.21 at 10:58, Daniel Vetter wrote:
On Thu, Jun 10, 2021 at 11:17:57AM +0200, Christian König wrote:
Add some rather sophisticated lockless garbage collection
for dma_fence_chain objects.
For this, keep all initialized dma_fence_chain nodes on a
queue and trigger garbage collection befor
On 11.06.21 at 11:07, Daniel Vetter wrote:
On Thu, Jun 10, 2021 at 11:17:59AM +0200, Christian König wrote:
Unwrap the explicit fence if it is a dma_fence_chain and
sync to the first fence not matching the owner rules.
Signed-off-by: Christian König
---
drivers/gpu/drm/amd/amdgpu/amdgpu_s
On 10/06/2021 14:06, Chunyou Tang wrote:
> Hi Steven,
Hi Chunyou,
For some reason I'm not directly receiving your emails (only via the
list) - can you double check your email configuration?
>>> The GPU exception fault status register (0x3C): the low 8 bits are the
>>> EXCEPTION_TYPE. We can see the d
On 11.06.21 at 11:17, Daniel Vetter wrote:
On Thu, Jun 10, 2021 at 11:18:00AM +0200, Christian König wrote:
Drop the workaround and instead implement a better solution.
Basically we are now chaining all submissions using a dma_fence_chain
container and adding them as exclusive fence to the dma
On 08/06/2021 15:38, Wei Yongjun wrote:
> Fix the missing clk_disable_unprepare() before return
> from panfrost_clk_init() in the error handling case.
>
> Fixes: b681af0bc1cc ("drm: panfrost: add optional bus_clock")
> Reported-by: Hulk Robot
> Signed-off-by: Wei Yongjun
Reviewed-by: Steven Pri
On Fri, 11 Jun 2021 11:10:16 +0100
Steven Price wrote:
> On 10/06/2021 14:06, Chunyou Tang wrote:
> > Hi Steven,
>
> Hi Chunyou,
>
> For some reason I'm not directly receiving your emails (only via the
> list) - can you double check your email configuration?
>
> >>> The GPU exception fault stat
On Fri, 11 Jun 2021 at 10:47, Daniel Vetter wrote:
>
> On Thu, Jun 10, 2021 at 02:36:59PM -0700, Dongwon Kim wrote:
> > Render clients should be able to create/destroy dumb object to import
> > and use it as render buffer in case the default DRM device is different
> > from the render device (i.e.
Quoting Maarten Lankhorst (2021-06-11 12:27:15)
> Pull request for drm-misc-next and drm-intel-gt-next.
>
> topic/i915-ttm-2021-06-11:
> drm-misc and drm-intel pull request for topic/i915-ttm:
> - Convert i915 lmem handling to ttm.
> - Add a patch to temporarily add a driver_private member to vma_
Quoting Joonas Lahtinen (2021-06-11 13:40:56)
> Quoting Maarten Lankhorst (2021-06-11 12:27:15)
> > Pull request for drm-misc-next and drm-intel-gt-next.
> >
> > topic/i915-ttm-2021-06-11:
> > drm-misc and drm-intel pull request for topic/i915-ttm:
> > - Convert i915 lmem handling to ttm.
> > - Ad
On 11.06.21 at 07:34, Thomas Hellström (Intel) wrote:
Hi, Christian,
I know you have a lot on your plate, and that the drm community is a
bit lax about following the kernel patch submitting guidelines, but
now that we're also spinning up a number of Intel developers on TTM
could we please
On 11.06.21 at 09:54, Daniel Vetter wrote:
On Thu, Jun 10, 2021 at 11:17:55AM +0200, Christian König wrote:
Add a common allocation helper. Cleaning up the mix of kzalloc/kmalloc
and some unused code in the selftest.
Signed-off-by: Christian König
---
drivers/dma-buf/st-dma-fence-chain.c
As the name implies, if testing all fences is requested we
should indeed test all fences and not skip the exclusive
one because we see shared ones.
Signed-off-by: Christian König
---
drivers/dma-buf/dma-resv.c | 33 -
1 file changed, 12 insertions(+), 21 deletions(
Drop the workaround and instead implement a better solution.
Basically we are now chaining all submissions using a dma_fence_chain
container and adding them as exclusive fence to the dma_resv object.
This way other drivers can still sync to the single exclusive fence
while amdgpu only sync to fen
Add a common allocation helper, cleaning up the mix of kzalloc/kmalloc
and some unused code in the selftest.
v2: polish kernel doc a bit
Signed-off-by: Christian König
Reviewed-by: Daniel Vetter
---
drivers/dma-buf/st-dma-fence-chain.c | 16 -
drivers/gpu/drm/amd/amdgpu/am
The callback and the irq work are never used at the same
time. Putting them into a union saves us 24 bytes and
makes the structure only 120 bytes in size.
Signed-off-by: Christian König
Reviewed-by: Daniel Vetter
---
drivers/dma-buf/dma-fence-chain.c | 2 +-
include/linux/dma-fence-chain.h
Unwrap the explicit fence if it is a dma_fence_chain and
sync to the first fence not matching the owner rules.
Signed-off-by: Christian König
Acked-by: Daniel Vetter
---
drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c | 118 +--
1 file changed, 68 insertions(+), 50 deletions(-)
di
On Fri, Jun 11, 2021 at 08:14:59AM +, Simon Ser wrote:
> On Thursday, June 10th, 2021 at 23:00, Daniel Vetter
> wrote:
>
> > If there's a strong consensus that we really need this then I'm not
> > going to nack this, but this really needs a pile of acks from
> > compositor folks that they're
On 10.06.21 at 19:59, Christian König wrote:
On 10.06.21 at 19:50, Ondrej Zary wrote:
[SNIP]
I can't see how this is called from the nouveau code, only
possibility I
see is that it is maybe called through the AGP code somehow.
Yes, you're right:
[ 13.192663] Call Trace:
[ 13.192678]
On Mon, Jun 07 2021, Jason Gunthorpe wrote:
> For some reason the vfio_mdev shim mdev_driver has its own module and
> kconfig. As the next patch requires access to it from mdev.ko merge the
> two modules together and remove VFIO_MDEV_DEVICE.
>
> A later patch deletes this driver entirely.
>
> Sig
> What I expect to see in the future is new functionality that gets
> implemented by
> one hardware vendor and the kernel developers trying to enable that for
> userspace. It
> could be that the new property is generic, but there is no way of testing
> that on
> more than one implementation
From: Tvrtko Ursulin
Just tidy one instance of incorrect context parameter name and a stray
sentence ending from before reporting was converted to be class based.
Signed-off-by: Tvrtko Ursulin
---
include/uapi/drm/i915_drm.h | 9 -
1 file changed, 4 insertions(+), 5 deletions(-)
diff
On Fri, Jun 11, 2021 at 08:56:04AM -0400, Alyssa Rosenzweig wrote:
> > What I expect to see in the future is new functionality that gets
> > implemented by
> > one hardware vendor and the kernel developers trying to enable that for
> > userspace. It
> > could be that the new property is gener
On Thu, Jun 10, 2021 at 02:44:12PM -0700, Rob Clark wrote:
> From: Rob Clark
>
> Add, via the adreno-smmu-priv interface, a way for the GPU to request
> the SMMU to stall translation on faults, and then later resume the
> translation, either retrying or terminating the current translation.
>
> T
On Thu, Jun 10, 2021 at 02:44:13PM -0700, Rob Clark wrote:
> From: Rob Clark
>
> Wire up support to stall the SMMU on iova fault, and collect a devcore-
> dump snapshot for easier debugging of faults.
>
> Currently this is a6xx-only, but mostly only because so far it is the
> only one using adre
If memory allocation fails, `node->base.imem` does not get populated,
causing a NULL pointer dereference on instobj destruction. Fix this
by dereferencing it only if the allocation was successful.
Signed-off-by: Mikko Perttunen
---
drivers/gpu/drm/nouveau/nvkm/subdev/instmem/gk20a.c | 4 ++--
1
On Fri, 11 Jun 2021 at 14:22, Tvrtko Ursulin
wrote:
>
> From: Tvrtko Ursulin
>
> Just tidy one instance of incorrect context parameter name and a stray
> sentence ending from before reporting was converted to be class based.
>
> Signed-off-by: Tvrtko Ursulin
Reviewed-by: Matthew Auld
On Fri, Jun 11, 2021 at 1:48 PM Christian König
wrote:
>
> Am 11.06.21 um 09:54 schrieb Daniel Vetter:
> > On Thu, Jun 10, 2021 at 11:17:55AM +0200, Christian König wrote:
> >> Add a common allocation helper. Cleaning up the mix of kzalloc/kmalloc
> >> and some unused code in the selftest.
> >>
>
On Fri, Jun 11, 2021 at 02:02:57PM +0200, Christian König wrote:
> As the name implies if testing all fences is requested we
> should indeed test all fences and not skip the exclusive
> one because we see shared ones.
>
> Signed-off-by: Christian König
Hm I thought we've had the rule that when b
On Fri, Jun 11, 2021 at 02:02:59PM +0200, Christian König wrote:
> Add a common allocation helper. Cleaning up the mix of kzalloc/kmalloc
> and some unused code in the selftest.
>
> v2: polish kernel doc a bit
>
> Signed-off-by: Christian König
> Reviewed-by: Daniel Vetter
Given how absolutely
On 11.06.21 at 16:47, Daniel Vetter wrote:
On Fri, Jun 11, 2021 at 02:02:57PM +0200, Christian König wrote:
As the name implies if testing all fences is requested we
should indeed test all fences and not skip the exclusive
one because we see shared ones.
Signed-off-by: Christian König
Hm
On 11.06.21 at 16:52, Daniel Vetter wrote:
On Fri, Jun 11, 2021 at 02:02:59PM +0200, Christian König wrote:
Add a common allocation helper. Cleaning up the mix of kzalloc/kmalloc
and some unused code in the selftest.
v2: polish kernel doc a bit
Signed-off-by: Christian König
Reviewed-by:
Early implementation of moving system memory for discrete cards over to
TTM. We first add the notion of objects being migratable under the object
lock to i915 gem, and add some asserts to verify that objects are either
locked or pinned when the placement is checked by the gem code.
Patch 2 and 3 d
The object ops I915_GEM_OBJECT_HAS_IOMEM and the object
I915_BO_ALLOC_STRUCT_PAGE flags are considered immutable by
much of our code. Introduce a new mem_flags member to hold these
and make sure checks for these flags being set are either done
under the object lock or with pages properly pinned. Th
Instead of relying on a static placement, calculate at get_pages() time.
This should work for LMEM regions and system for now. For stolen we need
to take preallocated range into account. That will be added later.
Signed-off-by: Thomas Hellström
---
v2:
- Fixed a style issue (Reported by Matthew A
After a TTM move we need to update the i915 gem flags and caching
settings to reflect the new placement.
Also introduce gpu_binds_iomem() and cpu_maps_iomem() to clean up the
various ways we previously used to detect this.
Finally, initialize the TTM object reserved to be able to update
flags and c
For discrete, use TTM for both cached and WC system memory. That means
we currently rely on the TTM memory accounting / shrinker. For cached
system memory we should consider remaining shmem-backed, which can be
implemented from our ttm_tt_populate calback. We can then also reuse our
own very elabor
On Fri, Jun 11, 2021 at 04:53:11PM +0200, Christian König wrote:
>
>
> On 11.06.21 at 16:47, Daniel Vetter wrote:
> > On Fri, Jun 11, 2021 at 02:02:57PM +0200, Christian König wrote:
> > > As the name implies if testing all fences is requested we
> > > should indeed test all fences and not skip
On Fri, Jun 11, 2021 at 02:03:01PM +0200, Christian König wrote:
> Drop the workaround and instead implement a better solution.
>
> Basically we are now chaining all submissions using a dma_fence_chain
> container and adding them as exclusive fence to the dma_resv object.
>
> This way other drive
On Fri, Jun 11, 2021 at 01:43:20PM +1000, Alistair Popple wrote:
> On Friday, 11 June 2021 11:00:34 AM AEST Peter Xu wrote:
> > On Fri, Jun 11, 2021 at 09:17:14AM +1000, Alistair Popple wrote:
> > > On Friday, 11 June 2021 9:04:19 AM AEST Peter Xu wrote:
> > > > On Fri, Jun 11, 2021 at 12:21:26AM +
On Fri, Jun 11, 2021 at 12:03:31PM +0200, Christian König wrote:
> On 11.06.21 at 11:33, Daniel Vetter wrote:
> > On Fri, Jun 11, 2021 at 09:42:07AM +0200, Christian König wrote:
> > > On 11.06.21 at 09:20, Daniel Vetter wrote:
> > > > On Fri, Jun 11, 2021 at 8:55 AM Christian König
> > > > wrot
On 11.06.21 at 16:56, Daniel Vetter wrote:
On Fri, Jun 11, 2021 at 02:03:01PM +0200, Christian König wrote:
Drop the workaround and instead implement a better solution.
Basically we are now chaining all submissions using a dma_fence_chain
container and adding them as exclusive fence to the dma
On Fri, Jun 11, 2021 at 12:07:00PM +0200, Christian König wrote:
> On 11.06.21 at 10:58, Daniel Vetter wrote:
> > On Thu, Jun 10, 2021 at 11:17:57AM +0200, Christian König wrote:
> > > Add some rather sophisticated lockless garbage collection
> > > for dma_fence_chain objects.
> > >
> > > For thi
On Fri, Jun 11, 2021 at 12:09:19PM +0200, Christian König wrote:
> On 11.06.21 at 11:07, Daniel Vetter wrote:
> > On Thu, Jun 10, 2021 at 11:17:59AM +0200, Christian König wrote:
> > > Unwrap the explicit fence if it is a dma_fence_chain and
> > > sync to the first fence not matching the owner r
On Fri, Jun 11, 2021 at 12:12:45PM +0200, Christian König wrote:
> On 11.06.21 at 11:17, Daniel Vetter wrote:
> > On Thu, Jun 10, 2021 at 11:18:00AM +0200, Christian König wrote:
> > > Drop the workaround and instead implement a better solution.
> > >
> > > Basically we are now chaining all submi
On Fri, Jun 11, 2021 at 09:53:19AM +0300, Tomi Valkeinen wrote:
> On 11/06/2021 08:54, Maxime Ripard wrote:
> > Hi,
> >
> > On Thu, Jun 10, 2021 at 11:00:05PM +0200, Daniel Vetter wrote:
> > > On Thu, Jun 10, 2021 at 7:47 PM Maxime Ripard wrote:
> > > >
> > > > New KMS properties come with a bun
This series implements mitigations for lack of DMA access control on
systems without an IOMMU, which could result in the DMA accessing the
system memory at unexpected times and/or unexpected addresses, possibly
leading to data leakage or corruption.
For example, we plan to use the PCI-e bus for Wi
Add a new function, swiotlb_init_io_tlb_mem, for the io_tlb_mem struct
initialization to make the code reusable.
Signed-off-by: Claire Chang
---
kernel/dma/swiotlb.c | 53 ++--
1 file changed, 27 insertions(+), 26 deletions(-)
diff --git a/kernel/dma/swio
Split the debugfs creation to make the code reusable for supporting
different bounce buffer pools, e.g. restricted DMA pool.
Signed-off-by: Claire Chang
---
kernel/dma/swiotlb.c | 23 ---
1 file changed, 16 insertions(+), 7 deletions(-)
diff --git a/kernel/dma/swiotlb.c b/ke
Always have the pointer to the swiotlb pool used in struct device. This
could help simplify the code for other pools.
Signed-off-by: Claire Chang
---
drivers/of/device.c | 3 +++
include/linux/device.h | 4
include/linux/swiotlb.h | 8
kernel/dma/swiotlb.c| 8
4 f
Add the initialization function to create restricted DMA pools from
matching reserved-memory nodes.
Signed-off-by: Claire Chang
---
include/linux/swiotlb.h | 3 +-
kernel/dma/Kconfig | 14
kernel/dma/swiotlb.c| 75 +
3 files changed, 91
Update is_swiotlb_buffer to add a struct device argument. This will be
useful later to allow for restricted DMA pool.
Signed-off-by: Claire Chang
---
drivers/iommu/dma-iommu.c | 12 ++--
drivers/xen/swiotlb-xen.c | 2 +-
include/linux/swiotlb.h | 7 ---
kernel/dma/direct.c
Update is_swiotlb_active to add a struct device argument. This will be
useful later to allow for restricted DMA pool.
Signed-off-by: Claire Chang
---
drivers/gpu/drm/i915/gem/i915_gem_internal.c | 2 +-
drivers/gpu/drm/nouveau/nouveau_ttm.c| 2 +-
drivers/pci/xen-pcifront.c
Regardless of swiotlb setting, the restricted DMA pool is preferred if
available.
The restricted DMA pools provide a basic level of protection against the
DMA overwriting buffer contents at unexpected times. However, to protect
against general data leakage and system memory corruption, the system
Move the maintenance of alloc_size to find_slots for better code
reusability later.
Signed-off-by: Claire Chang
---
kernel/dma/swiotlb.c | 10 +-
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index e5ccc198d0a7..364c6c822063 100
Add a new function, release_slots, to make the code reusable for supporting
different bounce buffer pools, e.g. restricted DMA pool.
Signed-off-by: Claire Chang
---
kernel/dma/swiotlb.c | 35 ---
1 file changed, 20 insertions(+), 15 deletions(-)
diff --git a/kern
Add a new wrapper __dma_direct_free_pages() that will be useful later
for swiotlb_free().
Signed-off-by: Claire Chang
---
kernel/dma/direct.c | 14 ++
1 file changed, 10 insertions(+), 4 deletions(-)
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 078f7087e466..eb409832
Add the functions swiotlb_{alloc,free} to support memory allocation
from the restricted DMA pool.
Signed-off-by: Claire Chang
---
include/linux/swiotlb.h | 15 +++
kernel/dma/swiotlb.c| 35 +--
2 files changed, 48 insertions(+), 2 deletions(-)
di
The restricted DMA pool is preferred if available.
The restricted DMA pools provide a basic level of protection against the
DMA overwriting buffer contents at unexpected times. However, to protect
against general data leakage and system memory corruption, the system
needs to provide a way to lock
Introduce the new compatible string, restricted-dma-pool, for restricted
DMA. One can specify the address and length of the restricted DMA memory
region by restricted-dma-pool in the reserved-memory node.
Signed-off-by: Claire Chang
---
.../reserved-memory/reserved-memory.txt | 36
If a device is not behind an IOMMU, we look up the device node and set
up the restricted DMA when the restricted-dma-pool property is present.
Signed-off-by: Claire Chang
---
drivers/of/address.c| 33 +
drivers/of/device.c | 3 +++
drivers/of/of_private.h | 6 +
v9 here: https://lore.kernel.org/patchwork/cover/1445081/
On Mon, Jun 7, 2021 at 11:28 AM Claire Chang wrote:
>
> On Sat, Jun 5, 2021 at 1:48 AM Will Deacon wrote:
> >
> > Hi Claire,
> >
> > On Thu, May 27, 2021 at 08:58:30PM +0800, Claire Chang wrote:
> > > This series implements mitigations fo
I'm not sure if this would break arch/x86/pci/sta2x11-fixup.c
swiotlb_late_init_with_default_size is called here
https://elixir.bootlin.com/linux/v5.13-rc5/source/arch/x86/pci/sta2x11-fixup.c#L60
On Fri, Jun 11, 2021 at 11:27 PM Claire Chang wrote:
>
> Always have the pointer to the swiotlb pool
I don't have the HW to verify the change. Hopefully I used the right
device struct for is_swiotlb_active.
On Fri, 11 Jun 2021 at 15:55, Thomas Hellström
wrote:
>
> Instead of relying on a static placement, calculate at get_pages() time.
> This should work for LMEM regions and system for now. For stolen we need
> to take preallocated range into account. That well be added later.
That will be
>
> Sign
On Fri, 11 Jun 2021 at 15:55, Thomas Hellström
wrote:
>
> The object ops i915_GEM_OBJECT_HAS_IOMEM and the object
> I915_BO_ALLOC_STRUCT_PAGE flags are considered immutable by
> much of our code. Introduce a new mem_flags member to hold these
> and make sure checks for these flags being set are ei
On Fri, 11 Jun 2021 at 15:55, Thomas Hellström
wrote:
>
> After a TTM move we need to update the i915 gem flags and caching
> settings to reflect the new placement.
> Also introduce gpu_binds_iomem() and cpu_maps_iomem() to clean up the
> various ways we previously used to detect this.
> Finally,