On 2/3/21 4:29 PM, Daniel Vetter wrote:
Recently there was a fairly long thread about recoverable hardware page
faults, how they can deadlock, and what to do about that.
While the discussion is still fresh I figured it's a good time to try and
document the conclusions a bit. This documentation section
Hi Sumera,
Thank you for the patch! Perhaps something to improve:
[auto build test WARNING on linus/master]
[also build test WARNING on v5.12-rc1 next-20210303]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--bas
On 03.03.21 at 18:19, Daniel Vetter wrote:
On Wed, Mar 3, 2021 at 4:57 PM Christian König
wrote:
QXL indeed unrefs pinned BOs and the warnings are spamming people's log files.
Make sure we warn only once until the QXL driver is fixed.
Signed-off-by: Christian König
Can you pls add FIXME c
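For reference, a minimal sketch of the "warn only once" idea being discussed; the helper name is illustrative and this is not the actual TTM/QXL change:

#include <linux/kernel.h>
#include <drm/ttm/ttm_bo_api.h>

/* Illustrative helper, not the actual patch: demote a per-unref warning
 * to a once-only warning so a misbehaving driver (QXL here) cannot flood
 * the kernel log while it is being fixed. */
static void example_check_unref_sanity(struct ttm_buffer_object *bo)
{
	/* The BO must not still be pinned when its last reference goes away. */
	WARN_ON_ONCE(bo->pin_count);
}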
On Wed, Mar 03, 2021 at 10:47:44AM +0200, Pekka Paalanen wrote:
> On Tue, 2 Mar 2021 12:41:32 -0800
> Manasi Navare wrote:
>
> > In case of a modeset where a mode gets split across multiple CRTCs
> > in the driver-specific implementation (bigjoiner in i915) we wrongly count
> > the affected CRTCs
On Wed, Mar 3, 2021 at 9:36 PM Christian König
wrote:
>
>
>
> On 03.03.21 at 18:19, Daniel Vetter wrote:
> > On Wed, Mar 3, 2021 at 4:57 PM Christian König
> > wrote:
> >> QXL indeed unrefs pinned BOs and the warnings are spamming people's log
> >> files.
> >>
> >> Make sure we warn only once un
Hi Christian,
Can you explain why the __iomem annotation is mandatory for the amdgpu driver? If
this is the case, we can't switch to memremap. The only fix, it seems to me, is to
add an #ifdef __x86_64__ around the ioremap_cache code.
Regards,
Oak
From: Christ
If tbo.mem.bus.caching is cached, the buffer is intended to be mapped
as cached from the CPU. Map it with ioremap_cache.
This wasn't necessary before as device memory was never mapped
as cached from the CPU side. It becomes necessary for Aldebaran as
device memory is mapped cached from the CPU.
Signed-off-by: Oa
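A minimal sketch of the mapping choice described above, assuming the TTM bus caching field; the helper itself is illustrative and not the amdgpu patch (and, as the replies below note, ioremap_cache is not available on every architecture):

#include <linux/io.h>
#include <drm/ttm/ttm_resource.h>
#include <drm/ttm/ttm_caching.h>

/* Illustrative only: pick a cached CPU mapping when TTM reports the
 * resource as cached, otherwise fall back to write-combine. */
static void __iomem *example_map_aperture(struct ttm_resource *mem,
					  phys_addr_t base, size_t size)
{
	if (mem->bus.caching == ttm_cached)
		return ioremap_cache(base, size);

	return ioremap_wc(base, size);
}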
Hi Jagan,
On Wed, Mar 03, 2021 at 08:08:35PM +0530, Jagan Teki wrote:
> On Wed, Feb 24, 2021 at 6:44 PM Laurent Pinchart wrote:
> > On Wed, Feb 24, 2021 at 06:07:43PM +0530, Jagan Teki wrote:
> > > On Mon, Feb 15, 2021 at 5:48 PM Laurent Pinchart wrote:
> > > > On Sun, Feb 14, 2021 at 11:22:10PM +
Hi, Rob:
Rob Herring wrote on Wednesday, 24 February 2021 at 5:51 AM:
>
> Update the mediatek,dpi binding to use the graph schema. Missed
> this one from the mass conversion since it's not part of drm-misc.
Applied to mediatek-drm-next [1], thanks.
[1]
https://git.kernel.org/pub/scm/linux/kernel/git/chunkuang.hu/lin
On 2021-03-01 13:41, Dmitry Baryshkov wrote:
If GPU components have failed to bind, the shutdown callback would fail with
the following backtrace. Add a safeguard check to stop that oops from
happening and allow the board to reboot.
[ 66.617046] Unable to handle kernel NULL pointer dereference at
v
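The shape of the safeguard being described, as a hedged sketch; the checks and names are illustrative rather than the exact msm patch:

#include <linux/platform_device.h>
#include <drm/drm_device.h>
#include <drm/drm_atomic_helper.h>

/* Illustrative only: if binding never completed there is no KMS state
 * to tear down, so return early instead of oopsing on a NULL pointer. */
static void example_pdev_shutdown(struct platform_device *pdev)
{
	struct drm_device *drm = platform_get_drvdata(pdev);

	if (!drm || !drm->dev_private)
		return;

	drm_atomic_helper_shutdown(drm);
}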
A recent patch renaming MIPI_DSI_MODE_EOT_PACKET to
MIPI_DSI_MODE_NO_EOT_PACKET brought to light a
misunderstanding in the current MCDE driver and all
its associated panel drivers: that MIPI_DSI_MODE_EOT_PACKET
would mean "use EOT packet" when in fact it means the
reverse.
Fix it up by implementi
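In code terms the fix amounts to flipping the sense of the flag check; a rough sketch (the register bit and helper are hypothetical, the flag name follows the rename discussed above):

#include <linux/bits.h>
#include <drm/drm_mipi_dsi.h>

#define EXAMPLE_DSI_SUPPRESS_EOT	BIT(0)	/* hypothetical register bit */

/* Illustrative only: MIPI_DSI_MODE_NO_EOT_PACKET now plainly means
 * "do not send an EOT packet", so suppress EOT generation only when
 * the flag is actually set. */
static u32 example_dsi_eot_ctrl(const struct mipi_dsi_device *dsi)
{
	u32 val = 0;

	if (dsi->mode_flags & MIPI_DSI_MODE_NO_EOT_PACKET)
		val |= EXAMPLE_DSI_SUPPRESS_EOT;

	return val;
}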
Radeon Card:
Caicos[Radeon HD 6450/7450/8450 /R5 230 OEM]
There is no gray screen when echo 4 > /sys/module/drm/parameters/debug
is set, so the WREG32 call after DRM_DEBUG_KMS may be going wrong when
entering hibernation. The delay of msleep(50) is enough to fix the gray screen.
Signed-off-by: wangjingyu
Sign
On Thu, Mar 4, 2021 at 8:41 AM Linus Walleij wrote:
>
> A recent patch renaming MIPI_DSI_MODE_EOT_PACKET to
> MIPI_DSI_MODE_NO_EOT_PACKET brought to light the
> misunderstanding in the current MCDE driver and all
> its associated panel drivers that MIPI_DSI_MODE_EOT_PACKET
> would mean "use EOT pa
Hi Robert,
On Wed, 2021-03-03 at 16:34 +0100, Robert Foss wrote:
> On Wed, 3 Mar 2021 at 08:23, Liu Ying wrote:
> > Hi Robert,
> >
> > On Tue, 2021-03-02 at 15:22 +0100, Robert Foss wrote:
> > > Hey Liu,
> > >
> > > Thanks for submitting this patch.
> >
> > Thanks for reviewing this patch.
> >
On Tuesday, 2 March 2021 3:10:49 AM AEDT Jason Gunthorpe wrote:
> > +	while (page_vma_mapped_walk(&pvmw)) {
> > +		/*
> > +		 * If the page is mlock()d, we cannot swap it out.
> > +		 * If it's recently referenced (perhaps page_referenced
> > +
Hi Dave, Daniel,
Fixes for 5.12.
The following changes since commit ea3b4242bc9ca197762119382b37e125815bd67f:
drm/amd/display: Fix system hang after multiple hotplugs (v3) (2021-02-24
09:48:46 -0500)
are available in the Git repository at:
https://gitlab.freedesktop.org/agd5f/linux.git
t
On Tuesday, 2 March 2021 11:41:52 PM AEDT Jason Gunthorpe wrote:
> > However try_to_protect() scans the PTEs again under the PTL so checking the
> > mapping of interest actually gets replaced during the rmap walk seems like a
> > reasonable solution. Thanks for the comments.
>
> It does seem c
This is the fourth version of a series to add support to Nouveau for atomic
memory operations on OpenCL shared virtual memory (SVM) regions. This is
achieved using the atomic PTE bits on the GPU to only permit atomic
operations to system memory when a page is not mapped in userspace on the
CPU. The
Remove the migration and device private entry_to_page() and
entry_to_pfn() inline functions and instead open code them directly.
This results in shorter code which is easier to understand.
Signed-off-by: Alistair Popple
---
v4:
* Added pfn_swap_entry_to_page()
* Reinstated check that migration
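Roughly what the consolidated helper mentioned in the v4 note (pfn_swap_entry_to_page()) looks like; this is a sketch from the changelog, not necessarily the exact code in the series:

#include <linux/mm.h>
#include <linux/swapops.h>

/* Both migration and device-private swap entries encode a pfn in the
 * entry offset, so one helper can recover the struct page for either. */
static inline struct page *example_pfn_swap_entry_to_page(swp_entry_t entry)
{
	struct page *p = pfn_to_page(swp_offset(entry));

	/* Migration entries are only valid while the page is locked. */
	if (is_migration_entry(entry))
		BUG_ON(!PageLocked(p));

	return p;
}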
Both migration and device private pages use special swap entries that
are manipulated by a range of inline functions. The arguments to these
are somewhat inconsistent, so rework them to remove flag-type arguments
and to make the arguments similar for both read and write entry
creation.
Signed-off-by
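The kind of interface the rework points at, sketched with illustrative names: separate read and write constructors taking just an offset, instead of a shared constructor plus a flag argument.

#include <linux/swap.h>
#include <linux/swapops.h>

/* Illustrative helpers only (assuming CONFIG_MIGRATION); the series'
 * actual names and signatures may differ. */
static inline swp_entry_t example_make_readable_migration_entry(pgoff_t offset)
{
	return swp_entry(SWP_MIGRATION_READ, offset);
}

static inline swp_entry_t example_make_writable_migration_entry(pgoff_t offset)
{
	return swp_entry(SWP_MIGRATION_WRITE, offset);
}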
The behaviour of try_to_unmap_one() is difficult to follow because it
performs different operations based on a fairly large set of flags used
in different combinations.
TTU_MUNLOCK is one such flag. However, it is exclusively used by
try_to_munlock(), which specifies no other flags. Therefore, rather
Migration is currently implemented as a mode of operation for
try_to_unmap_one(), generally specified by passing the TTU_MIGRATION flag
or, in the case of splitting a huge anonymous page, TTU_SPLIT_FREEZE.
However, it does not have much in common with the rest of the unmap
functionality of try_to_unma
Some devices require exclusive write access to shared virtual
memory (SVM) ranges to perform atomic operations on that memory. This
requires CPU page tables to be updated to deny access whilst atomic
operations are occurring.
In order to do this introduce a new swap entry
type (SWP_DEVICE_EXCLUSIV
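Conceptually, a present PTE gets replaced by such an entry so that any subsequent CPU access faults. A hedged sketch, not the series' code; the constant name follows the (truncated) changelog and the exact definition may differ:

#include <linux/mm.h>
#include <linux/swapops.h>

/* Illustrative only: encode the page's pfn in the new device-exclusive
 * swap entry type and hand back a non-present PTE carrying it. The real
 * series does this under the PTL from the rmap walk and also preserves
 * bits such as soft-dirty. */
static pte_t example_make_device_exclusive_pte(struct page *page)
{
	swp_entry_t entry = swp_entry(SWP_DEVICE_EXCLUSIVE, page_to_pfn(page));

	return swp_entry_to_pte(entry);
}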
Adds some selftests for exclusive device memory.
Signed-off-by: Alistair Popple
Acked-by: Jason Gunthorpe
Tested-by: Ralph Campbell
Reviewed-by: Ralph Campbell
---
lib/test_hmm.c      | 124 ++
lib/test_hmm_uapi.h |   2 +
tools/testing/selfte
Call mmu_interval_notifier_insert() as part of nouveau_range_fault().
This doesn't introduce any functional change but makes it easier for a
subsequent patch to alter the behaviour of nouveau_range_fault() to
support GPU atomic operations.
Signed-off-by: Alistair Popple
---
drivers/gpu/drm/nouve
Some NVIDIA GPUs do not support direct atomic access to system memory
via PCIe. Instead this must be emulated by granting the GPU exclusive
access to the memory. This is achieved by replacing CPU page table
entries with special swap entries that fault on userspace access.
The driver then grants th
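At the driver level the flow described above is roughly the following; the helpers here are hypothetical placeholders for the Nouveau/HMM plumbing, shown only to illustrate the ordering:

#include <linux/mm_types.h>

/* Hypothetical placeholders standing in for the real HMM/Nouveau calls. */
int example_make_range_device_exclusive(struct mm_struct *mm,
					unsigned long start, unsigned long end);
int example_gpu_map_atomic(unsigned long start, unsigned long end);

/* Illustrative ordering only: CPU access must be revoked (via the
 * exclusive swap entries) before the GPU mapping gains atomic access. */
static int example_grant_gpu_atomic_access(struct mm_struct *mm,
					   unsigned long start,
					   unsigned long end)
{
	int ret;

	ret = example_make_range_device_exclusive(mm, start, end);
	if (ret)
		return ret;

	return example_gpu_map_atomic(start, end);
}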
(cc'ing Gerd)
This might be related to the recent clean-up patches for the BO handling
in qxl.
On 03.03.21 at 16:07, Petr Mladek wrote:
On Wed 2021-03-03 15:34:09, Petr Mladek wrote:
Hi,
the following warning is filling my kernel log buffer
with 5.12-rc1+ kernels:
[ 941.070598] WARNING:
Hi Oak,
As far as I know some architectures like PowerPC/ARM/MIPS need that. And
we at least officially support PowerPC and ARM, while MIPS is best effort
and shouldn't break if possible.
Thomas just recently had a whole bunch of DMA-buf patches to also fix
that up for DMA-buf's vmap as well, pr
I think we should check for CONFIG_X86 instead, but in general it sounds
like the right approach to me for now.
Regards,
Christian.
On 03.03.21 at 22:12, Oak Zeng wrote:
If tbo.mem.bus.caching is cached, buffer is intended to be mapped
as cached from CPU. Map it with ioremap_cache.
This wasn
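A sketch of the guard Christian suggests, assuming ioremap_cache()/ioremap_wc() as the two paths; this is an illustrative helper, not the actual amdgpu change:

#include <linux/io.h>

static void __iomem *example_map_vram_for_cpu(phys_addr_t base, size_t size)
{
	/* Key the cached mapping off CONFIG_X86 rather than __x86_64__ so
	 * 32-bit x86 is covered too; architectures that rely on the __iomem
	 * annotation keep the write-combined path. */
#if defined(CONFIG_X86)
	return ioremap_cache(base, size);
#else
	return ioremap_wc(base, size);
#endif
}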
I also already sent a patch to the list to mitigate the warnings by
turning them into a WARN_ON_ONCE().
Christian.
On 04.03.21 at 08:42, Thomas Zimmermann wrote:
(cc'ing Gerd)
This might be related to the recent clean-up patches for the BO
handling in qxl.
On 03.03.21 at 16:07, Petr Mladek wrote:
On We