On 9/13/19 5:18 PM, Matthew Wilcox wrote:
On Fri, Sep 13, 2019 at 11:32:09AM +0200, Thomas Hellström (VMware) wrote:
+vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
+ pgprot_t prot,
+ pgoff_t num_prefault
On 9/13/19 3:40 PM, Hillf Danton wrote:
On Fri, 13 Sep 2019 11:32:09 +0200
err = ttm_mem_io_lock(man, true);
- if (unlikely(err != 0)) {
- ret = VM_FAULT_NOPAGE;
- goto out_unlock;
- }
+ if (unlikely(err != 0))
+ return VM_FAULT
On 9/16/19 12:35 PM, Rohan Garg wrote:
DRM_IOCTL_BO_SET_LABEL lets you label GEM objects, making it
easier to debug issues in userspace applications.
Signed-off-by: Rohan Garg
---
drivers/gpu/drm/drm_gem.c | 51 ++
drivers/gpu/drm/drm_internal.h | 2 ++
On 9/17/19 5:05 PM, Rohan Garg wrote:
Hi
We're not closing a device, are we?
Ah, yes, I'll fix this in v2.
Do we have a mechanism in place to stop a malicious unprivileged app
from allocating all kernel memory to gem labels?
I'm unsure why this is a concern since a malicious app could a
From: Thomas Hellstrom
With the vmwgfx dirty tracking, the default TTM fault handler is not
completely sufficient (vmwgfx needs to modify the vma->vm_flags member,
and also needs to restrict the number of prefaults).
We also want to replicate the new ttm_bo_vm_reserve() functionality
So start tu
From: Thomas Hellström
Graphics APIs like OpenGL 4.4 and Vulkan require the graphics driver
to provide coherent graphics memory, meaning that the GPU sees any
content written to the coherent memory on the next GPU operation that
touches that memory, and the CPU sees any content written by the GPU
From: Thomas Hellstrom
The explicit typecasts are meaningless, so remove them.
Suggested-by: Matthew Wilcox
Signed-off-by: Thomas Hellstrom
---
drivers/gpu/drm/ttm/ttm_bo_vm.c | 8 +++-
1 file changed, 3 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/
on large accesses into small memory regions.
The added file "as_dirty_helpers.c" is initially listed as maintained by
VMware under our DRM driver. If somebody would like it elsewhere,
that's of course no problem.
Cc: Andrew Morton
Cc: Matthew Wilcox
Cc: Will Deacon
Cc: Peter Zi
From: Thomas Hellstrom
With emulated coherent memory we need to be able to quickly look up
a resource from the MOB offset. Instead of traversing a linked list with
O(n) worst case, use an RBtree with O(log n) worst case complexity.
Cc: Andrew Morton
Cc: Matthew Wilcox
Cc: Will Deacon
Cc: Pete
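The O(n) vs. O(log n) point above can be illustrated in userspace C (this is not the vmwgfx code; the kernel patch uses an rbtree, while the sketch below uses a sorted array with `bsearch()` to get the same logarithmic lookup):

```c
#include <stddef.h>
#include <stdlib.h>

/* Userspace sketch: resources keyed by their MOB offset. */
struct res_entry {
    unsigned long mob_offset;  /* lookup key */
    int resource_id;           /* stand-in for the resource pointer */
};

static int cmp_offset(const void *a, const void *b)
{
    const struct res_entry *x = a, *y = b;
    return (x->mob_offset > y->mob_offset) - (x->mob_offset < y->mob_offset);
}

/* O(log n) lookup by offset; returns -1 when nothing maps the offset.
 * An rbtree keeps itself ordered on insert; here we sort explicitly. */
static int res_lookup(struct res_entry *tbl, size_t n, unsigned long offset)
{
    struct res_entry key = { offset, 0 }, *hit;
    qsort(tbl, n, sizeof(*tbl), cmp_offset);
    hit = bsearch(&key, tbl, n, sizeof(*tbl), cmp_offset);
    return hit ? hit->resource_id : -1;
}
```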
From: Thomas Hellstrom
Add the callbacks necessary to implement emulated coherent memory for
surfaces. Add a flag to the gb_surface_create ioctl to indicate that
surface memory should be coherent.
Also bump the drm minor version to signal the availability of coherent
surfaces.
Cc: Andrew Morton
dirty.c
new file mode 100644
index ..be3302a8e309
--- /dev/null
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_page_dirty.c
@@ -0,0 +1,417 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/******
+ *
+ * Copyright 2019 VMware, Inc., Palo Alto
From: Thomas Hellstrom
Similar to write-coherent resources, make sure that from the user-space
point of view, GPU-rendered content is automatically available for
reading by the CPU.
Cc: Andrew Morton
Cc: Matthew Wilcox
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Rik van Riel
Cc: Minchan Kim
Cc
On 9/18/19 4:41 PM, Kirill A. Shutemov wrote:
On Wed, Sep 18, 2019 at 02:59:08PM +0200, Thomas Hellström (VMware) wrote:
From: Thomas Hellstrom
Add two utilities to a) write-protect and b) clean all ptes pointing into
a range of an address space.
The utilities are intended to aid in tracking
On 9/25/19 12:55 PM, Christian König wrote:
The busy BO might actually be already deleted,
so grab only a list reference.
Signed-off-by: Christian König
---
drivers/gpu/drm/ttm/ttm_bo.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/dri
On 9/25/19 12:55 PM, Christian König wrote:
As the name says, global memory and BO accounting is global. So it doesn't
make too much sense having pointers to global structures all around the code.
Signed-off-by: Christian König
---
drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c | 2 +-
drivers/gpu/
On 9/25/19 12:55 PM, Christian König wrote:
This allows blocking for BOs to become available
in the memory management.
Amdgpu is doing this for quite a while now during CS. Now
apply the new behavior to all drivers using TTM.
Signed-off-by: Christian König
Got to test this to see that there
From: Thomas Hellstrom
The explicit typecasts are meaningless, so remove them.
Suggested-by: Matthew Wilcox
Signed-off-by: Thomas Hellstrom
Reviewed-by: Christian König
---
drivers/gpu/drm/ttm/ttm_bo_vm.c | 8 +++-
1 file changed, 3 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/
From: Thomas Hellstrom
The default TTM fault handler may not be completely sufficient
(vmwgfx needs to do some bookkeeping, control the write protection, and also
needs to restrict the number of prefaults).
Also make it possible to replicate ttm_bo_vm_reserve() functionality for,
for example, mkwrite
From: Thomas Hellstrom
Add two utilities to a) write-protect and b) clean all ptes pointing into
a range of an address space.
The utilities are intended to aid in tracking dirty pages (either
driver-allocated system memory or pci device memory).
The write-protect utility should be used in conjunc
On 9/26/19 1:55 PM, Thomas Hellström (VMware) wrote:
From: Thomas Hellstrom
Add two utilities to a) write-protect and b) clean all ptes pointing into
a range of an address space.
The utilities are intended to aid in tracking dirty pages (either
driver-allocated system memory or pci device
Hi,
On 9/26/19 9:19 PM, Linus Torvalds wrote:
On Thu, Sep 26, 2019 at 5:03 AM Thomas Hellström (VMware)
wrote:
I wonder if I can get an ack from an mm maintainer to merge this through
DRM along with the vmwgfx patches? Andrew? Matthew?
It would have helped to actually point to the patch
On 9/26/19 10:16 PM, Linus Torvalds wrote:
On Thu, Sep 26, 2019 at 1:09 PM Thomas Hellström (VMware)
wrote:
That said, if people are OK with me modifying the assert in
pud_trans_huge_lock() and make __walk_page_range non-static, it should
probably be possible to make it work, yes.
I don
On 9/27/19 12:20 AM, Linus Torvalds wrote:
On Thu, Sep 26, 2019 at 1:55 PM Thomas Hellström (VMware)
wrote:
Well, we're working on supporting huge puds and pmds in the graphics
VMAs, although in the write-notify cases we're looking at here, we would
probably want to split them d
On 9/27/19 7:55 AM, Thomas Hellström (VMware) wrote:
On 9/27/19 12:20 AM, Linus Torvalds wrote:
On Thu, Sep 26, 2019 at 1:55 PM Thomas Hellström (VMware)
wrote:
Well, we're working on supporting huge puds and pmds in the graphics
VMAs, although in the write-notify cases we're looki
On 9/30/19 7:12 PM, Linus Torvalds wrote:
On Mon, Sep 30, 2019 at 6:04 AM Kirill A. Shutemov wrote:
Have you seen page_vma_mapped_walk()? I made it specifically for rmap code
to cover cases when a THP is mapped with PTEs. To me it's not a big
stretch to make it cover multiple pages too.
I agre
From: Christian König
This feature is only used by vmwgfx and is superfluous for everybody else.
Signed-off-by: Christian König
Co-developed-by: Thomas Hellstrom
Signed-off-by: Thomas Hellstrom
Tested-by: Thomas Hellstrom
---
drivers/gpu/drm/ttm/ttm_bo.c | 27 --
On 10/2/19 3:18 PM, Kirill A. Shutemov wrote:
On Wed, Oct 02, 2019 at 11:21:01AM +0200, Thomas Hellström (VMware) wrote:
On 9/26/19 10:16 PM, Linus Torvalds wrote:
On Thu, Sep 26, 2019 at 1:09 PM Thomas Hellström (VMware)
wrote:
That said, if people are OK with me modifying the assert in
From: Thomas Hellstrom
We were using an ugly hack to set the page protection correctly.
Fix that and instead use vmf_insert_mixed_prot() and / or
vmf_insert_pfn_prot().
Also get the default page protection from
struct vm_area_struct::vm_page_prot rather than using vm_get_page_prot().
This way we
From: Thomas Hellstrom
The TTM module today uses a hack to be able to set a different page
protection than struct vm_area_struct::vm_page_prot. To be able to do
this properly, add and export vmf_insert_mixed_prot().
Cc: Andrew Morton
Cc: Michal Hocko
Cc: "Matthew Wilcox (Oracle)"
Cc: "Kirill
From: Thomas Hellstrom
We currently only do COW and write-notify on the PTE level, so if the
huge_fault() handler returns VM_FAULT_FALLBACK on wp faults,
split the huge pages and page-table entries. Also do this for huge PUDs
if there is no huge_fault() handler and the vma is not anonymous, simil
From: Thomas Hellstrom
For VM_PFNMAP and VM_MIXEDMAP vmas that want to support transhuge pages
and page-table entries, introduce vma_is_special_huge() that takes the
same codepaths as vma_is_dax().
The use of "special" follows the definition in memory.c, vm_normal_page():
"Special" mappings do
From: Thomas Hellstrom
This helper is used to align user-space buffer object addresses to
huge page boundaries, minimizing the chance of alignment mismatch
between user-space addresses and physical addresses.
Cc: Andrew Morton
Cc: Michal Hocko
Cc: "Matthew Wilcox (Oracle)"
Cc: "Kirill A. Shut
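The alignment idea behind this helper can be shown with plain address arithmetic. This is a hedged userspace sketch (the function name and the hard-coded 2 MiB PMD size are my assumptions, not the kernel helper): the chosen user-space address must be congruent with the buffer offset modulo the huge-page size so that huge page-table entries can map the buffer.

```c
#include <stdint.h>

#define HPAGE_SIZE (2UL << 20)   /* assume an x86-64 2 MiB PMD size */

/* Round addr up to a huge-page boundary, then add the offset's
 * misalignment so that (hint - bo_offset) is huge-page aligned. */
static uintptr_t align_hint(uintptr_t addr, uintptr_t bo_offset)
{
    uintptr_t base = (addr + HPAGE_SIZE - 1) & ~(HPAGE_SIZE - 1);
    return base + (bo_offset & (HPAGE_SIZE - 1));
}
```

With a hint like this, the fault handler can see matching low bits on both sides and insert a PMD-sized entry instead of 512 PTEs.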
From: Thomas Hellstrom
For graphics drivers needing to modify the page-protection, add
huge page-table entry counterparts to vmf_insert_pfn_prot().
Cc: Andrew Morton
Cc: Michal Hocko
Cc: "Matthew Wilcox (Oracle)"
Cc: "Kirill A. Shutemov"
Cc: Ralph Campbell
Cc: "Jérôme Glisse"
Cc: "Christ
From: Thomas Hellstrom
Start using the helpers that align buffer object user-space addresses and
buffer object vram addresses to huge page boundaries.
This is to improve the chances of allowing huge page-table entries.
Cc: Andrew Morton
Cc: Michal Hocko
Cc: "Matthew Wilcox (Oracle)"
Cc: "Kiri
From: Thomas Hellstrom
Using huge page-table entries requires that the start of a buffer object
is huge page size aligned. So introduce a ttm_bo_man_get_node_huge()
function that attempts to accomplish this for allocations that are larger
than the huge page size, and provide a new range-manager in
From: Thomas Hellstrom
Support huge (PMD-size and PUD-size) page-table entries by providing a
huge_fault() callback.
We still support private mappings and write-notify by splitting the huge
page-table entries on write-access.
Note that for huge page-faults to occur, either the kernel needs to be
In order to save TLB space and CPU usage this patchset enables huge- and giant
page-table entries for TTM and TTM-enabled graphics drivers.
Patch 1 introduces a vma_is_special_huge() function to make the mm code
take the same path as DAX when splitting huge- and giant page table entries,
(which is
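The "save TLB space" motivation above is easy to quantify; the back-of-envelope helpers below (my own illustration, assuming 4 KiB base pages, 2 MiB PMDs, and 1 GiB PUDs on x86-64) show how many base-page translations one huge or giant entry replaces:

```c
/* One PMD-sized entry covers as many 4 KiB pages as fit in 2 MiB;
 * one PUD-sized entry as many as fit in 1 GiB. */
static unsigned long ptes_per_pmd(void) { return (2UL << 20) / (4UL << 10); }
static unsigned long ptes_per_pud(void) { return (1UL << 30) / (4UL << 10); }
```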
On 11/27/19 10:12 AM, Christian König wrote:
Am 27.11.19 um 09:31 schrieb Thomas Hellström (VMware):
From: Thomas Hellstrom
Support huge (PMD-size and PUD-size) page-table entries by providing a
huge_fault() callback.
We still support private mappings and write-notify by splitting the huge
's compatible with the GPU required alignment.
Thanks,
/Thomas
That would be a one liner if I'm not completely mistaken.
Regards,
Christian.
Am 27.11.19 um 09:31 schrieb Thomas Hellström (VMware):
From: Thomas Hellstrom
Using huge page-table entries requires that the start of a buffer
From: Thomas Hellstrom
TTM graphics buffer objects may, transparently to user-space, move
between IO and system memory. When that happens, all PTEs pointing to the
old location are zapped before the move and then faulted in again if
needed. When that happens, the page protection caching mode- an
From: Thomas Hellstrom
The drm/ttm module is using a modified on-stack copy of the
struct vm_area_struct to be able to set a page protection with customized
caching. Fix that by adding a vmf_insert_mixed_prot() function similar
to the existing vmf_insert_pfn_prot() for use with drm/ttm.
I'd like
From: Thomas Hellstrom
With vmwgfx dirty-tracking we need a specialized huge_fault
callback. Implement and hook it up.
Cc: Andrew Morton
Cc: Michal Hocko
Cc: "Matthew Wilcox (Oracle)"
Cc: "Kirill A. Shutemov"
Cc: Ralph Campbell
Cc: "Jérôme Glisse"
Cc: "Christian König"
Signed-off-by: Thom
On 12/4/19 12:11 PM, Christian König wrote:
Am 03.12.19 um 14:22 schrieb Thomas Hellström (VMware):
From: Thomas Hellstrom
This helper is used to align user-space buffer object addresses to
huge page boundaries, minimizing the chance of alignment mismatch
between user-space addresses and
On 12/4/19 12:13 PM, Christian König wrote:
Am 03.12.19 um 14:22 schrieb Thomas Hellström (VMware):
From: Thomas Hellstrom
Using huge page-table entries requires that the start of a buffer object
is huge page size aligned. So introduce a ttm_bo_man_get_node_huge()
function that attempts to
On 12/4/19 1:08 PM, Christian König wrote:
Am 04.12.19 um 12:36 schrieb Thomas Hellström (VMware):
On 12/4/19 12:11 PM, Christian König wrote:
Am 03.12.19 um 14:22 schrieb Thomas Hellström (VMware):
From: Thomas Hellstrom
This helper is used to align user-space buffer object addresses to
On 12/4/19 1:16 PM, Christian König wrote:
Am 04.12.19 um 12:45 schrieb Thomas Hellström (VMware):
On 12/4/19 12:13 PM, Christian König wrote:
Am 03.12.19 um 14:22 schrieb Thomas Hellström (VMware):
From: Thomas Hellstrom
Using huge page-table entries requires that the start of a buffer
On 12/4/19 2:52 PM, Michal Hocko wrote:
On Tue 03-12-19 11:48:53, Thomas Hellström (VMware) wrote:
From: Thomas Hellstrom
TTM graphics buffer objects may, transparently to user-space, move
between IO and system memory. When that happens, all PTEs pointing to the
old location are zapped
On 12/4/19 3:35 PM, Michal Hocko wrote:
On Wed 04-12-19 15:16:09, Thomas Hellström (VMware) wrote:
On 12/4/19 2:52 PM, Michal Hocko wrote:
On Tue 03-12-19 11:48:53, Thomas Hellström (VMware) wrote:
From: Thomas Hellstrom
TTM graphics buffer objects may, transparently to user-space, move
On 12/4/19 3:42 PM, Michal Hocko wrote:
On Wed 04-12-19 15:36:58, Thomas Hellström (VMware) wrote:
On 12/4/19 3:35 PM, Michal Hocko wrote:
On Wed 04-12-19 15:16:09, Thomas Hellström (VMware) wrote:
On 12/4/19 2:52 PM, Michal Hocko wrote:
On Tue 03-12-19 11:48:53, Thomas Hellström (VMware
On 12/4/19 4:26 PM, Michal Hocko wrote:
On Wed 04-12-19 16:19:27, Thomas Hellström (VMware) wrote:
On 12/4/19 3:42 PM, Michal Hocko wrote:
On Wed 04-12-19 15:36:58, Thomas Hellström (VMware) wrote:
On 12/4/19 3:35 PM, Michal Hocko wrote:
On Wed 04-12-19 15:16:09, Thomas Hellström (VMware
On 12/4/19 3:40 PM, Christian König wrote:
Am 04.12.19 um 13:32 schrieb Thomas Hellström (VMware):
On 12/4/19 1:08 PM, Christian König wrote:
Am 04.12.19 um 12:36 schrieb Thomas Hellström (VMware):
On 12/4/19 12:11 PM, Christian König wrote:
Am 03.12.19 um 14:22 schrieb Thomas Hellström
From: Thomas Hellstrom
The TTM module today uses a hack to be able to set a different page
protection than struct vm_area_struct::vm_page_prot. To be able to do
this properly, add the needed vm functionality as vmf_insert_mixed_prot().
Cc: Andrew Morton
Cc: Michal Hocko
Cc: "Matthew Wilcox (Or
On 8/21/19 6:34 PM, Daniel Vetter wrote:
On Wed, Aug 21, 2019 at 05:54:27PM +0200, Thomas Hellström (VMware) wrote:
On 8/20/19 4:53 PM, Daniel Vetter wrote:
Full audit of everyone:
- i915, radeon, amdgpu should be clean per their maintainers.
- vram helpers should be fine, they don'
On 8/21/19 9:51 PM, Daniel Vetter wrote:
On Wed, Aug 21, 2019 at 08:27:59PM +0200, Thomas Hellström (VMware) wrote:
On 8/21/19 8:11 PM, Daniel Vetter wrote:
On Wed, Aug 21, 2019 at 7:06 PM Thomas Hellström (VMware)
wrote:
On 8/21/19 6:34 PM, Daniel Vetter wrote:
On Wed, Aug 21, 2019 at 05
On 8/21/19 4:09 PM, Daniel Vetter wrote:
On Wed, Aug 21, 2019 at 2:47 PM Thomas Hellström (VMware)
wrote:
On 8/21/19 2:40 PM, Thomas Hellström (VMware) wrote:
On 8/20/19 4:53 PM, Daniel Vetter wrote:
With nouveau fixed, all ttm-using drivers have the correct nesting of
mmap_sem vs dma_resv
On 8/21/19 4:47 PM, Daniel Vetter wrote:
On Wed, Aug 21, 2019 at 4:27 PM Thomas Hellström (VMware)
wrote:
On 8/21/19 4:09 PM, Daniel Vetter wrote:
On Wed, Aug 21, 2019 at 2:47 PM Thomas Hellström (VMware)
wrote:
On 8/21/19 2:40 PM, Thomas Hellström (VMware) wrote:
On 8/20/19 4:53 PM
On 8/21/19 2:40 PM, Thomas Hellström (VMware) wrote:
On 8/20/19 4:53 PM, Daniel Vetter wrote:
With nouveau fixed, all ttm-using drivers have the correct nesting of
mmap_sem vs dma_resv, and we can just lock the buffer.
Assuming I didn't screw up anything with my audit of course.
Signed-o
i
Cc: Gerd Hoffmann
Cc: "VMware Graphics"
Cc: Thomas Hellstrom
---
drivers/gpu/drm/ttm/ttm_bo.c| 34 -
drivers/gpu/drm/ttm/ttm_bo_vm.c | 26 +
include/drm/ttm/ttm_bo_api.h| 1 -
3 files changed, 1 insertion(+
-less wait optimization (Thomas)
- Use _lock_interruptible to be good citizens (Thomas)
Reviewed-by: Christian König
Signed-off-by: Daniel Vetter
Cc: Christian Koenig
Cc: Huang Rui
Cc: Gerd Hoffmann
Cc: "VMware Graphics"
Cc: Thomas Hellstrom
---
drivers/gpu/drm/ttm/ttm
On 8/21/19 5:14 PM, Daniel Vetter wrote:
On Wed, Aug 21, 2019 at 5:03 PM Thomas Hellström (VMware)
wrote:
On 8/21/19 4:47 PM, Daniel Vetter wrote:
On Wed, Aug 21, 2019 at 4:27 PM Thomas Hellström (VMware)
wrote:
On 8/21/19 4:09 PM, Daniel Vetter wrote:
On Wed, Aug 21, 2019 at 2:47 PM
fmann
Cc: Ben Skeggs
Cc: "VMware Graphics"
Cc: Thomas Hellstrom
Signed-off-by: Daniel Vetter
---
drivers/dma-buf/dma-resv.c | 12
1 file changed, 12 insertions(+)
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
index 42a8f3f11681..3edca10d3faf 1006
On 8/21/19 5:22 PM, Daniel Vetter wrote:
On Wed, Aug 21, 2019 at 5:19 PM Thomas Hellström (VMware)
wrote:
On 8/21/19 5:14 PM, Daniel Vetter wrote:
On Wed, Aug 21, 2019 at 5:03 PM Thomas Hellström (VMware)
wrote:
On 8/21/19 4:47 PM, Daniel Vetter wrote:
On Wed, Aug 21, 2019 at 4:27 PM
On 8/21/19 4:10 PM, Daniel Vetter wrote:
On Wed, Aug 21, 2019 at 3:16 PM Thomas Hellström (VMware)
wrote:
On 8/20/19 4:53 PM, Daniel Vetter wrote:
With nouveau fixed, all ttm-using drivers have the correct nesting of
mmap_sem vs dma_resv, and we can just lock the buffer.
Assuming I didn
On 8/21/19 8:11 PM, Daniel Vetter wrote:
On Wed, Aug 21, 2019 at 7:06 PM Thomas Hellström (VMware)
wrote:
On 8/21/19 6:34 PM, Daniel Vetter wrote:
On Wed, Aug 21, 2019 at 05:54:27PM +0200, Thomas Hellström (VMware) wrote:
On 8/20/19 4:53 PM, Daniel Vetter wrote:
Full audit of everyone
Gerd Hoffmann
Cc: "VMware Graphics"
Cc: Thomas Hellstrom
---
drivers/gpu/drm/ttm/ttm_bo.c | 36 ---
drivers/gpu/drm/ttm/ttm_bo_util.c | 1 -
drivers/gpu/drm/ttm/ttm_bo_vm.c | 18 +---
include/drm/ttm/ttm_bo_api.h | 4
4 f
Cc: Eric Anholt
Cc: Dave Airlie
Cc: Gerd Hoffmann
Cc: Ben Skeggs
Cc: "VMware Graphics"
Cc: Thomas Hellstrom
Reviewed-by: Christian König
Reviewed-by: Chris Wilson
Tested-by: Chris Wilson
Signed-off-by: Daniel Vetter
---
drivers/dma-buf/dma-resv.c | 24
1
e, I guess either is fine.
/Thomas
But that worked before so it should still work now,
Christian.
Also from the other thread: Reviewed-by: Thomas Hellström
Thanks, Daniel
Signed-off-by: Daniel Vetter
Cc: Christian Koenig
Cc: Huang Rui
Cc: Gerd Hoffmann
Cc: "VMware Graphics"
On 8/22/19 4:24 PM, Thomas Hellström (VMware) wrote:
On 8/22/19 4:02 PM, Koenig, Christian wrote:
Am 22.08.19 um 15:06 schrieb Daniel Vetter:
On Thu, Aug 22, 2019 at 07:56:56AM +, Koenig, Christian wrote:
Am 22.08.19 um 08:49 schrieb Daniel Vetter:
With nouveau fixed, all ttm-using drivers
On 8/22/19 3:36 PM, Daniel Vetter wrote:
On Thu, Aug 22, 2019 at 3:30 PM Thomas Hellström (VMware)
wrote:
On 8/22/19 3:07 PM, Daniel Vetter wrote:
Full audit of everyone:
- i915, radeon, amdgpu should be clean per their maintainers.
- vram helpers should be fine, they don't do co
From: Thomas Hellstrom
The FAULT_FLAG_ALLOW_RETRY semantics are tricky and appear poorly
documented. Add a comment to the TTM fault() implementation to avoid
future confusion.
Cc: Christian Koenig
Signed-off-by: Thomas Hellstrom
---
drivers/gpu/drm/ttm/ttm_bo_vm.c | 11 +++
1 file cha
From: Thomas Hellstrom
The FAULT_FLAG_ALLOW_RETRY semantics are tricky and appear poorly
documented. Add a comment to the TTM fault() implementation to avoid
future confusion.
Cc: Christian Koenig
Signed-off-by: Thomas Hellstrom
---
v2: Incorrect email to Christian :)
---
drivers/gpu/drm/ttm/
From: Thomas Hellstrom
With SEV encryption, all DMA memory must be marked decrypted
(AKA "shared") for devices to be able to read it. In the future we might
want to be able to switch normal (encrypted) memory to decrypted in exactly
the same way as we handle caching states, and that would require
From: Thomas Hellstrom
The TTM dma pool allocates coherent pages for use with TTM. When SEV is
active, such allocations become very expensive since the linear kernel
map has to be changed to mark the pages decrypted. So to avoid too many
such allocations and frees, cache the decrypted pages even
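The free-list caching described above can be sketched in userspace C. This is not the TTM pool code, only an analogy under the assumption that the expensive step is the initial allocate-and-decrypt; freed pages stay on a free list and are handed out again before any new expensive allocation happens:

```c
#include <stdlib.h>

struct cached_page {
    struct cached_page *next;
};

static struct cached_page *free_list;
static int expensive_allocs;    /* counts the costly alloc+decrypt path */

static struct cached_page *pool_get(void)
{
    struct cached_page *p = free_list;
    if (p) {
        free_list = p->next;    /* reuse: the page is already decrypted */
        return p;
    }
    expensive_allocs++;         /* would allocate and decrypt here */
    return calloc(1, 4096);
}

static void pool_put(struct cached_page *p)
{
    p->next = free_list;        /* keep the page decrypted and cached */
    free_list = p;
}
```

Repeated get/put cycles then touch the linear map only once per page instead of once per allocation.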
AINTAINERS
@@ -17203,6 +17203,7 @@ M: "VMware, Inc."
L: virtualizat...@lists.linux-foundation.org
S: Supported
F: arch/x86/kernel/cpu/vmware.c
+F: arch/x86/include/asm/vmware.h
VMWARE PVRDMA DRIVER
M: Adit Ranadive
diff --git a/arch/x86/include/asm/cpufeatures.h
From: Thomas Hellstrom
Use the definition provided by include/asm/vmware.h
CC: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: "H. Peter Anvin"
Cc:
Cc:
Signed-off-by: Thomas Hellstrom
Reviewed-by: Doug Covelli
Acked-by: Dmitry Torokhov
---
drivers/input/mouse/vmmouse.c | 6 +++-
From: Thomas Hellstrom
Use the definition provided by include/asm/vmware.h
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: "H. Peter Anvin"
Cc:
Cc:
Signed-off-by: Thomas Hellstrom
Reviewed-by: Doug Covelli
---
drivers/gpu/drm/vmwgfx/vmwgfx_msg.c | 21 +
drive
10:13:15 +0200
From: Thomas Hellström (VMware)
To: linux-ker...@vger.kernel.org
CC: Thomas Hellstrom , pv-driv...@vmware.com,
x...@kernel.org, dri-devel@lists.freedesktop.org, Doug Covelli
, Ingo Molnar , Borislav Petkov
, H. Peter Anvin ,
linux-graphics-maintai...@vmware.com, Thomas
On 8/27/19 5:44 PM, Borislav Petkov wrote:
On Fri, Aug 23, 2019 at 10:13:14AM +0200, Thomas Hellström (VMware) wrote:
+/*
+ * The high bandwidth out call. The low word of edx is presumed to have the
+ * HB and OUT bits set.
+ */
+#define VMWARE_HYPERCALL_HB_OUT
With SEV memory encryption and in some cases also with SME memory
encryption, coherent memory is unencrypted. In those cases, TTM doesn't
set up the correct page protection. Fix this by having the TTM
coherent page allocator call into the platform code to determine whether
coherent memory is encryp
From: Thomas Hellstrom
The TTM dma pool allocates coherent pages for use with TTM. When forcing
unencrypted DMA, such allocations become very expensive since the linear
kernel map has to be changed to mark the pages decrypted. To avoid too many
such allocations and frees, cache the decrypted page
From: Thomas Hellstrom
With TTM pages allocated out of the DMA pool, use the
force_dma_unencrypted function to be able to set up the correct
page-protection. Previously it was unconditionally set to encrypted,
which only works with SME encryption on devices with a large enough DMA
mask.
Tested w
From: Thomas Hellstrom
The force_dma_unencrypted symbol is needed by TTM to set up the correct
page protection when memory encryption is active. Export it.
Cc: Dave Hansen
Cc: Andy Lutomirski
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: "H. Peter Anvin"
C