As the ringbuffer may exist inside stolen memory, our access to it may
be via the GTT iomap. This implies we may only have WC access, for which
the rep stos that conventionally backs memset() performs very badly, so
switch to the rep mov[dq] variants when available.
Signed-off-by: Chris Wilson
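A minimal sketch of the idea, assuming a userspace-style helper rather
than the actual i915/igt code: instead of memset(), whose rep stos path
issues stores that WC memory handles poorly, stream zeroes from a small
cached buffer so the fill is performed with wide rep movs-style copies
that coalesce into full write-combining bursts.

#include <stddef.h>
#include <string.h>

/* Hypothetical helper, not the driver's implementation. */
static void wc_zero(void *dst, size_t len)
{
	static const char zeros[4096] __attribute__((aligned(64)));
	char *out = dst;

	while (len) {
		size_t chunk = len < sizeof(zeros) ? len : sizeof(zeros);

		memcpy(out, zeros, chunk); /* wide moves instead of rep stos */
		out += chunk;
		len -= chunk;
	}
}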
Build up a large stockpile of requests, ~500,000, and feed them into the
system at 20kHz whilst simultaneously triggering set-wedged in order to
try to race i915_gem_set_wedged() against the engine->submit_request()
callback.
v2: Tweak sleep for flushing timer signals.
Signed-off-by: Chris Wilson
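The shape of the race is roughly the following; this is a hedged,
userspace-flavoured sketch only (the real exercise is a kernel
selftest), and submit_one_request() and trigger_set_wedged() are
hypothetical stand-ins for the driver internals involved.

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <time.h>

static atomic_bool done;

static void submit_one_request(void) { /* hypothetical: queue one request */ }
static void trigger_set_wedged(void) { /* hypothetical: declare the GPU wedged */ }

/* Feed the stockpile into the system at 20kHz (one request every 50us). */
static void *submitter(void *arg)
{
	unsigned long remaining = *(unsigned long *)arg;
	const struct timespec period = { .tv_nsec = 50 * 1000 };

	while (remaining--) {
		submit_one_request();
		nanosleep(&period, NULL);
	}
	atomic_store(&done, true);
	return NULL;
}

/* Race set-wedged against the submissions until the stockpile drains. */
static void run_race(unsigned long nrequests)
{
	pthread_t thread;

	pthread_create(&thread, NULL, submitter, &nrequests);
	while (!atomic_load(&done))
		trigger_set_wedged();
	pthread_join(thread, NULL);
}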
Execute the same batch on each engine and check that the composite fence
across all engines completes only once the batch has finished on every
engine.
Signed-off-by: Chris Wilson
Reviewed-by: Antonio Argenziano
---
tests/gem_exec_fence.c | 127 +
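For reference, the composition step can be illustrated with the standard
sync_file UAPI; this is a hedged sketch, not the gem_exec_fence code, and
obtaining the per-engine out-fences (e.g. via execbuf out-fences) is
elided.

#include <poll.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/sync_file.h>

/* Illustrative sketch, not the IGT test code. Merge two fence fds;
 * returns a new fd that signals only once both inputs have signalled. */
static int merge_fences(int fd1, int fd2)
{
	struct sync_merge_data data;

	memset(&data, 0, sizeof(data));
	strcpy(data.name, "composite");
	data.fd2 = fd2;

	if (ioctl(fd1, SYNC_IOC_MERGE, &data))
		return -1;

	return data.fence;
}

/* A sync_file fd reports POLLIN only once its fence has signalled. */
static int fence_signalled(int fence_fd)
{
	struct pollfd pfd = { .fd = fence_fd, .events = POLLIN };

	return poll(&pfd, 1, 0) == 1;
}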
On 17 March 2018 at 08:51, Chris Wilson wrote:
> As the ringbuffer may exist inside stolen memory, our access to it may
> be via the GTT iomap. This implies we may only have WC access, for which
> the rep stos that conventionally backs memset() performs very badly, so
> switch to the rep mov[dq] variants when available.
We already try to keep all GuC log related code in a separate file;
handling of the flush event should be placed there too. This will also
allow future code reuse.
Signed-off-by: Michal Wajdeczko
Cc: Michal Winiarski
Cc: Sagar Arun Kamble
Cc: Chris Wilson
Cc: Oscar Mateo
---
drivers/gpu/drm/i915/inte
On 3/14/2018 3:07 PM, Chris Wilson wrote:
> When choosing the initial frequency in intel_gt_pm_busy() we also need
> to calculate the current min/max bounds. As this calculation is going to
> become more complex with the intersection of several different limits,
> refactor it to a common function. The
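The kind of common helper being described might look like the following;
this is an illustrative sketch with hypothetical names and limit sources,
not the i915 function.

#include <stdint.h>

/* Hypothetical limit sources, not the driver's actual set. */
struct freq_limits {
	uint32_t hw_min, hw_max;	/* hardware capabilities */
	uint32_t user_min, user_max;	/* user-imposed bounds */
	uint32_t thermal_max;		/* e.g. a thermal/power cap */
};

static inline uint32_t max_u32(uint32_t a, uint32_t b) { return a > b ? a : b; }
static inline uint32_t min_u32(uint32_t a, uint32_t b) { return a < b ? a : b; }

/* Intersect all limits into one effective [min, max] range, keeping
 * min clamped so it never exceeds max. */
static void effective_bounds(const struct freq_limits *l,
			     uint32_t *min_out, uint32_t *max_out)
{
	uint32_t min = max_u32(l->hw_min, l->user_min);
	uint32_t max = min_u32(min_u32(l->hw_max, l->user_max), l->thermal_max);

	*min_out = min_u32(min, max);
	*max_out = max;
}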
On 3/17/2018 8:36 PM, Michal Wajdeczko wrote:
> We already try to keep all GuC log related code in a separate file;
> handling of the flush event should be placed there too. This will also
> allow future code reuse.
> Signed-off-by: Michal Wajdeczko
> Cc: Michal Winiarski
> Cc: Sagar Arun Kamble
> Cc: Chris Wilson
It makes more sense to use vma->size, since this determines the number
of entries we inserted into the vm, while vma->node.size is the size
of the vm window we reserved, which may also include padding. At the
very least this keeps things consistent with the GTT routines.
Signed-off-by: Matthew
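A simplified illustration of the distinction, using hypothetical
stand-in structures rather than the real i915 ones:

#include <stdint.h>
#include <assert.h>

#define PAGE_SHIFT 12	/* assuming 4KiB pages */

/* Hypothetical stand-ins for the real i915 structures. */
struct drm_mm_node { uint64_t start, size; };
struct i915_vma    { uint64_t size; struct drm_mm_node node; };

/* The number of entries actually inserted into the vm follows from
 * vma->size; vma->node.size is the reserved window, which may be
 * larger because of padding. */
static uint64_t vma_pte_count(const struct i915_vma *vma)
{
	assert(vma->node.size >= vma->size);
	return vma->size >> PAGE_SHIFT;
}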
Hi David,
Thank you for the patch! Perhaps something to improve:
[auto build test WARNING on v4.16-rc4]
[also build test WARNING on next-20180316]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system]
url:
https://github.com/0day-ci/linux/commits
Hi David,
Thank you for the patch! Yet something to improve:
[auto build test ERROR on v4.16-rc4]
[also build test ERROR on next-20180316]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system]
url:
https://github.com/0day-ci/linux/commits/David-W