[Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for series starting with [01/12] drm/i915: Don't set queue-priority hint when supressing the reschedule

2020-05-19 Thread Patchwork
== Series Details ==

Series: series starting with [01/12] drm/i915: Don't set queue-priority hint 
when supressing the reschedule
URL   : https://patchwork.freedesktop.org/series/77389/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
364ab8bd9968 drm/i915: Don't set queue-priority hint when supressing the 
reschedule
-:10: WARNING:TYPO_SPELLING: 'runnning' may be misspelled - perhaps 'running'?
#10: 
the HW runnning with only the inflight request.

total: 0 errors, 1 warnings, 0 checks, 28 lines checked
1d8f35d7152c drm/i915/selftests: Change priority overflow detection
160ec59b6ea7 drm/i915/selftests: Restore to default heartbeat
447b4e777a40 drm/i915/selftests: Check for an initial-breadcrumb in 
wait_for_submit()
841b9993f566 drm/i915/execlists: Shortcircuit queue_prio() for no internal 
levels
1ebfae936ba7 drm/i915: Move saturated workload detection back to the context
-:22: WARNING:COMMIT_LOG_LONG_LINE: Possible unwrapped commit description 
(prefer a maximum 75 chars per line)
#22: 
References: 44d89409a12e ("drm/i915: Make the semaphore saturation mask global")

-:22: ERROR:GIT_COMMIT_ID: Please use git commit description style 'commit <12+ 
chars of sha1> ("<title line>")' - ie: 'commit 44d89409a12e ("drm/i915: Make 
the semaphore saturation mask global")'
#22: 
References: 44d89409a12e ("drm/i915: Make the semaphore saturation mask global")

total: 1 errors, 1 warnings, 0 checks, 68 lines checked
dabfa4752751 drm/i915/selftests: Add tests for timeslicing virtual engines
0bbb9df18350 drm/i915/gt: Kick virtual siblings on timeslice out
be9b4c6a092f drm/i915/gt: Incorporate the virtual engine into timeslicing
05075f82f723 drm/i915/gt: Use virtual_engine during execlists_dequeue
36097b3facaf drm/i915/gt: Decouple inflight virtual engines
2dd47ed24108 drm/i915/gt: Resubmit the virtual engine on schedule-out

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] ✗ Fi.CI.SPARSE: warning for series starting with [01/12] drm/i915: Don't set queue-priority hint when supressing the reschedule

2020-05-19 Thread Patchwork
== Series Details ==

Series: series starting with [01/12] drm/i915: Don't set queue-priority hint 
when supressing the reschedule
URL   : https://patchwork.freedesktop.org/series/77389/
State : warning

== Summary ==

$ dim sparse --fast origin/drm-tip
Sparse version: v0.6.0
Fast mode used, each commit won't be checked separately.
-
+drivers/gpu/drm/i915/display/intel_display.c:1222:22: error: Expected constant 
expression in case statement
+drivers/gpu/drm/i915/display/intel_display.c:1225:22: error: Expected constant 
expression in case statement
+drivers/gpu/drm/i915/display/intel_display.c:1228:22: error: Expected constant 
expression in case statement
+drivers/gpu/drm/i915/display/intel_display.c:1231:22: error: Expected constant 
expression in case statement
+drivers/gpu/drm/i915/gem/i915_gem_context.c:2274:17: error: bad integer 
constant expression
+drivers/gpu/drm/i915/gem/i915_gem_context.c:2275:17: error: bad integer 
constant expression
+drivers/gpu/drm/i915/gem/i915_gem_context.c:2276:17: error: bad integer 
constant expression
+drivers/gpu/drm/i915/gem/i915_gem_context.c:2277:17: error: bad integer 
constant expression
+drivers/gpu/drm/i915/gem/i915_gem_context.c:2278:17: error: bad integer 
constant expression
+drivers/gpu/drm/i915/gem/i915_gem_context.c:2279:17: error: bad integer 
constant expression
+drivers/gpu/drm/i915/gt/intel_reset.c:1310:5: warning: context imbalance in 
'intel_gt_reset_trylock' - different lock contexts for basic block
+drivers/gpu/drm/i915/gt/sysfs_engines.c:61:10: error: bad integer constant 
expression
+drivers/gpu/drm/i915/gt/sysfs_engines.c:62:10: error: bad integer constant 
expression
+drivers/gpu/drm/i915/gt/sysfs_engines.c:66:10: error: bad integer constant 
expression
+drivers/gpu/drm/i915/gvt/mmio.c:287:23: warning: memcpy with byte count of 
279040
+drivers/gpu/drm/i915/i915_perf.c:1425:15: warning: memset with byte count of 
16777216
+drivers/gpu/drm/i915/i915_perf.c:1479:15: warning: memset with byte count of 
16777216
+./include/linux/compiler.h:199:9: warning: context imbalance in 
'engines_sample' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'fwtable_read16' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'fwtable_read32' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'fwtable_read64' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'fwtable_read8' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'fwtable_write16' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'fwtable_write32' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'fwtable_write8' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'gen11_fwtable_read16' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'gen11_fwtable_read32' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'gen11_fwtable_read64' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'gen11_fwtable_read8' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'gen11_fwtable_write16' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'gen11_fwtable_write32' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'gen11_fwtable_write8' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'gen12_fwtable_read16' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'gen12_fwtable_read32' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'gen12_fwtable_read64' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'gen12_fwtable_read8' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'gen12_fwtable_write16' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'gen12_fwtable_write32' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'gen12_fwtable_write8' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 'gen6_read16' 
- different lock contexts for basic block
+./include/linux/spinlock.h:408

[Intel-gfx] ✓ Fi.CI.BAT: success for series starting with [01/12] drm/i915: Don't set queue-priority hint when supressing the reschedule

2020-05-19 Thread Patchwork
== Series Details ==

Series: series starting with [01/12] drm/i915: Don't set queue-priority hint 
when supressing the reschedule
URL   : https://patchwork.freedesktop.org/series/77389/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_8498 -> Patchwork_17707


Summary
---

  **SUCCESS**

  No regressions found.

  External URL: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/index.html

Known issues


  Here are the changes found in Patchwork_17707 that come from known issues:

### IGT changes ###

 Issues hit 

  * igt@i915_selftest@live@gt_mocs:
- fi-bwr-2160:[PASS][1] -> [INCOMPLETE][2] ([i915#489])
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8498/fi-bwr-2160/igt@i915_selftest@live@gt_mocs.html
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/fi-bwr-2160/igt@i915_selftest@live@gt_mocs.html

  
  {name}: This element is suppressed. This means it is ignored when computing
  the status of the difference (SUCCESS, WARNING, or FAILURE).

  [i915#1803]: https://gitlab.freedesktop.org/drm/intel/issues/1803
  [i915#489]: https://gitlab.freedesktop.org/drm/intel/issues/489


Participating hosts (52 -> 44)
--

  Missing(8): fi-kbl-soraka fi-ilk-m540 fi-hsw-4200u fi-byt-squawks 
fi-bsw-cyan fi-ctg-p8600 fi-byt-clapper fi-bdw-samus 


Build changes
-

  * Linux: CI_DRM_8498 -> Patchwork_17707

  CI-20190529: 20190529
  CI_DRM_8498: 1493c649ae92207a758afa50a639275bd6c80e2e @ 
git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5659: 66ab5e42811fee3dea8c21ab29e70e323a0650de @ 
git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_17707: 2dd47ed241085a15e7df5e1f79ac259a94eff274 @ 
git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

2dd47ed24108 drm/i915/gt: Resubmit the virtual engine on schedule-out
36097b3facaf drm/i915/gt: Decouple inflight virtual engines
05075f82f723 drm/i915/gt: Use virtual_engine during execlists_dequeue
be9b4c6a092f drm/i915/gt: Incorporate the virtual engine into timeslicing
0bbb9df18350 drm/i915/gt: Kick virtual siblings on timeslice out
dabfa4752751 drm/i915/selftests: Add tests for timeslicing virtual engines
1ebfae936ba7 drm/i915: Move saturated workload detection back to the context
841b9993f566 drm/i915/execlists: Shortcircuit queue_prio() for no internal 
levels
447b4e777a40 drm/i915/selftests: Check for an initial-breadcrumb in 
wait_for_submit()
160ec59b6ea7 drm/i915/selftests: Restore to default heartbeat
1d8f35d7152c drm/i915/selftests: Change priority overflow detection
364ab8bd9968 drm/i915: Don't set queue-priority hint when supressing the 
reschedule

== Logs ==

For more details see: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/index.html
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH] drm/i915/ehl: Extend w/a 14010685332 to JSP/MCC

2020-05-19 Thread Ville Syrjälä
On Mon, May 18, 2020 at 05:58:32PM -0700, Swathi Dhanavanthri wrote:
> This is a permanent w/a for JSL/EHL. It is to be applied to the
> PCH types on JSL/EHL, i.e. JSP/MCC.
> Bspec: 52888
> 
> Signed-off-by: Swathi Dhanavanthri 
> ---
>  drivers/gpu/drm/i915/i915_irq.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
> index 4dc601dffc08..1974369cebb8 100644
> --- a/drivers/gpu/drm/i915/i915_irq.c
> +++ b/drivers/gpu/drm/i915/i915_irq.c
> @@ -2902,8 +2902,8 @@ static void gen11_display_irq_reset(struct 
> drm_i915_private *dev_priv)
>   if (INTEL_PCH_TYPE(dev_priv) >= PCH_ICP)
>   GEN3_IRQ_RESET(uncore, SDE);
>  
> - /* Wa_14010685332:icl */
> - if (INTEL_PCH_TYPE(dev_priv) == PCH_ICP) {
> + /* Wa_14010685332:icl,jsl,ehl */
> + if (INTEL_PCH_TYPE(dev_priv) == PCH_ICP || PCH_JSP || PCH_MCC) {

That's not how C works.
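
The condition as written is always true: PCH_JSP and PCH_MCC are non-zero
constants evaluated on their own rather than compared against
INTEL_PCH_TYPE(). A minimal sketch of the intended check (one possible
spelling, not the committed fix) would be:

	/* Wa_14010685332:icl,jsl,ehl */
	if (INTEL_PCH_TYPE(dev_priv) == PCH_ICP ||
	    INTEL_PCH_TYPE(dev_priv) == PCH_JSP ||
	    INTEL_PCH_TYPE(dev_priv) == PCH_MCC) {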

>   intel_uncore_rmw(uncore, SOUTH_CHICKEN1,
>SBCLK_RUN_REFCLK_DIS, SBCLK_RUN_REFCLK_DIS);
>   intel_uncore_rmw(uncore, SOUTH_CHICKEN1,
> -- 
> 2.20.1
> 
> ___
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx

-- 
Ville Syrjälä
Intel
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH] drm/i915/gvt: Use ARRAY_SIZE for vgpu_types

2020-05-19 Thread Zhenyu Wang
On 2020.05.18 22:00:52 +0100, Chris Wilson wrote:
> Quoting Aishwarya Ramakrishnan (2020-05-18 16:03:36)
> > Prefer ARRAY_SIZE instead of using sizeof
> > 
> > Fixes coccicheck warning: Use ARRAY_SIZE
> > 
> > Signed-off-by: Aishwarya Ramakrishnan 
> Reviewed-by: Chris Wilson 

Applied, thanks!
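
For reference, the coccicheck warning targets the open-coded array-length
idiom. A generic sketch of the transformation (illustrative only -- the
num_types variable name is assumed here, this is not the exact hunk from
the patch):

	/* before: open-coded element count */
	num_types = sizeof(vgpu_types) / sizeof(vgpu_types[0]);

	/* after: let ARRAY_SIZE() derive the count */
	num_types = ARRAY_SIZE(vgpu_types);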

-- 
Open Source Technology Center, Intel ltd.

$gpg --keyserver wwwkeys.pgp.net --recv-keys 4D781827


signature.asc
Description: PGP signature
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] ✗ Fi.CI.IGT: failure for series starting with [01/12] drm/i915: Don't set queue-priority hint when supressing the reschedule

2020-05-19 Thread Patchwork
== Series Details ==

Series: series starting with [01/12] drm/i915: Don't set queue-priority hint 
when supressing the reschedule
URL   : https://patchwork.freedesktop.org/series/77389/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_8498_full -> Patchwork_17707_full


Summary
---

  **FAILURE**

  Serious unknown changes coming with Patchwork_17707_full absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in Patchwork_17707_full, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  

Possible new issues
---

  Here are the unknown changes that may have been introduced in 
Patchwork_17707_full:

### IGT changes ###

 Possible regressions 

  * igt@gem_exec_balancer@semaphore:
- shard-tglb: [PASS][1] -> [DMESG-WARN][2]
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8498/shard-tglb8/igt@gem_exec_balan...@semaphore.html
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/shard-tglb5/igt@gem_exec_balan...@semaphore.html

  * igt@runner@aborted:
- shard-tglb: NOTRUN -> [FAIL][3]
   [3]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/shard-tglb5/igt@run...@aborted.html

  
Known issues


  Here are the changes found in Patchwork_17707_full that come from known 
issues:

### IGT changes ###

 Issues hit 

  * igt@kms_cursor_crc@pipe-b-cursor-suspend:
- shard-apl:  [PASS][4] -> [DMESG-WARN][5] ([i915#180]) +2 similar 
issues
   [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8498/shard-apl1/igt@kms_cursor_...@pipe-b-cursor-suspend.html
   [5]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/shard-apl6/igt@kms_cursor_...@pipe-b-cursor-suspend.html

  * igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions:
- shard-skl:  [PASS][6] -> [FAIL][7] ([IGT#5])
   [6]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8498/shard-skl3/igt@kms_cursor_leg...@flip-vs-cursor-atomic-transitions.html
   [7]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/shard-skl9/igt@kms_cursor_leg...@flip-vs-cursor-atomic-transitions.html

  * igt@kms_cursor_legacy@flip-vs-cursor-toggle:
- shard-skl:  [PASS][8] -> [FAIL][9] ([IGT#5] / [i915#697])
   [8]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8498/shard-skl6/igt@kms_cursor_leg...@flip-vs-cursor-toggle.html
   [9]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/shard-skl8/igt@kms_cursor_leg...@flip-vs-cursor-toggle.html

  * igt@kms_flip_tiling@flip-changes-tiling:
- shard-apl:  [PASS][10] -> [FAIL][11] ([i915#95])
   [10]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8498/shard-apl8/igt@kms_flip_til...@flip-changes-tiling.html
   [11]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/shard-apl8/igt@kms_flip_til...@flip-changes-tiling.html

  * igt@kms_hdr@bpc-switch-suspend:
- shard-skl:  [PASS][12] -> [FAIL][13] ([i915#1188])
   [12]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8498/shard-skl5/igt@kms_...@bpc-switch-suspend.html
   [13]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/shard-skl8/igt@kms_...@bpc-switch-suspend.html

  * igt@kms_plane_alpha_blend@pipe-c-constant-alpha-min:
- shard-skl:  [PASS][14] -> [FAIL][15] ([fdo#108145] / [i915#265]) 
+1 similar issue
   [14]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8498/shard-skl3/igt@kms_plane_alpha_bl...@pipe-c-constant-alpha-min.html
   [15]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/shard-skl9/igt@kms_plane_alpha_bl...@pipe-c-constant-alpha-min.html

  * igt@kms_psr@psr2_sprite_render:
- shard-iclb: [PASS][16] -> [SKIP][17] ([fdo#109441]) +2 similar 
issues
   [16]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8498/shard-iclb2/igt@kms_psr@psr2_sprite_render.html
   [17]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/shard-iclb6/igt@kms_psr@psr2_sprite_render.html

  
 Possible fixes 

  * igt@gen9_exec_parse@allowed-all:
- shard-apl:  [DMESG-WARN][18] ([i915#1436] / [i915#716]) -> 
[PASS][19]
   [18]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8498/shard-apl1/igt@gen9_exec_pa...@allowed-all.html
   [19]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/shard-apl4/igt@gen9_exec_pa...@allowed-all.html

  * igt@i915_selftest@live@execlists:
- shard-skl:  [INCOMPLETE][20] ([i915#1795] / [i915#1874]) -> 
[PASS][21]
   [20]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8498/shard-skl2/igt@i915_selftest@l...@execlists.html
   [21]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17707/shard-skl4/igt@i915_selftest@l...@execlists.html

  * igt@kms_cursor_crc@pipe-a-cursor-suspend:
- shard-kbl:  [DMESG-WARN][22] ([i915#180]) -> [PASS][23] +3 
similar issues
   [2

Re: [Intel-gfx] [PATCH 07/12] drm/i915/selftests: Add tests for timeslicing virtual engines

2020-05-19 Thread Tvrtko Ursulin



On 19/05/2020 07:31, Chris Wilson wrote:

Make sure that we can execute a virtual request on an already busy
engine, and conversely that we can execute a normal request if the
engines are already fully occupied by virtual requests.

Signed-off-by: Chris Wilson 
---
  drivers/gpu/drm/i915/gt/selftest_lrc.c | 200 -
  1 file changed, 197 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c 
b/drivers/gpu/drm/i915/gt/selftest_lrc.c
index f6949cd55e92..ef38dd52945c 100644
--- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
+++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
@@ -3591,9 +3591,11 @@ static int nop_virtual_engine(struct intel_gt *gt,
return err;
  }
  
-static unsigned int select_siblings(struct intel_gt *gt,

-   unsigned int class,
-   struct intel_engine_cs **siblings)
+static unsigned int
+__select_siblings(struct intel_gt *gt,
+ unsigned int class,
+ struct intel_engine_cs **siblings,
+ bool (*filter)(const struct intel_engine_cs *))
  {
unsigned int n = 0;
unsigned int inst;
@@ -3602,12 +3604,23 @@ static unsigned int select_siblings(struct intel_gt *gt,
if (!gt->engine_class[class][inst])
continue;
  
+		if (filter && !filter(gt->engine_class[class][inst]))

+   continue;
+
siblings[n++] = gt->engine_class[class][inst];
}
  
  	return n;

  }
  
+static unsigned int

+select_siblings(struct intel_gt *gt,
+   unsigned int class,
+   struct intel_engine_cs **siblings)
+{
+   return __select_siblings(gt, class, siblings, NULL);
+}
+
  static int live_virtual_engine(void *arg)
  {
struct intel_gt *gt = arg;
@@ -3762,6 +3775,186 @@ static int live_virtual_mask(void *arg)
return 0;
  }
  
+static long slice_timeout(struct intel_engine_cs *engine)

+{
+   long timeout;
+
+   /* Enough time for a timeslice to kick in, and kick out */
+   timeout = 2 * msecs_to_jiffies_timeout(timeslice(engine));
+
+   /* Enough time for the nop request to complete */
+   timeout += HZ / 5;
+
+   return timeout;
+}
+
+static int slicein_virtual_engine(struct intel_gt *gt,
+ struct intel_engine_cs **siblings,
+ unsigned int nsibling)
+{
+   const long timeout = slice_timeout(siblings[0]);
+   struct intel_context *ce;
+   struct i915_request *rq;
+   struct igt_spinner spin;
+   unsigned int n;
+   int err = 0;
+
+   /*
+* Virtual requests must take part in timeslicing on the target engines.
+*/
+
+   if (igt_spinner_init(&spin, gt))
+   return -ENOMEM;
+
+   for (n = 0; n < nsibling; n++) {
+   ce = intel_context_create(siblings[n]);
+   if (IS_ERR(ce)) {
+   err = PTR_ERR(ce);
+   goto out;
+   }
+
+   rq = igt_spinner_create_request(&spin, ce, MI_ARB_CHECK);
+   intel_context_put(ce);
+   if (IS_ERR(rq)) {
+   err = PTR_ERR(rq);
+   goto out;
+   }
+
+   i915_request_add(rq);
+   }
+
+   ce = intel_execlists_create_virtual(siblings, nsibling);
+   if (IS_ERR(ce)) {
+   err = PTR_ERR(ce);
+   goto out;
+   }
+
+   rq = intel_context_create_request(ce);
+   intel_context_put(ce);
+   if (IS_ERR(rq)) {
+   err = PTR_ERR(rq);
+   goto out;
+   }
+
+   i915_request_get(rq);
+   i915_request_add(rq);
+   if (i915_request_wait(rq, 0, timeout) < 0) {
+   GEM_TRACE_ERR("%s(%s) failed to slice in virtual request\n",
+ __func__, rq->engine->name);
+   GEM_TRACE_DUMP();
+   intel_gt_set_wedged(gt);
+   err = -EIO;
+   }
+   i915_request_put(rq);
+
+out:
+   igt_spinner_end(&spin);
+   if (igt_flush_test(gt->i915))
+   err = -EIO;
+   igt_spinner_fini(&spin);
+   return err;
+}
+
+static int sliceout_virtual_engine(struct intel_gt *gt,
+  struct intel_engine_cs **siblings,
+  unsigned int nsibling)
+{
+   const long timeout = slice_timeout(siblings[0]);
+   struct intel_context *ce;
+   struct i915_request *rq;
+   struct igt_spinner spin;
+   unsigned int n;
+   int err = 0;
+
+   /*
+* Virtual requests must allow others a fair timeslice.
+*/
+
+   if (igt_spinner_init(&spin, gt))
+   return -ENOMEM;
+
+   /* XXX We do not handle oversubscription and fairness with normal rq */
+   for (n = 0; n < nsibling; n++) {
+   ce = intel_execlists_create_virtual(siblings, 

Re: [Intel-gfx] [PATCH 09/12] drm/i915/gt: Incorporate the virtual engine into timeslicing

2020-05-19 Thread Tvrtko Ursulin



On 19/05/2020 07:31, Chris Wilson wrote:

It was quite the oversight to only factor in the normal queue to decide
the timeslicing switch priority. By leaving out the next virtual request
from the priority decision, we would not timeslice the current engine if
there was an available virtual request.

Testcase: igt/gem_exec_balancer/sliced
Fixes: 3df2deed411e ("drm/i915/execlists: Enable timeslice on partial virtual engine 
dequeue")
Signed-off-by: Chris Wilson 
Cc: Mika Kuoppala 
Cc: Tvrtko Ursulin 
---
  drivers/gpu/drm/i915/gt/intel_lrc.c | 30 +++--
  1 file changed, 24 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c 
b/drivers/gpu/drm/i915/gt/intel_lrc.c
index 35e7ae3c049c..42cb0cae2845 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -1898,7 +1898,8 @@ static void defer_active(struct intel_engine_cs *engine)
  
  static bool

  need_timeslice(const struct intel_engine_cs *engine,
-  const struct i915_request *rq)
+  const struct i915_request *rq,
+  const struct rb_node *rb)
  {
int hint;
  
@@ -1906,6 +1907,24 @@ need_timeslice(const struct intel_engine_cs *engine,

return false;
  
  	hint = engine->execlists.queue_priority_hint;

+
+   if (rb) {
+   const struct virtual_engine *ve =
+   rb_entry(rb, typeof(*ve), nodes[engine->id].rb);
+   const struct intel_engine_cs *inflight =
+   intel_context_inflight(&ve->context);
+
+   if (!inflight || inflight == engine) {
+   struct i915_request *next;
+
+   rcu_read_lock();
+   next = READ_ONCE(ve->request);
+   if (next)
+   hint = max(hint, rq_prio(next));
+   rcu_read_unlock();
+   }
+   }
+
if (!list_is_last(&rq->sched.link, &engine->active.requests))
hint = max(hint, rq_prio(list_next_entry(rq, sched.link)));
  
@@ -1980,10 +1999,9 @@ static void set_timeslice(struct intel_engine_cs *engine)

set_timer_ms(&engine->execlists.timer, duration);
  }
  
-static void start_timeslice(struct intel_engine_cs *engine)

+static void start_timeslice(struct intel_engine_cs *engine, int prio)
  {
struct intel_engine_execlists *execlists = &engine->execlists;
-   const int prio = queue_prio(execlists);
unsigned long duration;
  
  	if (!intel_engine_has_timeslices(engine))

@@ -2143,7 +2161,7 @@ static void execlists_dequeue(struct intel_engine_cs 
*engine)
__unwind_incomplete_requests(engine);
  
  			last = NULL;

-   } else if (need_timeslice(engine, last) &&
+   } else if (need_timeslice(engine, last, rb) &&
   timeslice_expired(execlists, last)) {
if (i915_request_completed(last)) {
tasklet_hi_schedule(&execlists->tasklet);
@@ -2191,7 +2209,7 @@ static void execlists_dequeue(struct intel_engine_cs 
*engine)
 * Even if ELSP[1] is occupied and not worthy
 * of timeslices, our queue might be.
 */
-   start_timeslice(engine);
+   start_timeslice(engine, queue_prio(execlists));
return;
}
}
@@ -2226,7 +2244,7 @@ static void execlists_dequeue(struct intel_engine_cs 
*engine)
  
  			if (last && !can_merge_rq(last, rq)) {

spin_unlock(&ve->base.active.lock);
-   start_timeslice(engine);
+   start_timeslice(engine, rq_prio(rq));
return; /* leave this for another sibling */
}
  



Reviewed-by: Tvrtko Ursulin 

Regards,

Tvrtko
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH i-g-t] i915: Add gem_exec_endless

2020-05-19 Thread Chris Wilson
Start our preparations for guaranteeing endless execution.

First, we just want to estimate the 'ultra-low latency' dispatch overhead
by running an endless chain of batch buffers. The legacy binding process
here will be replaced by async VM_BIND, but for the moment this
suffices to construct the GTT as required for arbitrary
*user-controlled* indirect execution.

Signed-off-by: Chris Wilson 
Cc: Joonas Lahtinen 
Cc: Mika Kuoppala 
---
 lib/igt_core.h|   1 +
 tests/Makefile.sources|   3 +
 tests/i915/gem_exec_endless.c | 354 ++
 tests/meson.build |   1 +
 4 files changed, 359 insertions(+)
 create mode 100644 tests/i915/gem_exec_endless.c

diff --git a/lib/igt_core.h b/lib/igt_core.h
index b97fa2faa..c58715204 100644
--- a/lib/igt_core.h
+++ b/lib/igt_core.h
@@ -1369,6 +1369,7 @@ void igt_kmsg(const char *format, ...);
 #define KMSG_DEBUG "<7>[IGT] "
 
 #define READ_ONCE(x) (*(volatile typeof(x) *)(&(x)))
+#define WRITE_ONCE(x, v) do *(volatile typeof(x) *)(&(x)) = (v); while (0)
 
 #define MSEC_PER_SEC (1000)
 #define USEC_PER_SEC (1000*MSEC_PER_SEC)
diff --git a/tests/Makefile.sources b/tests/Makefile.sources
index c450fa0ed..d1f7cf819 100644
--- a/tests/Makefile.sources
+++ b/tests/Makefile.sources
@@ -265,6 +265,9 @@ gem_exec_schedule_SOURCES = i915/gem_exec_schedule.c
 TESTS_progs += gem_exec_store
 gem_exec_store_SOURCES = i915/gem_exec_store.c
 
+TESTS_progs += gem_exec_endless
+gem_exec_endless_SOURCES = i915/gem_exec_endless.c
+
 TESTS_progs += gem_exec_suspend
 gem_exec_suspend_SOURCES = i915/gem_exec_suspend.c
 
diff --git a/tests/i915/gem_exec_endless.c b/tests/i915/gem_exec_endless.c
new file mode 100644
index 0..c25c94641
--- /dev/null
+++ b/tests/i915/gem_exec_endless.c
@@ -0,0 +1,354 @@
+/*
+ * Copyright © 2019 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ */
+
+#include 
+
+#include "i915/gem.h"
+#include "i915/gem_ring.h"
+#include "igt.h"
+#include "sw_sync.h"
+
+#define MAX_ENGINES 64
+
+#define MI_SEMAPHORE_WAIT  (0x1c << 23)
+#define   MI_SEMAPHORE_POLL (1 << 15)
+#define   MI_SEMAPHORE_SAD_GT_SDD   (0 << 12)
+#define   MI_SEMAPHORE_SAD_GTE_SDD  (1 << 12)
+#define   MI_SEMAPHORE_SAD_LT_SDD   (2 << 12)
+#define   MI_SEMAPHORE_SAD_LTE_SDD  (3 << 12)
+#define   MI_SEMAPHORE_SAD_EQ_SDD   (4 << 12)
+#define   MI_SEMAPHORE_SAD_NEQ_SDD  (5 << 12)
+
+static uint32_t batch_create(int i915)
+{
+   const uint32_t bbe = MI_BATCH_BUFFER_END;
+   uint32_t handle = gem_create(i915, 4096);
+   gem_write(i915, handle, 0, &bbe, sizeof(bbe));
+   return handle;
+}
+
+struct supervisor {
+   int device;
+   uint32_t handle;
+   uint32_t context;
+
+   uint32_t *map;
+   uint32_t *semaphore;
+   uint32_t *terminate;
+   uint64_t *dispatch;
+};
+
+static unsigned int offset_in_page(void *addr)
+{
+   return (uintptr_t)addr & 4095;
+}
+
+static uint32_t __supervisor_create_context(int i915,
+   const struct 
intel_execution_engine2 *e)
+{
+   I915_DEFINE_CONTEXT_PARAM_ENGINES(engines, 2);
+   struct drm_i915_gem_context_create_ext_setparam p_ring = {
+   {
+   .name = I915_CONTEXT_CREATE_EXT_SETPARAM,
+   .next_extension = 0
+   },
+   {
+   .param = I915_CONTEXT_PARAM_RINGSIZE,
+   .value = 4096,
+   },
+   };
+   struct drm_i915_gem_context_create_ext_setparam p_engines = {
+   {
+   .name = I915_CONTEXT_CREATE_EXT_SETPARAM,
+   .next_extension = to_user_pointer(&p_ring)
+
+   },
+   {
+   .param = I915_CONTEXT_PARAM_ENGINES,
+   .value = to_user_pointer(&eng

Re: [Intel-gfx] [PATCH] drm/i915/selftests: Measure CS_TIMESTAMP

2020-05-19 Thread Ville Syrjälä
On Sat, May 16, 2020 at 02:31:02PM +0100, Chris Wilson wrote:
> Count the number of CS_TIMESTAMP ticks and check that it matches our
> expectations.

Looks ok for everything except g4x/ilk. Those would need something
like
https://patchwork.freedesktop.org/patch/355944/?series=74145&rev=1
+ read TIMESTAMP_UDW instead of TIMESTAMP.

bw/cl still needs
https://patchwork.freedesktop.org/patch/355946/?series=74145&rev=1
though the test seems a bit flaky on my cl. Sometimes the cycle count
comes up short. Never seen it exceed the expected value, but it can 
come up significantly short. And curiously it does seem to have a
tendency to come out as roughly some nice fraction (seen at least
1/2 and 1/4 quite a few times). Dunno if the tick rate actually
changes due to some unknown circumstances, or if the counter just
updates somehow lazily. Certainly polling the counter over a longer
period does show it to tick at the expected rate.

Anyways, test looks sane to me
Reviewed-by: Ville Syrjälä 

> 
> Signed-off-by: Chris Wilson 
> Cc: Ville Syrjälä 
> ---
>  drivers/gpu/drm/i915/gt/selftest_gt_pm.c | 113 +++
>  1 file changed, 113 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i915/gt/selftest_gt_pm.c 
> b/drivers/gpu/drm/i915/gt/selftest_gt_pm.c
> index 242181a5214c..cac4cf2a5e1d 100644
> --- a/drivers/gpu/drm/i915/gt/selftest_gt_pm.c
> +++ b/drivers/gpu/drm/i915/gt/selftest_gt_pm.c
> @@ -5,10 +5,122 @@
>   * Copyright © 2019 Intel Corporation
>   */
>  
> +#include 
> +
> +#include "intel_gt_clock_utils.h"
> +
>  #include "selftest_llc.h"
>  #include "selftest_rc6.h"
>  #include "selftest_rps.h"
>  
> +static int cmp_u64(const void *A, const void *B)
> +{
> + const u64 *a = A, *b = B;
> +
> + if (a < b)
> + return -1;
> + else if (a > b)
> + return 1;
> + else
> + return 0;
> +}
> +
> +static int cmp_u32(const void *A, const void *B)
> +{
> + const u32 *a = A, *b = B;
> +
> + if (a < b)
> + return -1;
> + else if (a > b)
> + return 1;
> + else
> + return 0;
> +}
> +
> +static void measure_clocks(struct intel_engine_cs *engine,
> +u32 *out_cycles, ktime_t *out_dt)
> +{
> + ktime_t dt[5];
> + u32 cycles[5];
> + int i;
> +
> + for (i = 0; i < 5; i++) {
> + preempt_disable();
> + dt[i] = ktime_get();
> + cycles[i] = -ENGINE_READ_FW(engine, RING_TIMESTAMP);
> +
> + udelay(1000);
> +
> + dt[i] = ktime_sub(ktime_get(), dt[i]);
> + cycles[i] += ENGINE_READ_FW(engine, RING_TIMESTAMP);
> + preempt_enable();
> + }
> +
> + /* Use the median of both cycle/dt; close enough */
> + sort(cycles, 5, sizeof(*cycles), cmp_u32, NULL);
> + *out_cycles = (cycles[1] + 2 * cycles[2] + cycles[3]) / 4;
> +
> + sort(dt, 5, sizeof(*dt), cmp_u64, NULL);
> + *out_dt = div_u64(dt[1] + 2 * dt[2] + dt[3], 4);
> +}
> +
> +static int live_gt_clocks(void *arg)
> +{
> + struct intel_gt *gt = arg;
> + struct intel_engine_cs *engine;
> + enum intel_engine_id id;
> + int err = 0;
> +
> + if (!RUNTIME_INFO(gt->i915)->cs_timestamp_frequency_hz) { /* unknown */
> + pr_info("CS_TIMESTAMP frequency unknown\n");
> + return 0;
> + }
> +
> + if (INTEL_GEN(gt->i915) < 4) /* Any CS_TIMESTAMP? */
> + return 0;
> +
> + intel_gt_pm_get(gt);
> + intel_uncore_forcewake_get(gt->uncore, FORCEWAKE_ALL);
> +
> + for_each_engine(engine, gt, id) {
> + u32 cycles;
> + u32 expected;
> + u64 time;
> + u64 dt;
> +
> + if (INTEL_GEN(engine->i915) < 7 && engine->id != RCS0)
> + continue;
> +
> + measure_clocks(engine, &cycles, &dt);
> +
> + time = i915_cs_timestamp_ticks_to_ns(engine->i915, cycles);
> + expected = i915_cs_timestamp_ns_to_ticks(engine->i915, dt);
> +
> + pr_info("%s: TIMESTAMP %d cycles [%lldns] in %lldns [%d 
> cycles], using CS clock frequency of %uKHz\n",
> + engine->name, cycles, time, dt, expected,
> + RUNTIME_INFO(engine->i915)->cs_timestamp_frequency_hz / 
> 1000);
> +
> + if (9 * time < 8 * dt || 8 * time > 9 * dt) {
> + pr_err("%s: CS ticks did not match walltime!\n",
> +engine->name);
> + err = -EINVAL;
> + break;
> + }
> +
> + if (9 * expected < 8 * cycles || 8 * expected > 9 * cycles) {
> + pr_err("%s: walltime did not match CS ticks!\n",
> +engine->name);
> + err = -EINVAL;
> + break;
> + }
> + }
> +
> + intel_uncore_forcewake_put(gt->uncore, FORCEWAKE_ALL);
> + intel_gt_pm_put(gt);
> +
> + return err;
> +}
> +
>

Re: [Intel-gfx] [PATCH i-g-t] i915: Add gem_exec_endless

2020-05-19 Thread Mika Kuoppala
Chris Wilson  writes:

> Start our preparations for guaranteeing endless execution.
>
> First, we just want to estimate the 'ultra-low latency' dispatch overhead
> by running an endless chain of batch buffers. The legacy binding process
> here will be replaced by async VM_BIND, but for the moment this
> suffices to construct the GTT as required for arbitrary
> *user-controlled* indirect execution.
>
> Signed-off-by: Chris Wilson 
> Cc: Joonas Lahtinen 
> Cc: Mika Kuoppala 
> ---
>  lib/igt_core.h|   1 +
>  tests/Makefile.sources|   3 +
>  tests/i915/gem_exec_endless.c | 354 ++
>  tests/meson.build |   1 +
>  4 files changed, 359 insertions(+)
>  create mode 100644 tests/i915/gem_exec_endless.c
>
> diff --git a/lib/igt_core.h b/lib/igt_core.h
> index b97fa2faa..c58715204 100644
> --- a/lib/igt_core.h
> +++ b/lib/igt_core.h
> @@ -1369,6 +1369,7 @@ void igt_kmsg(const char *format, ...);
>  #define KMSG_DEBUG   "<7>[IGT] "
>  
>  #define READ_ONCE(x) (*(volatile typeof(x) *)(&(x)))
> +#define WRITE_ONCE(x, v) do *(volatile typeof(x) *)(&(x)) = (v); while (0)
>  
>  #define MSEC_PER_SEC (1000)
>  #define USEC_PER_SEC (1000*MSEC_PER_SEC)
> diff --git a/tests/Makefile.sources b/tests/Makefile.sources
> index c450fa0ed..d1f7cf819 100644
> --- a/tests/Makefile.sources
> +++ b/tests/Makefile.sources
> @@ -265,6 +265,9 @@ gem_exec_schedule_SOURCES = i915/gem_exec_schedule.c
>  TESTS_progs += gem_exec_store
>  gem_exec_store_SOURCES = i915/gem_exec_store.c
>  
> +TESTS_progs += gem_exec_endless
> +gem_exec_endless_SOURCES = i915/gem_exec_endless.c
> +
>  TESTS_progs += gem_exec_suspend
>  gem_exec_suspend_SOURCES = i915/gem_exec_suspend.c
>  
> diff --git a/tests/i915/gem_exec_endless.c b/tests/i915/gem_exec_endless.c
> new file mode 100644
> index 0..c25c94641
> --- /dev/null
> +++ b/tests/i915/gem_exec_endless.c
> @@ -0,0 +1,354 @@
> +/*
> + * Copyright © 2019 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER 
> DEALINGS
> + * IN THE SOFTWARE.
> + */
> +
> +#include 
> +
> +#include "i915/gem.h"
> +#include "i915/gem_ring.h"
> +#include "igt.h"
> +#include "sw_sync.h"
> +
> +#define MAX_ENGINES 64
> +
> +#define MI_SEMAPHORE_WAIT(0x1c << 23)
> +#define   MI_SEMAPHORE_POLL (1 << 15)
> +#define   MI_SEMAPHORE_SAD_GT_SDD   (0 << 12)
> +#define   MI_SEMAPHORE_SAD_GTE_SDD  (1 << 12)
> +#define   MI_SEMAPHORE_SAD_LT_SDD   (2 << 12)
> +#define   MI_SEMAPHORE_SAD_LTE_SDD  (3 << 12)
> +#define   MI_SEMAPHORE_SAD_EQ_SDD   (4 << 12)
> +#define   MI_SEMAPHORE_SAD_NEQ_SDD  (5 << 12)
> +
> +static uint32_t batch_create(int i915)
> +{
> + const uint32_t bbe = MI_BATCH_BUFFER_END;
> + uint32_t handle = gem_create(i915, 4096);
> + gem_write(i915, handle, 0, &bbe, sizeof(bbe));
> + return handle;
> +}
> +
> +struct supervisor {
> + int device;
> + uint32_t handle;
> + uint32_t context;
> +
> + uint32_t *map;
> + uint32_t *semaphore;
> + uint32_t *terminate;
> + uint64_t *dispatch;
> +};
> +
> +static unsigned int offset_in_page(void *addr)
> +{
> + return (uintptr_t)addr & 4095;
> +}
> +
> +static uint32_t __supervisor_create_context(int i915,
> + const struct 
> intel_execution_engine2 *e)
> +{
> + I915_DEFINE_CONTEXT_PARAM_ENGINES(engines, 2);
> + struct drm_i915_gem_context_create_ext_setparam p_ring = {
> + {
> + .name = I915_CONTEXT_CREATE_EXT_SETPARAM,
> + .next_extension = 0
> + },
> + {
> + .param = I915_CONTEXT_PARAM_RINGSIZE,
> + .value = 4096,
> + },
> + };
> + struct drm_i915_gem_context_create_ext_setparam p_engines = {
> + {
> + .name = I915_CONTEXT_CREATE_E

Re: [Intel-gfx] [PATCH] drm/i915/selftests: Measure CS_TIMESTAMP

2020-05-19 Thread Chris Wilson
Quoting Ville Syrjälä (2020-05-19 11:42:45)
> On Sat, May 16, 2020 at 02:31:02PM +0100, Chris Wilson wrote:
> > Count the number of CS_TIMESTAMP ticks and check that it matches our
> > expectations.
> 
> Looks ok for everything except g4x/ilk. Those would need something
> like
> https://patchwork.freedesktop.org/patch/355944/?series=74145&rev=1
> + read TIMESTAMP_UDW instead of TIMESTAMP.
> 
> bw/cl still needs
> https://patchwork.freedesktop.org/patch/355946/?series=74145&rev=1
> though the test seems a bit flaky on my cl. Sometimes the cycle count
> comes up short. Never seen it exceed the expected value, but it can 
> come up significantly short. And curiously it does seem to have a
> tendency to come out as roughly some nice fraction (seen at least
> 1/2 and 1/4 quite a few times). Dunno if the tick rate actually
> changes due to some unknown circumstances, or if the counter just
> updates somehow lazily. Certainly polling the counter over a longer
> period does show it to tick at the expected rate.

Any guesstimate at how short a period is long enough?
-Chris
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [igt-dev] [PATCH i-g-t] i915: Add gem_exec_endless

2020-05-19 Thread Chris Wilson
Quoting Mika Kuoppala (2020-05-19 11:43:16)
> Chris Wilson  writes:
> > +static void supervisor_dispatch(struct supervisor *sv, uint64_t addr)
> > +{
> > + WRITE_ONCE(*sv->dispatch, 64 << 10);
> 
> addr << 10 ?

addr :)
-Chris
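
That is, the hard-coded value looks like a leftover; going by the exchange
above, the corrected store would presumably be simply (a sketch, using the
signature quoted earlier in the thread):

	static void supervisor_dispatch(struct supervisor *sv, uint64_t addr)
	{
		WRITE_ONCE(*sv->dispatch, addr);
	}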
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] ✓ Fi.CI.IGT: success for drm/i915: Fix AUX power domain toggling across TypeC mode resets

2020-05-19 Thread Imre Deak
On Fri, May 15, 2020 at 08:36:31PM +, Patchwork wrote:
> == Series Details ==
> 
> Series: drm/i915: Fix AUX power domain toggling across TypeC mode resets
> URL   : https://patchwork.freedesktop.org/series/77280/
> State : success

Thanks for the review, pushed to -dinq.

> 
> == Summary ==
> 
> CI Bug Log - changes from CI_DRM_8488_full -> Patchwork_17669_full
> 
> 
> Summary
> ---
> 
>   **WARNING**
> 
>   Minor unknown changes coming with Patchwork_17669_full need to be verified
>   manually.
>   
>   If you think the reported changes have nothing to do with the changes
>   introduced in Patchwork_17669_full, please notify your bug team to allow 
> them
>   to document this new failure mode, which will reduce false positives in CI.
> 
>   
> 
> Possible new issues
> ---
> 
>   Here are the unknown changes that may have been introduced in 
> Patchwork_17669_full:
> 
> ### CI changes ###
> 
>  Warnings 
> 
>   * boot:
> - shard-hsw:  ([FAIL][1], [FAIL][2], [FAIL][3], [FAIL][4], 
> [FAIL][5], [FAIL][6], [FAIL][7], [FAIL][8], [FAIL][9], [FAIL][10], 
> [FAIL][11], [FAIL][12], [FAIL][13], [FAIL][14], [FAIL][15], [FAIL][16], 
> [FAIL][17], [FAIL][18], [FAIL][19], [FAIL][20], [FAIL][21], [FAIL][22], 
> [FAIL][23], [FAIL][24], [FAIL][25]) ([CI#80]) -> ([FAIL][26], [FAIL][27], 
> [FAIL][28], [FAIL][29], [FAIL][30], [FAIL][31], [FAIL][32], [FAIL][33], 
> [FAIL][34], [FAIL][35], [FAIL][36], [FAIL][37], [FAIL][38], [FAIL][39], 
> [FAIL][40], [FAIL][41], [FAIL][42], [FAIL][43], [FAIL][44], [FAIL][45], 
> [FAIL][46], [FAIL][47], [FAIL][48], [FAIL][49], [FAIL][50])
>[1]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8488/shard-hsw1/boot.html
>[2]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8488/shard-hsw1/boot.html
>[3]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8488/shard-hsw1/boot.html
>[4]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8488/shard-hsw1/boot.html
>[5]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8488/shard-hsw8/boot.html
>[6]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8488/shard-hsw8/boot.html
>[7]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8488/shard-hsw8/boot.html
>[8]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8488/shard-hsw8/boot.html
>[9]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8488/shard-hsw7/boot.html
>[10]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8488/shard-hsw7/boot.html
>[11]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8488/shard-hsw7/boot.html
>[12]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8488/shard-hsw7/boot.html
>[13]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8488/shard-hsw6/boot.html
>[14]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8488/shard-hsw6/boot.html
>[15]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8488/shard-hsw6/boot.html
>[16]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8488/shard-hsw6/boot.html
>[17]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8488/shard-hsw6/boot.html
>[18]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8488/shard-hsw4/boot.html
>[19]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8488/shard-hsw4/boot.html
>[20]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8488/shard-hsw4/boot.html
>[21]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8488/shard-hsw4/boot.html
>[22]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8488/shard-hsw2/boot.html
>[23]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8488/shard-hsw2/boot.html
>[24]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8488/shard-hsw2/boot.html
>[25]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8488/shard-hsw2/boot.html
>[26]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17669/shard-hsw1/boot.html
>[27]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17669/shard-hsw1/boot.html
>[28]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17669/shard-hsw1/boot.html
>[29]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17669/shard-hsw1/boot.html
>[30]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17669/shard-hsw1/boot.html
>[31]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17669/shard-hsw2/boot.html
>[32]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17669/shard-hsw2/boot.html
>[33]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17669/shard-hsw2/boot.html
>[34]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17669/shard-hsw2/boot.html
>[35]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17669/shard-hsw4/boot.html
>[36]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17669/shard-hsw4/boot.html
>[37]: 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17669/shard-hsw4/boot.html
>[38]: 
> https://intel-

[Intel-gfx] [PATCH] drm/i915/selftests: Measure dispatch latency

2020-05-19 Thread Chris Wilson
A useful metric of the system's health is how fast we can tell the GPU
to do various actions, so measure our latency.

v2: Refactor all the instruction building into emitters.

Signed-off-by: Chris Wilson 
Cc: Mika Kuoppala 
Cc: Joonas Lahtinen 
---
 drivers/gpu/drm/i915/selftests/i915_request.c | 779 ++
 1 file changed, 779 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/i915_request.c 
b/drivers/gpu/drm/i915/selftests/i915_request.c
index 6014e8dfcbb1..db09e9cb54b8 100644
--- a/drivers/gpu/drm/i915/selftests/i915_request.c
+++ b/drivers/gpu/drm/i915/selftests/i915_request.c
@@ -24,16 +24,20 @@
 
 #include 
 #include 
+#include 
 
 #include "gem/i915_gem_pm.h"
 #include "gem/selftests/mock_context.h"
 
+#include "gt/intel_engine_heartbeat.h"
 #include "gt/intel_engine_pm.h"
 #include "gt/intel_engine_user.h"
 #include "gt/intel_gt.h"
+#include "gt/intel_gt_requests.h"
 
 #include "i915_random.h"
 #include "i915_selftest.h"
+#include "igt_flush_test.h"
 #include "igt_live_test.h"
 #include "igt_spinner.h"
 #include "lib_sw_fence.h"
@@ -1524,6 +1528,780 @@ struct perf_series {
struct intel_context *ce[];
 };
 
+static int cmp_u32(const void *A, const void *B)
+{
+   const u32 *a = A, *b = B;
+
+   return *a - *b;
+}
+
+static u32 trifilter(u32 *a)
+{
+   u64 sum;
+
+#define TF_COUNT 5
+   sort(a, TF_COUNT, sizeof(*a), cmp_u32, NULL);
+
+   sum = mul_u32_u32(a[2], 2);
+   sum += a[1];
+   sum += a[3];
+
+   GEM_BUG_ON(sum > U32_MAX);
+   return sum;
+#define TF_BIAS 2
+}
+
+static u64 cycles_to_ns(struct intel_engine_cs *engine, u32 cycles)
+{
+   u64 ns = i915_cs_timestamp_ticks_to_ns(engine->i915, cycles);
+
+   return DIV_ROUND_CLOSEST(ns, 1 << TF_BIAS);
+}
+
+static u32 *emit_timestamp_store(u32 *cs, struct intel_context *ce, u32 offset)
+{
+   *cs++ = MI_STORE_REGISTER_MEM_GEN8 | MI_USE_GGTT;
+   *cs++ = i915_mmio_reg_offset(RING_TIMESTAMP((ce->engine->mmio_base)));
+   *cs++ = offset;
+   *cs++ = 0;
+
+   return cs;
+}
+
+static u32 *emit_store_dw(u32 *cs, u32 offset, u32 value)
+{
+   *cs++ = MI_STORE_DWORD_IMM_GEN4 | MI_USE_GGTT;
+   *cs++ = offset;
+   *cs++ = 0;
+   *cs++ = value;
+
+   return cs;
+}
+
+static u32 *emit_semaphore_poll(u32 *cs, u32 mode, u32 value, u32 offset)
+{
+   *cs++ = MI_SEMAPHORE_WAIT |
+   MI_SEMAPHORE_GLOBAL_GTT |
+   MI_SEMAPHORE_POLL |
+   mode;
+   *cs++ = value;
+   *cs++ = offset;
+   *cs++ = 0;
+
+   return cs;
+}
+
+static u32 *emit_semaphore_poll_until(u32 *cs, u32 offset, u32 value)
+{
+   return emit_semaphore_poll(cs, MI_SEMAPHORE_SAD_EQ_SDD, value, offset);
+}
+
+static void semaphore_set(u32 *sema, u32 value)
+{
+   WRITE_ONCE(*sema, value);
+   wmb(); /* flush the update to the cache, and beyond */
+}
+
+static u32 *hwsp_scratch(const struct intel_context *ce)
+{
+   return memset32(ce->engine->status_page.addr + 1000, 0, 21);
+}
+
+static u32 hwsp_offset(const struct intel_context *ce, u32 *dw)
+{
+   return (i915_ggtt_offset(ce->engine->status_page.vma) +
+   offset_in_page(dw));
+}
+
+static int measure_semaphore_response(struct intel_context *ce)
+{
+   u32 *sema = hwsp_scratch(ce);
+   const u32 offset = hwsp_offset(ce, sema);
+   u32 elapsed[TF_COUNT], cycles;
+   struct i915_request *rq;
+   u32 *cs;
+   int i;
+
+   /*
+* Measure how many cycles it takes for the HW to detect the change
+* in a semaphore value.
+*
+*A: read CS_TIMESTAMP from CPU
+*poke semaphore
+*B: read CS_TIMESTAMP on GPU
+*
+* Semaphore latency: B - A
+*/
+
+   semaphore_set(sema, -1);
+
+   rq = i915_request_create(ce);
+   if (IS_ERR(rq))
+   return PTR_ERR(rq);
+
+   cs = intel_ring_begin(rq, 4 + 12 * ARRAY_SIZE(elapsed));
+   if (IS_ERR(cs)) {
+   i915_request_add(rq);
+   return PTR_ERR(cs);
+   }
+
+   cs = emit_store_dw(cs, offset, 0);
+   for (i = 1; i <= ARRAY_SIZE(elapsed); i++) {
+   cs = emit_semaphore_poll_until(cs, offset, i);
+   cs = emit_timestamp_store(cs, ce, offset + i * sizeof(u32));
+   cs = emit_store_dw(cs, offset, 0);
+   }
+
+   intel_ring_advance(rq, cs);
+   i915_request_add(rq);
+
+   if (wait_for(READ_ONCE(*sema) == 0, 50)) {
+   intel_gt_set_wedged(ce->engine->gt);
+   return -EIO;
+   }
+
+   for (i = 1; i <= ARRAY_SIZE(elapsed); i++) {
+   preempt_disable();
+   cycles = ENGINE_READ_FW(ce->engine, RING_TIMESTAMP);
+   semaphore_set(sema, i);
+   preempt_enable();
+
+   if (wait_for(READ_ONCE(*sema) == 0, 50)) {
+   intel_gt_set_wedged(ce->engine->gt);
+   return -EIO;
+   }
+
+

Re: [Intel-gfx] [PATCH] drm/i915/selftests: Measure CS_TIMESTAMP

2020-05-19 Thread Ville Syrjälä
On Tue, May 19, 2020 at 11:46:54AM +0100, Chris Wilson wrote:
> Quoting Ville Syrjälä (2020-05-19 11:42:45)
> > On Sat, May 16, 2020 at 02:31:02PM +0100, Chris Wilson wrote:
> > > Count the number of CS_TIMESTAMP ticks and check that it matches our
> > > expectations.
> > 
> > Looks ok for everything except g4x/ilk. Those would need something
> > like
> > https://patchwork.freedesktop.org/patch/355944/?series=74145&rev=1
> > + read TIMESTAMP_UDW instead of TIMESTAMP.
> > 
> > bw/cl still needs
> > https://patchwork.freedesktop.org/patch/355946/?series=74145&rev=1
> > though the test seems a bit flaky on my cl. Sometimes the cycle count
> > comes up short. Never seen it exceed the expected value, but it can 
> > come up significantly short. And curiously it does seem to have a
> > tendency to come out as roughly some nice fraction (seen at least
> > 1/2 and 1/4 quite a few times). Dunno if the tick rate actually
> > changes due to some unknown circumstances, or if the counter just
> > updates somehow lazily. Certainly polling the counter over a longer
> > period does show it to tick at the expected rate.
> 
> Any guestimate at how short a period is long enough?

After a bit more debugging it looks like the read just sometimes returns
a stale value:
[ 5248.749794] rcs0: 0: TIMESTAMP 75->123 (48) cycles [1013808ns]
[ 5248.749817] rcs0: 1: TIMESTAMP 202859->202859 (0) cycles [1031688ns]
[ 5248.749818] rcs0: 2: TIMESTAMP 409179->613179 (204000) cycles [1020234ns]
[ 5248.749820] rcs0: 3: TIMESTAMP 613227->825083 (211856) cycles [1059623ns]
[ 5248.749821] rcs0: 4: TIMESTAMP 825163->1036491 (211328) cycles [1057109ns]

So far it looks like doing a double read is sufficient to get
an up to date value.
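
A minimal sketch of such a double read (a local workaround sketch with an
assumed helper name, not a committed fix) could look like:

	static u32 read_timestamp(struct intel_engine_cs *engine)
	{
		/* the first read can return a stale value on bw/cl */
		ENGINE_READ_FW(engine, RING_TIMESTAMP);
		return ENGINE_READ_FW(engine, RING_TIMESTAMP);
	}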

-- 
Ville Syrjälä
Intel
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915/selftests: Measure CS_TIMESTAMP (rev3)

2020-05-19 Thread Patchwork
== Series Details ==

Series: drm/i915/selftests: Measure CS_TIMESTAMP (rev3)
URL   : https://patchwork.freedesktop.org/series/77320/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
4686c5234501 drm/i915/selftests: Measure CS_TIMESTAMP
-:68: CHECK:USLEEP_RANGE: usleep_range is preferred over udelay; see 
Documentation/timers/timers-howto.rst
#68: FILE: drivers/gpu/drm/i915/gt/selftest_gt_pm.c:52:
+   udelay(1000);

total: 0 errors, 0 warnings, 1 checks, 129 lines checked

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH] drm/i915/selftests: Measure dispatch latency

2020-05-19 Thread Mika Kuoppala
Chris Wilson  writes:

> A useful metric of the system's health is how fast we can tell the GPU
> to do various actions, so measure our latency.
>
> v2: Refactor all the instruction building into emitters.
>
> Signed-off-by: Chris Wilson 
> Cc: Mika Kuoppala 
> Cc: Joonas Lahtinen 

Not much nitpicking left. Could have used one goto in the fence
using tests on error paths but meh.

Lots of tests poking hw from different angles.
With clear comments, it is like a guided tour of our
submission/scheduling front.

Analyzing the differences between the different sets will
be interesting.

Reviewed-by: Mika Kuoppala 

> ---
>  drivers/gpu/drm/i915/selftests/i915_request.c | 779 ++
>  1 file changed, 779 insertions(+)
>
> diff --git a/drivers/gpu/drm/i915/selftests/i915_request.c 
> b/drivers/gpu/drm/i915/selftests/i915_request.c
> index 6014e8dfcbb1..db09e9cb54b8 100644
> --- a/drivers/gpu/drm/i915/selftests/i915_request.c
> +++ b/drivers/gpu/drm/i915/selftests/i915_request.c
> @@ -24,16 +24,20 @@
>  
>  #include 
>  #include 
> +#include 
>  
>  #include "gem/i915_gem_pm.h"
>  #include "gem/selftests/mock_context.h"
>  
> +#include "gt/intel_engine_heartbeat.h"
>  #include "gt/intel_engine_pm.h"
>  #include "gt/intel_engine_user.h"
>  #include "gt/intel_gt.h"
> +#include "gt/intel_gt_requests.h"
>  
>  #include "i915_random.h"
>  #include "i915_selftest.h"
> +#include "igt_flush_test.h"
>  #include "igt_live_test.h"
>  #include "igt_spinner.h"
>  #include "lib_sw_fence.h"
> @@ -1524,6 +1528,780 @@ struct perf_series {
>   struct intel_context *ce[];
>  };
>  
> +static int cmp_u32(const void *A, const void *B)
> +{
> + const u32 *a = A, *b = B;
> +
> + return *a - *b;
> +}
> +
> +static u32 trifilter(u32 *a)
> +{
> + u64 sum;
> +
> +#define TF_COUNT 5
> + sort(a, TF_COUNT, sizeof(*a), cmp_u32, NULL);
> +
> + sum = mul_u32_u32(a[2], 2);
> + sum += a[1];
> + sum += a[3];
> +
> + GEM_BUG_ON(sum > U32_MAX);
> + return sum;
> +#define TF_BIAS 2
> +}
> +
> +static u64 cycles_to_ns(struct intel_engine_cs *engine, u32 cycles)
> +{
> + u64 ns = i915_cs_timestamp_ticks_to_ns(engine->i915, cycles);
> +
> + return DIV_ROUND_CLOSEST(ns, 1 << TF_BIAS);
> +}
> +
> +static u32 *emit_timestamp_store(u32 *cs, struct intel_context *ce, u32 
> offset)
> +{
> + *cs++ = MI_STORE_REGISTER_MEM_GEN8 | MI_USE_GGTT;
> + *cs++ = i915_mmio_reg_offset(RING_TIMESTAMP((ce->engine->mmio_base)));
> + *cs++ = offset;
> + *cs++ = 0;
> +
> + return cs;
> +}
> +
> +static u32 *emit_store_dw(u32 *cs, u32 offset, u32 value)
> +{
> + *cs++ = MI_STORE_DWORD_IMM_GEN4 | MI_USE_GGTT;
> + *cs++ = offset;
> + *cs++ = 0;
> + *cs++ = value;
> +
> + return cs;
> +}
> +
> +static u32 *emit_semaphore_poll(u32 *cs, u32 mode, u32 value, u32 offset)
> +{
> + *cs++ = MI_SEMAPHORE_WAIT |
> + MI_SEMAPHORE_GLOBAL_GTT |
> + MI_SEMAPHORE_POLL |
> + mode;
> + *cs++ = value;
> + *cs++ = offset;
> + *cs++ = 0;
> +
> + return cs;
> +}
> +
> +static u32 *emit_semaphore_poll_until(u32 *cs, u32 offset, u32 value)
> +{
> + return emit_semaphore_poll(cs, MI_SEMAPHORE_SAD_EQ_SDD, value, offset);
> +}
> +
> +static void semaphore_set(u32 *sema, u32 value)
> +{
> + WRITE_ONCE(*sema, value);
> + wmb(); /* flush the update to the cache, and beyond */
> +}
> +
> +static u32 *hwsp_scratch(const struct intel_context *ce)
> +{
> + return memset32(ce->engine->status_page.addr + 1000, 0, 21);
> +}
> +
> +static u32 hwsp_offset(const struct intel_context *ce, u32 *dw)
> +{
> + return (i915_ggtt_offset(ce->engine->status_page.vma) +
> + offset_in_page(dw));
> +}
> +
> +static int measure_semaphore_response(struct intel_context *ce)
> +{
> + u32 *sema = hwsp_scratch(ce);
> + const u32 offset = hwsp_offset(ce, sema);
> + u32 elapsed[TF_COUNT], cycles;
> + struct i915_request *rq;
> + u32 *cs;
> + int i;
> +
> + /*
> +  * Measure how many cycles it takes for the HW to detect the change
> +  * in a semaphore value.
> +  *
> +  *A: read CS_TIMESTAMP from CPU
> +  *poke semaphore
> +  *B: read CS_TIMESTAMP on GPU
> +  *
> +  * Semaphore latency: B - A
> +  */
> +
> + semaphore_set(sema, -1);
> +
> + rq = i915_request_create(ce);
> + if (IS_ERR(rq))
> + return PTR_ERR(rq);
> +
> + cs = intel_ring_begin(rq, 4 + 12 * ARRAY_SIZE(elapsed));
> + if (IS_ERR(cs)) {
> + i915_request_add(rq);
> + return PTR_ERR(cs);
> + }
> +
> + cs = emit_store_dw(cs, offset, 0);
> + for (i = 1; i <= ARRAY_SIZE(elapsed); i++) {
> + cs = emit_semaphore_poll_until(cs, offset, i);
> + cs = emit_timestamp_store(cs, ce, offset + i * sizeof(u32));
> + cs = emit_store_dw(cs, offset, 0);
> + }
> +
> + intel_ring_advance(rq, cs);
> + i915_request_add(rq);
> +

Re: [Intel-gfx] [PATCH] drm/i915/selftests: Measure dispatch latency

2020-05-19 Thread Chris Wilson
Quoting Mika Kuoppala (2020-05-19 13:47:31)
> Chris Wilson  writes:
> 
> > A useful metric of the system's health is how fast we can tell the GPU
> > to do various actions, so measure our latency.
> >
> > v2: Refactor all the instruction building into emitters.
> >
> > Signed-off-by: Chris Wilson 
> > Cc: Mika Kuoppala 
> > Cc: Joonas Lahtinen 
> 
> Not much nitpicking left. Could have used one goto in the fence-using
> tests on the error paths, but meh.

Error handling is not great here, I agree.
-Chris
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] ✗ Fi.CI.BAT: failure for drm/i915/selftests: Measure CS_TIMESTAMP (rev3)

2020-05-19 Thread Patchwork
== Series Details ==

Series: drm/i915/selftests: Measure CS_TIMESTAMP (rev3)
URL   : https://patchwork.freedesktop.org/series/77320/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_8502 -> Patchwork_17708


Summary
---

  **FAILURE**

  Serious unknown changes coming with Patchwork_17708 absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in Patchwork_17708, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  External URL: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17708/index.html

Possible new issues
---

  Here are the unknown changes that may have been introduced in Patchwork_17708:

### IGT changes ###

 Possible regressions 

  * igt@i915_selftest@live@gt_pm:
- fi-elk-e7500:   [PASS][1] -> [DMESG-FAIL][2]
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8502/fi-elk-e7500/igt@i915_selftest@live@gt_pm.html
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17708/fi-elk-e7500/igt@i915_selftest@live@gt_pm.html
- fi-ilk-650: [PASS][3] -> [DMESG-FAIL][4]
   [3]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8502/fi-ilk-650/igt@i915_selftest@live@gt_pm.html
   [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17708/fi-ilk-650/igt@i915_selftest@live@gt_pm.html
- fi-bwr-2160:[PASS][5] -> [DMESG-FAIL][6]
   [5]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8502/fi-bwr-2160/igt@i915_selftest@live@gt_pm.html
   [6]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17708/fi-bwr-2160/igt@i915_selftest@live@gt_pm.html

  
Known issues


  Here are the changes found in Patchwork_17708 that come from known issues:

### IGT changes ###

 Possible fixes 

  * igt@i915_selftest@live@execlists:
- fi-cfl-guc: [INCOMPLETE][7] ([i915#656]) -> [PASS][8]
   [7]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8502/fi-cfl-guc/igt@i915_selftest@l...@execlists.html
   [8]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17708/fi-cfl-guc/igt@i915_selftest@l...@execlists.html

  
  [i915#656]: https://gitlab.freedesktop.org/drm/intel/issues/656


Participating hosts (50 -> 44)
--

  Missing(6): fi-ilk-m540 fi-hsw-4200u fi-byt-squawks fi-bsw-cyan 
fi-ctg-p8600 fi-byt-clapper 


Build changes
-

  * Linux: CI_DRM_8502 -> Patchwork_17708

  CI-20190529: 20190529
  CI_DRM_8502: 5bafb3de802a8dd663009250b587c1a78ad298c9 @ 
git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5659: 66ab5e42811fee3dea8c21ab29e70e323a0650de @ 
git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_17708: 4686c52345011560c020f375436eea7fcb31fbae @ 
git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

4686c5234501 drm/i915/selftests: Measure CS_TIMESTAMP

== Logs ==

For more details see: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17708/index.html
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH] drm/i915/selftests: Measure dispatch latency

2020-05-19 Thread Chris Wilson
A useful metric of the system's health is how fast we can tell the GPU
to do various actions, so measure our latency.
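
A minimal restatement of the filtering arithmetic used by the selftest below,
for orientation only (it mirrors trifilter()/cycles_to_ns() in the patch and
reuses its cmp_u32(); the helper name here is made up):

/*
 * trifilter() sorts the five samples and returns four times the weighted
 * average of the middle three (the median counted twice); cycles_to_ns()
 * later shifts that bias back out via TF_BIAS.  Combined, the reported
 * latency is effectively:
 */
static u32 filtered_latency(u32 samples[5])
{
        sort(samples, 5, sizeof(*samples), cmp_u32, NULL);
        return (samples[1] + 2 * samples[2] + samples[3]) >> 2;
}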

v2: Refactor all the instruction building into emitters.
v3: Make the error handling, if not perfect, at least consistent.

Signed-off-by: Chris Wilson 
Cc: Mika Kuoppala 
Cc: Joonas Lahtinen 
Reviewed-by: Mika Kuoppala 
---
 drivers/gpu/drm/i915/selftests/i915_request.c | 823 ++
 1 file changed, 823 insertions(+)

diff --git a/drivers/gpu/drm/i915/selftests/i915_request.c 
b/drivers/gpu/drm/i915/selftests/i915_request.c
index 6014e8dfcbb1..92c628f18c60 100644
--- a/drivers/gpu/drm/i915/selftests/i915_request.c
+++ b/drivers/gpu/drm/i915/selftests/i915_request.c
@@ -24,16 +24,20 @@
 
 #include 
 #include 
+#include 
 
 #include "gem/i915_gem_pm.h"
 #include "gem/selftests/mock_context.h"
 
+#include "gt/intel_engine_heartbeat.h"
 #include "gt/intel_engine_pm.h"
 #include "gt/intel_engine_user.h"
 #include "gt/intel_gt.h"
+#include "gt/intel_gt_requests.h"
 
 #include "i915_random.h"
 #include "i915_selftest.h"
+#include "igt_flush_test.h"
 #include "igt_live_test.h"
 #include "igt_spinner.h"
 #include "lib_sw_fence.h"
@@ -1524,6 +1528,824 @@ struct perf_series {
struct intel_context *ce[];
 };
 
+static int cmp_u32(const void *A, const void *B)
+{
+   const u32 *a = A, *b = B;
+
+   return *a - *b;
+}
+
+static u32 trifilter(u32 *a)
+{
+   u64 sum;
+
+#define TF_COUNT 5
+   sort(a, TF_COUNT, sizeof(*a), cmp_u32, NULL);
+
+   sum = mul_u32_u32(a[2], 2);
+   sum += a[1];
+   sum += a[3];
+
+   GEM_BUG_ON(sum > U32_MAX);
+   return sum;
+#define TF_BIAS 2
+}
+
+static u64 cycles_to_ns(struct intel_engine_cs *engine, u32 cycles)
+{
+   u64 ns = i915_cs_timestamp_ticks_to_ns(engine->i915, cycles);
+
+   return DIV_ROUND_CLOSEST(ns, 1 << TF_BIAS);
+}
+
+static u32 *emit_timestamp_store(u32 *cs, struct intel_context *ce, u32 offset)
+{
+   *cs++ = MI_STORE_REGISTER_MEM_GEN8 | MI_USE_GGTT;
+   *cs++ = i915_mmio_reg_offset(RING_TIMESTAMP((ce->engine->mmio_base)));
+   *cs++ = offset;
+   *cs++ = 0;
+
+   return cs;
+}
+
+static u32 *emit_store_dw(u32 *cs, u32 offset, u32 value)
+{
+   *cs++ = MI_STORE_DWORD_IMM_GEN4 | MI_USE_GGTT;
+   *cs++ = offset;
+   *cs++ = 0;
+   *cs++ = value;
+
+   return cs;
+}
+
+static u32 *emit_semaphore_poll(u32 *cs, u32 mode, u32 value, u32 offset)
+{
+   *cs++ = MI_SEMAPHORE_WAIT |
+   MI_SEMAPHORE_GLOBAL_GTT |
+   MI_SEMAPHORE_POLL |
+   mode;
+   *cs++ = value;
+   *cs++ = offset;
+   *cs++ = 0;
+
+   return cs;
+}
+
+static u32 *emit_semaphore_poll_until(u32 *cs, u32 offset, u32 value)
+{
+   return emit_semaphore_poll(cs, MI_SEMAPHORE_SAD_EQ_SDD, value, offset);
+}
+
+static void semaphore_set(u32 *sema, u32 value)
+{
+   WRITE_ONCE(*sema, value);
+   wmb(); /* flush the update to the cache, and beyond */
+}
+
+static u32 *hwsp_scratch(const struct intel_context *ce)
+{
+   return memset32(ce->engine->status_page.addr + 1000, 0, 21);
+}
+
+static u32 hwsp_offset(const struct intel_context *ce, u32 *dw)
+{
+   return (i915_ggtt_offset(ce->engine->status_page.vma) +
+   offset_in_page(dw));
+}
+
+static int measure_semaphore_response(struct intel_context *ce)
+{
+   u32 *sema = hwsp_scratch(ce);
+   const u32 offset = hwsp_offset(ce, sema);
+   u32 elapsed[TF_COUNT], cycles;
+   struct i915_request *rq;
+   u32 *cs;
+   int err;
+   int i;
+
+   /*
+* Measure how many cycles it takes for the HW to detect the change
+* in a semaphore value.
+*
+*A: read CS_TIMESTAMP from CPU
+*poke semaphore
+*B: read CS_TIMESTAMP on GPU
+*
+* Semaphore latency: B - A
+*/
+
+   semaphore_set(sema, -1);
+
+   rq = i915_request_create(ce);
+   if (IS_ERR(rq))
+   return PTR_ERR(rq);
+
+   cs = intel_ring_begin(rq, 4 + 12 * ARRAY_SIZE(elapsed));
+   if (IS_ERR(cs)) {
+   i915_request_add(rq);
+   err = PTR_ERR(cs);
+   goto err;
+   }
+
+   cs = emit_store_dw(cs, offset, 0);
+   for (i = 1; i <= ARRAY_SIZE(elapsed); i++) {
+   cs = emit_semaphore_poll_until(cs, offset, i);
+   cs = emit_timestamp_store(cs, ce, offset + i * sizeof(u32));
+   cs = emit_store_dw(cs, offset, 0);
+   }
+
+   intel_ring_advance(rq, cs);
+   i915_request_add(rq);
+
+   if (wait_for(READ_ONCE(*sema) == 0, 50)) {
+   err = -EIO;
+   goto err;
+   }
+
+   for (i = 1; i <= ARRAY_SIZE(elapsed); i++) {
+   preempt_disable();
+   cycles = ENGINE_READ_FW(ce->engine, RING_TIMESTAMP);
+   semaphore_set(sema, i);
+   preempt_enable();
+
+   if (wait_for(READ_ONCE(*sema) == 0, 50)) {
+   

[Intel-gfx] [PATCH v9 3/7] drm/i915: Check plane configuration properly

2020-05-19 Thread Stanislav Lisovskiy
From: Stanislav Lisovskiy 

Checking with hweight8 if plane configuration had
changed seems to be wrong as different plane configs
can result in the same Hamming weight,
so let's check the bitmask itself.
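
A small illustration of why the hamming-weight check is insufficient (the
plane ids below are only an example):

/*
 * Both configurations enable exactly two planes, so hweight8() reports
 * them as equal even though the active-plane set (and hence the DBuf
 * bandwidth distribution) has changed.
 */
static void hweight_example(void)
{
        u8 old_active_planes = BIT(PLANE_PRIMARY) | BIT(PLANE_SPRITE0); /* 0b011 */
        u8 new_active_planes = BIT(PLANE_PRIMARY) | BIT(PLANE_SPRITE1); /* 0b101 */

        /* Old check: hweight8() sees 2 == 2, so the change is missed ... */
        WARN_ON(hweight8(old_active_planes) != hweight8(new_active_planes));

        /* ... while comparing the masks themselves detects it. */
        WARN_ON(old_active_planes == new_active_planes);
}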

Reviewed-by: Manasi Navare 
Signed-off-by: Stanislav Lisovskiy 
---
 drivers/gpu/drm/i915/display/intel_display.c | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/display/intel_display.c 
b/drivers/gpu/drm/i915/display/intel_display.c
index e93a553a344d..a9ab66d97360 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -14614,7 +14614,13 @@ static int intel_atomic_check_planes(struct 
intel_atomic_state *state)
old_active_planes = old_crtc_state->active_planes & 
~BIT(PLANE_CURSOR);
new_active_planes = new_crtc_state->active_planes & 
~BIT(PLANE_CURSOR);
 
-   if (hweight8(old_active_planes) == hweight8(new_active_planes))
+   /*
+* Not only the number of planes, but if the plane 
configuration had
+* changed might already mean we need to recompute min CDCLK,
+* because different planes might consume different amount of 
Dbuf bandwidth
+* according to formula: Bw per plane = Pixel rate * bpp * 
pipe/plane scale factor
+*/
+   if (old_active_planes == new_active_planes)
continue;
 
ret = intel_crtc_add_planes_to_state(state, crtc, 
new_active_planes);
-- 
2.24.1.485.gad05a3d8e5

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH v9 5/7] drm/i915: Introduce for_each_dbuf_slice_in_mask macro

2020-05-19 Thread Stanislav Lisovskiy
We now quite often need to iterate over only particular dbuf slices
in a mask, whether they are active or related to a particular crtc.

v2: - Minor code refactoring
v3: - Use enum for max slices instead of macro

Let's make our life a bit easier and use a macro for that.
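
A minimal usage sketch of the new iterator (the caller and its slice mask
are hypothetical):

static void dump_enabled_slices(struct drm_i915_private *dev_priv, u8 slice_mask)
{
        enum dbuf_slice slice;

        /* Visit only the slices whose bit is set in slice_mask. */
        for_each_dbuf_slice_in_mask(slice, slice_mask)
                drm_dbg_kms(&dev_priv->drm, "DBuf slice %d in use\n", slice);
}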

Reviewed-by: Manasi Navare 
Signed-off-by: Stanislav Lisovskiy 
---
 drivers/gpu/drm/i915/display/intel_display.h   | 7 +++
 drivers/gpu/drm/i915/display/intel_display_power.h | 1 +
 2 files changed, 8 insertions(+)

diff --git a/drivers/gpu/drm/i915/display/intel_display.h 
b/drivers/gpu/drm/i915/display/intel_display.h
index efb4da205ea2..b7a6d56bac5f 100644
--- a/drivers/gpu/drm/i915/display/intel_display.h
+++ b/drivers/gpu/drm/i915/display/intel_display.h
@@ -187,6 +187,13 @@ enum plane_id {
for ((__p) = PLANE_PRIMARY; (__p) < I915_MAX_PLANES; (__p)++) \
for_each_if((__crtc)->plane_ids_mask & BIT(__p))
 
+#define for_each_dbuf_slice_in_mask(__slice, __mask) \
+   for ((__slice) = DBUF_S1; (__slice) < I915_MAX_DBUF_SLICES; 
(__slice)++) \
+   for_each_if((BIT(__slice)) & (__mask))
+
+#define for_each_dbuf_slice(__slice) \
+   for_each_dbuf_slice_in_mask(__slice, BIT(I915_MAX_DBUF_SLICES) - 1)
+
 enum port {
PORT_NONE = -1,
 
diff --git a/drivers/gpu/drm/i915/display/intel_display_power.h 
b/drivers/gpu/drm/i915/display/intel_display_power.h
index 6c917699293b..4d0d6f9dad26 100644
--- a/drivers/gpu/drm/i915/display/intel_display_power.h
+++ b/drivers/gpu/drm/i915/display/intel_display_power.h
@@ -314,6 +314,7 @@ intel_display_power_put_async(struct drm_i915_private *i915,
 enum dbuf_slice {
DBUF_S1,
DBUF_S2,
+   I915_MAX_DBUF_SLICES
 };
 
 #define with_intel_display_power(i915, domain, wf) \
-- 
2.24.1.485.gad05a3d8e5

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH v9 0/7] Consider DBuf bandwidth when calculating CDCLK

2020-05-19 Thread Stanislav Lisovskiy
We need to calculate cdclk after the watermarks/ddb have been calculated,
as with recent hw CDCLK needs to be adjusted according to DBuf
requirements, which is not possible with the current code organization.

Setting CDCLK according to DBuf BW requirements, instead of just rejecting
the state if it doesn't satisfy them, will allow us to save power when
possible and gain additional bandwidth when it's needed - i.e.
boosting both our power management and performance capabilities.

This series is preparation for that: first we extract the modeset cdclk
calculation from the modeset checks, in order to call it after wm/ddb
has been calculated.

Stanislav Lisovskiy (7):
  drm/i915: Decouple cdclk calculation from modeset checks
  drm/i915: Extract cdclk requirements checking to separate function
  drm/i915: Check plane configuration properly
  drm/i915: Plane configuration affects CDCLK in Gen11+
  drm/i915: Introduce for_each_dbuf_slice_in_mask macro
  drm/i915: Adjust CDCLK accordingly to our DBuf bw needs
  drm/i915: Remove unneeded hack now for CDCLK

 drivers/gpu/drm/i915/display/intel_bw.c   | 118 +-
 drivers/gpu/drm/i915/display/intel_bw.h   |  10 ++
 drivers/gpu/drm/i915/display/intel_cdclk.c|  40 +++---
 drivers/gpu/drm/i915/display/intel_cdclk.h|   1 -
 drivers/gpu/drm/i915/display/intel_display.c  |  89 ++---
 drivers/gpu/drm/i915/display/intel_display.h  |   7 ++
 .../drm/i915/display/intel_display_power.h|   1 +
 drivers/gpu/drm/i915/i915_drv.h   |   1 +
 drivers/gpu/drm/i915/intel_pm.c   |  31 -
 drivers/gpu/drm/i915/intel_pm.h   |   3 +
 10 files changed, 261 insertions(+), 40 deletions(-)

-- 
2.24.1.485.gad05a3d8e5

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH v9 1/7] drm/i915: Decouple cdclk calculation from modeset checks

2020-05-19 Thread Stanislav Lisovskiy
We need to calculate cdclk after the watermarks/ddb have been calculated,
as with recent hw CDCLK needs to be adjusted according to DBuf
requirements, which is not possible with the current code organization.

Setting CDCLK according to DBuf BW requirements, instead of just rejecting
the state if it doesn't satisfy them, will allow us to save power when
possible and gain additional bandwidth when it's needed - i.e.
boosting both our power management and performance capabilities.

This patch is preparation for that: first we extract the modeset cdclk
calculation from the modeset checks, in order to call it after wm/ddb
has been calculated.

v2: - Extract only intel_modeset_calc_cdclk from intel_modeset_checks
  (Ville Syrjälä)

v3: - Clear plls after intel_modeset_calc_cdclk

v4: - Added r-b from previous revision to commit message

Reviewed-by: Ville Syrjälä 
Signed-off-by: Stanislav Lisovskiy 
---
 drivers/gpu/drm/i915/display/intel_display.c | 22 +++-
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_display.c 
b/drivers/gpu/drm/i915/display/intel_display.c
index 432b4eeaf9f6..005e324d0582 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -14493,12 +14493,6 @@ static int intel_modeset_checks(struct 
intel_atomic_state *state)
return ret;
}
 
-   ret = intel_modeset_calc_cdclk(state);
-   if (ret)
-   return ret;
-
-   intel_modeset_clear_plls(state);
-
if (IS_HASWELL(dev_priv))
return hsw_mode_set_planes_workaround(state);
 
@@ -14830,10 +14824,6 @@ static int intel_atomic_check(struct drm_device *dev,
goto fail;
}
 
-   ret = intel_atomic_check_crtcs(state);
-   if (ret)
-   goto fail;
-
intel_fbc_choose_crtc(dev_priv, state);
ret = calc_watermark_data(state);
if (ret)
@@ -14843,6 +14833,18 @@ static int intel_atomic_check(struct drm_device *dev,
if (ret)
goto fail;
 
+   if (any_ms) {
+   ret = intel_modeset_calc_cdclk(state);
+   if (ret)
+   return ret;
+
+   intel_modeset_clear_plls(state);
+   }
+
+   ret = intel_atomic_check_crtcs(state);
+   if (ret)
+   goto fail;
+
for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state,
new_crtc_state, i) {
if (!needs_modeset(new_crtc_state) &&
-- 
2.24.1.485.gad05a3d8e5

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH v9 2/7] drm/i915: Extract cdclk requirements checking to separate function

2020-05-19 Thread Stanislav Lisovskiy
In Gen11+, whenever we might exceed the DBuf bandwidth we may need to
recalculate CDCLK, which the DBuf bandwidth scales with.
The total DBuf bw used can change based on particular plane needs.

Thus, to decide whether cdclk needs to change it is no longer enough
to check the plane configuration and per-plane min cdclk; the per-DBuf
bw can be calculated only after the wm/ddb calculation is done and
all required planes have been added into the state. In order to keep
all min_cdclk related checks in one place, let's extract them into a
separate function that checks and modifies any_ms.

Reviewed-by: Manasi Navare 
Signed-off-by: Stanislav Lisovskiy 
---
 drivers/gpu/drm/i915/display/intel_display.c | 30 ++--
 1 file changed, 22 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_display.c 
b/drivers/gpu/drm/i915/display/intel_display.c
index 005e324d0582..e93a553a344d 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -14572,8 +14572,7 @@ static bool active_planes_affects_min_cdclk(struct 
drm_i915_private *dev_priv)
IS_IVYBRIDGE(dev_priv);
 }
 
-static int intel_atomic_check_planes(struct intel_atomic_state *state,
-bool *need_cdclk_calc)
+static int intel_atomic_check_planes(struct intel_atomic_state *state)
 {
struct drm_i915_private *dev_priv = to_i915(state->base.dev);
struct intel_crtc_state *old_crtc_state, *new_crtc_state;
@@ -14623,6 +14622,22 @@ static int intel_atomic_check_planes(struct 
intel_atomic_state *state,
return ret;
}
 
+   return 0;
+}
+
+static int intel_atomic_check_cdclk(struct intel_atomic_state *state,
+   bool *need_cdclk_calc)
+{
+   struct intel_cdclk_state *new_cdclk_state;
+   int i;
+   struct intel_plane_state *plane_state;
+   struct intel_plane *plane;
+   int ret;
+
+   new_cdclk_state = intel_atomic_get_new_cdclk_state(state);
+   if (new_cdclk_state && new_cdclk_state->force_min_cdclk_changed)
+   *need_cdclk_calc = true;
+
/*
 * active_planes bitmask has been updated, and potentially
 * affected planes are part of the state. We can now
@@ -14685,7 +14700,6 @@ static int intel_atomic_check(struct drm_device *dev,
struct drm_i915_private *dev_priv = to_i915(dev);
struct intel_atomic_state *state = to_intel_atomic_state(_state);
struct intel_crtc_state *old_crtc_state, *new_crtc_state;
-   struct intel_cdclk_state *new_cdclk_state;
struct intel_crtc *crtc;
int ret, i;
bool any_ms = false;
@@ -14796,14 +14810,10 @@ static int intel_atomic_check(struct drm_device *dev,
if (ret)
goto fail;
 
-   ret = intel_atomic_check_planes(state, &any_ms);
+   ret = intel_atomic_check_planes(state);
if (ret)
goto fail;
 
-   new_cdclk_state = intel_atomic_get_new_cdclk_state(state);
-   if (new_cdclk_state && new_cdclk_state->force_min_cdclk_changed)
-   any_ms = true;
-
/*
 * distrust_bios_wm will force a full dbuf recomputation
 * but the hardware state will only get updated accordingly
@@ -14833,6 +14843,10 @@ static int intel_atomic_check(struct drm_device *dev,
if (ret)
goto fail;
 
+   ret = intel_atomic_check_cdclk(state, &any_ms);
+   if (ret)
+   goto fail;
+
if (any_ms) {
ret = intel_modeset_calc_cdclk(state);
if (ret)
-- 
2.24.1.485.gad05a3d8e5

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH v9 6/7] drm/i915: Adjust CDCLK accordingly to our DBuf bw needs

2020-05-19 Thread Stanislav Lisovskiy
According to BSpec, the max BW per slice is calculated using the formula
Max BW = CDCLK * 64. Currently, when calculating min CDCLK we account
only for per-plane requirements; however, in order to avoid FIFO
underruns we need to estimate the accumulated BW consumed by all planes
(ddb entries, basically) residing on that particular DBuf slice. This
will allow us to put CDCLK lower and save power when we don't need that
much bandwidth, or to gain additional performance once plane consumption
grows.
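
For illustration, inverting that relation gives the minimum CDCLK needed to
feed a given accumulated DBuf bandwidth (a sketch only, with bandwidth and
CDCLK expressed in matching units; the helper name is made up):

/* min CDCLK such that CDCLK * 64 >= total DBuf bandwidth used */
static int min_cdclk_for_dbuf_bw(int dbuf_bw)
{
        return DIV_ROUND_UP(dbuf_bw, 64);
}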

v2: - Fix long line warning
- Limited new DBuf bw checks to only gens >= 11

v3: - Lets track used Dbuf bw per slice and per crtc in bw state
  (or may be in DBuf state in future), that way we don't need
  to have all crtcs in state and those only if we detect if
  are actually going to change cdclk, just same way as we
  do with other stuff, i.e intel_atomic_serialize_global_state
  and co. Just as per Ville's paradigm.
- Made dbuf bw calculation procedure look nicer by introducing
  for_each_dbuf_slice_in_mask - we often will now need to iterate
  slices using mask.
- According to experimental results CDCLK * 64 accounts for
  overall bandwidth across all dbufs, not per dbuf.

v4: - Fixed missing const(Ville)
- Removed spurious whitespaces(Ville)
- Fixed local variable init(reduced scope where not needed)
- Added some comments about data rate for planar formats
- Changed struct intel_crtc_bw to intel_dbuf_bw
- Moved dbuf bw calculation to intel_compute_min_cdclk(Ville)

v5: - Removed unneeded macro

v6: - Prevent too frequent CDCLK switching back and forth:
  Always switch to higher CDCLK when needed to prevent bandwidth
  issues, however don't switch to lower CDCLK earlier than once
  in 30 minutes in order to prevent constant modeset blinking.
  We could of course not switch back at all, however this is
  bad from power consumption point of view.

v7: - Fixed to track cdclk using bw_state, modeset will be now
  triggered only when CDCLK change is really needed.

v8: - Lock global state if bw_state->min_cdclk is changed.
- Try getting bw_state only if there are crtcs in the commit
  (need to have read-locked global state)

v9: - Do not do Dbuf bw check for gens < 9 - triggers WARN
  as ddb_size is 0.

v10: - Lock global state for older gens as well.

v11: - Define new bw_calc_min_cdclk hook, instead of using
   a condition(Manasi Navare)

Signed-off-by: Stanislav Lisovskiy 
---
 drivers/gpu/drm/i915/display/intel_bw.c  | 118 ++-
 drivers/gpu/drm/i915/display/intel_bw.h  |  10 ++
 drivers/gpu/drm/i915/display/intel_cdclk.c   |  28 -
 drivers/gpu/drm/i915/display/intel_cdclk.h   |   1 -
 drivers/gpu/drm/i915/display/intel_display.c |  39 +-
 drivers/gpu/drm/i915/i915_drv.h  |   1 +
 drivers/gpu/drm/i915/intel_pm.c  |  31 -
 drivers/gpu/drm/i915/intel_pm.h  |   3 +
 8 files changed, 217 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_bw.c 
b/drivers/gpu/drm/i915/display/intel_bw.c
index 6e7cc3a4f1aa..e46bc9e626b1 100644
--- a/drivers/gpu/drm/i915/display/intel_bw.c
+++ b/drivers/gpu/drm/i915/display/intel_bw.c
@@ -6,8 +6,10 @@
 #include 
 
 #include "intel_bw.h"
+#include "intel_pm.h"
 #include "intel_display_types.h"
 #include "intel_sideband.h"
+#include "intel_cdclk.h"
 
 /* Parameters for Qclk Geyserville (QGV) */
 struct intel_qgv_point {
@@ -333,7 +335,6 @@ static unsigned int intel_bw_crtc_data_rate(const struct 
intel_crtc_state *crtc_
 
return data_rate;
 }
-
 void intel_bw_crtc_update(struct intel_bw_state *bw_state,
  const struct intel_crtc_state *crtc_state)
 {
@@ -410,6 +411,121 @@ intel_atomic_get_bw_state(struct intel_atomic_state 
*state)
return to_intel_bw_state(bw_state);
 }
 
+int skl_bw_calc_min_cdclk(struct intel_atomic_state *state)
+{
+   struct drm_i915_private *dev_priv = to_i915(state->base.dev);
+   int i;
+   const struct intel_crtc_state *crtc_state;
+   struct intel_crtc *crtc;
+   int max_bw = 0;
+   int slice_id;
+   struct intel_bw_state *new_bw_state = NULL;
+   struct intel_bw_state *old_bw_state = NULL;
+
+   for_each_new_intel_crtc_in_state(state, crtc, crtc_state, i) {
+   enum plane_id plane_id;
+   struct intel_dbuf_bw *crtc_bw;
+
+   new_bw_state = intel_atomic_get_bw_state(state);
+   if (IS_ERR(new_bw_state))
+   return PTR_ERR(new_bw_state);
+
+   crtc_bw = &new_bw_state->dbuf_bw[crtc->pipe];
+
+   memset(&crtc_bw->used_bw, 0, sizeof(crtc_bw->used_bw));
+
+   for_each_plane_id_on_crtc(crtc, plane_id) {
+   const struct skl_ddb_entry *plane_alloc =
+   &crtc_state->wm.skl.plane_ddb_y[plane_id];
+   const struct skl_ddb_entry *uv_plane_alloc =
+   

[Intel-gfx] [PATCH v9 7/7] drm/i915: Remove unneeded hack now for CDCLK

2020-05-19 Thread Stanislav Lisovskiy
No need to bump up CDCLK anymore, as it is now correctly
calculated, accounting for DBuf BW as BSpec says.

Reviewed-by: Manasi Navare 
Signed-off-by: Stanislav Lisovskiy 
---
 drivers/gpu/drm/i915/display/intel_cdclk.c | 12 
 1 file changed, 12 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_cdclk.c 
b/drivers/gpu/drm/i915/display/intel_cdclk.c
index 2e0217821bf5..6c7789cdc3ba 100644
--- a/drivers/gpu/drm/i915/display/intel_cdclk.c
+++ b/drivers/gpu/drm/i915/display/intel_cdclk.c
@@ -2070,18 +2070,6 @@ int intel_crtc_compute_min_cdclk(const struct 
intel_crtc_state *crtc_state)
/* Account for additional needs from the planes */
min_cdclk = max(intel_planes_min_cdclk(crtc_state), min_cdclk);
 
-   /*
-* HACK. Currently for TGL platforms we calculate
-* min_cdclk initially based on pixel_rate divided
-* by 2, accounting for also plane requirements,
-* however in some cases the lowest possible CDCLK
-* doesn't work and causing the underruns.
-* Explicitly stating here that this seems to be currently
-* rather a Hack, than final solution.
-*/
-   if (IS_TIGERLAKE(dev_priv))
-   min_cdclk = max(min_cdclk, (int)crtc_state->pixel_rate);
-
if (min_cdclk > dev_priv->max_cdclk_freq) {
drm_dbg_kms(&dev_priv->drm,
"required cdclk (%d kHz) exceeds max (%d kHz)\n",
-- 
2.24.1.485.gad05a3d8e5

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH v9 4/7] drm/i915: Plane configuration affects CDCLK in Gen11+

2020-05-19 Thread Stanislav Lisovskiy
From: Stanislav Lisovskiy 

The plane configuration affects CDCLK on Gen11+ as well, so let's account
for that by including Gen11+ in active_planes_affects_min_cdclk().

Reviewed-by: Manasi Navare 
Signed-off-by: Stanislav Lisovskiy 
---
 drivers/gpu/drm/i915/display/intel_display.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/display/intel_display.c 
b/drivers/gpu/drm/i915/display/intel_display.c
index a9ab66d97360..800ae3768841 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -14569,7 +14569,7 @@ static bool active_planes_affects_min_cdclk(struct 
drm_i915_private *dev_priv)
/* See {hsw,vlv,ivb}_plane_ratio() */
return IS_BROADWELL(dev_priv) || IS_HASWELL(dev_priv) ||
IS_CHERRYVIEW(dev_priv) || IS_VALLEYVIEW(dev_priv) ||
-   IS_IVYBRIDGE(dev_priv);
+   IS_IVYBRIDGE(dev_priv) || (INTEL_GEN(dev_priv) >= 11);
 }
 
 static int intel_atomic_check_planes(struct intel_atomic_state *state)
-- 
2.24.1.485.gad05a3d8e5

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [CI 3/3] drm/i915/gt: Incorporate the virtual engine into timeslicing

2020-05-19 Thread Chris Wilson
It was quite the oversight to only factor in the normal queue to decide
the timeslicing switch priority. By leaving out the next virtual request
from the priority decision, we would not timeslice the current engine if
there was an available virtual request.

Testcase: igt/gem_exec_balancer/sliced
Fixes: 3df2deed411e ("drm/i915/execlists: Enable timeslice on partial virtual 
engine dequeue")
Signed-off-by: Chris Wilson 
Cc: Mika Kuoppala 
Cc: Tvrtko Ursulin 
Reviewed-by: Tvrtko Ursulin 
---
 drivers/gpu/drm/i915/gt/intel_lrc.c | 30 +++--
 1 file changed, 24 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c 
b/drivers/gpu/drm/i915/gt/intel_lrc.c
index 7ee89d58258a..de5be57ed6d2 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -1895,7 +1895,8 @@ static void defer_active(struct intel_engine_cs *engine)
 
 static bool
 need_timeslice(const struct intel_engine_cs *engine,
-  const struct i915_request *rq)
+  const struct i915_request *rq,
+  const struct rb_node *rb)
 {
int hint;
 
@@ -1903,6 +1904,24 @@ need_timeslice(const struct intel_engine_cs *engine,
return false;
 
hint = engine->execlists.queue_priority_hint;
+
+   if (rb) {
+   const struct virtual_engine *ve =
+   rb_entry(rb, typeof(*ve), nodes[engine->id].rb);
+   const struct intel_engine_cs *inflight =
+   intel_context_inflight(&ve->context);
+
+   if (!inflight || inflight == engine) {
+   struct i915_request *next;
+
+   rcu_read_lock();
+   next = READ_ONCE(ve->request);
+   if (next)
+   hint = max(hint, rq_prio(next));
+   rcu_read_unlock();
+   }
+   }
+
if (!list_is_last(&rq->sched.link, &engine->active.requests))
hint = max(hint, rq_prio(list_next_entry(rq, sched.link)));
 
@@ -1977,10 +1996,9 @@ static void set_timeslice(struct intel_engine_cs *engine)
set_timer_ms(&engine->execlists.timer, duration);
 }
 
-static void start_timeslice(struct intel_engine_cs *engine)
+static void start_timeslice(struct intel_engine_cs *engine, int prio)
 {
struct intel_engine_execlists *execlists = &engine->execlists;
-   const int prio = queue_prio(execlists);
unsigned long duration;
 
if (!intel_engine_has_timeslices(engine))
@@ -2140,7 +2158,7 @@ static void execlists_dequeue(struct intel_engine_cs 
*engine)
__unwind_incomplete_requests(engine);
 
last = NULL;
-   } else if (need_timeslice(engine, last) &&
+   } else if (need_timeslice(engine, last, rb) &&
   timeslice_expired(execlists, last)) {
if (i915_request_completed(last)) {
tasklet_hi_schedule(&execlists->tasklet);
@@ -2188,7 +2206,7 @@ static void execlists_dequeue(struct intel_engine_cs 
*engine)
 * Even if ELSP[1] is occupied and not worthy
 * of timeslices, our queue might be.
 */
-   start_timeslice(engine);
+   start_timeslice(engine, queue_prio(execlists));
return;
}
}
@@ -2223,7 +2241,7 @@ static void execlists_dequeue(struct intel_engine_cs 
*engine)
 
if (last && !can_merge_rq(last, rq)) {
spin_unlock(&ve->base.active.lock);
-   start_timeslice(engine);
+   start_timeslice(engine, rq_prio(rq));
return; /* leave this for another sibling */
}
 
-- 
2.20.1

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [CI 2/3] drm/i915/gt: Kick virtual siblings on timeslice out

2020-05-19 Thread Chris Wilson
If we decide to timeslice out the current virtual request, we will
unsubmit it while it is still busy (ve->context.inflight == sibling[0]).
If the virtual tasklet and then the other sibling tasklets run before we
completely schedule out the active virtual request for the preemption,
those other tasklets will see that the virtual request is still inflight
on sibling[0] and leave it be. Therefore, when we finally schedule out
the virtual request, if we see that we have passed it back to the
virtual engine, reschedule the virtual tasklet so that it may be
resubmitted on any of the siblings.

Signed-off-by: Chris Wilson 
Reviewed-by: Tvrtko Ursulin 
---
 drivers/gpu/drm/i915/gt/intel_lrc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c 
b/drivers/gpu/drm/i915/gt/intel_lrc.c
index d7ef3f8640d2..7ee89d58258a 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -1402,7 +1402,7 @@ static void kick_siblings(struct i915_request *rq, struct 
intel_context *ce)
struct virtual_engine *ve = container_of(ce, typeof(*ve), context);
struct i915_request *next = READ_ONCE(ve->request);
 
-   if (next && next->execution_mask & ~rq->execution_mask)
+   if (next == rq || (next && next->execution_mask & ~rq->execution_mask))
tasklet_hi_schedule(&ve->base.execlists.tasklet);
 }
 
-- 
2.20.1

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [CI 1/3] drm/i915/selftests: Add tests for timeslicing virtual engines

2020-05-19 Thread Chris Wilson
Make sure that we can execute a virtual request on an already busy
engine, and conversely that we can execute a normal request if the
engines are already fully occupied by virtual requests.

Signed-off-by: Chris Wilson 
Reviewed-by: Tvrtko Ursulin 
---
 drivers/gpu/drm/i915/gt/selftest_lrc.c | 200 -
 1 file changed, 197 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c 
b/drivers/gpu/drm/i915/gt/selftest_lrc.c
index 94854a467e66..7ab0e804f73a 100644
--- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
+++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
@@ -3600,9 +3600,11 @@ static int nop_virtual_engine(struct intel_gt *gt,
return err;
 }
 
-static unsigned int select_siblings(struct intel_gt *gt,
-   unsigned int class,
-   struct intel_engine_cs **siblings)
+static unsigned int
+__select_siblings(struct intel_gt *gt,
+ unsigned int class,
+ struct intel_engine_cs **siblings,
+ bool (*filter)(const struct intel_engine_cs *))
 {
unsigned int n = 0;
unsigned int inst;
@@ -3611,12 +3613,23 @@ static unsigned int select_siblings(struct intel_gt *gt,
if (!gt->engine_class[class][inst])
continue;
 
+   if (filter && !filter(gt->engine_class[class][inst]))
+   continue;
+
siblings[n++] = gt->engine_class[class][inst];
}
 
return n;
 }
 
+static unsigned int
+select_siblings(struct intel_gt *gt,
+   unsigned int class,
+   struct intel_engine_cs **siblings)
+{
+   return __select_siblings(gt, class, siblings, NULL);
+}
+
 static int live_virtual_engine(void *arg)
 {
struct intel_gt *gt = arg;
@@ -3771,6 +3784,186 @@ static int live_virtual_mask(void *arg)
return 0;
 }
 
+static long slice_timeout(struct intel_engine_cs *engine)
+{
+   long timeout;
+
+   /* Enough time for a timeslice to kick in, and kick out */
+   timeout = 2 * msecs_to_jiffies_timeout(timeslice(engine));
+
+   /* Enough time for the nop request to complete */
+   timeout += HZ / 5;
+
+   return timeout;
+}
+
+static int slicein_virtual_engine(struct intel_gt *gt,
+ struct intel_engine_cs **siblings,
+ unsigned int nsibling)
+{
+   const long timeout = slice_timeout(siblings[0]);
+   struct intel_context *ce;
+   struct i915_request *rq;
+   struct igt_spinner spin;
+   unsigned int n;
+   int err = 0;
+
+   /*
+* Virtual requests must take part in timeslicing on the target engines.
+*/
+
+   if (igt_spinner_init(&spin, gt))
+   return -ENOMEM;
+
+   for (n = 0; n < nsibling; n++) {
+   ce = intel_context_create(siblings[n]);
+   if (IS_ERR(ce)) {
+   err = PTR_ERR(ce);
+   goto out;
+   }
+
+   rq = igt_spinner_create_request(&spin, ce, MI_ARB_CHECK);
+   intel_context_put(ce);
+   if (IS_ERR(rq)) {
+   err = PTR_ERR(rq);
+   goto out;
+   }
+
+   i915_request_add(rq);
+   }
+
+   ce = intel_execlists_create_virtual(siblings, nsibling);
+   if (IS_ERR(ce)) {
+   err = PTR_ERR(ce);
+   goto out;
+   }
+
+   rq = intel_context_create_request(ce);
+   intel_context_put(ce);
+   if (IS_ERR(rq)) {
+   err = PTR_ERR(rq);
+   goto out;
+   }
+
+   i915_request_get(rq);
+   i915_request_add(rq);
+   if (i915_request_wait(rq, 0, timeout) < 0) {
+   GEM_TRACE_ERR("%s(%s) failed to slice in virtual request\n",
+ __func__, rq->engine->name);
+   GEM_TRACE_DUMP();
+   intel_gt_set_wedged(gt);
+   err = -EIO;
+   }
+   i915_request_put(rq);
+
+out:
+   igt_spinner_end(&spin);
+   if (igt_flush_test(gt->i915))
+   err = -EIO;
+   igt_spinner_fini(&spin);
+   return err;
+}
+
+static int sliceout_virtual_engine(struct intel_gt *gt,
+  struct intel_engine_cs **siblings,
+  unsigned int nsibling)
+{
+   const long timeout = slice_timeout(siblings[0]);
+   struct intel_context *ce;
+   struct i915_request *rq;
+   struct igt_spinner spin;
+   unsigned int n;
+   int err = 0;
+
+   /*
+* Virtual requests must allow others a fair timeslice.
+*/
+
+   if (igt_spinner_init(&spin, gt))
+   return -ENOMEM;
+
+   /* XXX We do not handle oversubscription and fairness with normal rq */
+   for (n = 0; n < nsibling; n++) {
+   ce = intel_execlists_create_virtual(siblings, nsibling);
+

[Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915/selftests: Measure dispatch latency (rev9)

2020-05-19 Thread Patchwork
== Series Details ==

Series: drm/i915/selftests: Measure dispatch latency (rev9)
URL   : https://patchwork.freedesktop.org/series/77308/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_8502 -> Patchwork_17709


Summary
---

  **SUCCESS**

  No regressions found.

  External URL: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17709/index.html

Known issues


  Here are the changes found in Patchwork_17709 that come from known issues:

### IGT changes ###

 Possible fixes 

  * igt@i915_selftest@live@execlists:
- fi-cfl-guc: [INCOMPLETE][1] ([i915#656]) -> [PASS][2]
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8502/fi-cfl-guc/igt@i915_selftest@l...@execlists.html
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17709/fi-cfl-guc/igt@i915_selftest@l...@execlists.html

  
  [i915#656]: https://gitlab.freedesktop.org/drm/intel/issues/656


Participating hosts (50 -> 43)
--

  Missing(7): fi-ilk-m540 fi-hsw-4200u fi-byt-squawks fi-bsw-cyan 
fi-ctg-p8600 fi-kbl-7560u fi-byt-clapper 


Build changes
-

  * Linux: CI_DRM_8502 -> Patchwork_17709

  CI-20190529: 20190529
  CI_DRM_8502: 5bafb3de802a8dd663009250b587c1a78ad298c9 @ 
git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5659: 66ab5e42811fee3dea8c21ab29e70e323a0650de @ 
git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_17709: a072e6b6f5c787b3051f16f09afc6cd411b99af9 @ 
git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

a072e6b6f5c7 drm/i915/selftests: Measure dispatch latency

== Logs ==

For more details see: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17709/index.html
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH] dma-fence: add might_sleep annotation to _wait()

2020-05-19 Thread Daniel Vetter
Do it unconditionally; there's a separate peek function,
dma_fence_is_signaled(), which can be called from atomic context.
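
A sketch of what the annotation catches (hypothetical caller, illustration
only):

/*
 * With CONFIG_DEBUG_ATOMIC_SLEEP this now warns immediately, instead of
 * only misbehaving when the fence happens to be unsignaled:
 */
static void bad_wait_under_lock(struct dma_fence *fence, spinlock_t *lock)
{
        spin_lock(lock);
        dma_fence_wait(fence, false);   /* may sleep -> might_sleep() splat */
        spin_unlock(lock);
}

/* The atomic-safe alternative remains the non-blocking peek: */
static bool peek_under_lock(struct dma_fence *fence)
{
        return dma_fence_is_signaled(fence);
}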

v2: Consensus calls for an unconditional might_sleep (Chris,
Christian)

Full audit:
- dma-fence.h: Uses MAX_SCHEDULE_TIMEOUT, good chance this sleeps
- dma-resv.c: Timeout always at least 1
- st-dma-fence.c: Safe to sleep in testcases
- amdgpu_cs.c: Both callers are for variants of the wait ioctl
- amdgpu_device.c: Two callers in vram recover code, both right next
  to mutex_lock.
- amdgpu_vm.c: Use in the vm_wait ioctl, next to _reserve/unreserve
- remaining functions in amdgpu: All for test_ib implementations for
  various engines, caller for that looks all safe (debugfs, driver
  load, reset)
- etnaviv: another wait ioctl
- habanalabs: another wait ioctl
- nouveau_fence.c: hardcoded 15*HZ ... glorious
- nouveau_gem.c: hardcoded 2*HZ ... so not even super consistent, but
  this one does have a WARN_ON :-/ At least this one is only a
  fallback path for when kmalloc fails. Maybe this should be put onto
  some worker list instead of a work per unmap ...
- i915/selftests: Hardcoded HZ / 4 or HZ / 8
- i915/gt/selftests: Going up the callchain looks safe looking at
  nearby callers
- i915/gt/intel_gt_requests.c. Wrapped in a mutex_lock
- i915/gem_i915_gem_wait.c: The i915-version which is called instead
  for i915 fences already has a might_sleep() annotation, so all good

Cc: Alex Deucher 
Cc: Lucas Stach 
Cc: Jani Nikula 
Cc: Joonas Lahtinen 
Cc: Rodrigo Vivi 
Cc: Ben Skeggs 
Cc: "VMware Graphics" 
Cc: Oded Gabbay 
Cc: linux-me...@vger.kernel.org
Cc: linaro-mm-...@lists.linaro.org
Cc: linux-r...@vger.kernel.org
Cc: amd-...@lists.freedesktop.org
Cc: intel-gfx@lists.freedesktop.org
Cc: Chris Wilson 
Cc: Maarten Lankhorst 
Cc: Christian König 
Signed-off-by: Daniel Vetter 
---
 drivers/dma-buf/dma-fence.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
index 90edf2b281b0..656e9ac2d028 100644
--- a/drivers/dma-buf/dma-fence.c
+++ b/drivers/dma-buf/dma-fence.c
@@ -208,6 +208,8 @@ dma_fence_wait_timeout(struct dma_fence *fence, bool intr, 
signed long timeout)
if (WARN_ON(timeout < 0))
return -EINVAL;
 
+   might_sleep();
+
trace_dma_fence_wait_start(fence);
if (fence->ops->wait)
ret = fence->ops->wait(fence, intr, timeout);
-- 
2.26.2

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH 01/12] drm/i915: Don't set queue-priority hint when supressing the reschedule

2020-05-19 Thread Mika Kuoppala
Chris Wilson  writes:

s/supressing/suppressing

> We recorded the execlists->queue_priority_hint update for the inflight
> request without kicking the tasklet. The next submitted request then
> failed to be scheduled as it had a lower priority than the hint, leaving
> the HW runnning with only the inflight request.

s/nnn/nn

Reviewed-by: Mika Kuoppala 

>
> Fixes: 6cebcf746f3f ("drm/i915: Tweak scheduler's kick_submission()")
> Signed-off-by: Chris Wilson 
> ---
>  drivers/gpu/drm/i915/i915_scheduler.c | 16 
>  1 file changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_scheduler.c 
> b/drivers/gpu/drm/i915/i915_scheduler.c
> index f4ea318781f0..cbb880b10c65 100644
> --- a/drivers/gpu/drm/i915/i915_scheduler.c
> +++ b/drivers/gpu/drm/i915/i915_scheduler.c
> @@ -209,14 +209,6 @@ static void kick_submission(struct intel_engine_cs 
> *engine,
>   if (!inflight)
>   goto unlock;
>  
> - ENGINE_TRACE(engine,
> -  "bumping queue-priority-hint:%d for rq:%llx:%lld, 
> inflight:%llx:%lld prio %d\n",
> -  prio,
> -  rq->fence.context, rq->fence.seqno,
> -  inflight->fence.context, inflight->fence.seqno,
> -  inflight->sched.attr.priority);
> - engine->execlists.queue_priority_hint = prio;
> -
>   /*
>* If we are already the currently executing context, don't
>* bother evaluating if we should preempt ourselves.
> @@ -224,6 +216,14 @@ static void kick_submission(struct intel_engine_cs 
> *engine,
>   if (inflight->context == rq->context)
>   goto unlock;
>  
> + ENGINE_TRACE(engine,
> +  "bumping queue-priority-hint:%d for rq:%llx:%lld, 
> inflight:%llx:%lld prio %d\n",
> +  prio,
> +  rq->fence.context, rq->fence.seqno,
> +  inflight->fence.context, inflight->fence.seqno,
> +  inflight->sched.attr.priority);
> +
> + engine->execlists.queue_priority_hint = prio;
>   if (need_preempt(prio, rq_prio(inflight)))
>   tasklet_hi_schedule(&engine->execlists.tasklet);
>  
> -- 
> 2.20.1
>
> ___
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH v2 3/9] drm/i915/display/sdvo: Prefer drm_WARN* over WARN*

2020-05-19 Thread Jani Nikula
On Fri, 08 May 2020, "Laxminarayan Bharadiya, Pankaj"   
 wrote:
>> -Original Message-
>> From: Jani Nikula 
>> Sent: 08 May 2020 12:19
>> To: Laxminarayan Bharadiya, Pankaj
>> ; dan...@ffwll.ch; intel-
>> g...@lists.freedesktop.org; dri-de...@lists.freedesktop.org; Joonas Lahtinen
>> ; Vivi, Rodrigo ;
>> David Airlie ; Ville Syrjälä 
>> ; Chris
>> Wilson ; Deak, Imre ;
>> Maarten Lankhorst ; Laxminarayan
>> Bharadiya, Pankaj 
>> Subject: Re: [PATCH v2 3/9] drm/i915/display/sdvo: Prefer drm_WARN* over
>> WARN*
>> 
>> On Mon, 04 May 2020, Pankaj Bharadiya
>>  wrote:
>> > struct drm_device specific drm_WARN* macros include device information
>> > in the backtrace, so we know what device the warnings originate from.
>> >
>> > Prefer drm_WARN* over WARN* calls.
>> >
>> > changes since v1:
>> > - Added dev_priv local variable and used it in drm_WARN* calls (Jani)
>> 
>> In the earlier patches you're adding i915 local variable, here it's 
>> dev_priv. We're
>> gradually transitioning from dev_priv to i915, so I'm not thrilled about 
>> adding
>> new dev_priv.
>
> dev_priv name is being used throughout the file. So to be consistent with 
> rest of the
> code, I used dev_priv variable in this specific file. 
>
> Shall I rename it to i915?
>
> I used i915 or dev_priv  variable name based on what variable name being
> already used for struct drm_i915_private pointer in a given file.

I understand your reasoning. However with i915 I've preferred to switch
when possible.

Regardless, pushed the series now. Thanks for the patches, and sorry for
the delay.

BR,
Jani.



>
> Thanks,
> Pankaj
>
>> 
>> BR,
>> Jani.
>> 
>> 
>> 
>> >
>> > Signed-off-by: Pankaj Bharadiya
>> > 
>> > ---
>> >  drivers/gpu/drm/i915/display/intel_sdvo.c | 21 ++---
>> >  1 file changed, 14 insertions(+), 7 deletions(-)
>> >
>> > diff --git a/drivers/gpu/drm/i915/display/intel_sdvo.c
>> > b/drivers/gpu/drm/i915/display/intel_sdvo.c
>> > index bc6c26818e15..773523dcd107 100644
>> > --- a/drivers/gpu/drm/i915/display/intel_sdvo.c
>> > +++ b/drivers/gpu/drm/i915/display/intel_sdvo.c
>> > @@ -411,6 +411,7 @@ static const char *sdvo_cmd_name(u8 cmd)  static
>> > void intel_sdvo_debug_write(struct intel_sdvo *intel_sdvo, u8 cmd,
>> >   const void *args, int args_len)  {
>> > +  struct drm_i915_private *dev_priv =
>> > +to_i915(intel_sdvo->base.base.dev);
>> >const char *cmd_name;
>> >int i, pos = 0;
>> >char buffer[64];
>> > @@ -431,7 +432,7 @@ static void intel_sdvo_debug_write(struct intel_sdvo
>> *intel_sdvo, u8 cmd,
>> >else
>> >BUF_PRINT("(%02X)", cmd);
>> >
>> > -  WARN_ON(pos >= sizeof(buffer) - 1);
>> > +  drm_WARN_ON(&dev_priv->drm, pos >= sizeof(buffer) - 1);
>> >  #undef BUF_PRINT
>> >
>> >DRM_DEBUG_KMS("%s: W: %02X %s\n", SDVO_NAME(intel_sdvo), cmd,
>> > buffer); @@ -533,6 +534,7 @@ static bool intel_sdvo_write_cmd(struct
>> > intel_sdvo *intel_sdvo, u8 cmd,  static bool 
>> > intel_sdvo_read_response(struct
>> intel_sdvo *intel_sdvo,
>> > void *response, int response_len)  {
>> > +  struct drm_i915_private *dev_priv =
>> > +to_i915(intel_sdvo->base.base.dev);
>> >const char *cmd_status;
>> >u8 retry = 15; /* 5 quick checks, followed by 10 long checks */
>> >u8 status;
>> > @@ -597,7 +599,7 @@ static bool intel_sdvo_read_response(struct
>> intel_sdvo *intel_sdvo,
>> >BUF_PRINT(" %02X", ((u8 *)response)[i]);
>> >}
>> >
>> > -  WARN_ON(pos >= sizeof(buffer) - 1);
>> > +  drm_WARN_ON(&dev_priv->drm, pos >= sizeof(buffer) - 1);
>> >  #undef BUF_PRINT
>> >
>> >DRM_DEBUG_KMS("%s: R: %s\n", SDVO_NAME(intel_sdvo), buffer);
>> @@
>> > -1081,6 +1083,7 @@ static bool intel_sdvo_compute_avi_infoframe(struct
>> intel_sdvo *intel_sdvo,
>> > struct intel_crtc_state 
>> > *crtc_state,
>> > struct drm_connector_state
>> *conn_state)  {
>> > +  struct drm_i915_private *dev_priv =
>> > +to_i915(intel_sdvo->base.base.dev);
>> >struct hdmi_avi_infoframe *frame = &crtc_state->infoframes.avi.avi;
>> >const struct drm_display_mode *adjusted_mode =
>> >&crtc_state->hw.adjusted_mode;
>> > @@ -1106,7 +1109,7 @@ static bool
>> intel_sdvo_compute_avi_infoframe(struct intel_sdvo *intel_sdvo,
>> >
>> HDMI_QUANTIZATION_RANGE_FULL);
>> >
>> >ret = hdmi_avi_infoframe_check(frame);
>> > -  if (WARN_ON(ret))
>> > +  if (drm_WARN_ON(&dev_priv->drm, ret))
>> >return false;
>> >
>> >return true;
>> > @@ -1115,6 +1118,7 @@ static bool
>> > intel_sdvo_compute_avi_infoframe(struct intel_sdvo *intel_sdvo,  static 
>> > bool
>> intel_sdvo_set_avi_infoframe(struct intel_sdvo *intel_sdvo,
>> > const struct intel_crtc_state
>> *crtc_state)  {
>> > +  struct drm_i915_private *dev_priv =
>> > +to_i915(intel_sdvo->base.base.dev);
>> >u8 sdvo_data[HDMI_INFOFRAME_SIZE(AVI)];
>> >const un

Re: [Intel-gfx] [PATCH 03/12] drm/i915/selftests: Restore to default heartbeat

2020-05-19 Thread Mika Kuoppala
Chris Wilson  writes:

> Since we temporarily disable the heartbeat and restore back to the
> default value, we can use the stored defaults on the engine and avoid
> using a local.
>
> Signed-off-by: Chris Wilson 
> ---

Reviewed-by: Mika Kuoppala 

>  drivers/gpu/drm/i915/gt/selftest_hangcheck.c | 25 +++
>  drivers/gpu/drm/i915/gt/selftest_lrc.c   | 67 +++
>  drivers/gpu/drm/i915/gt/selftest_rps.c   | 69 
>  drivers/gpu/drm/i915/gt/selftest_timeline.c  | 15 ++---
>  4 files changed, 67 insertions(+), 109 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c 
> b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
> index 2b2efff6e19d..4aa4cc917d8b 100644
> --- a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
> +++ b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
> @@ -310,22 +310,20 @@ static bool wait_until_running(struct hang *h, struct 
> i915_request *rq)
> 1000));
>  }
>  
> -static void engine_heartbeat_disable(struct intel_engine_cs *engine,
> -  unsigned long *saved)
> +static void engine_heartbeat_disable(struct intel_engine_cs *engine)
>  {
> - *saved = engine->props.heartbeat_interval_ms;
>   engine->props.heartbeat_interval_ms = 0;
>  
>   intel_engine_pm_get(engine);
>   intel_engine_park_heartbeat(engine);
>  }
>  
> -static void engine_heartbeat_enable(struct intel_engine_cs *engine,
> - unsigned long saved)
> +static void engine_heartbeat_enable(struct intel_engine_cs *engine)
>  {
>   intel_engine_pm_put(engine);
>  
> - engine->props.heartbeat_interval_ms = saved;
> + engine->props.heartbeat_interval_ms =
> + engine->defaults.heartbeat_interval_ms;
>  }
>  
>  static int igt_hang_sanitycheck(void *arg)
> @@ -473,7 +471,6 @@ static int igt_reset_nop_engine(void *arg)
>   for_each_engine(engine, gt, id) {
>   unsigned int reset_count, reset_engine_count, count;
>   struct intel_context *ce;
> - unsigned long heartbeat;
>   IGT_TIMEOUT(end_time);
>   int err;
>  
> @@ -485,7 +482,7 @@ static int igt_reset_nop_engine(void *arg)
>   reset_engine_count = i915_reset_engine_count(global, engine);
>   count = 0;
>  
> - engine_heartbeat_disable(engine, &heartbeat);
> + engine_heartbeat_disable(engine);
>   set_bit(I915_RESET_ENGINE + id, >->reset.flags);
>   do {
>   int i;
> @@ -529,7 +526,7 @@ static int igt_reset_nop_engine(void *arg)
>   }
>   } while (time_before(jiffies, end_time));
>   clear_bit(I915_RESET_ENGINE + id, >->reset.flags);
> - engine_heartbeat_enable(engine, heartbeat);
> + engine_heartbeat_enable(engine);
>  
>   pr_info("%s(%s): %d resets\n", __func__, engine->name, count);
>  
> @@ -564,7 +561,6 @@ static int __igt_reset_engine(struct intel_gt *gt, bool 
> active)
>  
>   for_each_engine(engine, gt, id) {
>   unsigned int reset_count, reset_engine_count;
> - unsigned long heartbeat;
>   IGT_TIMEOUT(end_time);
>  
>   if (active && !intel_engine_can_store_dword(engine))
> @@ -580,7 +576,7 @@ static int __igt_reset_engine(struct intel_gt *gt, bool 
> active)
>   reset_count = i915_reset_count(global);
>   reset_engine_count = i915_reset_engine_count(global, engine);
>  
> - engine_heartbeat_disable(engine, &heartbeat);
> + engine_heartbeat_disable(engine);
>   set_bit(I915_RESET_ENGINE + id, >->reset.flags);
>   do {
>   if (active) {
> @@ -632,7 +628,7 @@ static int __igt_reset_engine(struct intel_gt *gt, bool 
> active)
>   }
>   } while (time_before(jiffies, end_time));
>   clear_bit(I915_RESET_ENGINE + id, >->reset.flags);
> - engine_heartbeat_enable(engine, heartbeat);
> + engine_heartbeat_enable(engine);
>  
>   if (err)
>   break;
> @@ -789,7 +785,6 @@ static int __igt_reset_engines(struct intel_gt *gt,
>   struct active_engine threads[I915_NUM_ENGINES] = {};
>   unsigned long device = i915_reset_count(global);
>   unsigned long count = 0, reported;
> - unsigned long heartbeat;
>   IGT_TIMEOUT(end_time);
>  
>   if (flags & TEST_ACTIVE &&
> @@ -832,7 +827,7 @@ static int __igt_reset_engines(struct intel_gt *gt,
>  
>   yield(); /* start all threads before we begin */
>  
> - engine_heartbeat_disable(engine, &heartbeat);
> + engine_heartbeat_disable(engine);
>   set_bit(I915_RESET_ENGINE + id, >->reset.flags);
>   do {
>   struct i915_request *rq = NULL;
> @@ -906,7 +

Re: [Intel-gfx] [PATCH 02/12] drm/i915/selftests: Change priority overflow detection

2020-05-19 Thread Mika Kuoppala
Chris Wilson  writes:

> Check for integer overflow in the priority chain, rather than against a
> type-constricted max-priority check.
>
> Signed-off-by: Chris Wilson 

Reviewed-by: Mika Kuoppala 

> ---
>  drivers/gpu/drm/i915/gt/selftest_lrc.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c 
> b/drivers/gpu/drm/i915/gt/selftest_lrc.c
> index 94854a467e66..3e042fa4b94b 100644
> --- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
> +++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
> @@ -2735,12 +2735,12 @@ static int live_preempt_gang(void *arg)
>   /* Submit each spinner at increasing priority */
>   engine->schedule(rq, &attr);
>  
> + if (prio < attr.priority)
> + break;
> +
>   if (prio <= I915_PRIORITY_MAX)
>   continue;
>  
> - if (prio > (INT_MAX >> I915_USER_PRIORITY_SHIFT))
> - break;
> -
>   if (__igt_timeout(end_time, NULL))
>   break;
>   } while (1);
> -- 
> 2.20.1
>
> ___
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH v2 3/9] drm/i915/display/sdvo: Prefer drm_WARN* over WARN*

2020-05-19 Thread Laxminarayan Bharadiya, Pankaj


> -Original Message-
> From: Jani Nikula 
> Sent: 19 May 2020 19:12
> To: Laxminarayan Bharadiya, Pankaj
> ; dan...@ffwll.ch; intel-
> g...@lists.freedesktop.org; dri-de...@lists.freedesktop.org; Joonas Lahtinen
> ; Vivi, Rodrigo ;
> David Airlie ; Ville Syrjälä 
> ; Chris
> Wilson ; Deak, Imre ;
> Maarten Lankhorst 
> Subject: RE: [PATCH v2 3/9] drm/i915/display/sdvo: Prefer drm_WARN* over
> WARN*
> 
> On Fri, 08 May 2020, "Laxminarayan Bharadiya, Pankaj"
>wrote:
> >> -Original Message-
> >> From: Jani Nikula 
> >> Sent: 08 May 2020 12:19
> >> To: Laxminarayan Bharadiya, Pankaj
> >> ; dan...@ffwll.ch; intel-
> >> g...@lists.freedesktop.org; dri-de...@lists.freedesktop.org; Joonas
> >> Lahtinen ; Vivi, Rodrigo
> >> ; David Airlie ; Ville
> >> Syrjälä ; Chris Wilson
> >> ; Deak, Imre ; Maarten
> >> Lankhorst ; Laxminarayan
> >> Bharadiya, Pankaj 
> >> Subject: Re: [PATCH v2 3/9] drm/i915/display/sdvo: Prefer drm_WARN*
> >> over
> >> WARN*
> >>
> >> On Mon, 04 May 2020, Pankaj Bharadiya
> >>  wrote:
> >> > struct drm_device specific drm_WARN* macros include device
> >> > information in the backtrace, so we know what device the warnings
> originate from.
> >> >
> >> > Prefer drm_WARN* over WARN* calls.
> >> >
> >> > changes since v1:
> >> > - Added dev_priv local variable and used it in drm_WARN* calls
> >> > (Jani)
> >>
> >> In the earlier patches you're adding i915 local variable, here it's
> >> dev_priv. We're gradually transitioning from dev_priv to i915, so I'm
> >> not thrilled about adding new dev_priv.
> >
> > dev_priv name is being used throughout the file. So to be consistent
> > with rest of the code, I used dev_priv variable in this specific file.
> >
> > Shall I rename it to i915?
> >
> > I used i915 or dev_priv  variable name based on what variable name
> > being already used for struct drm_i915_private pointer in a given file.
> 
> I understand your reasoning. However with i915 I've preferred to switch when
> possible.
> 
> Regardless, pushed the series now. Thanks for the patches, and sorry for the
> delay.

Thank you Jani.
Will you please  review - https://patchwork.freedesktop.org/series/75265/#rev2 

Thanks,
Pankaj

> 
> BR,
> Jani.
> 
> 
> 
> >
> > Thanks,
> > Pankaj
> >
> >>
> >> BR,
> >> Jani.
> >>
> >>
> >>
> >> >
> >> > Signed-off-by: Pankaj Bharadiya
> >> > 
> >> > ---
> >> >  drivers/gpu/drm/i915/display/intel_sdvo.c | 21
> >> > ++---
> >> >  1 file changed, 14 insertions(+), 7 deletions(-)
> >> >
> >> > diff --git a/drivers/gpu/drm/i915/display/intel_sdvo.c
> >> > b/drivers/gpu/drm/i915/display/intel_sdvo.c
> >> > index bc6c26818e15..773523dcd107 100644
> >> > --- a/drivers/gpu/drm/i915/display/intel_sdvo.c
> >> > +++ b/drivers/gpu/drm/i915/display/intel_sdvo.c
> >> > @@ -411,6 +411,7 @@ static const char *sdvo_cmd_name(u8 cmd)
> >> > static void intel_sdvo_debug_write(struct intel_sdvo *intel_sdvo, u8 cmd,
> >> > const void *args, int args_len)  {
> >> > +struct drm_i915_private *dev_priv =
> >> > +to_i915(intel_sdvo->base.base.dev);
> >> >  const char *cmd_name;
> >> >  int i, pos = 0;
> >> >  char buffer[64];
> >> > @@ -431,7 +432,7 @@ static void intel_sdvo_debug_write(struct
> >> > intel_sdvo
> >> *intel_sdvo, u8 cmd,
> >> >  else
> >> >  BUF_PRINT("(%02X)", cmd);
> >> >
> >> > -WARN_ON(pos >= sizeof(buffer) - 1);
> >> > +drm_WARN_ON(&dev_priv->drm, pos >= sizeof(buffer) - 1);
> >> >  #undef BUF_PRINT
> >> >
> >> >  DRM_DEBUG_KMS("%s: W: %02X %s\n", SDVO_NAME(intel_sdvo), cmd,
> >> > buffer); @@ -533,6 +534,7 @@ static bool
> >> > intel_sdvo_write_cmd(struct intel_sdvo *intel_sdvo, u8 cmd,  static
> >> > bool intel_sdvo_read_response(struct
> >> intel_sdvo *intel_sdvo,
> >> >   void *response, int response_len)  
> >> > {
> >> > +struct drm_i915_private *dev_priv =
> >> > +to_i915(intel_sdvo->base.base.dev);
> >> >  const char *cmd_status;
> >> >  u8 retry = 15; /* 5 quick checks, followed by 10 long checks */
> >> >  u8 status;
> >> > @@ -597,7 +599,7 @@ static bool intel_sdvo_read_response(struct
> >> intel_sdvo *intel_sdvo,
> >> >  BUF_PRINT(" %02X", ((u8 *)response)[i]);
> >> >  }
> >> >
> >> > -WARN_ON(pos >= sizeof(buffer) - 1);
> >> > +drm_WARN_ON(&dev_priv->drm, pos >= sizeof(buffer) - 1);
> >> >  #undef BUF_PRINT
> >> >
> >> >  DRM_DEBUG_KMS("%s: R: %s\n", SDVO_NAME(intel_sdvo), buffer);
> >> @@
> >> > -1081,6 +1083,7 @@ static bool
> >> > intel_sdvo_compute_avi_infoframe(struct
> >> intel_sdvo *intel_sdvo,
> >> >   struct intel_crtc_state 
> >> > *crtc_state,
> >> >   struct drm_connector_state
> >> *conn_state)  {
> >> > +struct drm_i915_private *dev_priv =
> >> > +to_i915(intel_sdvo->base.base.dev);
> >

Re: [Intel-gfx] [PATCH] drm/i915/ehl: Extend w/a 14010685332 to JSP/MCC

2020-05-19 Thread kbuild test robot
Hi Swathi,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on drm-intel/for-linux-next]
[also build test WARNING on drm-tip/drm-tip next-20200518]
[cannot apply to v5.7-rc6]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url:
https://github.com/0day-ci/linux/commits/Swathi-Dhanavanthri/drm-i915-ehl-Extend-w-a-14010685332-to-JSP-MCC/20200519-184947
base:   git://anongit.freedesktop.org/drm-intel for-linux-next
config: x86_64-allyesconfig (attached as .config)
compiler: clang version 11.0.0 (https://github.com/llvm/llvm-project 
135b877874fae96b4372c8a3fbfaa8ff44ff86e3)
reproduce:
wget 
https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O 
~/bin/make.cross
chmod +x ~/bin/make.cross
# install x86_64 cross compiling tool for clang build
# apt-get install binutils-x86-64-linux-gnu
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=x86_64 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kbuild test robot 

All warnings (new ones prefixed by >>, old ones prefixed by <<):

>> drivers/gpu/drm/i915/i915_irq.c:2906:42: warning: converting the enum 
>> constant to a boolean [-Wint-in-bool-context]
if (INTEL_PCH_TYPE(dev_priv) == PCH_ICP || PCH_JSP || PCH_MCC) {
^
drivers/gpu/drm/i915/i915_irq.c:2906:53: warning: converting the enum constant 
to a boolean [-Wint-in-bool-context]
if (INTEL_PCH_TYPE(dev_priv) == PCH_ICP || PCH_JSP || PCH_MCC) {
^
2 warnings generated.

vim +2906 drivers/gpu/drm/i915/i915_irq.c

  2867  
  2868  static void gen11_display_irq_reset(struct drm_i915_private *dev_priv)
  2869  {
  2870  struct intel_uncore *uncore = &dev_priv->uncore;
  2871  enum pipe pipe;
  2872  
  2873  intel_uncore_write(uncore, GEN11_DISPLAY_INT_CTL, 0);
  2874  
  2875  if (INTEL_GEN(dev_priv) >= 12) {
  2876  enum transcoder trans;
  2877  
  2878  for (trans = TRANSCODER_A; trans <= TRANSCODER_D; 
trans++) {
  2879  enum intel_display_power_domain domain;
  2880  
  2881  domain = POWER_DOMAIN_TRANSCODER(trans);
  2882  if (!intel_display_power_is_enabled(dev_priv, 
domain))
  2883  continue;
  2884  
  2885  intel_uncore_write(uncore, 
TRANS_PSR_IMR(trans), 0x);
  2886  intel_uncore_write(uncore, 
TRANS_PSR_IIR(trans), 0x);
  2887  }
  2888  } else {
  2889  intel_uncore_write(uncore, EDP_PSR_IMR, 0x);
  2890  intel_uncore_write(uncore, EDP_PSR_IIR, 0x);
  2891  }
  2892  
  2893  for_each_pipe(dev_priv, pipe)
  2894  if (intel_display_power_is_enabled(dev_priv,
  2895 
POWER_DOMAIN_PIPE(pipe)))
  2896  GEN8_IRQ_RESET_NDX(uncore, DE_PIPE, pipe);
  2897  
  2898  GEN3_IRQ_RESET(uncore, GEN8_DE_PORT_);
  2899  GEN3_IRQ_RESET(uncore, GEN8_DE_MISC_);
  2900  GEN3_IRQ_RESET(uncore, GEN11_DE_HPD_);
  2901  
  2902  if (INTEL_PCH_TYPE(dev_priv) >= PCH_ICP)
  2903  GEN3_IRQ_RESET(uncore, SDE);
  2904  
  2905  /* Wa_14010685332:icl,jsl,ehl */
> 2906  if (INTEL_PCH_TYPE(dev_priv) == PCH_ICP || PCH_JSP || PCH_MCC) {
  2907  intel_uncore_rmw(uncore, SOUTH_CHICKEN1,
  2908   SBCLK_RUN_REFCLK_DIS, 
SBCLK_RUN_REFCLK_DIS);
  2909  intel_uncore_rmw(uncore, SOUTH_CHICKEN1,
  2910   SBCLK_RUN_REFCLK_DIS, 0);
  2911  }
  2912  }
  2913  
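
The warning fires because PCH_JSP and PCH_MCC are bare enum constants that
always evaluate to true; they are never compared against
INTEL_PCH_TYPE(dev_priv). A minimal corrected sketch, using only the
identifiers from the listing above (the v2 posted later in this archive uses
an equivalent PCH_ICP..PCH_MCC range check instead):

	if (INTEL_PCH_TYPE(dev_priv) == PCH_ICP ||
	    INTEL_PCH_TYPE(dev_priv) == PCH_JSP ||
	    INTEL_PCH_TYPE(dev_priv) == PCH_MCC) {
		intel_uncore_rmw(uncore, SOUTH_CHICKEN1,
				 SBCLK_RUN_REFCLK_DIS, SBCLK_RUN_REFCLK_DIS);
		intel_uncore_rmw(uncore, SOUTH_CHICKEN1,
				 SBCLK_RUN_REFCLK_DIS, 0);
	}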

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-...@lists.01.org


.config.gz
Description: application/gzip
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH 04/12] drm/i915/selftests: Check for an initial-breadcrumb in wait_for_submit()

2020-05-19 Thread Mika Kuoppala
Chris Wilson  writes:

> When we look at i915_request_is_started() we must be careful in case we
> are using a request that does not have the initial-breadcrumb and
> instead the is-started is being compared against the end of the previous
> request. This will make wait_for_submit() declare that a request has
> already been submitted too early.


submitted ... started ... handled_by_gpu.

I guess wait_for_submit() is generic enough to cater for all cases,
but we do not actually wait for the submit; we wait for the
gpu to reach it.

>
> Signed-off-by: Chris Wilson 

Reviewed-by: Mika Kuoppala 

> ---
>  drivers/gpu/drm/i915/gt/selftest_lrc.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c 
> b/drivers/gpu/drm/i915/gt/selftest_lrc.c
> index b71f04db9c6e..f6949cd55e92 100644
> --- a/drivers/gpu/drm/i915/gt/selftest_lrc.c
> +++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c
> @@ -75,7 +75,7 @@ static bool is_active(struct i915_request *rq)
>   if (i915_request_on_hold(rq))
>   return true;
>  
> - if (i915_request_started(rq))
> + if (i915_request_has_initial_breadcrumb(rq) && i915_request_started(rq))
>   return true;
>  
>   return false;
> -- 
> 2.20.1
>
> ___
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH] drm/i915: Neuter virtual rq->engine on retire

2020-05-19 Thread Chris Wilson
We do not hold a reference to rq->engine, and so if it is a virtual
engine it may have already been freed by the time we free the request.
The last reference we hold on the virtual engine is via rq->context,
and that is released on request retirement. So if we find ourselves
retiring a virtual request, redirect it to a real sibling.

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/1906
Fixes: 43acd6516ca9 ("drm/i915: Keep a per-engine request pool")
Signed-off-by: Chris Wilson 
Cc: Tvrtko Ursulin 
---
 drivers/gpu/drm/i915/i915_request.c | 17 +
 1 file changed, 17 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_request.c 
b/drivers/gpu/drm/i915/i915_request.c
index 31ef683d27b4..a816218cc693 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -242,9 +242,26 @@ static void remove_from_engine(struct i915_request *rq)
spin_lock(&engine->active.lock);
locked = engine;
}
+
list_del_init(&rq->sched.link);
clear_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags);
clear_bit(I915_FENCE_FLAG_HOLD, &rq->fence.flags);
+
+   /*
+* During i915_fence_release we stash one request on the
+* rq->engine for use as an emergency reserve. However, we
+* neither want to keep a request on a virtual engine, nor do
+* we hold a reference to a virtual engine at that point. So
+* if rq->engine is virtual, replace it with a real one. Which
+* one is immaterial at this point as the request has been
+* retired, and if it was a virtual engine will not have any
+* signaling or other related paraphernalia.
+*
+* However, it would be nice if we didn't have to...
+*/
+   if (intel_engine_is_virtual(rq->engine))
+   rq->engine = intel_virtual_engine_get_sibling(rq->engine, 0);
+
spin_unlock_irq(&locked->active.lock);
 }
 
-- 
2.20.1

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH v10] drm/i915/dsb: Pre allocate and late cleanup of cmd buffer

2020-05-19 Thread Maarten Lankhorst
On 18-05-2020 at 14:12, Animesh Manna wrote:
> Pre-allocate command buffer in atomic_commit using intel_dsb_prepare
> function which also includes pinning and map in cpu domain.
>
> No functional change in the dsb write/commit functions.
>
> Now the dsb get/put functions are removed and the ref-count mechanism is
> no longer needed. The dsb APIs below were added to do the respective jobs
> mentioned below.
>
> intel_dsb_prepare - Allocate, pin and map the buffer.
> intel_dsb_cleanup - Unpin and release the gem object.
>
> RFC: Initial patch for design review.
> v2: included _init() part in _prepare(). [Daniel, Ville]
> v3: dsb_cleanup called after cleanup_planes. [Daniel]
> v4: dsb structure is moved to intel_crtc_state from intel_crtc. [Maarten]
> v5: dsb get/put/ref-count mechanism removed. [Maarten]
> v6: Based on review feedback following changes are added,
> - replaced intel_dsb structure by pointer in intel_crtc_state. [Maarten]
> - passing intel_crtc_state to dsp-api to simplify the code. [Maarten]
> - few dsb functions prototype modified to simplify code.
> v7: added few cosmetic changes suggested by Jani and null check for
> crtc_state in dsb_cleanup removed as suggested by Maarten.
> v8: changed the function parameter to intel_crtc_state* of
> ivb_load_lut_ext_max() from intel_crtc. [Maarten]
> v9: error handling improved in _write() and prepare(). [Maarten]
>
> Cc: Maarten Lankhorst 
> Cc: Ville Syrjälä 
> Cc: Jani Nikula 
> Cc: Daniel Vetter 
> Acked-by: Daniel Vetter 
> Signed-off-by: Animesh Manna 
> ---
>  drivers/gpu/drm/i915/display/intel_atomic.c   |   3 +
>  drivers/gpu/drm/i915/display/intel_color.c|  66 ++---
>  drivers/gpu/drm/i915/display/intel_display.c  |  58 +++-
>  .../drm/i915/display/intel_display_types.h|   6 +-
>  drivers/gpu/drm/i915/display/intel_dsb.c  | 250 --
>  drivers/gpu/drm/i915/display/intel_dsb.h  |  17 +-
>  6 files changed, 206 insertions(+), 194 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/display/intel_atomic.c 
> b/drivers/gpu/drm/i915/display/intel_atomic.c
> index d043057d2fa0..3cb866f22e74 100644
> --- a/drivers/gpu/drm/i915/display/intel_atomic.c
> +++ b/drivers/gpu/drm/i915/display/intel_atomic.c
> @@ -252,6 +252,7 @@ intel_crtc_duplicate_state(struct drm_crtc *crtc)
>   crtc_state->wm.need_postvbl_update = false;
>   crtc_state->fb_bits = 0;
>   crtc_state->update_planes = 0;
> + crtc_state->dsb = NULL;
>  
>   return &crtc_state->uapi;
>  }
> @@ -292,6 +293,8 @@ intel_crtc_destroy_state(struct drm_crtc *crtc,
>  {
>   struct intel_crtc_state *crtc_state = to_intel_crtc_state(state);
>  
> + drm_WARN_ON(crtc->dev, crtc_state->dsb);
> +
>   __drm_atomic_helper_crtc_destroy_state(&crtc_state->uapi);
>   intel_crtc_free_hw_state(crtc_state);
>   kfree(crtc_state);
> diff --git a/drivers/gpu/drm/i915/display/intel_color.c 
> b/drivers/gpu/drm/i915/display/intel_color.c
> index 98ece9cd7cdd..945bb03bdd4d 100644
> --- a/drivers/gpu/drm/i915/display/intel_color.c
> +++ b/drivers/gpu/drm/i915/display/intel_color.c
> @@ -714,16 +714,16 @@ static void bdw_load_lut_10(struct intel_crtc *crtc,
>   intel_de_write(dev_priv, PREC_PAL_INDEX(pipe), 0);
>  }
>  
> -static void ivb_load_lut_ext_max(struct intel_crtc *crtc)
> +static void ivb_load_lut_ext_max(const struct intel_crtc_state *crtc_state)
>  {
> + struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
>   struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
> - struct intel_dsb *dsb = intel_dsb_get(crtc);
>   enum pipe pipe = crtc->pipe;
>  
>   /* Program the max register to clamp values > 1.0. */
> - intel_dsb_reg_write(dsb, PREC_PAL_EXT_GC_MAX(pipe, 0), 1 << 16);
> - intel_dsb_reg_write(dsb, PREC_PAL_EXT_GC_MAX(pipe, 1), 1 << 16);
> - intel_dsb_reg_write(dsb, PREC_PAL_EXT_GC_MAX(pipe, 2), 1 << 16);
> + intel_dsb_reg_write(crtc_state, PREC_PAL_EXT_GC_MAX(pipe, 0), 1 << 16);
> + intel_dsb_reg_write(crtc_state, PREC_PAL_EXT_GC_MAX(pipe, 1), 1 << 16);
> + intel_dsb_reg_write(crtc_state, PREC_PAL_EXT_GC_MAX(pipe, 2), 1 << 16);
>  
>   /*
>* Program the gc max 2 register to clamp values > 1.0.
> @@ -731,15 +731,13 @@ static void ivb_load_lut_ext_max(struct intel_crtc 
> *crtc)
>* from 3.0 to 7.0
>*/
>   if (INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv)) {
> - intel_dsb_reg_write(dsb, PREC_PAL_EXT2_GC_MAX(pipe, 0),
> + intel_dsb_reg_write(crtc_state, PREC_PAL_EXT2_GC_MAX(pipe, 0),
>   1 << 16);
> - intel_dsb_reg_write(dsb, PREC_PAL_EXT2_GC_MAX(pipe, 1),
> + intel_dsb_reg_write(crtc_state, PREC_PAL_EXT2_GC_MAX(pipe, 1),
>   1 << 16);
> - intel_dsb_reg_write(dsb, PREC_PAL_EXT2_GC_MAX(pipe, 2),
> + intel_dsb_reg_write(crtc_state, PREC_PAL_EXT2_GC_MAX(pipe, 2),
>   1 << 16);
>   }
> -
> - intel_dsb_pu
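
A minimal sketch of how the intel_dsb_prepare()/intel_dsb_cleanup() pairing
described in the commit message is intended to wrap an atomic commit. The
exact signatures and call sites are those of the quoted patch, so treat the
shape below as illustrative only:

	/* during atomic commit setup: allocate, pin and map the command buffer */
	intel_dsb_prepare(crtc_state);

	/* while applying the state: queue register writes through the DSB */
	intel_dsb_reg_write(crtc_state, PREC_PAL_EXT_GC_MAX(pipe, 0), 1 << 16);
	intel_dsb_commit(crtc_state);

	/* after cleanup_planes: unpin and release the gem object */
	intel_dsb_cleanup(crtc_state);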

[Intel-gfx] [PATCH] drm/i915/ehl: Wa_22010271021

2020-05-19 Thread Matt Atwood
Reflect recent Bspec changes.

Bspec: 33451

Signed-off-by: Matt Atwood 
---
 drivers/gpu/drm/i915/gt/intel_workarounds.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds.c 
b/drivers/gpu/drm/i915/gt/intel_workarounds.c
index 90a2b9e399b0..fa1e15657663 100644
--- a/drivers/gpu/drm/i915/gt/intel_workarounds.c
+++ b/drivers/gpu/drm/i915/gt/intel_workarounds.c
@@ -1484,6 +1484,12 @@ rcs_engine_wa_init(struct intel_engine_cs *engine, 
struct i915_wa_list *wal)
wa_write_or(wal,
GEN7_FF_THREAD_MODE,
GEN12_FF_TESSELATION_DOP_GATE_DISABLE);
+
+   /* Wa_22010271021:ehl */
+   if (IS_ELKHARTLAKE(i915))
+   wa_masked_en(wal,
+GEN9_CS_DEBUG_MODE1,
+FF_DOP_CLOCK_GATE_DISABLE);
}
 
if (IS_GEN_RANGE(i915, 9, 12)) {
-- 
2.21.3

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH] drm/i915: Neuter virtual rq->engine on retire

2020-05-19 Thread Chris Wilson
Quoting Chris Wilson (2020-05-19 15:51:31)
> We do not hold a reference to rq->engine, and so if it is a virtual
> engine it may have already been freed by the time we free the request.
> The last reference we hold on the virtual engine is via rq->context,
> and that is released on request retirement. So if we find ourselves
> retiring a virtual request, redirect it to a real sibling.
> 
> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/1906
> Fixes: 43acd6516ca9 ("drm/i915: Keep a per-engine request pool")
> Signed-off-by: Chris Wilson 
> Cc: Tvrtko Ursulin 
> ---
>  drivers/gpu/drm/i915/i915_request.c | 17 +
>  1 file changed, 17 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i915/i915_request.c 
> b/drivers/gpu/drm/i915/i915_request.c
> index 31ef683d27b4..a816218cc693 100644
> --- a/drivers/gpu/drm/i915/i915_request.c
> +++ b/drivers/gpu/drm/i915/i915_request.c
> @@ -242,9 +242,26 @@ static void remove_from_engine(struct i915_request *rq)
> spin_lock(&engine->active.lock);
> locked = engine;
> }
> +
> list_del_init(&rq->sched.link);
> clear_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags);
> clear_bit(I915_FENCE_FLAG_HOLD, &rq->fence.flags);
> +
> +   /*
> +* During i915_fence_release we stash one request on the
> +* rq->engine for use as an emergency reserve. However, we
> +* neither want to keep a request on a virtual engine, nor do
> +* we hold a reference to a virtual engine at that point. So
> +* if rq->engine is virtual, replace it with a real one. Which
> +* one is immaterial at this point as the request has been
> +* retired, and if it was a virtual engine will not have any
> +* signaling or other related paraphernalia.
> +*
> +* However, it would be nice if we didn't have to...
> +*/
> +   if (intel_engine_is_virtual(rq->engine))

Hmm. execlists_dequeue will assert that rq->engine == veng before
finding out that the request was completed. Annoyingly we would need
some veng magic to cmpxchg(&ve->request, rq, NULL)

> +   rq->engine = intel_virtual_engine_get_sibling(rq->engine, 0);

Back to the drawing board for a bit. Although removing the assert might
be the easiest course of action.
-Chris
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] ✗ Fi.CI.BAT: failure for drm/i915/selftests: Measure dispatch latency (rev10)

2020-05-19 Thread Patchwork
== Series Details ==

Series: drm/i915/selftests: Measure dispatch latency (rev10)
URL   : https://patchwork.freedesktop.org/series/77308/
State : failure

== Summary ==

Applying: drm/i915/selftests: Measure dispatch latency
Using index info to reconstruct a base tree...
M   drivers/gpu/drm/i915/selftests/i915_request.c
Falling back to patching base and 3-way merge...
No changes -- Patch already applied.

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH] drm/i915: Neuter virtual rq->engine on retire

2020-05-19 Thread Chris Wilson
Quoting Chris Wilson (2020-05-19 18:00:04)
> Quoting Chris Wilson (2020-05-19 15:51:31)
> > We do not hold a reference to rq->engine, and so if it is a virtual
> > engine it may have already been freed by the time we free the request.
> > The last reference we hold on the virtual engine is via rq->context,
> > and that is released on request retirement. So if we find ourselves
> > retiring a virtual request, redirect it to a real sibling.
> > 
> > Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/1906
> > Fixes: 43acd6516ca9 ("drm/i915: Keep a per-engine request pool")
> > Signed-off-by: Chris Wilson 
> > Cc: Tvrtko Ursulin 
> > ---
> >  drivers/gpu/drm/i915/i915_request.c | 17 +
> >  1 file changed, 17 insertions(+)
> > 
> > diff --git a/drivers/gpu/drm/i915/i915_request.c 
> > b/drivers/gpu/drm/i915/i915_request.c
> > index 31ef683d27b4..a816218cc693 100644
> > --- a/drivers/gpu/drm/i915/i915_request.c
> > +++ b/drivers/gpu/drm/i915/i915_request.c
> > @@ -242,9 +242,26 @@ static void remove_from_engine(struct i915_request *rq)
> > spin_lock(&engine->active.lock);
> > locked = engine;
> > }
> > +
> > list_del_init(&rq->sched.link);
> > clear_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags);
> > clear_bit(I915_FENCE_FLAG_HOLD, &rq->fence.flags);
> > +
> > +   /*
> > +* During i915_fence_release we stash one request on the
> > +* rq->engine for use as an emergency reserve. However, we
> > +* neither want to keep a request on a virtual engine, nor do
> > +* we hold a reference to a virtual engine at that point. So
> > +* if rq->engine is virtual, replace it with a real one. Which
> > +* one is immaterial at this point as the request has been
> > +* retired, and if it was a virtual engine will not have any
> > +* signaling or other related paraphernalia.
> > +*
> > +* However, it would be nice if we didn't have to...
> > +*/
> > +   if (intel_engine_is_virtual(rq->engine))
> 
> Hmm. execlists_dequeue will assert that rq->engine == veng before
> finding out that the request was completed. Annoyingly we would need
> some veng magic to cmpxchg(&ve->request, rq, NULL)
> 
> > +   rq->engine = intel_virtual_engine_get_sibling(rq->engine, 
> > 0);
> 
> Back to the drawing board for a bit. Although removing the assert might
> be the easiest course of action.

A viable alternative would appear to be not resetting rq->engine back to
veng on preemption. It's currently done for consistency, but correctness
trumps consistency. :|
-Chris
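
For reference, the "veng magic" mentioned above would look roughly like the
following in remove_from_engine(). This is only a sketch of the idea; the
to_virtual_engine() helper and the ve->request field are assumed from the
existing virtual engine code, not taken from this patch:

	if (intel_engine_is_virtual(rq->engine)) {
		struct virtual_engine *ve = to_virtual_engine(rq->engine);

		/*
		 * Drop the veng's pointer to rq only if it still holds it,
		 * so that execlists_dequeue() never sees a retired request
		 * hanging off the virtual engine.
		 */
		cmpxchg(&ve->request, rq, NULL);
		rq->engine = intel_virtual_engine_get_sibling(rq->engine, 0);
	}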
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] ✗ Fi.CI.BUILD: failure for Consider DBuf bandwidth when calculating CDCLK (rev14)

2020-05-19 Thread Patchwork
== Series Details ==

Series: Consider DBuf bandwidth when calculating CDCLK (rev14)
URL   : https://patchwork.freedesktop.org/series/74739/
State : failure

== Summary ==

Applying: drm/i915: Decouple cdclk calculation from modeset checks
Applying: drm/i915: Extract cdclk requirements checking to separate function
Applying: drm/i915: Check plane configuration properly
Applying: drm/i915: Plane configuration affects CDCLK in Gen11+
Applying: drm/i915: Introduce for_each_dbuf_slice_in_mask macro
Using index info to reconstruct a base tree...
M   drivers/gpu/drm/i915/display/intel_display_power.h
Falling back to patching base and 3-way merge...
Auto-merging drivers/gpu/drm/i915/display/intel_display_power.h
Applying: drm/i915: Adjust CDCLK accordingly to our DBuf bw needs
Using index info to reconstruct a base tree...
M   drivers/gpu/drm/i915/display/intel_bw.c
M   drivers/gpu/drm/i915/display/intel_bw.h
M   drivers/gpu/drm/i915/display/intel_cdclk.c
M   drivers/gpu/drm/i915/display/intel_display.c
M   drivers/gpu/drm/i915/i915_drv.h
M   drivers/gpu/drm/i915/intel_pm.c
M   drivers/gpu/drm/i915/intel_pm.h
Falling back to patching base and 3-way merge...
Auto-merging drivers/gpu/drm/i915/intel_pm.h
CONFLICT (content): Merge conflict in drivers/gpu/drm/i915/intel_pm.h
Auto-merging drivers/gpu/drm/i915/intel_pm.c
Auto-merging drivers/gpu/drm/i915/i915_drv.h
Auto-merging drivers/gpu/drm/i915/display/intel_display.c
Auto-merging drivers/gpu/drm/i915/display/intel_cdclk.c
Auto-merging drivers/gpu/drm/i915/display/intel_bw.h
CONFLICT (content): Merge conflict in drivers/gpu/drm/i915/display/intel_bw.h
Auto-merging drivers/gpu/drm/i915/display/intel_bw.c
CONFLICT (content): Merge conflict in drivers/gpu/drm/i915/display/intel_bw.c
error: Failed to merge in the changes.
hint: Use 'git am --show-current-patch=diff' to see the failed patch
Patch failed at 0006 drm/i915: Adjust CDCLK accordingly to our DBuf bw needs
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] ✓ Fi.CI.BAT: success for series starting with [CI,1/3] drm/i915/selftests: Add tests for timeslicing virtual engines

2020-05-19 Thread Patchwork
== Series Details ==

Series: series starting with [CI,1/3] drm/i915/selftests: Add tests for 
timeslicing virtual engines
URL   : https://patchwork.freedesktop.org/series/77414/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_8505 -> Patchwork_17712


Summary
---

  **SUCCESS**

  No regressions found.

  External URL: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17712/index.html

Known issues


  Here are the changes found in Patchwork_17712 that come from known issues:

### IGT changes ###

 Possible fixes 

  * igt@i915_selftest@live@gt_lrc:
- fi-bwr-2160:[INCOMPLETE][1] ([i915#489]) -> [PASS][2]
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8505/fi-bwr-2160/igt@i915_selftest@live@gt_lrc.html
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17712/fi-bwr-2160/igt@i915_selftest@live@gt_lrc.html

  * igt@kms_chamelium@hdmi-hpd-fast:
- fi-kbl-7500u:   [FAIL][3] ([i915#227]) -> [PASS][4]
   [3]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8505/fi-kbl-7500u/igt@kms_chamel...@hdmi-hpd-fast.html
   [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17712/fi-kbl-7500u/igt@kms_chamel...@hdmi-hpd-fast.html

  
  [i915#227]: https://gitlab.freedesktop.org/drm/intel/issues/227
  [i915#489]: https://gitlab.freedesktop.org/drm/intel/issues/489


Participating hosts (49 -> 44)
--

  Additional (1): fi-kbl-7560u 
  Missing(6): fi-ilk-m540 fi-hsw-4200u fi-byt-squawks fi-bsw-cyan 
fi-byt-clapper fi-bdw-samus 


Build changes
-

  * Linux: CI_DRM_8505 -> Patchwork_17712

  CI-20190529: 20190529
  CI_DRM_8505: dd6f7db19af1ccb376719c8759afe6be9107315c @ 
git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5660: bf43e3e45a17c16094fb3a47b363ccf1c95c28b9 @ 
git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_17712: 8921ae04cdd7437e0becf07b16c5d9c38cc70753 @ 
git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

8921ae04cdd7 drm/i915/gt: Incorporate the virtual engine into timeslicing
6ee3bd2022dc drm/i915/gt: Kick virtual siblings on timeslice out
adc7f3ce64f4 drm/i915/selftests: Add tests for timeslicing virtual engines

== Logs ==

For more details see: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17712/index.html
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for dma-fence: add might_sleep annotation to _wait()

2020-05-19 Thread Patchwork
== Series Details ==

Series: dma-fence: add might_sleep annotation to _wait()
URL   : https://patchwork.freedesktop.org/series/77417/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
aa2f5c93ddcf dma-fence: add might_sleep annotation to _wait()
-:16: WARNING:TYPO_SPELLING: 'TIMOUT' may be misspelled - perhaps 'TIMEOUT'?
#16: 
- dma-fence.h: Uses MAX_SCHEDULE_TIMOUT, good chance this sleeps

-:70: WARNING:NO_AUTHOR_SIGN_OFF: Missing Signed-off-by: line by nominal patch 
author 'Daniel Vetter '

total: 0 errors, 2 warnings, 0 checks, 8 lines checked

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] ✓ Fi.CI.BAT: success for dma-fence: add might_sleep annotation to _wait()

2020-05-19 Thread Patchwork
== Series Details ==

Series: dma-fence: add might_sleep annotation to _wait()
URL   : https://patchwork.freedesktop.org/series/77417/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_8505 -> Patchwork_17713


Summary
---

  **SUCCESS**

  No regressions found.

  External URL: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17713/index.html

Known issues


  Here are the changes found in Patchwork_17713 that come from known issues:

### IGT changes ###

 Possible fixes 

  * igt@i915_selftest@live@gt_lrc:
- fi-bwr-2160:[INCOMPLETE][1] ([i915#489]) -> [PASS][2]
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8505/fi-bwr-2160/igt@i915_selftest@live@gt_lrc.html
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17713/fi-bwr-2160/igt@i915_selftest@live@gt_lrc.html

  * igt@kms_chamelium@hdmi-hpd-fast:
- fi-kbl-7500u:   [FAIL][3] ([i915#227]) -> [PASS][4]
   [3]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8505/fi-kbl-7500u/igt@kms_chamel...@hdmi-hpd-fast.html
   [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17713/fi-kbl-7500u/igt@kms_chamel...@hdmi-hpd-fast.html

  
  {name}: This element is suppressed. This means it is ignored when computing
  the status of the difference (SUCCESS, WARNING, or FAILURE).

  [i915#1803]: https://gitlab.freedesktop.org/drm/intel/issues/1803
  [i915#227]: https://gitlab.freedesktop.org/drm/intel/issues/227
  [i915#489]: https://gitlab.freedesktop.org/drm/intel/issues/489


Participating hosts (49 -> 43)
--

  Missing(6): fi-ilk-m540 fi-hsw-4200u fi-byt-squawks fi-bsw-cyan 
fi-byt-clapper fi-bdw-samus 


Build changes
-

  * Linux: CI_DRM_8505 -> Patchwork_17713

  CI-20190529: 20190529
  CI_DRM_8505: dd6f7db19af1ccb376719c8759afe6be9107315c @ 
git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5660: bf43e3e45a17c16094fb3a47b363ccf1c95c28b9 @ 
git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_17713: aa2f5c93ddcf64cf5de53d7d628cf6a70a109a35 @ 
git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

aa2f5c93ddcf dma-fence: add might_sleep annotation to _wait()

== Logs ==

For more details see: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17713/index.html
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH] drm/i915/gem: Suppress some random warnings

2020-05-19 Thread Chris Wilson
Leave the error propagation in place, but limit the warnings to only
show up in CI if the unlikely errors are hit.

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 3 +--
 drivers/gpu/drm/i915/gem/i915_gem_phys.c   | 3 +--
 drivers/gpu/drm/i915/gem/i915_gem_shmem.c  | 3 +--
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c| 2 +-
 4 files changed, 4 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c 
b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index e4fb6c372537..219a36995b96 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -1626,8 +1626,7 @@ eb_relocate_entry(struct i915_execbuffer *eb,
err = i915_vma_bind(target->vma,
target->vma->obj->cache_level,
PIN_GLOBAL, NULL);
-   if (drm_WARN_ONCE(&i915->drm, err,
- "Unexpected failure to bind target VMA!"))
+   if (err)
return err;
}
}
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_phys.c 
b/drivers/gpu/drm/i915/gem/i915_gem_phys.c
index 4c1c7232b024..12245a47e5fb 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_phys.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_phys.c
@@ -27,8 +27,7 @@ static int i915_gem_object_get_pages_phys(struct 
drm_i915_gem_object *obj)
void *dst;
int i;
 
-   if (drm_WARN_ON(obj->base.dev,
-   i915_gem_object_needs_bit17_swizzle(obj)))
+   if (GEM_WARN_ON(i915_gem_object_needs_bit17_swizzle(obj)))
return -EINVAL;
 
/*
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c 
b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
index 5d5d7eef3f43..19dd21a95c47 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
@@ -148,8 +148,7 @@ static int shmem_get_pages(struct drm_i915_gem_object *obj)
last_pfn = page_to_pfn(page);
 
/* Check that the i965g/gm workaround works. */
-   drm_WARN_ON(&i915->drm,
-   (gfp & __GFP_DMA32) && (last_pfn >= 0x0010UL));
+   GEM_BUG_ON(gfp & __GFP_DMA32 && last_pfn >= 0x0010UL);
}
if (sg) { /* loop terminated early; short sg table */
sg_page_sizes |= sg->length;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c 
b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index 8b0708708671..ec9d25680b41 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -235,7 +235,7 @@ i915_gem_userptr_init__mmu_notifier(struct 
drm_i915_gem_object *obj,
if (flags & I915_USERPTR_UNSYNCHRONIZED)
return capable(CAP_SYS_ADMIN) ? 0 : -EPERM;
 
-   if (drm_WARN_ON(obj->base.dev, obj->userptr.mm == NULL))
+   if (GEM_WARN_ON(obj->userptr.mm == NULL))
return -EINVAL;
 
mn = i915_mmu_notifier_find(obj->userptr.mm);
-- 
2.20.1
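
For context on how this limits the warnings to CI: GEM_WARN_ON() is only a
real WARN_ON() on debug builds, which is what CI runs. Roughly (an assumption
about the usual i915_gem.h definition, not part of this patch):

	#if IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM)
	#define GEM_WARN_ON(expr) WARN_ON(expr)
	#else
	#define GEM_WARN_ON(expr) ({ unlikely(!!(expr)); })
	#endif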

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH] drm/i915/ehl: Wa_22010271021

2020-05-19 Thread Dhanavanthri, Swathi
Maybe we can add JSL to the comment too.
Other than that looks good to me.

Reviewed-by: Swathi Dhanavanthri 

-Original Message-
From: Intel-gfx  On Behalf Of Matt 
Atwood
Sent: Tuesday, May 19, 2020 9:26 AM
To: intel-gfx@lists.freedesktop.org
Subject: [Intel-gfx] [PATCH] drm/i915/ehl: Wa_22010271021

Reflect recent Bspec changes.

Bspec: 33451

Signed-off-by: Matt Atwood 
---
 drivers/gpu/drm/i915/gt/intel_workarounds.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds.c 
b/drivers/gpu/drm/i915/gt/intel_workarounds.c
index 90a2b9e399b0..fa1e15657663 100644
--- a/drivers/gpu/drm/i915/gt/intel_workarounds.c
+++ b/drivers/gpu/drm/i915/gt/intel_workarounds.c
@@ -1484,6 +1484,12 @@ rcs_engine_wa_init(struct intel_engine_cs *engine, 
struct i915_wa_list *wal)
wa_write_or(wal,
GEN7_FF_THREAD_MODE,
GEN12_FF_TESSELATION_DOP_GATE_DISABLE);
+
+   /* Wa_22010271021:ehl */
+   if (IS_ELKHARTLAKE(i915))
+   wa_masked_en(wal,
+GEN9_CS_DEBUG_MODE1,
+FF_DOP_CLOCK_GATE_DISABLE);
}
 
if (IS_GEN_RANGE(i915, 9, 12)) {
-- 
2.21.3

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH v2] drm/i915/ehl: Extend w/a 14010685332 to JSP/MCC

2020-05-19 Thread Swathi Dhanavanthri
This is a permanent w/a for JSL/EHL. It is to be applied to the
PCH types on JSL/EHL, i.e. JSP/MCC.
Bspec: 52888

v2: Fixed the wrong usage of logical OR (Ville)

Signed-off-by: Swathi Dhanavanthri 
---
 drivers/gpu/drm/i915/i915_irq.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 4dc601dffc08..d60a66d8eb40 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -2902,8 +2902,9 @@ static void gen11_display_irq_reset(struct 
drm_i915_private *dev_priv)
if (INTEL_PCH_TYPE(dev_priv) >= PCH_ICP)
GEN3_IRQ_RESET(uncore, SDE);
 
-   /* Wa_14010685332:icl */
-   if (INTEL_PCH_TYPE(dev_priv) == PCH_ICP) {
+   /* Wa_14010685332:icl,jsl,ehl */
+   if ((INTEL_PCH_TYPE(dev_priv) >= PCH_ICP) &&
+  (INTEL_PCH_TYPE(dev_priv) <= PCH_MCC)) {
intel_uncore_rmw(uncore, SOUTH_CHICKEN1,
 SBCLK_RUN_REFCLK_DIS, SBCLK_RUN_REFCLK_DIS);
intel_uncore_rmw(uncore, SOUTH_CHICKEN1,
-- 
2.20.1

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH i-g-t] i915: Add gem_exec_endless

2020-05-19 Thread Chris Wilson
Start our preparations for guaranteeing endless execution.

First, we just want to estimate the direct userspace dispatch overhead
of running an endless chain of batch buffers. The legacy binding process
here will be replaced by async VM_BIND, but for the moment this
suffices to construct the GTT as required for arbitrary indirect
execution.

Signed-off-by: Chris Wilson 
Cc: Joonas Lahtinen 
Cc: Mika Kuoppala 
---
 lib/igt_core.h|   1 +
 tests/Makefile.sources|   3 +
 tests/i915/gem_exec_endless.c | 353 ++
 tests/meson.build |   1 +
 4 files changed, 358 insertions(+)
 create mode 100644 tests/i915/gem_exec_endless.c

diff --git a/lib/igt_core.h b/lib/igt_core.h
index b97fa2faa..c58715204 100644
--- a/lib/igt_core.h
+++ b/lib/igt_core.h
@@ -1369,6 +1369,7 @@ void igt_kmsg(const char *format, ...);
 #define KMSG_DEBUG "<7>[IGT] "
 
 #define READ_ONCE(x) (*(volatile typeof(x) *)(&(x)))
+#define WRITE_ONCE(x, v) do *(volatile typeof(x) *)(&(x)) = (v); while (0)
 
 #define MSEC_PER_SEC (1000)
 #define USEC_PER_SEC (1000*MSEC_PER_SEC)
diff --git a/tests/Makefile.sources b/tests/Makefile.sources
index f1df13465..eaa6c0d04 100644
--- a/tests/Makefile.sources
+++ b/tests/Makefile.sources
@@ -265,6 +265,9 @@ gem_exec_schedule_SOURCES = i915/gem_exec_schedule.c
 TESTS_progs += gem_exec_store
 gem_exec_store_SOURCES = i915/gem_exec_store.c
 
+TESTS_progs += gem_exec_endless
+gem_exec_endless_SOURCES = i915/gem_exec_endless.c
+
 TESTS_progs += gem_exec_suspend
 gem_exec_suspend_SOURCES = i915/gem_exec_suspend.c
 
diff --git a/tests/i915/gem_exec_endless.c b/tests/i915/gem_exec_endless.c
new file mode 100644
index 0..4825aee8f
--- /dev/null
+++ b/tests/i915/gem_exec_endless.c
@@ -0,0 +1,353 @@
+/*
+ * Copyright © 2019 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ */
+
+#include 
+
+#include "i915/gem.h"
+#include "i915/gem_ring.h"
+#include "igt.h"
+#include "sw_sync.h"
+
+#define MAX_ENGINES 64
+
+#define MI_SEMAPHORE_WAIT  (0x1c << 23)
+#define   MI_SEMAPHORE_POLL (1 << 15)
+#define   MI_SEMAPHORE_SAD_GT_SDD   (0 << 12)
+#define   MI_SEMAPHORE_SAD_GTE_SDD  (1 << 12)
+#define   MI_SEMAPHORE_SAD_LT_SDD   (2 << 12)
+#define   MI_SEMAPHORE_SAD_LTE_SDD  (3 << 12)
+#define   MI_SEMAPHORE_SAD_EQ_SDD   (4 << 12)
+#define   MI_SEMAPHORE_SAD_NEQ_SDD  (5 << 12)
+
+static uint32_t batch_create(int i915)
+{
+   const uint32_t bbe = MI_BATCH_BUFFER_END;
+   uint32_t handle = gem_create(i915, 4096);
+   gem_write(i915, handle, 0, &bbe, sizeof(bbe));
+   return handle;
+}
+
+struct supervisor {
+   int device;
+   uint32_t handle;
+   uint32_t context;
+
+   uint32_t *map;
+   uint32_t *semaphore;
+   uint32_t *terminate;
+   uint64_t *dispatch;
+};
+
+static unsigned int offset_in_page(void *addr)
+{
+   return (uintptr_t)addr & 4095;
+}
+
+static uint32_t __supervisor_create_context(int i915,
+   const struct 
intel_execution_engine2 *e)
+{
+   struct drm_i915_gem_context_create_ext_setparam p_ring = {
+   {
+   .name = I915_CONTEXT_CREATE_EXT_SETPARAM,
+   .next_extension = 0
+   },
+   {
+   .param = I915_CONTEXT_PARAM_RINGSIZE,
+   .value = 4096,
+   },
+   };
+   I915_DEFINE_CONTEXT_PARAM_ENGINES(engines, 2) = {
+   .engines = {
+   { e->class, e->instance },
+   { e->class, e->instance },
+   }
+   };
+   struct drm_i915_gem_context_create_ext_setparam p_engines = {
+   {
+   .name = I915_CONTEXT_CREATE_EXT_SETPARAM,
+   .next_extension = to_user_pointer(&p_ring)
+ 
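
As a rough illustration of what the supervisor's semaphore/terminate pointers
are for: a batch can be kept spinning on the CS by polling a dword until the
supervisor writes a chosen value. A sketch built from the MI_SEMAPHORE_*
defines above; the address, value and the DWord-length bits of the opcode are
illustrative assumptions, not taken from the patch:

	uint32_t *cs = batch_map;	/* hypothetical CPU mapping of the batch */

	*cs++ = MI_SEMAPHORE_WAIT | MI_SEMAPHORE_POLL | MI_SEMAPHORE_SAD_EQ_SDD;
	*cs++ = 1;			/* spin until *terminate == 1 */
	*cs++ = lower_32_bits(terminate_addr);
	*cs++ = upper_32_bits(terminate_addr);
	*cs++ = MI_BATCH_BUFFER_END;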

[Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915: Neuter virtual rq->engine on retire

2020-05-19 Thread Patchwork
== Series Details ==

Series: drm/i915: Neuter virtual rq->engine on retire
URL   : https://patchwork.freedesktop.org/series/77425/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_8506 -> Patchwork_17714


Summary
---

  **SUCCESS**

  No regressions found.

  External URL: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17714/index.html

Known issues


  Here are the changes found in Patchwork_17714 that come from known issues:

### IGT changes ###

 Issues hit 

  * igt@i915_selftest@live@execlists:
- fi-skl-lmem:[PASS][1] -> [INCOMPLETE][2] ([i915#1874])
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8506/fi-skl-lmem/igt@i915_selftest@l...@execlists.html
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17714/fi-skl-lmem/igt@i915_selftest@l...@execlists.html

  * igt@kms_chamelium@hdmi-crc-fast:
- fi-kbl-7500u:   [PASS][3] -> [FAIL][4] ([i915#1372])
   [3]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8506/fi-kbl-7500u/igt@kms_chamel...@hdmi-crc-fast.html
   [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17714/fi-kbl-7500u/igt@kms_chamel...@hdmi-crc-fast.html

  
 Possible fixes 

  * igt@i915_selftest@live@execlists:
- fi-kbl-8809g:   [INCOMPLETE][5] ([i915#1874]) -> [PASS][6]
   [5]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8506/fi-kbl-8809g/igt@i915_selftest@l...@execlists.html
   [6]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17714/fi-kbl-8809g/igt@i915_selftest@l...@execlists.html

  
  [i915#1372]: https://gitlab.freedesktop.org/drm/intel/issues/1372
  [i915#1874]: https://gitlab.freedesktop.org/drm/intel/issues/1874


Participating hosts (49 -> 43)
--

  Missing(6): fi-ilk-m540 fi-hsw-4200u fi-byt-squawks fi-bsw-cyan 
fi-byt-clapper fi-bdw-samus 


Build changes
-

  * Linux: CI_DRM_8506 -> Patchwork_17714

  CI-20190529: 20190529
  CI_DRM_8506: d6a73e9084ff6adfabbad014bc294d254484f304 @ 
git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5661: a772a7c7a761c6125bc0af5284ad603478107737 @ 
git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_17714: a0cd84201117cc81f472e452c31a43ac972ed941 @ 
git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

a0cd84201117 drm/i915: Neuter virtual rq->engine on retire

== Logs ==

For more details see: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17714/index.html
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH v9 6/7] drm/i915: Adjust CDCLK accordingly to our DBuf bw needs

2020-05-19 Thread Stanislav Lisovskiy
According to BSpec, max BW per slice is calculated using the formula
Max BW = CDCLK * 64. Currently, when calculating min CDCLK, we
account only for per-plane requirements; however, in order to avoid
FIFO underruns we need to estimate the accumulated BW consumed by
all planes (ddb entries, basically) residing on that particular
DBuf slice. This will allow us to put CDCLK lower and save power
when we don't need that much bandwidth, or gain additional
performance once plane consumption grows.
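
To put numbers on that formula (units assumed here for illustration: CDCLK in
kHz and 64 bytes moved per CDCLK cycle, so data rates are in kB/s; this is an
assumption, not taken from the patch):

	min CDCLK = aggregate DBuf data rate / 64
	e.g. an aggregate of 19200000 kB/s (~19.2 GB/s) needs CDCLK >= 300000 kHz (300 MHz)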

v2: - Fix long line warning
- Limited new DBuf bw checks to only gens >= 11

v3: - Let's track used DBuf bw per slice and per crtc in bw state
  (or maybe in DBuf state in the future); that way we don't need
  to have all crtcs in the state, only those we need when we detect
  we are actually going to change cdclk, just the same way as we
  do with other stuff, i.e. intel_atomic_serialize_global_state
  and co. Just as per Ville's paradigm.
- Made dbuf bw calculation procedure look nicer by introducing
  for_each_dbuf_slice_in_mask - we often will now need to iterate
  slices using mask.
- According to experimental results CDCLK * 64 accounts for
  overall bandwidth across all dbufs, not per dbuf.

v4: - Fixed missing const(Ville)
- Removed spurious whitespaces(Ville)
- Fixed local variable init(reduced scope where not needed)
- Added some comments about data rate for planar formats
- Changed struct intel_crtc_bw to intel_dbuf_bw
- Moved dbuf bw calculation to intel_compute_min_cdclk(Ville)

v5: - Removed unneeded macro

v6: - Prevent too frequent CDCLK switching back and forth:
  Always switch to higher CDCLK when needed to prevent bandwidth
  issues, however don't switch to lower CDCLK earlier than once
  in 30 minutes in order to prevent constant modeset blinking.
  We could of course not switch back at all, however this is
  bad from power consumption point of view.

v7: - Fixed to track cdclk using bw_state; a modeset will now be
  triggered only when a CDCLK change is really needed.

v8: - Lock global state if bw_state->min_cdclk is changed.
- Try getting bw_state only if there are crtcs in the commit
  (need to have read-locked global state)

v9: - Do not do Dbuf bw check for gens < 9 - triggers WARN
  as ddb_size is 0.

v10: - Lock global state for older gens as well.

v11: - Define a new bw_calc_min_cdclk hook, instead of using
   a condition (Manasi Navare)

v12: - Fixed rebase conflict

Signed-off-by: Stanislav Lisovskiy 
---
 drivers/gpu/drm/i915/display/intel_bw.c  | 119 ++-
 drivers/gpu/drm/i915/display/intel_bw.h  |  10 ++
 drivers/gpu/drm/i915/display/intel_cdclk.c   |  28 -
 drivers/gpu/drm/i915/display/intel_cdclk.h   |   1 -
 drivers/gpu/drm/i915/display/intel_display.c |  39 +-
 drivers/gpu/drm/i915/i915_drv.h  |   1 +
 drivers/gpu/drm/i915/intel_pm.c  |  31 -
 drivers/gpu/drm/i915/intel_pm.h  |   4 +
 8 files changed, 218 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_bw.c 
b/drivers/gpu/drm/i915/display/intel_bw.c
index fef04e2d954e..cb614b624e20 100644
--- a/drivers/gpu/drm/i915/display/intel_bw.c
+++ b/drivers/gpu/drm/i915/display/intel_bw.c
@@ -6,11 +6,12 @@
 #include 
 
 #include "intel_bw.h"
+#include "intel_pm.h"
 #include "intel_display_types.h"
 #include "intel_sideband.h"
 #include "intel_atomic.h"
 #include "intel_pm.h"
-
+#include "intel_cdclk.h"
 
 /* Parameters for Qclk Geyserville (QGV) */
 struct intel_qgv_point {
@@ -343,7 +344,6 @@ static unsigned int intel_bw_crtc_data_rate(const struct 
intel_crtc_state *crtc_
 
return data_rate;
 }
-
 void intel_bw_crtc_update(struct intel_bw_state *bw_state,
  const struct intel_crtc_state *crtc_state)
 {
@@ -420,6 +420,121 @@ intel_atomic_get_bw_state(struct intel_atomic_state 
*state)
return to_intel_bw_state(bw_state);
 }
 
+int skl_bw_calc_min_cdclk(struct intel_atomic_state *state)
+{
+   struct drm_i915_private *dev_priv = to_i915(state->base.dev);
+   int i;
+   const struct intel_crtc_state *crtc_state;
+   struct intel_crtc *crtc;
+   int max_bw = 0;
+   int slice_id;
+   struct intel_bw_state *new_bw_state = NULL;
+   struct intel_bw_state *old_bw_state = NULL;
+
+   for_each_new_intel_crtc_in_state(state, crtc, crtc_state, i) {
+   enum plane_id plane_id;
+   struct intel_dbuf_bw *crtc_bw;
+
+   new_bw_state = intel_atomic_get_bw_state(state);
+   if (IS_ERR(new_bw_state))
+   return PTR_ERR(new_bw_state);
+
+   crtc_bw = &new_bw_state->dbuf_bw[crtc->pipe];
+
+   memset(&crtc_bw->used_bw, 0, sizeof(crtc_bw->used_bw));
+
+   for_each_plane_id_on_crtc(crtc, plane_id) {
+   const struct skl_ddb_entry *plane_alloc =
+   &crtc_state->wm.skl.plane_ddb_y[p

Re: [Intel-gfx] [PATCH] dma-fence: add might_sleep annotation to _wait()

2020-05-19 Thread Chris Wilson
Quoting Daniel Vetter (2020-05-19 14:27:56)
> Do it uncontionally, there's a separate peek function with
> dma_fence_is_signalled() which can be called from atomic context.
> 
> v2: Consensus calls for an unconditional might_sleep (Chris,
> Christian)
> 
> Full audit:
> - dma-fence.h: Uses MAX_SCHEDULE_TIMOUT, good chance this sleeps
> - dma-resv.c: Timeout always at least 1
> - st-dma-fence.c: Safe to sleep in testcases
> - amdgpu_cs.c: Both callers are for variants of the wait ioctl
> - amdgpu_device.c: Two callers in vram recover code, both right next
>   to mutex_lock.
> - amdgpu_vm.c: Use in the vm_wait ioctl, next to _reserve/unreserve
> - remaining functions in amdgpu: All for test_ib implementations for
>   various engines, caller for that looks all safe (debugfs, driver
>   load, reset)
> - etnaviv: another wait ioctl
> - habanalabs: another wait ioctl
> - nouveau_fence.c: hardcoded 15*HZ ... glorious
> - nouveau_gem.c: hardcoded 2*HZ ... so not even super consistent, but
>   this one does have a WARN_ON :-/ At least this one is only a
>   fallback path for when kmalloc fails. Maybe this should be put onto
>   some worker list instead, instead of a work per unmap ...
> - i915/selftests: Hardecoded HZ / 4 or HZ / 8
> - i915/gt/selftests: Going up the callchain looks safe looking at
>   nearby callers
> - i915/gt/intel_gt_requests.c. Wrapped in a mutex_lock
> - i915/gem_i915_gem_wait.c: The i915-version which is called instead
>   for i915 fences already has a might_sleep() annotation, so all good
> 
> Cc: Alex Deucher 
> Cc: Lucas Stach 
> Cc: Jani Nikula 
> Cc: Joonas Lahtinen 
> Cc: Rodrigo Vivi 
> Cc: Ben Skeggs 
> Cc: "VMware Graphics" 
> Cc: Oded Gabbay 
> Cc: linux-me...@vger.kernel.org
> Cc: linaro-mm-...@lists.linaro.org
> Cc: linux-r...@vger.kernel.org
> Cc: amd-...@lists.freedesktop.org
> Cc: intel-gfx@lists.freedesktop.org
> Cc: Chris Wilson 
> Cc: Maarten Lankhorst 
> Cc: Christian König 
> Signed-off-by: Daniel Vetter 
> ---
>  drivers/dma-buf/dma-fence.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
> index 90edf2b281b0..656e9ac2d028 100644
> --- a/drivers/dma-buf/dma-fence.c
> +++ b/drivers/dma-buf/dma-fence.c
> @@ -208,6 +208,8 @@ dma_fence_wait_timeout(struct dma_fence *fence, bool 
> intr, signed long timeout)
> if (WARN_ON(timeout < 0))
> return -EINVAL;
>  
> +   might_sleep();

git grep matches your synopsis.

Reviewed-by: Chris Wilson 
-Chris
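
To make the split concrete, the two entry points the audit distinguishes are
used along these lines (the fence variable and timeout value are only
illustrative):

	/* sleeping wait: now carries might_sleep(), process context only */
	ret = dma_fence_wait_timeout(fence, true, msecs_to_jiffies(100));

	/* non-blocking peek: safe from atomic context */
	if (dma_fence_is_signaled(fence))
		return 0;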
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915/gem: Suppress some random warnings

2020-05-19 Thread Patchwork
== Series Details ==

Series: drm/i915/gem: Suppress some random warnings
URL   : https://patchwork.freedesktop.org/series/77431/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
c47e2d0db533 drm/i915/gem: Suppress some random warnings
-:62: CHECK:COMPARISON_TO_NULL: Comparison to NULL could be written 
"!obj->userptr.mm"
#62: FILE: drivers/gpu/drm/i915/gem/i915_gem_userptr.c:238:
+   if (GEM_WARN_ON(obj->userptr.mm == NULL))

total: 0 errors, 0 warnings, 1 checks, 35 lines checked

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH] drm/i915/hdcp: Add additional R0' wait

2020-05-19 Thread Sean Paul
From: Sean Paul 

We're seeing some R0' mismatches in the field, particularly with
repeaters. I'm guessing the (already lenient) 300ms wait time isn't
enough for some setups. So add an additional wait when R0' is
mismatched.

Signed-off-by: Sean Paul 
---
 drivers/gpu/drm/i915/display/intel_hdcp.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/i915/display/intel_hdcp.c 
b/drivers/gpu/drm/i915/display/intel_hdcp.c
index 2cbc4619b4ce..924a717a4fa4 100644
--- a/drivers/gpu/drm/i915/display/intel_hdcp.c
+++ b/drivers/gpu/drm/i915/display/intel_hdcp.c
@@ -592,6 +592,9 @@ int intel_hdcp_auth_downstream(struct intel_connector 
*connector)
  bstatus);
if (!ret)
break;
+
+   /* Maybe the sink is lazy, give it some more time */
+   usleep_range(1, 5);
}
 
if (i == tries) {
-- 
Sean Paul, Software Engineer, Google / Chromium OS

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [CI] drm/i915/gt: Trace the CS interrupt

2020-05-19 Thread Chris Wilson
We have traces for the semaphore and the error, but not the far more
frequent CS interrupts. This is likely to be too much, but for the
purpose of live_unlite_preempt it may answer a question or two.

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/gt/intel_gt_irq.c | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_gt_irq.c 
b/drivers/gpu/drm/i915/gt/intel_gt_irq.c
index 0cc7dd54f4f9..4291d55c5457 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_irq.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt_irq.c
@@ -48,8 +48,12 @@ cs_irq_handler(struct intel_engine_cs *engine, u32 iir)
tasklet = true;
}
 
-   if (iir & GT_CONTEXT_SWITCH_INTERRUPT)
+   if (iir & GT_CONTEXT_SWITCH_INTERRUPT) {
+   ENGINE_TRACE(engine, "CS: %x %x\n",
+ENGINE_READ_FW(engine, RING_EXECLIST_STATUS_HI),
+ENGINE_READ_FW(engine, RING_EXECLIST_STATUS_LO));
tasklet = true;
+   }
 
if (iir & GT_RENDER_USER_INTERRUPT) {
intel_engine_signal_breadcrumbs(engine);
-- 
2.20.1

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915/gem: Suppress some random warnings

2020-05-19 Thread Patchwork
== Series Details ==

Series: drm/i915/gem: Suppress some random warnings
URL   : https://patchwork.freedesktop.org/series/77431/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_8506 -> Patchwork_17715


Summary
---

  **SUCCESS**

  No regressions found.

  External URL: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17715/index.html

Known issues


  Here are the changes found in Patchwork_17715 that come from known issues:

### IGT changes ###

 Issues hit 

  * igt@kms_chamelium@dp-crc-fast:
- fi-icl-u2:  [PASS][1] -> [FAIL][2] ([i915#262])
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8506/fi-icl-u2/igt@kms_chamel...@dp-crc-fast.html
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17715/fi-icl-u2/igt@kms_chamel...@dp-crc-fast.html

  * igt@kms_chamelium@dp-edid-read:
- fi-icl-u2:  [PASS][3] -> [FAIL][4] ([i915#976])
   [3]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8506/fi-icl-u2/igt@kms_chamel...@dp-edid-read.html
   [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17715/fi-icl-u2/igt@kms_chamel...@dp-edid-read.html

  
 Possible fixes 

  * igt@i915_selftest@live@execlists:
- fi-kbl-8809g:   [INCOMPLETE][5] ([i915#1874]) -> [PASS][6]
   [5]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8506/fi-kbl-8809g/igt@i915_selftest@l...@execlists.html
   [6]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17715/fi-kbl-8809g/igt@i915_selftest@l...@execlists.html

  
  [i915#1874]: https://gitlab.freedesktop.org/drm/intel/issues/1874
  [i915#262]: https://gitlab.freedesktop.org/drm/intel/issues/262
  [i915#976]: https://gitlab.freedesktop.org/drm/intel/issues/976


Participating hosts (49 -> 43)
--

  Additional (1): fi-kbl-7560u 
  Missing(7): fi-ilk-m540 fi-hsw-4200u fi-byt-squawks fi-bsw-cyan fi-tgl-y 
fi-byt-clapper fi-bdw-samus 


Build changes
-

  * Linux: CI_DRM_8506 -> Patchwork_17715

  CI-20190529: 20190529
  CI_DRM_8506: d6a73e9084ff6adfabbad014bc294d254484f304 @ 
git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5661: a772a7c7a761c6125bc0af5284ad603478107737 @ 
git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_17715: c47e2d0db5333eee93263b6a8fdd110fa51c8bb7 @ 
git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

c47e2d0db533 drm/i915/gem: Suppress some random warnings

== Logs ==

For more details see: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17715/index.html
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915/ehl: Wa_22010271021 (rev2)

2020-05-19 Thread Patchwork
== Series Details ==

Series: drm/i915/ehl: Wa_22010271021 (rev2)
URL   : https://patchwork.freedesktop.org/series/77428/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
60adbb75a3d8 drm/i915/ehl: Wa_22010271021
-:12: WARNING:COMMIT_LOG_LONG_LINE: Possible unwrapped commit description 
(prefer a maximum 75 chars per line)
#12: 
From: Intel-gfx  On Behalf Of Matt 
Atwood

-:39: WARNING:NO_AUTHOR_SIGN_OFF: Missing Signed-off-by: line by nominal patch 
author 'Intel-gfx '

total: 0 errors, 2 warnings, 0 checks, 12 lines checked

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915/ehl: Wa_22010271021 (rev2)

2020-05-19 Thread Patchwork
== Series Details ==

Series: drm/i915/ehl: Wa_22010271021 (rev2)
URL   : https://patchwork.freedesktop.org/series/77428/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_8506 -> Patchwork_17716


Summary
---

  **SUCCESS**

  No regressions found.

  External URL: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17716/index.html

Known issues


  Here are the changes found in Patchwork_17716 that come from known issues:

### IGT changes ###

 Possible fixes 

  * igt@i915_selftest@live@execlists:
- fi-kbl-8809g:   [INCOMPLETE][1] ([i915#1874]) -> [PASS][2]
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8506/fi-kbl-8809g/igt@i915_selftest@l...@execlists.html
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17716/fi-kbl-8809g/igt@i915_selftest@l...@execlists.html

  
 Warnings 

  * igt@i915_pm_rpm@module-reload:
- fi-kbl-x1275:   [SKIP][3] ([fdo#109271]) -> [FAIL][4] ([i915#62])
   [3]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8506/fi-kbl-x1275/igt@i915_pm_...@module-reload.html
   [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17716/fi-kbl-x1275/igt@i915_pm_...@module-reload.html

  
  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [i915#1874]: https://gitlab.freedesktop.org/drm/intel/issues/1874
  [i915#62]: https://gitlab.freedesktop.org/drm/intel/issues/62


Participating hosts (49 -> 42)
--

  Missing(7): fi-ilk-m540 fi-hsw-4200u fi-byt-squawks fi-bsw-cyan fi-tgl-y 
fi-byt-clapper fi-bdw-samus 


Build changes
-

  * Linux: CI_DRM_8506 -> Patchwork_17716

  CI-20190529: 20190529
  CI_DRM_8506: d6a73e9084ff6adfabbad014bc294d254484f304 @ 
git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5661: a772a7c7a761c6125bc0af5284ad603478107737 @ 
git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_17716: 60adbb75a3d8f272c58ce2f5dc5bded2a5b2dc79 @ 
git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

60adbb75a3d8 drm/i915/ehl: Wa_22010271021

== Logs ==

For more details see: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17716/index.html
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915/ehl: Extend w/a 14010685332 to JSP/MCC (rev2)

2020-05-19 Thread Patchwork
== Series Details ==

Series: drm/i915/ehl: Extend w/a 14010685332 to JSP/MCC (rev2)
URL   : https://patchwork.freedesktop.org/series/77382/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
e71c461a0da4 drm/i915/ehl: Extend w/a 14010685332 to JSP/MCC
-:26: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#26: FILE: drivers/gpu/drm/i915/i915_irq.c:2907:
+   if ((INTEL_PCH_TYPE(dev_priv) >= PCH_ICP) &&
+  (INTEL_PCH_TYPE(dev_priv) <= PCH_MCC)) {

total: 0 errors, 0 warnings, 1 checks, 11 lines checked
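
For reference, the continuation style that CHECK asks for looks roughly like the
sketch below (only the alignment changes; the condition is reproduced from the
report and the if-body is a placeholder, since it is not shown there):

	if ((INTEL_PCH_TYPE(dev_priv) >= PCH_ICP) &&
	    (INTEL_PCH_TYPE(dev_priv) <= PCH_MCC)) {
		/* body as in the patch */
	}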

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915/ehl: Extend w/a 14010685332 to JSP/MCC (rev2)

2020-05-19 Thread Patchwork
== Series Details ==

Series: drm/i915/ehl: Extend w/a 14010685332 to JSP/MCC (rev2)
URL   : https://patchwork.freedesktop.org/series/77382/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_8506 -> Patchwork_17717


Summary
---

  **SUCCESS**

  No regressions found.

  External URL: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17717/index.html

Known issues


  Here are the changes found in Patchwork_17717 that come from known issues:

### IGT changes ###

 Issues hit 

  * igt@i915_selftest@live@active:
- fi-icl-y:   [PASS][1] -> [DMESG-FAIL][2] ([i915#765])
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8506/fi-icl-y/igt@i915_selftest@l...@active.html
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17717/fi-icl-y/igt@i915_selftest@l...@active.html

  * igt@i915_selftest@live@sanitycheck:
- fi-bwr-2160:[PASS][3] -> [INCOMPLETE][4] ([i915#489])
   [3]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8506/fi-bwr-2160/igt@i915_selftest@l...@sanitycheck.html
   [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17717/fi-bwr-2160/igt@i915_selftest@l...@sanitycheck.html

  
 Possible fixes 

  * igt@i915_selftest@live@execlists:
- fi-kbl-8809g:   [INCOMPLETE][5] ([i915#1874]) -> [PASS][6]
   [5]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8506/fi-kbl-8809g/igt@i915_selftest@l...@execlists.html
   [6]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17717/fi-kbl-8809g/igt@i915_selftest@l...@execlists.html

  
  [i915#1874]: https://gitlab.freedesktop.org/drm/intel/issues/1874
  [i915#489]: https://gitlab.freedesktop.org/drm/intel/issues/489
  [i915#765]: https://gitlab.freedesktop.org/drm/intel/issues/765


Participating hosts (49 -> 42)
--

  Missing(7): fi-ilk-m540 fi-hsw-4200u fi-byt-squawks fi-bsw-cyan fi-tgl-y 
fi-byt-clapper fi-bdw-samus 


Build changes
-

  * Linux: CI_DRM_8506 -> Patchwork_17717

  CI-20190529: 20190529
  CI_DRM_8506: d6a73e9084ff6adfabbad014bc294d254484f304 @ 
git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5661: a772a7c7a761c6125bc0af5284ad603478107737 @ 
git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_17717: e71c461a0da44e3fd095719790961a6606eb4e8c @ 
git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

e71c461a0da4 drm/i915/ehl: Extend w/a 14010685332 to JSP/MCC

== Logs ==

For more details see: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17717/index.html
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Consider DBuf bandwidth when calculating CDCLK (rev15)

2020-05-19 Thread Patchwork
== Series Details ==

Series: Consider DBuf bandwidth when calculating CDCLK (rev15)
URL   : https://patchwork.freedesktop.org/series/74739/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
42922a1cf4d9 drm/i915: Decouple cdclk calculation from modeset checks
a2e2a5f43cd7 drm/i915: Extract cdclk requirements checking to separate function
cc857a15d370 drm/i915: Check plane configuration properly
-:32: WARNING:NO_AUTHOR_SIGN_OFF: Missing Signed-off-by: line by nominal patch 
author 'Stanislav Lisovskiy '

total: 0 errors, 1 warnings, 0 checks, 14 lines checked
4765b732c387 drm/i915: Plane configuration affects CDCLK in Gen11+
-:23: WARNING:NO_AUTHOR_SIGN_OFF: Missing Signed-off-by: line by nominal patch 
author 'Stanislav Lisovskiy '

total: 0 errors, 1 warnings, 0 checks, 8 lines checked
2405cfa20a90 drm/i915: Introduce for_each_dbuf_slice_in_mask macro
-:25: CHECK:MACRO_ARG_REUSE: Macro argument reuse '__slice' - possible 
side-effects?
#25: FILE: drivers/gpu/drm/i915/display/intel_display.h:190:
+#define for_each_dbuf_slice_in_mask(__slice, __mask) \
+   for ((__slice) = DBUF_S1; (__slice) < I915_MAX_DBUF_SLICES; 
(__slice)++) \
+   for_each_if((BIT(__slice)) & (__mask))

total: 0 errors, 0 warnings, 1 checks, 20 lines checked
b74b713c823f drm/i915: Adjust CDCLK accordingly to our DBuf bw needs
-:164: WARNING:LINE_SPACING: Missing a blank line after declarations
#164: FILE: drivers/gpu/drm/i915/display/intel_bw.c:492:
+   int ret = intel_atomic_lock_global_state(&new_bw_state->base);
+   if (ret)

-:203: WARNING:LINE_SPACING: Missing a blank line after declarations
#203: FILE: drivers/gpu/drm/i915/display/intel_bw.c:531:
+   int ret = intel_atomic_lock_global_state(&new_bw_state->base);
+   if (ret)

total: 0 errors, 2 warnings, 0 checks, 395 lines checked
2be3fecc6320 drm/i915: Remove unneeded hack now for CDCLK
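
A side note on the MACRO_ARG_REUSE check above: checkpatch is conservative here
because any macro that expands a parameter more than once also evaluates its
side effects more than once. A minimal, generic illustration follows (SQUARE()
and next() are made up for this sketch; this is not the i915 macro itself):

	#include <stdio.h>

	#define SQUARE(x) ((x) * (x))	/* (x) expands twice, like __slice above */

	static int counter;

	static int next(void)
	{
		return ++counter;
	}

	int main(void)
	{
		/*
		 * SQUARE(next()) expands to (next() * next()): next() runs
		 * twice, so the macro does not behave like a function call.
		 * Callers of for_each_dbuf_slice_in_mask() avoid the hazard
		 * by always passing a plain variable as __slice.
		 */
		printf("SQUARE(next()) = %d, counter = %d\n",
		       SQUARE(next()), counter);
		return 0;
	}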

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] ✗ Fi.CI.SPARSE: warning for Consider DBuf bandwidth when calculating CDCLK (rev15)

2020-05-19 Thread Patchwork
== Series Details ==

Series: Consider DBuf bandwidth when calculating CDCLK (rev15)
URL   : https://patchwork.freedesktop.org/series/74739/
State : warning

== Summary ==

$ dim sparse --fast origin/drm-tip
Sparse version: v0.6.0
Fast mode used, each commit won't be checked separately.
-
+drivers/gpu/drm/i915/display/intel_display.c:1222:22: error: Expected constant 
expression in case statement
+drivers/gpu/drm/i915/display/intel_display.c:1225:22: error: Expected constant 
expression in case statement
+drivers/gpu/drm/i915/display/intel_display.c:1228:22: error: Expected constant 
expression in case statement
+drivers/gpu/drm/i915/display/intel_display.c:1231:22: error: Expected constant 
expression in case statement
+drivers/gpu/drm/i915/gem/i915_gem_context.c:2274:17: error: bad integer 
constant expression
+drivers/gpu/drm/i915/gem/i915_gem_context.c:2275:17: error: bad integer 
constant expression
+drivers/gpu/drm/i915/gem/i915_gem_context.c:2276:17: error: bad integer 
constant expression
+drivers/gpu/drm/i915/gem/i915_gem_context.c:2277:17: error: bad integer 
constant expression
+drivers/gpu/drm/i915/gem/i915_gem_context.c:2278:17: error: bad integer 
constant expression
+drivers/gpu/drm/i915/gem/i915_gem_context.c:2279:17: error: bad integer 
constant expression
+drivers/gpu/drm/i915/gt/intel_reset.c:1310:5: warning: context imbalance in 
'intel_gt_reset_trylock' - different lock contexts for basic block
+drivers/gpu/drm/i915/gt/sysfs_engines.c:61:10: error: bad integer constant 
expression
+drivers/gpu/drm/i915/gt/sysfs_engines.c:62:10: error: bad integer constant 
expression
+drivers/gpu/drm/i915/gt/sysfs_engines.c:66:10: error: bad integer constant 
expression
+drivers/gpu/drm/i915/gvt/mmio.c:287:23: warning: memcpy with byte count of 
279040
+drivers/gpu/drm/i915/i915_perf.c:1425:15: warning: memset with byte count of 
16777216
+drivers/gpu/drm/i915/i915_perf.c:1479:15: warning: memset with byte count of 
16777216
+drivers/gpu/drm/i915/intel_wakeref.c:137:19: warning: context imbalance in 
'wakeref_auto_timeout' - unexpected unlock
+./include/linux/compiler.h:199:9: warning: context imbalance in 
'engines_sample' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'fwtable_read16' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'fwtable_read32' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'fwtable_read64' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'fwtable_read8' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'fwtable_write16' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'fwtable_write32' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'fwtable_write8' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'gen11_fwtable_read16' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'gen11_fwtable_read32' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'gen11_fwtable_read64' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'gen11_fwtable_read8' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'gen11_fwtable_write16' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'gen11_fwtable_write32' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'gen11_fwtable_write8' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'gen12_fwtable_read16' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'gen12_fwtable_read32' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'gen12_fwtable_read64' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'gen12_fwtable_read8' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'gen12_fwtable_write16' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'gen12_fwtable_write32' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 
'gen12_fwtable_write8' - different lock contexts for basic block
+./include/linux/spinlock.h:408:9: warning: context imbalance in 'gen6_read16' 

Re: [Intel-gfx] [PATCH v9 6/7] drm/i915: Adjust CDCLK accordingly to our DBuf bw needs

2020-05-19 Thread Manasi Navare
On Wed, May 20, 2020 at 12:25:25AM +0300, Stanislav Lisovskiy wrote:
> According to BSpec max BW per slice is calculated using formula
> Max BW = CDCLK * 64. Currently when calculating min CDCLK we
> account only per plane requirements, however in order to avoid
> FIFO underruns we need to estimate the accumulated BW consumed by
> all planes (ddb entries, basically) residing on that particular
> DBuf slice. This will allow us to put CDCLK lower and save power
> when we don't need that much bandwidth or gain additional
> performance once plane consumption grows.
> 
> v2: - Fix long line warning
> - Limited new DBuf bw checks to only gens >= 11
> 
> v3: - Let's track used DBuf bw per slice and per crtc in bw state
>   (or maybe in a DBuf state in future); that way we don't need
>   to have all crtcs in the state, only those where we detect we
>   are actually going to change cdclk, just the same way as we
>   do with other stuff, i.e. intel_atomic_serialize_global_state
>   and co. Just as per Ville's paradigm.
> - Made dbuf bw calculation procedure look nicer by introducing
>   for_each_dbuf_slice_in_mask - we often will now need to iterate
>   slices using mask.
> - According to experimental results CDCLK * 64 accounts for
>   overall bandwidth across all dbufs, not per dbuf.
> 
> v4: - Fixed missing const(Ville)
> - Removed spurious whitespaces(Ville)
> - Fixed local variable init(reduced scope where not needed)
> - Added some comments about data rate for planar formats
> - Changed struct intel_crtc_bw to intel_dbuf_bw
> - Moved dbuf bw calculation to intel_compute_min_cdclk(Ville)
> 
> v5: - Removed unneeded macro
> 
> v6: - Prevent too frequent CDCLK switching back and forth:
>   Always switch to higher CDCLK when needed to prevent bandwidth
>   issues, however don't switch to lower CDCLK earlier than once
>   in 30 minutes in order to prevent constant modeset blinking.
>   We could of course not switch back at all, however this is
>   bad from power consumption point of view.
> 
> v7: - Fixed to track cdclk using bw_state, modeset will be now
>   triggered only when CDCLK change is really needed.
> 
> v8: - Lock global state if bw_state->min_cdclk is changed.
> - Try getting bw_state only if there are crtcs in the commit
>   (need to have read-locked global state)
> 
> v9: - Do not do Dbuf bw check for gens < 9 - triggers WARN
>   as ddb_size is 0.
> 
> v10: - Lock global state for older gens as well.
> 
> v11: - Define new bw_calc_min_cdclk hook, instead of using
>a condition(Manasi Navare)
> 
> v12: - Fixed rebase conflict
> 
> Signed-off-by: Stanislav Lisovskiy 

Looks good now with the hooks

Reviewed-by: Manasi Navare 

Manasi

> ---
>  drivers/gpu/drm/i915/display/intel_bw.c  | 119 ++-
>  drivers/gpu/drm/i915/display/intel_bw.h  |  10 ++
>  drivers/gpu/drm/i915/display/intel_cdclk.c   |  28 -
>  drivers/gpu/drm/i915/display/intel_cdclk.h   |   1 -
>  drivers/gpu/drm/i915/display/intel_display.c |  39 +-
>  drivers/gpu/drm/i915/i915_drv.h  |   1 +
>  drivers/gpu/drm/i915/intel_pm.c  |  31 -
>  drivers/gpu/drm/i915/intel_pm.h  |   4 +
>  8 files changed, 218 insertions(+), 15 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_bw.c 
> b/drivers/gpu/drm/i915/display/intel_bw.c
> index fef04e2d954e..cb614b624e20 100644
> --- a/drivers/gpu/drm/i915/display/intel_bw.c
> +++ b/drivers/gpu/drm/i915/display/intel_bw.c
> @@ -6,11 +6,12 @@
>  #include 
>  
>  #include "intel_bw.h"
> +#include "intel_pm.h"
>  #include "intel_display_types.h"
>  #include "intel_sideband.h"
>  #include "intel_atomic.h"
>  #include "intel_pm.h"
> -
> +#include "intel_cdclk.h"
>  
>  /* Parameters for Qclk Geyserville (QGV) */
>  struct intel_qgv_point {
> @@ -343,7 +344,6 @@ static unsigned int intel_bw_crtc_data_rate(const struct 
> intel_crtc_state *crtc_
>  
>   return data_rate;
>  }
> -
>  void intel_bw_crtc_update(struct intel_bw_state *bw_state,
> const struct intel_crtc_state *crtc_state)
>  {
> @@ -420,6 +420,121 @@ intel_atomic_get_bw_state(struct intel_atomic_state 
> *state)
>   return to_intel_bw_state(bw_state);
>  }
>  
> +int skl_bw_calc_min_cdclk(struct intel_atomic_state *state)
> +{
> + struct drm_i915_private *dev_priv = to_i915(state->base.dev);
> + int i;
> + const struct intel_crtc_state *crtc_state;
> + struct intel_crtc *crtc;
> + int max_bw = 0;
> + int slice_id;
> + struct intel_bw_state *new_bw_state = NULL;
> + struct intel_bw_state *old_bw_state = NULL;
> +
> + for_each_new_intel_crtc_in_state(state, crtc, crtc_state, i) {
> + enum plane_id plane_id;
> + struct intel_dbuf_bw *crtc_bw;
> +
> + new_bw_state = intel_atomic_get_bw_state(state);
> + if (IS_ERR(new_bw_state))
> + return PTR_ERR(ne
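
The arithmetic described in the changelog above (Max BW = CDCLK * 64, with the
bandwidth accumulated over the DBuf slices in use) can be sketched roughly as
follows. This is an illustration only: dbuf_bw_sketch, used_bw[] and the unit
handling are simplified placeholders, not the driver's actual code.

	#include <stdio.h>

	#define NUM_DBUF_SLICES 2	/* placeholder; gen-dependent in the driver */

	struct dbuf_bw_sketch {
		unsigned int used_bw[NUM_DBUF_SLICES];	/* per-slice data rate, kB/s */
	};

	/* Max BW = CDCLK * 64  =>  min CDCLK >= total used BW / 64, rounded up. */
	static unsigned int min_cdclk_for_dbuf_bw(const struct dbuf_bw_sketch *dbuf)
	{
		unsigned int total_bw = 0;
		int slice;

		for (slice = 0; slice < NUM_DBUF_SLICES; slice++)
			total_bw += dbuf->used_bw[slice];

		return (total_bw + 63) / 64;
	}

	int main(void)
	{
		struct dbuf_bw_sketch dbuf = { .used_bw = { 4000000, 2500000 } };

		printf("min CDCLK ~ %u kHz\n", min_cdclk_for_dbuf_bw(&dbuf));
		return 0;
	}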

Re: [Intel-gfx] [PATCH v2 02/22] x86/gpu: add RKL stolen memory support

2020-05-19 Thread Lucas De Marchi
Cc'ing x...@kernel.org and maintainers

On Wed, May 6, 2020 at 4:52 AM Srivatsa, Anusha
 wrote:
>
>
>
> > -Original Message-
> > From: Intel-gfx  On Behalf Of Matt
> > Roper
> > Sent: Tuesday, May 5, 2020 4:22 AM
> > To: intel-gfx@lists.freedesktop.org
> > Cc: De Marchi, Lucas 
> > Subject: [Intel-gfx] [PATCH v2 02/22] x86/gpu: add RKL stolen memory support
> >
> > RKL re-uses the same stolen memory registers as TGL and ICL.
> >
> > Bspec: 52055
> > Bspec: 49589
> > Bspec: 49636
> > Cc: Lucas De Marchi 
> > Signed-off-by: Matt Roper 
>
> Confirmed with Spec.
> Reviewed-by: Anusha Srivatsa 
>
> > ---
> >  arch/x86/kernel/early-quirks.c | 1 +
> >  1 file changed, 1 insertion(+)
> >
> > diff --git a/arch/x86/kernel/early-quirks.c b/arch/x86/kernel/early-quirks.c
> > index 2f9ec14be3b1..a4b5af03dcc1 100644
> > --- a/arch/x86/kernel/early-quirks.c
> > +++ b/arch/x86/kernel/early-quirks.c
> > @@ -550,6 +550,7 @@ static const struct pci_device_id intel_early_ids[]
> > __initconst = {
> >   INTEL_ICL_11_IDS(&gen11_early_ops),
> >   INTEL_EHL_IDS(&gen11_early_ops),
> >   INTEL_TGL_12_IDS(&gen11_early_ops),
> > + INTEL_RKL_IDS(&gen11_early_ops),

Trying to apply this to drm-intel-next-queued, checkpatch rightfully complains:

35aad4f58736 (HEAD -> drm-intel-next-queued) x86/gpu: add RKL stolen
memory support
The following files are outside of i915 maintenance scope:
arch/x86/kernel/early-quirks.c

Can we get an ack?  Going forward, for simple changes like this, do
you prefer to still ack them, or should we just apply them to our tree?

thanks
Lucas De Marchi

> >  };
> >
> >  struct resource intel_graphics_stolen_res __ro_after_init =
> > DEFINE_RES_MEM(0, 0);
> > --
> > 2.24.1
> >
> > ___
> > Intel-gfx mailing list
> > Intel-gfx@lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/intel-gfx
> ___
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx



-- 
Lucas De Marchi
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] ✗ Fi.CI.BAT: failure for Consider DBuf bandwidth when calculating CDCLK (rev15)

2020-05-19 Thread Patchwork
== Series Details ==

Series: Consider DBuf bandwidth when calculating CDCLK (rev15)
URL   : https://patchwork.freedesktop.org/series/74739/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_8506 -> Patchwork_17718


Summary
---

  **FAILURE**

  Serious unknown changes coming with Patchwork_17718 absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in Patchwork_17718, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  External URL: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17718/index.html

Possible new issues
---

  Here are the unknown changes that may have been introduced in Patchwork_17718:

### IGT changes ###

 Possible regressions 

  * igt@i915_selftest@live@client:
- fi-bsw-kefka:   [PASS][1] -> [INCOMPLETE][2]
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8506/fi-bsw-kefka/igt@i915_selftest@l...@client.html
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17718/fi-bsw-kefka/igt@i915_selftest@l...@client.html

  
Known issues


  Here are the changes found in Patchwork_17718 that come from known issues:

### IGT changes ###

 Issues hit 

  * igt@i915_pm_rpm@module-reload:
- fi-glk-dsi: [PASS][3] -> [TIMEOUT][4] ([i915#1288])
   [3]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8506/fi-glk-dsi/igt@i915_pm_...@module-reload.html
   [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17718/fi-glk-dsi/igt@i915_pm_...@module-reload.html

  * igt@i915_selftest@live@execlists:
- fi-kbl-guc: [PASS][5] -> [INCOMPLETE][6] ([i915#1874])
   [5]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8506/fi-kbl-guc/igt@i915_selftest@l...@execlists.html
   [6]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17718/fi-kbl-guc/igt@i915_selftest@l...@execlists.html

  * igt@kms_chamelium@hdmi-hpd-fast:
- fi-kbl-7500u:   [PASS][7] -> [FAIL][8] ([i915#227])
   [7]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8506/fi-kbl-7500u/igt@kms_chamel...@hdmi-hpd-fast.html
   [8]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17718/fi-kbl-7500u/igt@kms_chamel...@hdmi-hpd-fast.html

  
 Possible fixes 

  * igt@i915_selftest@live@execlists:
- fi-kbl-8809g:   [INCOMPLETE][9] ([i915#1874]) -> [PASS][10]
   [9]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8506/fi-kbl-8809g/igt@i915_selftest@l...@execlists.html
   [10]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17718/fi-kbl-8809g/igt@i915_selftest@l...@execlists.html

  
  [i915#1288]: https://gitlab.freedesktop.org/drm/intel/issues/1288
  [i915#1874]: https://gitlab.freedesktop.org/drm/intel/issues/1874
  [i915#227]: https://gitlab.freedesktop.org/drm/intel/issues/227


Participating hosts (49 -> 44)
--

  Additional (1): fi-kbl-7560u 
  Missing(6): fi-ilk-m540 fi-hsw-4200u fi-byt-squawks fi-bsw-cyan 
fi-byt-clapper fi-bdw-samus 


Build changes
-

  * Linux: CI_DRM_8506 -> Patchwork_17718

  CI-20190529: 20190529
  CI_DRM_8506: d6a73e9084ff6adfabbad014bc294d254484f304 @ 
git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5661: a772a7c7a761c6125bc0af5284ad603478107737 @ 
git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_17718: 2be3fecc6320f61ccbd0898132dcb7eedae7640b @ 
git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

2be3fecc6320 drm/i915: Remove unneeded hack now for CDCLK
b74b713c823f drm/i915: Adjust CDCLK accordingly to our DBuf bw needs
2405cfa20a90 drm/i915: Introduce for_each_dbuf_slice_in_mask macro
4765b732c387 drm/i915: Plane configuration affects CDCLK in Gen11+
cc857a15d370 drm/i915: Check plane configuration properly
a2e2a5f43cd7 drm/i915: Extract cdclk requirements checking to separate function
42922a1cf4d9 drm/i915: Decouple cdclk calculation from modeset checks

== Logs ==

For more details see: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17718/index.html
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915/hdcp: Add additional R0' wait

2020-05-19 Thread Patchwork
== Series Details ==

Series: drm/i915/hdcp: Add additional R0' wait
URL   : https://patchwork.freedesktop.org/series/77439/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_8506 -> Patchwork_17719


Summary
---

  **SUCCESS**

  No regressions found.

  External URL: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17719/index.html

Known issues


  Here are the changes found in Patchwork_17719 that come from known issues:

### IGT changes ###

 Issues hit 

  * igt@kms_chamelium@dp-crc-fast:
- fi-icl-u2:  [PASS][1] -> [FAIL][2] ([i915#262])
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8506/fi-icl-u2/igt@kms_chamel...@dp-crc-fast.html
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17719/fi-icl-u2/igt@kms_chamel...@dp-crc-fast.html

  * igt@kms_chamelium@hdmi-hpd-fast:
- fi-kbl-7500u:   [PASS][3] -> [FAIL][4] ([i915#227])
   [3]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8506/fi-kbl-7500u/igt@kms_chamel...@hdmi-hpd-fast.html
   [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17719/fi-kbl-7500u/igt@kms_chamel...@hdmi-hpd-fast.html

  
 Possible fixes 

  * igt@i915_selftest@live@execlists:
- fi-kbl-8809g:   [INCOMPLETE][5] ([i915#1874]) -> [PASS][6]
   [5]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8506/fi-kbl-8809g/igt@i915_selftest@l...@execlists.html
   [6]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17719/fi-kbl-8809g/igt@i915_selftest@l...@execlists.html

  
  [i915#1874]: https://gitlab.freedesktop.org/drm/intel/issues/1874
  [i915#227]: https://gitlab.freedesktop.org/drm/intel/issues/227
  [i915#262]: https://gitlab.freedesktop.org/drm/intel/issues/262


Participating hosts (49 -> 44)
--

  Additional (1): fi-kbl-7560u 
  Missing(6): fi-ilk-m540 fi-hsw-4200u fi-byt-squawks fi-bsw-cyan 
fi-byt-clapper fi-bdw-samus 


Build changes
-

  * Linux: CI_DRM_8506 -> Patchwork_17719

  CI-20190529: 20190529
  CI_DRM_8506: d6a73e9084ff6adfabbad014bc294d254484f304 @ 
git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5661: a772a7c7a761c6125bc0af5284ad603478107737 @ 
git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_17719: 3cae33a9b60fc776426fec12740eb692aa003dbe @ 
git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

3cae33a9b60f drm/i915/hdcp: Add additional R0' wait

== Logs ==

For more details see: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17719/index.html
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915/gt: Trace the CS interrupt

2020-05-19 Thread Patchwork
== Series Details ==

Series: drm/i915/gt: Trace the CS interrupt
URL   : https://patchwork.freedesktop.org/series/77441/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_8506 -> Patchwork_17720


Summary
---

  **SUCCESS**

  No regressions found.

  External URL: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17720/index.html

Known issues


  Here are the changes found in Patchwork_17720 that come from known issues:

### IGT changes ###

 Possible fixes 

  * igt@i915_selftest@live@execlists:
- fi-kbl-8809g:   [INCOMPLETE][1] ([i915#1874]) -> [PASS][2]
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8506/fi-kbl-8809g/igt@i915_selftest@l...@execlists.html
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17720/fi-kbl-8809g/igt@i915_selftest@l...@execlists.html

  
  [i915#1874]: https://gitlab.freedesktop.org/drm/intel/issues/1874


Participating hosts (49 -> 44)
--

  Additional (1): fi-kbl-7560u 
  Missing(6): fi-ilk-m540 fi-hsw-4200u fi-byt-squawks fi-bsw-cyan 
fi-byt-clapper fi-bdw-samus 


Build changes
-

  * Linux: CI_DRM_8506 -> Patchwork_17720

  CI-20190529: 20190529
  CI_DRM_8506: d6a73e9084ff6adfabbad014bc294d254484f304 @ 
git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5661: a772a7c7a761c6125bc0af5284ad603478107737 @ 
git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_17720: 9458f7578d71b00ddc71153d8499ad2fbcedcf4c @ 
git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

9458f7578d71 drm/i915/gt: Trace the CS interrupt

== Logs ==

For more details see: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17720/index.html
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] ✓ Fi.CI.IGT: success for series starting with [CI,1/3] drm/i915/selftests: Add tests for timeslicing virtual engines

2020-05-19 Thread Patchwork
== Series Details ==

Series: series starting with [CI,1/3] drm/i915/selftests: Add tests for 
timeslicing virtual engines
URL   : https://patchwork.freedesktop.org/series/77414/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_8505_full -> Patchwork_17712_full


Summary
---

  **SUCCESS**

  No regressions found.

  

Known issues


  Here are the changes found in Patchwork_17712_full that come from known 
issues:

### IGT changes ###

 Issues hit 

  * igt@kms_hdr@bpc-switch-suspend:
- shard-apl:  [PASS][1] -> [DMESG-WARN][2] ([i915#180]) +2 similar 
issues
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8505/shard-apl7/igt@kms_...@bpc-switch-suspend.html
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17712/shard-apl1/igt@kms_...@bpc-switch-suspend.html

  * igt@kms_plane_alpha_blend@pipe-c-coverage-7efc:
- shard-skl:  [PASS][3] -> [FAIL][4] ([fdo#108145] / [i915#265]) +1 
similar issue
   [3]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8505/shard-skl3/igt@kms_plane_alpha_bl...@pipe-c-coverage-7efc.html
   [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17712/shard-skl2/igt@kms_plane_alpha_bl...@pipe-c-coverage-7efc.html

  * igt@kms_psr@psr2_primary_page_flip:
- shard-iclb: [PASS][5] -> [SKIP][6] ([fdo#109441]) +2 similar 
issues
   [5]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8505/shard-iclb2/igt@kms_psr@psr2_primary_page_flip.html
   [6]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17712/shard-iclb8/igt@kms_psr@psr2_primary_page_flip.html

  
 Possible fixes 

  * igt@gem_workarounds@suspend-resume:
- shard-apl:  [DMESG-WARN][7] ([i915#180] / [i915#95]) -> [PASS][8]
   [7]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8505/shard-apl4/igt@gem_workarou...@suspend-resume.html
   [8]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17712/shard-apl1/igt@gem_workarou...@suspend-resume.html
- shard-kbl:  [DMESG-WARN][9] ([i915#180] / [i915#93] / [i915#95]) 
-> [PASS][10]
   [9]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8505/shard-kbl6/igt@gem_workarou...@suspend-resume.html
   [10]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17712/shard-kbl6/igt@gem_workarou...@suspend-resume.html

  * igt@i915_pm_dc@dc5-psr:
- shard-skl:  [INCOMPLETE][11] ([i915#198]) -> [PASS][12] +1 
similar issue
   [11]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8505/shard-skl5/igt@i915_pm...@dc5-psr.html
   [12]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17712/shard-skl10/igt@i915_pm...@dc5-psr.html

  * igt@i915_suspend@fence-restore-tiled2untiled:
- shard-apl:  [DMESG-WARN][13] ([i915#180]) -> [PASS][14] +2 
similar issues
   [13]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8505/shard-apl6/igt@i915_susp...@fence-restore-tiled2untiled.html
   [14]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17712/shard-apl2/igt@i915_susp...@fence-restore-tiled2untiled.html
- shard-skl:  [INCOMPLETE][15] ([i915#69]) -> [PASS][16]
   [15]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8505/shard-skl10/igt@i915_susp...@fence-restore-tiled2untiled.html
   [16]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17712/shard-skl3/igt@i915_susp...@fence-restore-tiled2untiled.html

  * igt@kms_cursor_crc@pipe-b-cursor-256x256-sliding:
- shard-apl:  [FAIL][17] ([i915#54]) -> [PASS][18]
   [17]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8505/shard-apl4/igt@kms_cursor_...@pipe-b-cursor-256x256-sliding.html
   [18]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17712/shard-apl3/igt@kms_cursor_...@pipe-b-cursor-256x256-sliding.html

  * igt@kms_cursor_crc@pipe-c-cursor-suspend:
- shard-kbl:  [DMESG-WARN][19] ([i915#180]) -> [PASS][20] +4 
similar issues
   [19]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8505/shard-kbl7/igt@kms_cursor_...@pipe-c-cursor-suspend.html
   [20]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17712/shard-kbl6/igt@kms_cursor_...@pipe-c-cursor-suspend.html

  * igt@kms_hdr@bpc-switch:
- shard-skl:  [FAIL][21] ([i915#1188]) -> [PASS][22]
   [21]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8505/shard-skl2/igt@kms_...@bpc-switch.html
   [22]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17712/shard-skl9/igt@kms_...@bpc-switch.html

  * igt@kms_pipe_crc_basic@hang-read-crc-pipe-a:
- shard-skl:  [FAIL][23] ([i915#53]) -> [PASS][24]
   [23]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8505/shard-skl6/igt@kms_pipe_crc_ba...@hang-read-crc-pipe-a.html
   [24]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17712/shard-skl10/igt@kms_pipe_crc_ba...@hang-read-crc-pipe-a.html

  * igt@kms_plane_alpha_blend@pipe-a-constant-alpha-min:
- shard-skl:  [FAIL][25] ([fdo#108145] / [i915#265]) -> [PASS][26] 
+2 similar issues
   [25]: 
https://intel-

[Intel-gfx] ✓ Fi.CI.IGT: success for dma-fence: add might_sleep annotation to _wait()

2020-05-19 Thread Patchwork
== Series Details ==

Series: dma-fence: add might_sleep annotation to _wait()
URL   : https://patchwork.freedesktop.org/series/77417/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_8505_full -> Patchwork_17713_full


Summary
---

  **SUCCESS**

  No regressions found.

  

Possible new issues
---

  Here are the unknown changes that may have been introduced in 
Patchwork_17713_full:

### IGT changes ###

 Suppressed 

  The following results come from untrusted machines, tests, or statuses.
  They do not affect the overall result.

  * {igt@gem_exec_balancer@sliced}:
- shard-tglb: [PASS][1] -> [FAIL][2]
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8505/shard-tglb7/igt@gem_exec_balan...@sliced.html
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17713/shard-tglb1/igt@gem_exec_balan...@sliced.html
- shard-iclb: [PASS][3] -> [FAIL][4]
   [3]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8505/shard-iclb7/igt@gem_exec_balan...@sliced.html
   [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17713/shard-iclb2/igt@gem_exec_balan...@sliced.html

  
Known issues


  Here are the changes found in Patchwork_17713_full that come from known 
issues:

### IGT changes ###

 Issues hit 

  * igt@gen9_exec_parse@allowed-all:
- shard-apl:  [PASS][5] -> [DMESG-WARN][6] ([i915#1436] / 
[i915#716])
   [5]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8505/shard-apl7/igt@gen9_exec_pa...@allowed-all.html
   [6]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17713/shard-apl2/igt@gen9_exec_pa...@allowed-all.html

  * igt@i915_suspend@fence-restore-untiled:
- shard-apl:  [PASS][7] -> [DMESG-WARN][8] ([i915#180])
   [7]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8505/shard-apl6/igt@i915_susp...@fence-restore-untiled.html
   [8]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17713/shard-apl6/igt@i915_susp...@fence-restore-untiled.html

  * igt@kms_cursor_crc@pipe-c-cursor-suspend:
- shard-skl:  [PASS][9] -> [FAIL][10] ([i915#54])
   [9]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8505/shard-skl9/igt@kms_cursor_...@pipe-c-cursor-suspend.html
   [10]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17713/shard-skl4/igt@kms_cursor_...@pipe-c-cursor-suspend.html

  * igt@kms_fbcon_fbt@psr-suspend:
- shard-skl:  [PASS][11] -> [INCOMPLETE][12] ([i915#69])
   [11]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8505/shard-skl6/igt@kms_fbcon_...@psr-suspend.html
   [12]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17713/shard-skl1/igt@kms_fbcon_...@psr-suspend.html

  * igt@kms_flip_tiling@flip-changes-tiling-y:
- shard-kbl:  [PASS][13] -> [FAIL][14] ([i915#699] / [i915#93] / 
[i915#95])
   [13]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8505/shard-kbl1/igt@kms_flip_til...@flip-changes-tiling-y.html
   [14]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17713/shard-kbl2/igt@kms_flip_til...@flip-changes-tiling-y.html

  * igt@kms_hdr@bpc-switch-dpms:
- shard-skl:  [PASS][15] -> [FAIL][16] ([i915#1188])
   [15]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8505/shard-skl3/igt@kms_...@bpc-switch-dpms.html
   [16]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17713/shard-skl4/igt@kms_...@bpc-switch-dpms.html

  * igt@kms_psr@psr2_primary_page_flip:
- shard-iclb: [PASS][17] -> [SKIP][18] ([fdo#109441]) +2 similar 
issues
   [17]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8505/shard-iclb2/igt@kms_psr@psr2_primary_page_flip.html
   [18]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17713/shard-iclb7/igt@kms_psr@psr2_primary_page_flip.html

  * igt@kms_vblank@pipe-c-ts-continuation-suspend:
- shard-kbl:  [PASS][19] -> [DMESG-WARN][20] ([i915#180]) +1 
similar issue
   [19]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8505/shard-kbl4/igt@kms_vbl...@pipe-c-ts-continuation-suspend.html
   [20]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17713/shard-kbl1/igt@kms_vbl...@pipe-c-ts-continuation-suspend.html

  
 Possible fixes 

  * igt@gem_workarounds@suspend-resume:
- shard-apl:  [DMESG-WARN][21] ([i915#180] / [i915#95]) -> 
[PASS][22]
   [21]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8505/shard-apl4/igt@gem_workarou...@suspend-resume.html
   [22]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17713/shard-apl4/igt@gem_workarou...@suspend-resume.html
- shard-kbl:  [DMESG-WARN][23] ([i915#180] / [i915#93] / [i915#95]) 
-> [PASS][24]
   [23]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8505/shard-kbl6/igt@gem_workarou...@suspend-resume.html
   [24]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17713/shard-kbl1/igt@gem_workarou...@suspend-resume.html

  * igt@i915_pm_dc@dc5-psr:
- shard-skl:  [INCOMPLETE][25] ([i915#198]) -> [P

Re: [Intel-gfx] ✗ Fi.CI.BAT: failure for Consider DBuf bandwidth when calculating CDCLK (rev15)

2020-05-19 Thread Lisovskiy, Stanislav
Selftest failure as usual, and as usual not related to the patch.

Best Regards,

Lisovskiy Stanislav


From: Patchwork 
Sent: Wednesday, May 20, 2020 2:59 AM
To: Lisovskiy, Stanislav
Cc: intel-gfx@lists.freedesktop.org
Subject: ✗ Fi.CI.BAT: failure for Consider DBuf bandwidth when calculating 
CDCLK (rev15)

== Series Details ==

Series: Consider DBuf bandwidth when calculating CDCLK (rev15)
URL   : https://patchwork.freedesktop.org/series/74739/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_8506 -> Patchwork_17718


Summary
---

  **FAILURE**

  Serious unknown changes coming with Patchwork_17718 absolutely need to be
  verified manually.

  If you think the reported changes have nothing to do with the changes
  introduced in Patchwork_17718, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  External URL: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17718/index.html

Possible new issues
---

  Here are the unknown changes that may have been introduced in Patchwork_17718:

### IGT changes ###

 Possible regressions 

  * igt@i915_selftest@live@client:
- fi-bsw-kefka:   [PASS][1] -> [INCOMPLETE][2]
   [1]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8506/fi-bsw-kefka/igt@i915_selftest@l...@client.html
   [2]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17718/fi-bsw-kefka/igt@i915_selftest@l...@client.html


Known issues


  Here are the changes found in Patchwork_17718 that come from known issues:

### IGT changes ###

 Issues hit 

  * igt@i915_pm_rpm@module-reload:
- fi-glk-dsi: [PASS][3] -> [TIMEOUT][4] ([i915#1288])
   [3]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8506/fi-glk-dsi/igt@i915_pm_...@module-reload.html
   [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17718/fi-glk-dsi/igt@i915_pm_...@module-reload.html

  * igt@i915_selftest@live@execlists:
- fi-kbl-guc: [PASS][5] -> [INCOMPLETE][6] ([i915#1874])
   [5]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8506/fi-kbl-guc/igt@i915_selftest@l...@execlists.html
   [6]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17718/fi-kbl-guc/igt@i915_selftest@l...@execlists.html

  * igt@kms_chamelium@hdmi-hpd-fast:
- fi-kbl-7500u:   [PASS][7] -> [FAIL][8] ([i915#227])
   [7]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8506/fi-kbl-7500u/igt@kms_chamel...@hdmi-hpd-fast.html
   [8]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17718/fi-kbl-7500u/igt@kms_chamel...@hdmi-hpd-fast.html


 Possible fixes 

  * igt@i915_selftest@live@execlists:
- fi-kbl-8809g:   [INCOMPLETE][9] ([i915#1874]) -> [PASS][10]
   [9]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8506/fi-kbl-8809g/igt@i915_selftest@l...@execlists.html
   [10]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17718/fi-kbl-8809g/igt@i915_selftest@l...@execlists.html


  [i915#1288]: https://gitlab.freedesktop.org/drm/intel/issues/1288
  [i915#1874]: https://gitlab.freedesktop.org/drm/intel/issues/1874
  [i915#227]: https://gitlab.freedesktop.org/drm/intel/issues/227


Participating hosts (49 -> 44)
--

  Additional (1): fi-kbl-7560u
  Missing(6): fi-ilk-m540 fi-hsw-4200u fi-byt-squawks fi-bsw-cyan 
fi-byt-clapper fi-bdw-samus


Build changes
-

  * Linux: CI_DRM_8506 -> Patchwork_17718

  CI-20190529: 20190529
  CI_DRM_8506: d6a73e9084ff6adfabbad014bc294d254484f304 @ 
git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5661: a772a7c7a761c6125bc0af5284ad603478107737 @ 
git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_17718: 2be3fecc6320f61ccbd0898132dcb7eedae7640b @ 
git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

2be3fecc6320 drm/i915: Remove unneeded hack now for CDCLK
b74b713c823f drm/i915: Adjust CDCLK accordingly to our DBuf bw needs
2405cfa20a90 drm/i915: Introduce for_each_dbuf_slice_in_mask macro
4765b732c387 drm/i915: Plane configuration affects CDCLK in Gen11+
cc857a15d370 drm/i915: Check plane configuration properly
a2e2a5f43cd7 drm/i915: Extract cdclk requirements checking to separate function
42922a1cf4d9 drm/i915: Decouple cdclk calculation from modeset checks

== Logs ==

For more details see: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17718/index.html
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH] drm/i915/hdcp: Add additional R0' wait

2020-05-19 Thread Ramalingam C
On 2020-05-19 at 18:16:21 -0400, Sean Paul wrote:
> From: Sean Paul 
> 
> We're seeing some R0' mismatches in the field, particularly with
I think you want to say Vprime verification? delay is added in between
the retry for vprime verfication.

-Ram
> repeaters. I'm guessing the (already lenient) 300ms wait time isn't
> enough for some setups. So add an additional wait when R0' is
> mismatched.
> 
> Signed-off-by: Sean Paul 
> ---
>  drivers/gpu/drm/i915/display/intel_hdcp.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_hdcp.c 
> b/drivers/gpu/drm/i915/display/intel_hdcp.c
> index 2cbc4619b4ce..924a717a4fa4 100644
> --- a/drivers/gpu/drm/i915/display/intel_hdcp.c
> +++ b/drivers/gpu/drm/i915/display/intel_hdcp.c
> @@ -592,6 +592,9 @@ int intel_hdcp_auth_downstream(struct intel_connector 
> *connector)
> bstatus);
>   if (!ret)
>   break;
> +
> + /* Maybe the sink is lazy, give it some more time */
> + usleep_range(1, 5);
>   }
>  
>   if (i == tries) {
> -- 
> Sean Paul, Software Engineer, Google / Chromium OS
> 
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH] dma-fence: add might_sleep annotation to _wait()

2020-05-19 Thread Christian König

Am 19.05.20 um 15:27 schrieb Daniel Vetter:

Do it unconditionally; there's a separate peek function,
dma_fence_is_signaled(), which can be called from atomic context.

v2: Consensus calls for an unconditional might_sleep (Chris,
Christian)

Full audit:
- dma-fence.h: Uses MAX_SCHEDULE_TIMEOUT, good chance this sleeps
- dma-resv.c: Timeout always at least 1
- st-dma-fence.c: Safe to sleep in testcases
- amdgpu_cs.c: Both callers are for variants of the wait ioctl
- amdgpu_device.c: Two callers in vram recover code, both right next
   to mutex_lock.
- amdgpu_vm.c: Use in the vm_wait ioctl, next to _reserve/unreserve
- remaining functions in amdgpu: All for test_ib implementations for
   various engines, caller for that looks all safe (debugfs, driver
   load, reset)
- etnaviv: another wait ioctl
- habanalabs: another wait ioctl
- nouveau_fence.c: hardcoded 15*HZ ... glorious
- nouveau_gem.c: hardcoded 2*HZ ... so not even super consistent, but
   this one does have a WARN_ON :-/ At least this one is only a
   fallback path for when kmalloc fails. Maybe this should be put onto
   some worker list instead of a work per unmap ...
- i915/selftests: Hardcoded HZ / 4 or HZ / 8
- i915/gt/selftests: Going up the callchain looks safe looking at
   nearby callers
- i915/gt/intel_gt_requests.c. Wrapped in a mutex_lock
- i915/gem/i915_gem_wait.c: The i915 version which is called instead
   for i915 fences already has a might_sleep() annotation, so all good

Cc: Alex Deucher 
Cc: Lucas Stach 
Cc: Jani Nikula 
Cc: Joonas Lahtinen 
Cc: Rodrigo Vivi 
Cc: Ben Skeggs 
Cc: "VMware Graphics" 
Cc: Oded Gabbay 
Cc: linux-me...@vger.kernel.org
Cc: linaro-mm-...@lists.linaro.org
Cc: linux-r...@vger.kernel.org
Cc: amd-...@lists.freedesktop.org
Cc: intel-gfx@lists.freedesktop.org
Cc: Chris Wilson 
Cc: Maarten Lankhorst 
Cc: Christian König 
Signed-off-by: Daniel Vetter 


Reviewed-by: Christian König 


---
  drivers/dma-buf/dma-fence.c | 2 ++
  1 file changed, 2 insertions(+)

diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
index 90edf2b281b0..656e9ac2d028 100644
--- a/drivers/dma-buf/dma-fence.c
+++ b/drivers/dma-buf/dma-fence.c
@@ -208,6 +208,8 @@ dma_fence_wait_timeout(struct dma_fence *fence, bool intr, signed long timeout)
 	if (WARN_ON(timeout < 0))
 		return -EINVAL;
 
+	might_sleep();
+
 	trace_dma_fence_wait_start(fence);
 	if (fence->ops->wait)
 		ret = fence->ops->wait(fence, intr, timeout);

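A brief usage sketch of the distinction drawn above (illustrative only;
consume_fence() and peek_fence_in_irq() are made-up callers, not code from any
driver): the blocking dma_fence_wait_timeout() may sleep and will now warn via
might_sleep() when misused, while dma_fence_is_signaled() only peeks and stays
safe in atomic context.

	#include <linux/dma-fence.h>
	#include <linux/printk.h>
	#include <linux/sched.h>

	/* Process context: sleeping is fine, so the blocking wait is allowed. */
	static void consume_fence(struct dma_fence *fence)
	{
		if (dma_fence_wait_timeout(fence, true, MAX_SCHEDULE_TIMEOUT) < 0)
			pr_debug("fence wait interrupted\n");
	}

	/* Atomic context (e.g. an interrupt handler): only peek, never block. */
	static bool peek_fence_in_irq(struct dma_fence *fence)
	{
		return dma_fence_is_signaled(fence);
	}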

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx