Re: [Intel-gfx] [PATCH v7 5/5] drm/i915: Tidy up execbuffer command parsing code

2014-12-13 Thread shuang . he
Tested-By: PRC QA PRTS (Patch Regression Test System Contact: shuang...@intel.com)
-Summary-
Platform  Delta  drm-intel-nightly  Series Applied
PNV  364/364  364/364
ILK  +1-3  364/366  362/366
SNB  448/450  448/450
IVB  497/498  497/498
BYT  289/289  289/289
HSW  563/564  563/564
BDW  417/417  417/417
-Detailed-
Platform  Test  drm-intel-nightly  Series Applied
 ILK  igt_kms_flip_flip-vs-panning  DMESG_WARN(1, M26) PASS(8, M26)  DMESG_WARN(1, M26)
 ILK  igt_kms_flip_plain-flip-fb-recreate-interruptible  DMESG_WARN(1, M26) PASS(5, M26)  DMESG_WARN(1, M26)
*ILK  igt_kms_flip_rcs-flip-vs-panning  PASS(5, M26)  DMESG_WARN(1, M26)
 ILK  igt_kms_flip_wf_vblank-ts-check  DMESG_WARN(8, M26) PASS(26, M26M37)  PASS(1, M26)
Note: Pay extra attention to lines starting with '*'.
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH] drm/i915: Refactor work that can sleep out of commit (v4)

2014-12-13 Thread shuang . he
Tested-By: PRC QA PRTS (Patch Regression Test System Contact: shuang...@intel.com)
-Summary-
Platform  Delta  drm-intel-nightly  Series Applied
PNV  364/364  364/364
ILK  +2 362/366  364/366
SNB  448/450  448/450
IVB  497/498  497/498
BYT  289/289  289/289
HSW  563/564  563/564
BDW  417/417  417/417
-Detailed-
Platform  Test  drm-intel-nightly  Series Applied
 ILK  igt_kms_flip_nonexisting-fb  DMESG_WARN(1, M26) PASS(1, M37)  PASS(1, M37)
 ILK  igt_kms_flip_rcs-flip-vs-panning-interruptible  DMESG_WARN(1, M26) PASS(1, M37)  PASS(1, M37)
Note: Pay extra attention to lines starting with '*'.


Re: [Intel-gfx] [Regression] 83f45fc turns machine's screen off

2014-12-13 Thread Emmanuel Benisty
Hi Daniel,

> On Mon, Nov 10, 2014 at 10:19 PM, Daniel Vetter wrote:
>> Adding relevant mailing lists.
>>
>>
>> On Sat, Nov 8, 2014 at 7:34 PM, Emmanuel Benisty  wrote:
>>> Hi,
>>>
>>> The following commit permanently turns my screen off when x server is
>>> started (i3 330M Ironlake):
>>>
>>> [83f45fc360c8e16a330474860ebda872d1384c8c] drm: Don't grab an fb
>>> reference for the idr
>>>
>>> Reverting this commit fixed the issue.
>>
>> This is definitely unexpected. I think we need a bit more data to
>> figure out what's going on here:
>> - Please boot with drm.debug=0xe added to your kernel cmdline and grab
>> the dmesg right after boot-up for both a working and a broken kernel.
>
> Please see attached files.
>
>> - Are you using any special i915 cmdline options?
>
> Nope.

Is there anything else I could provide to help fix this issue? It's
still present in Linus' tree.

Thanks in advance.


[Intel-gfx] [PATCH 3/4] drm/cache: Return what type of cache flush occurred

2014-12-13 Thread Ben Widawsky
The caller of the cache flush APIs can sometimes do useful things with the
information about how the cache was flushed. For instance, when handing buffers
to the GPU to read, we need to make sure all of them have properly invalidated
the caches (when not using LLC). If we can determine that a wbinvd() occurred,
though, we can skip clflushing the remaining objects.

There is a further optimization to be made here in the driver-specific code,
where it could try to flush the largest object first in the hope that it
triggers a wbinvd(). I haven't implemented that yet.

The enum values were given only minimal consideration; they exist mainly to
collect the data needed for profiling.

Cc: Intel GFX 
Signed-off-by: Ben Widawsky 
---
 drivers/gpu/drm/drm_cache.c | 34 +-
 include/drm/drmP.h  | 13 ++---
 2 files changed, 35 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/drm_cache.c b/drivers/gpu/drm/drm_cache.c
index 6009c2d..433b15d 100644
--- a/drivers/gpu/drm/drm_cache.c
+++ b/drivers/gpu/drm/drm_cache.c
@@ -80,21 +80,25 @@ drm_cache_should_clflush(unsigned long num_pages)
 }
 #endif
 
-void
+int
 drm_clflush_pages(struct page *pages[], unsigned long num_pages)
 {
 
 #if defined(CONFIG_X86)
if (cpu_has_clflush && drm_cache_should_clflush(num_pages)) {
drm_cache_flush_clflush(pages, num_pages);
-   return;
+   return DRM_CACHE_FLUSH_CL;
}
 
-   if (wbinvd_on_all_cpus())
+   if (wbinvd_on_all_cpus()) {
printk(KERN_ERR "Timed out waiting for cache flush.\n");
+   return DRM_CACHE_FLUSH_ERROR;
+   } else
+   return DRM_CACHE_FLUSH_WBINVD;
 
 #elif defined(__powerpc__)
unsigned long i;
+   int ret = DRM_CACHE_FLUSH_NONE;
for (i = 0; i < num_pages; i++) {
struct page *page = pages[i];
void *page_virtual;
@@ -106,15 +110,19 @@ drm_clflush_pages(struct page *pages[], unsigned long num_pages)
flush_dcache_range((unsigned long)page_virtual,
   (unsigned long)page_virtual + PAGE_SIZE);
kunmap_atomic(page_virtual);
+   ret = DRM_CACHE_FLUSH_DCACHE;
}
+   WARN_ON(ret == DRM_CACHE_FLUSH_NONE);
+   return ret;
 #else
printk(KERN_ERR "Architecture has no drm_cache.c support\n");
WARN_ON_ONCE(1);
+   return DRM_CACHE_FLUSH_NONE;
 #endif
 }
 EXPORT_SYMBOL(drm_clflush_pages);
 
-void
+int
 drm_clflush_sg(struct sg_table *st)
 {
 #if defined(CONFIG_X86)
@@ -126,19 +134,23 @@ drm_clflush_sg(struct sg_table *st)
drm_clflush_page(sg_page_iter_page(&sg_iter));
mb();
 
-   return;
+   return DRM_CACHE_FLUSH_CL;
}
 
-   if (wbinvd_on_all_cpus())
+   if (wbinvd_on_all_cpus()) {
printk(KERN_ERR "Timed out waiting for cache flush.\n");
+   return DRM_CACHE_FLUSH_ERROR;
+   } else
+   return DRM_CACHE_FLUSH_WBINVD;
 #else
printk(KERN_ERR "Architecture has no drm_cache.c support\n");
WARN_ON_ONCE(1);
+   return DRM_CACHE_FLUSH_NONE;
 #endif
 }
 EXPORT_SYMBOL(drm_clflush_sg);
 
-void
+int
 drm_clflush_virt_range(void *addr, unsigned long length)
 {
 #if defined(CONFIG_X86)
@@ -149,14 +161,18 @@ drm_clflush_virt_range(void *addr, unsigned long length)
clflushopt(addr);
clflushopt(end - 1);
mb();
-   return;
+   return DRM_CACHE_FLUSH_CL;
}
 
-   if (wbinvd_on_all_cpus())
+   if (wbinvd_on_all_cpus()) {
printk(KERN_ERR "Timed out waiting for cache flush.\n");
+   return DRM_CACHE_FLUSH_ERROR;
+   } else
+   return DRM_CACHE_FLUSH_WBINVD;
 #else
printk(KERN_ERR "Architecture has no drm_cache.c support\n");
WARN_ON_ONCE(1);
+   return DRM_CACHE_FLUSH_NONE;
 #endif
 }
 EXPORT_SYMBOL(drm_clflush_virt_range);
diff --git a/include/drm/drmP.h b/include/drm/drmP.h
index 8ba35c6..09ebe46 100644
--- a/include/drm/drmP.h
+++ b/include/drm/drmP.h
@@ -884,9 +884,16 @@ int drm_noop(struct drm_device *dev, void *data,
 struct drm_file *file_priv);
 
 /* Cache management (drm_cache.c) */
-void drm_clflush_pages(struct page *pages[], unsigned long num_pages);
-void drm_clflush_sg(struct sg_table *st);
-void drm_clflush_virt_range(void *addr, unsigned long length);
+enum drm_cache_flush {
+   DRM_CACHE_FLUSH_NONE=0,
+   DRM_CACHE_FLUSH_ERROR,
+   DRM_CACHE_FLUSH_CL,
+   DRM_CACHE_FLUSH_WBINVD,
+   DRM_CACHE_FLUSH_DCACHE,
+};
+int drm_clflush_pages(struct page *pages[], unsigned long num_pages);
+int drm_clflush_sg(struct sg_table *st);
+int drm_clflush_virt_range(void *addr, unsigned long length);
 
 /*
  * These are exported to drivers so that they can implement fencing using
-- 
2.1.3


[Intel-gfx] [PATCH 4/4] drm/i915: Opportunistically reduce flushing at execbuf

2014-12-13 Thread Ben Widawsky
If we're moving a bunch of buffers from the CPU domain to the GPU domain, and
we've already blown out the entire cache via a wbinvd, there is nothing more to
do.

With this and the previous patches, I am seeing a 3x FPS increase on a certain
benchmark which uses a giant 2D array texture. Unless I missed something in the
code, it should only affect non-LLC i915 platforms.

I haven't yet run any numbers for other benchmarks, nor have I attempted to
check if various conformance tests still pass.

NOTE: As mentioned in the previous patch, if one could easily obtain the largest
buffer and flush it first, the results would be even better.

Cc: DRI Development 
Signed-off-by: Ben Widawsky 
---
 drivers/gpu/drm/i915/i915_drv.h|  3 ++-
 drivers/gpu/drm/i915/i915_gem.c| 12 +---
 drivers/gpu/drm/i915/i915_gem_execbuffer.c |  8 +---
 drivers/gpu/drm/i915/intel_lrc.c   |  8 +---
 4 files changed, 17 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index d68c75f..fdb92a3 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2642,7 +2642,8 @@ static inline bool i915_stop_ring_allow_warn(struct drm_i915_private *dev_priv)
 }
 
 void i915_gem_reset(struct drm_device *dev);
-bool i915_gem_clflush_object(struct drm_i915_gem_object *obj, bool force);
+enum drm_cache_flush
+i915_gem_clflush_object(struct drm_i915_gem_object *obj, bool force);
 int __must_check i915_gem_object_finish_gpu(struct drm_i915_gem_object *obj);
 int __must_check i915_gem_init(struct drm_device *dev);
 int i915_gem_init_rings(struct drm_device *dev);
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index de241eb..3746738 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -3608,7 +3608,7 @@ err_unpin:
return vma;
 }
 
-bool
+enum drm_cache_flush
 i915_gem_clflush_object(struct drm_i915_gem_object *obj,
bool force)
 {
@@ -3617,14 +3617,14 @@ i915_gem_clflush_object(struct drm_i915_gem_object *obj,
 * again at bind time.
 */
if (obj->pages == NULL)
-   return false;
+   return DRM_CACHE_FLUSH_NONE;
 
/*
 * Stolen memory is always coherent with the GPU as it is explicitly
 * marked as wc by the system, or the system is cache-coherent.
 */
if (obj->stolen || obj->phys_handle)
-   return false;
+   return DRM_CACHE_FLUSH_NONE;
 
/* If the GPU is snooping the contents of the CPU cache,
 * we do not need to manually clear the CPU cache lines.  However,
@@ -3635,12 +3635,10 @@ i915_gem_clflush_object(struct drm_i915_gem_object *obj,
 * tracking.
 */
if (!force && cpu_cache_is_coherent(obj->base.dev, obj->cache_level))
-   return false;
+   return DRM_CACHE_FLUSH_NONE;
 
trace_i915_gem_object_clflush(obj);
-   drm_clflush_sg(obj->pages);
-
-   return true;
+   return drm_clflush_sg(obj->pages);
 }
 
 /** Flushes the GTT write domain for the object if it's dirty. */
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 0c25f62..e8eb9e9 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -827,7 +827,7 @@ i915_gem_execbuffer_move_to_gpu(struct intel_engine_cs *ring,
 {
struct i915_vma *vma;
uint32_t flush_domains = 0;
-   bool flush_chipset = false;
+   enum drm_cache_flush flush_chipset = DRM_CACHE_FLUSH_NONE;
int ret;
 
list_for_each_entry(vma, vmas, exec_list) {
@@ -836,8 +836,10 @@ i915_gem_execbuffer_move_to_gpu(struct intel_engine_cs *ring,
if (ret)
return ret;
 
-   if (obj->base.write_domain & I915_GEM_DOMAIN_CPU)
-   flush_chipset |= i915_gem_clflush_object(obj, false);
+   if (obj->base.write_domain & I915_GEM_DOMAIN_CPU &&
+   flush_chipset != DRM_CACHE_FLUSH_WBINVD) {
+   flush_chipset = i915_gem_clflush_object(obj, false);
+   }
 
flush_domains |= obj->base.write_domain;
}
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 89b5577..a6c6ebd 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -611,7 +611,7 @@ static int execlists_move_to_gpu(struct intel_ringbuffer *ringbuf,
struct intel_engine_cs *ring = ringbuf->ring;
struct i915_vma *vma;
uint32_t flush_domains = 0;
-   bool flush_chipset = false;
+   enum drm_cache_flush flush_chipset = DRM_CACHE_FLUSH_NONE;
int ret;
 
list_for_each_entry(vma, vmas, exec_list) {
@@ -621,8 +621,10 @@ static int execlists_move_to_gpu(struct intel_ringbu

[Intel-gfx] [PATCH 1/4] drm/cache: Use wbinvd helpers

2014-12-13 Thread Ben Widawsky
When the original drm code was written, there were no centralized functions for
doing a coordinated wbinvd across all CPUs. Now (since 2010) there are, so use
them instead of rolling our own.

Cc: Intel GFX 
Signed-off-by: Ben Widawsky 
---
 drivers/gpu/drm/drm_cache.c | 12 +++-
 1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/drm_cache.c b/drivers/gpu/drm/drm_cache.c
index a6b6906..d7797e8 100644
--- a/drivers/gpu/drm/drm_cache.c
+++ b/drivers/gpu/drm/drm_cache.c
@@ -64,12 +64,6 @@ static void drm_cache_flush_clflush(struct page *pages[],
drm_clflush_page(*pages++);
mb();
 }
-
-static void
-drm_clflush_ipi_handler(void *null)
-{
-   wbinvd();
-}
 #endif
 
 void
@@ -82,7 +76,7 @@ drm_clflush_pages(struct page *pages[], unsigned long num_pages)
return;
}
 
-   if (on_each_cpu(drm_clflush_ipi_handler, NULL, 1) != 0)
+   if (wbinvd_on_all_cpus())
printk(KERN_ERR "Timed out waiting for cache flush.\n");
 
 #elif defined(__powerpc__)
@@ -121,7 +115,7 @@ drm_clflush_sg(struct sg_table *st)
return;
}
 
-   if (on_each_cpu(drm_clflush_ipi_handler, NULL, 1) != 0)
+   if (wbinvd_on_all_cpus())
printk(KERN_ERR "Timed out waiting for cache flush.\n");
 #else
printk(KERN_ERR "Architecture has no drm_cache.c support\n");
@@ -144,7 +138,7 @@ drm_clflush_virt_range(void *addr, unsigned long length)
return;
}
 
-   if (on_each_cpu(drm_clflush_ipi_handler, NULL, 1) != 0)
+   if (wbinvd_on_all_cpus())
printk(KERN_ERR "Timed out waiting for cache flush.\n");
 #else
printk(KERN_ERR "Architecture has no drm_cache.c support\n");
-- 
2.1.3



[Intel-gfx] [PATCH 2/4] drm/cache: Try to be smarter about clflushing on x86

2014-12-13 Thread Ben Widawsky
Any GEM driver which has very large objects and a slow CPU is subject to very
long waits simply for clflushing incoherent objects. Generally, each individual
object is not a problem, but if you have very large objects, or very many
objects, the flushing begins to show up in profiles. Because on x86 we know the
cache size, we can easily determine when an object will use all the cache, and
forgo iterating over each cacheline.

We need to be careful when using wbinvd. wbinvd() is itself potentially slow
because it requires synchronizing the flush across all CPUs so they have a
coherent view of memory. This can result either in stalling work being done on
other CPUs, or in this call itself stalling while waiting for a CPU to accept
the interrupt. wbinvd() also has the downside of invalidating all cachelines,
so we don't want to use it unless we're sure we already own most of the
cachelines.

The current algorithm is very naive. I think it can be tweaked further, and it
would be good if someone else gave it some thought. I am pretty confident that
in i915 we can even skip the IPI in the execbuf path with a minimal code change
(or perhaps just some verification of the existing code). It would be nice to
hear what other developers who depend on this code think.

Cc: Intel GFX 
Signed-off-by: Ben Widawsky 
---
 drivers/gpu/drm/drm_cache.c | 20 +---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/drm_cache.c b/drivers/gpu/drm/drm_cache.c
index d7797e8..6009c2d 100644
--- a/drivers/gpu/drm/drm_cache.c
+++ b/drivers/gpu/drm/drm_cache.c
@@ -64,6 +64,20 @@ static void drm_cache_flush_clflush(struct page *pages[],
drm_clflush_page(*pages++);
mb();
 }
+
+static bool
+drm_cache_should_clflush(unsigned long num_pages)
+{
+   const int cache_size = boot_cpu_data.x86_cache_size;
+
+   /* For now the algorithm simply checks if the number of pages to be
+* flushed is greater than the entire system cache. One could make the
+* function more aware of the actual system (ie. if SMP, how large is
+* the cache, CPU freq. etc. All those help to determine when to
+* wbinvd() */
+   WARN_ON_ONCE(!cache_size);
+   return !cache_size || num_pages < (cache_size >> 2);
+}
 #endif
 
 void
@@ -71,7 +85,7 @@ drm_clflush_pages(struct page *pages[], unsigned long num_pages)
 {
 
 #if defined(CONFIG_X86)
-   if (cpu_has_clflush) {
+   if (cpu_has_clflush && drm_cache_should_clflush(num_pages)) {
drm_cache_flush_clflush(pages, num_pages);
return;
}
@@ -104,7 +118,7 @@ void
 drm_clflush_sg(struct sg_table *st)
 {
 #if defined(CONFIG_X86)
-   if (cpu_has_clflush) {
+   if (cpu_has_clflush && drm_cache_should_clflush(st->nents)) {
struct sg_page_iter sg_iter;
 
mb();
@@ -128,7 +142,7 @@ void
 drm_clflush_virt_range(void *addr, unsigned long length)
 {
 #if defined(CONFIG_X86)
-   if (cpu_has_clflush) {
+   if (cpu_has_clflush && drm_cache_should_clflush(length / PAGE_SIZE)) {
void *end = addr + length;
mb();
for (; addr < end; addr += boot_cpu_data.x86_clflush_size)
-- 
2.1.3



Re: [Intel-gfx] [PATCH 2/4] drm/cache: Try to be smarter about clflushing on x86

2014-12-13 Thread Matt Turner
On Sat, Dec 13, 2014 at 7:08 PM, Ben Widawsky
 wrote:
> Any GEM driver which has very large objects and a slow CPU is subject to very
> long waits simply for clflushing incoherent objects. Generally, each 
> individual
> object is not a problem, but if you have very large objects, or very many
> objects, the flushing begins to show up in profiles. Because on x86 we know 
> the
> cache size, we can easily determine when an object will use all the cache, and
> forego iterating over each cacheline.
>
> We need to be careful when using wbinvd. wbinvd() is itself potentially slow
> because it requires synchronizing the flush across all CPUs so they have a
> coherent view of memory. This can result in either stalling work being done on
> other CPUs, or this call itself stalling while waiting for a CPU to accept the
> interrupt. Also, wbinvd() also has the downside of invalidating all 
> cachelines,
> so we don't want to use it unless we're sure we already own most of the
> cachelines.
>
> The current algorithm is very naive. I think it can be tweaked more, and it
> would be good if someone else gave it some thought. I am pretty confident in
> i915, we can even skip the IPI in the execbuf path with minimal code change 
> (or
> perhaps just some verifying of the existing code). It would be nice to hear 
> what
> other developers who depend on this code think.
>
> Cc: Intel GFX 
> Signed-off-by: Ben Widawsky 
> ---
>  drivers/gpu/drm/drm_cache.c | 20 +---
>  1 file changed, 17 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_cache.c b/drivers/gpu/drm/drm_cache.c
> index d7797e8..6009c2d 100644
> --- a/drivers/gpu/drm/drm_cache.c
> +++ b/drivers/gpu/drm/drm_cache.c
> @@ -64,6 +64,20 @@ static void drm_cache_flush_clflush(struct page *pages[],
> drm_clflush_page(*pages++);
> mb();
>  }
> +
> +static bool
> +drm_cache_should_clflush(unsigned long num_pages)
> +{
> +   const int cache_size = boot_cpu_data.x86_cache_size;
> +
> +   /* For now the algorithm simply checks if the number of pages to be
> +* flushed is greater than the entire system cache. One could make the
> +* function more aware of the actual system (ie. if SMP, how large is
> +* the cache, CPU freq. etc. All those help to determine when to
> +* wbinvd() */
> +   WARN_ON_ONCE(!cache_size);
> +   return !cache_size || num_pages < (cache_size >> 2);
> +}
>  #endif
>
>  void
> @@ -71,7 +85,7 @@ drm_clflush_pages(struct page *pages[], unsigned long 
> num_pages)
>  {
>
>  #if defined(CONFIG_X86)
> -   if (cpu_has_clflush) {
> +   if (cpu_has_clflush && drm_cache_should_clflush(num_pages)) {
> drm_cache_flush_clflush(pages, num_pages);
> return;
> }
> @@ -104,7 +118,7 @@ void
>  drm_clflush_sg(struct sg_table *st)
>  {
>  #if defined(CONFIG_X86)
> -   if (cpu_has_clflush) {
> +   if (cpu_has_clflush && drm_cache_should_clflush(st->nents)) {
> struct sg_page_iter sg_iter;
>
> mb();
> @@ -128,7 +142,7 @@ void
>  drm_clflush_virt_range(void *addr, unsigned long length)
>  {
>  #if defined(CONFIG_X86)
> -   if (cpu_has_clflush) {
> +   if (cpu_has_clflush && drm_cache_should_clflush(length / PAGE_SIZE)) {

If length isn't a multiple of page size, isn't this ignoring the
remainder? Should it be rounding length up to the next multiple of
PAGE_SIZE, like ROUND_UP_TO?


Re: [Intel-gfx] [PATCH] drm/i915: fix use after free during eDP encoder destroying

2014-12-13 Thread shuang . he
Tested-By: PRC QA PRTS (Patch Regression Test System Contact: shuang...@intel.com)
-Summary-
Platform  Delta  drm-intel-nightly  Series Applied
PNV  364/364  364/364
ILK  +3-3  362/366  362/366
SNB  448/450  448/450
IVB -1  497/498  496/498
BYT  289/289  289/289
HSW  563/564  563/564
BDW  417/417  417/417
-Detailed-
Platform  Test  drm-intel-nightly  Series Applied
 ILK  igt_kms_flip_nonexisting-fb  DMESG_WARN(1, M26) PASS(3, M37M26)  PASS(1, M26)
 ILK  igt_kms_flip_rcs-flip-vs-panning-interruptible  DMESG_WARN(1, M26) PASS(3, M37M26)  PASS(1, M26)
 ILK  igt_kms_flip_rcs-wf_vblank-vs-dpms-interruptible  DMESG_WARN(1, M26) PASS(2, M26M37)  PASS(1, M26)
 ILK  igt_kms_flip_busy-flip-interruptible  DMESG_WARN(1, M26) PASS(1, M26)  DMESG_WARN(1, M26)
*ILK  igt_kms_flip_rcs-flip-vs-panning  PASS(2, M26)  DMESG_WARN(1, M26)
*ILK  igt_kms_flip_vblank-vs-hang  PASS(2, M26)  NSPT(1, M26)
*IVB  igt_kms_cursor_crc_cursor-256x256-offscreen  PASS(2, M4M21)  DMESG_WARN(1, M21)
Note: Pay extra attention to lines starting with '*'.


Re: [Intel-gfx] [PATCH 3/3] drm/i915: Track & check calls to intel(_logical)_ring_{begin, advance}

2014-12-13 Thread shuang . he
Tested-By: PRC QA PRTS (Patch Regression Test System Contact: shuang...@intel.com)
-Summary-
Platform  Delta  drm-intel-nightly  Series Applied
PNV  364/364  364/364
ILK  +3 362/366  365/366
SNB  448/450  448/450
IVB  497/498  497/498
BYT  289/289  289/289
HSW  563/564  563/564
BDW  417/417  417/417
-Detailed-
Platform  Test  drm-intel-nightly  Series Applied
 ILK  igt_kms_flip_nonexisting-fb  DMESG_WARN(1, M26) PASS(3, M37M26)  PASS(1, M37)
 ILK  igt_kms_flip_rcs-flip-vs-panning-interruptible  DMESG_WARN(1, M26) PASS(3, M37M26)  PASS(1, M37)
 ILK  igt_kms_flip_rcs-wf_vblank-vs-dpms-interruptible  DMESG_WARN(1, M26) PASS(2, M26M37)  PASS(1, M37)
Note: Pay extra attention to lines starting with '*'.


Re: [Intel-gfx] [PATCH 2/2] drm/i915/skl: Skylake also supports DP MST

2014-12-13 Thread shuang . he
Tested-By: PRC QA PRTS (Patch Regression Test System Contact: shuang...@intel.com)
-Summary-
Platform  Delta  drm-intel-nightly  Series Applied
PNV  364/364  364/364
ILK  +3-2  362/366  363/366
SNB  448/450  448/450
IVB  497/498  497/498
BYT -2  289/289  287/289
HSW -1  563/564  562/564
BDW  417/417  417/417
-Detailed-
Platform  Test  drm-intel-nightly  Series Applied
 ILK  igt_kms_flip_nonexisting-fb  DMESG_WARN(1, M26) PASS(4, M37M26)  PASS(1, M26)
 ILK  igt_kms_flip_rcs-flip-vs-panning-interruptible  DMESG_WARN(2, M26) PASS(3, M37M26)  PASS(1, M26)
 ILK  igt_kms_flip_rcs-wf_vblank-vs-dpms-interruptible  DMESG_WARN(1, M26) PASS(3, M26M37)  PASS(1, M26)
*ILK  igt_kms_flip_plain-flip-ts-check-interruptible  PASS(2, M26)  DMESG_WARN(1, M26)
*ILK  igt_kms_flip_wf_vblank-ts-check  PASS(2, M26)  DMESG_WARN(1, M26)
*BYT  igt_drm_import_export_flink  PASS(2, M48M50)  DMESG_WARN(1, M50)
*BYT  igt_drm_vma_limiter_gtt  PASS(2, M48M50)  TIMEOUT(1, M50)
*HSW  igt_gem_concurrent_blit_gpu-rcs-overwrite-source-forked  PASS(2, M40M19)  DMESG_WARN(1, M19)
Note: Pay extra attention to lines starting with '*'.