On 25/06/15 14:32, Eero Tamminen wrote:
Hi,
On 06/25/2015 03:53 PM, Davin McCall wrote:
On 25/06/15 12:27, Eero Tamminen wrote:
On 06/25/2015 02:48 AM, Davin McCall wrote:
In terms of performance:
(export LIBGL_ALWAYS_SOFTWARE=1; time glmark2)
For the Intel driver, INTEL_NO_HW=1 could be used.
(Do other drivers have something similar?)
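E.g. the CPU-bound equivalent of the run above, assuming INTEL_NO_HW
is read the same way as LIBGL_ALWAYS_SOFTWARE:
(export INTEL_NO_HW=1; time glmark2)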
Unfortunately I do not have an Intel display set up.
If you can get libdrm to use libdrm_intel, you can fake the desired
Intel HW to Mesa with the INTEL_DEVID_OVERRIDE environment variable.
Similarly to INTEL_NO_HW, it prevents batches from being submitted
to the GPU.
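A minimal sketch, assuming your build uses libdrm_intel and honours
the override (the device ID is just an example; 0x1616 is a Broadwell
GT2 part):
(export INTEL_DEVID_OVERRIDE=0x1616; time glmark2)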
Ok, thanks, I'll look into this shortly. Any pointers on how to get
libdrm to use libdrm_intel?
When testing 3D driver CPU-side optimizations, one should either use
a test specifically written for measuring driver CPU overhead (proprietary
benchmarks have such tests) or force the test case to be CPU-bound,
e.g. with INTEL_NO_HW.
Understood. The 'user' time divided by the glmark2 score should still
give a relative indication of the CPU processing required per frame, right?
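As a rough sketch of what I mean, with invented numbers: a build that
needs 45 s of user time for a score of 900 works out to 45/900 = 0.05,
while 30 s at the same score gives 30/900 = 0.033, i.e. about a third
less CPU work per frame:
user=45; score=900
echo "scale=4; $user / $score" | bc   # relative CPU cost per frame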
Davin