Hi

On 05.06.25 at 19:38, Michael Kelley wrote:
[...]
I'm trying to address the problem with the patches at

https://lore.kernel.org/dri-devel/20250605152637.98493-1-tzimmerm...@suse.de/

Testing and feedback is much appreciated.

Nice!

I ran the same test case with your patches, and everything works well. The
hyperv_drm numbers are now pretty much the same as the hyperv_fb
numbers for both elapsed time and system CPU time -- within a few percent.
For hyperv_drm, there's no longer a gap in the elapsed time and system
CPU time. No errors due to the guest-to-host ring buffer being full. Total
messages to Hyper-V for hyperv_drm are now a few hundred instead of 3M.

Sounds great. Credit also goes to the vkms devs, who already have software vblank support in their driver.

The approach might still need better handling of cases where display updates take exceptionally long, but I can see this being merged as a DRM feature.
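
Roughly, the vkms scheme arms an hrtimer at the frame period and signals the vblank from the timer callback, just as a hardware interrupt handler would. A minimal sketch of that idea follows; the hv_* names are placeholders for illustration, not the actual hyperv_drm code:

/*
 * Sketch of a vkms-style software vblank, assuming a driver-private
 * CRTC struct with an embedded hrtimer. Illustrative only.
 */
#include <linux/kernel.h>
#include <linux/hrtimer.h>
#include <drm/drm_crtc.h>
#include <drm/drm_modes.h>
#include <drm/drm_vblank.h>

struct hv_crtc {
	struct drm_crtc base;
	struct hrtimer vblank_timer;
	ktime_t period_ns;	/* frame duration of the programmed mode */
};

static enum hrtimer_restart hv_vblank_simulate(struct hrtimer *timer)
{
	struct hv_crtc *hc = container_of(timer, struct hv_crtc, vblank_timer);

	/* Signal the vblank event exactly as a hardware IRQ handler would. */
	drm_crtc_handle_vblank(&hc->base);

	/* Re-arm the timer for the next frame period. */
	hrtimer_forward_now(timer, hc->period_ns);
	return HRTIMER_RESTART;
}

static int hv_enable_vblank(struct drm_crtc *crtc)
{
	struct hv_crtc *hc = container_of(crtc, struct hv_crtc, base);
	unsigned int refresh = drm_mode_vrefresh(&crtc->mode);

	if (!refresh)
		refresh = 60;	/* fall back to a sane default */

	hc->period_ns = ktime_set(0, DIV_ROUND_UP(NSEC_PER_SEC, refresh));
	hrtimer_init(&hc->vblank_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
	hc->vblank_timer.function = hv_vblank_simulate;
	hrtimer_start(&hc->vblank_timer, hc->period_ns, HRTIMER_MODE_REL);
	return 0;
}

static void hv_disable_vblank(struct drm_crtc *crtc)
{
	struct hv_crtc *hc = container_of(crtc, struct hv_crtc, base);

	hrtimer_cancel(&hc->vblank_timer);
}

A real driver would more likely derive the frame duration from the timestamping constants of the current atomic state rather than the legacy crtc->mode, but the principle is the same.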

The hyperv_drm message count is still a little higher than for hyperv_fb,
presumably because the simulated vblank rate in hyperv_drm is higher than
the 20 Hz rate used by hyperv_fb's deferred I/O. But the overall numbers are
small enough that the difference is in the noise. Question: what is the default
value for the simulated vblank rate? Just curious ...

As with a hardware vblank interrupt, the rate comes from the programmed display mode, so most likely 60 Hz. That difference in update frequency could explain the remaining gap compared to hyperv_fb.
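
For reference, the rate falls out of the mode timings. A simplified sketch of what the DRM helper drm_mode_vrefresh() computes (interlace, doublescan and vscan adjustments omitted):

#include <linux/math.h>
#include <drm/drm_modes.h>

/* Pixel clock (stored in kHz) divided by the total frame size gives Hz. */
static unsigned int refresh_hz(const struct drm_display_mode *mode)
{
	return DIV_ROUND_CLOSEST(mode->clock * 1000,
				 mode->htotal * mode->vtotal);
}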

Best regards
Thomas



--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstrasse 146, 90461 Nuernberg, Germany
GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
HRB 36809 (AG Nuernberg)
