On Wed, Sep 12, 2012 at 06:06:47PM +0300, Gleb Natapov wrote:
> On Wed, Sep 12, 2012 at 09:44:10AM -0500, Anthony Liguori wrote:
> > Jan Kiszka <jan.kis...@siemens.com> writes:
> >
> > > On 2012-09-12 15:54, Anthony Liguori wrote:
> > >>
> > >> Hi,
> > >>
> > >> We've been running into a lot of problems lately with Windows guests
> > >> and I think they all ultimately could be addressed by revisiting the
> > >> missed tick catchup algorithms that we use. Mike and I spent a while
> > >> talking about it yesterday and I wanted to take the discussion to the
> > >> list to get some additional input.
> > >>
> > >> Here are the problems we're seeing:
> > >>
> > >> 1) Rapid reinjection can lead to time moving faster for short bursts
> > >>    of time. We've seen a number of RTC watchdog BSoDs, and it's
> > >>    possible that at least one cause is reinjection speed.
> > >>
> > >> 2) When hibernating a host system, the guest is essentially paused
> > >>    for a long period of time. This results in a very large tick
> > >>    catchup while also resulting in a large skew in guest time.
> > >>
> > >>    I've gotten reports of the tick catchup consuming a lot of CPU
> > >>    time from rapid delivery of interrupts (although I haven't
> > >>    reproduced this yet).
> > >>
> > >> 3) Windows appears to have a service that periodically syncs the
> > >>    guest time with the hardware clock. I've been told the resync
> > >>    period is an hour. For large clock skews, this can compete with
> > >>    reinjection, resulting in a positive skew in time (the guest can
> > >>    be ahead of the host).
> > >>
> > >> I've been thinking about an algorithm like this to address these
> > >> problems:
> > >>
> > >> A) Limit the number of interrupts that we reinject to the equivalent
> > >>    of a small period of wallclock time. Something like 60 seconds.
> > >>
> > >> B) In the event of (A), trigger a notification in QEMU. This is easy
> > >>    for the RTC but harder for the in-kernel PIT. Maybe it's a good
> > >>    time to revisit usage of the in-kernel PIT?
> > >>
> > >> C) On accumulated tick overflow, rely on using a qemu-ga command to
> > >>    force a resync of the guest's time to the hardware wallclock time.
> > >>
> > >> D) Whenever the guest reads the wallclock time from the RTC, reset
> > >>    all accumulated ticks.
> > >>
> > >> In order to do (C), we'll need to plumb qemu-ga through QMP. Mike
> > >> and I discussed a low-impact way of doing this (having a separate
> > >> dispatch path for guest agent commands) and I'm confident we could
> > >> do this for 1.3.
> > >>
> > >> This would mean that management tools would need to consume qemu-ga
> > >> through QMP. Not sure if this is a problem for anyone.
> > >>
> > >> I'm not sure whether it's worth trying to support this with the
> > >> in-kernel PIT or not either.
> > >
> > > As with our current discussion around fixing the PIC and its impact
> > > on the PIT, we should try the userspace model first and then check
> > > whether the design can be adapted to support in-kernel as well.
> > >
> > > For which guests is the PIT important again? Old Linux kernels?
> > > Windows should be mostly happy with the RTC - or the HPET.
> >
> > I thought that only 64-bit Win2k8+ used the RTC.
> >
> > I thought win2k3 and even 32-bit win2k8 still used the PIT.
> >
> Only Windows XP with a non-ACPI HAL uses the PIT. Every other Windows
> version uses the RTC. In other words, we do not care about the PIT.
>
Small clarification: they use the RTC only if the HPET is not present. I
don't know in which version Windows started to prefer the HPET over the
RTC.
-- 
Gleb.
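
For concreteness, here is a minimal sketch (in C, QEMU's language) of
the catchup policy Anthony outlines in points (A), (B) and (D) above.
All names here (TickCatchup, tick_coalesced, and so on) are hypothetical
and are not QEMU's actual RTC code; the sketch assumes a periodic timer
whose undeliverable ticks are counted and later reinjected at most one
extra tick per period:

#include <stdbool.h>
#include <stdint.h>

/* (A) Never accumulate more than the equivalent of 60 seconds of
 * wallclock time worth of missed ticks. */
#define CATCHUP_CAP_SECONDS 60

typedef struct TickCatchup {
    uint64_t coalesced;     /* ticks owed to the guest */
    uint32_t ticks_per_sec; /* current periodic rate, e.g. 64 Hz */
    bool     overflowed;    /* cap hit; guest needs an explicit resync */
} TickCatchup;

/* Called when a tick could not be delivered (IRQ still asserted). */
static void tick_coalesced(TickCatchup *t)
{
    uint64_t cap = (uint64_t)t->ticks_per_sec * CATCHUP_CAP_SECONDS;

    if (t->coalesced < cap) {
        t->coalesced++;
    } else if (!t->overflowed) {
        t->overflowed = true;
        /* (B) Emit a notification here so management can trigger the
         * qemu-ga guest-time resync described in (C). */
    }
}

/* Called from the timer callback: reinject at most one extra tick per
 * period, so guest time never advances faster than 2x real time. */
static bool tick_should_reinject(TickCatchup *t)
{
    if (t->coalesced > 0) {
        t->coalesced--;
        return true;
    }
    return false;
}

/* (D) The guest just read the wallclock from the RTC, so it now has the
 * correct time and the remaining backlog is stale; drop it. */
static void tick_wallclock_read(TickCatchup *t)
{
    t->coalesced = 0;
    t->overflowed = false;
}

Capping reinjection at one extra tick per period bounds the short-term
speedup from problem (1), and the 60-second ceiling bounds the
post-hibernate catchup from problem (2); once the cap is hit, the only
recovery is the explicit resync in (C).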