If the clock becomes unstable while we're reading it, we need to
bail.  We can do this by simply moving the check into the seqcount
loop.

Reported-by: Marcelo Tosatti <mtosa...@redhat.com>
Signed-off-by: Andy Lutomirski <l...@kernel.org>
---

Marcelo, how's this?

arch/x86/entry/vdso/vclock_gettime.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/entry/vdso/vclock_gettime.c b/arch/x86/entry/vdso/vclock_gettime.c
index 8602f06c759f..1a50e09c945b 100644
--- a/arch/x86/entry/vdso/vclock_gettime.c
+++ b/arch/x86/entry/vdso/vclock_gettime.c
@@ -126,23 +126,23 @@ static notrace cycle_t vread_pvclock(int *mode)
         *
         * On Xen, we don't appear to have that guarantee, but Xen still
         * supplies a valid seqlock using the version field.
-
+        *
         * We only do pvclock vdso timing at all if
         * PVCLOCK_TSC_STABLE_BIT is set, and we interpret that bit to
         * mean that all vCPUs have matching pvti and that the TSC is
         * synced, so we can just look at vCPU 0's pvti.
         */
 
-       if (unlikely(!(pvti->flags & PVCLOCK_TSC_STABLE_BIT))) {
-               *mode = VCLOCK_NONE;
-               return 0;
-       }
-
        do {
                version = pvti->version;
 
                smp_rmb();
 
+               if (unlikely(!(pvti->flags & PVCLOCK_TSC_STABLE_BIT))) {
+                       *mode = VCLOCK_NONE;
+                       return 0;
+               }
+
                tsc = rdtsc_ordered();
                pvti_tsc_to_system_mul = pvti->tsc_to_system_mul;
                pvti_tsc_shift = pvti->tsc_shift;
-- 
2.4.3
