Hi,

I seem to have hit the same bug again with a 2.6.32-38 domU (amd64). After I rebooted dom0 to 2.6.32-41, I see that the jiffies value of the domU does not increase, but cpu_time in the "xm list -l" output does. The value of "last_value" also seems to increase (now around 1804512135271).
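(If someone wants to double-check the same values: cpu_time shows up in "xm list -l <domU>" in dom0, and jiffies / last_value can be inspected with "p jiffies" and "p/x last_value" in a crash session like the one below; the exact invocations are from memory, so treat them as a sketch.)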
I looked at linux-source-2.6.32 2.6.32-38 and I see that it contains a call to pvclock_resume in xen_timer_resume, so the patch should still be there. Is this a new bug that just has the same symptoms?

      KERNEL: /usr/lib/debug/boot/vmlinux-2.6.32-5-amd64
    DUMPFILE: /local/xen/lindi1/core
        CPUS: 6
        DATE: Mon Mar 5 10:02:26 2012
      UPTIME: 54 days, 21:35:49
LOAD AVERAGE: 0.04, 0.01, 0.00
       TASKS: 307
    NODENAME: lindi1
     RELEASE: 2.6.32-5-amd64
     VERSION: #1 SMP Mon Oct 3 03:59:20 UTC 2011
     MACHINE: x86_64  (3210 Mhz)
      MEMORY: 2 GB
       PANIC: ""
         PID: 0
     COMMAND: "swapper"
        TASK: ffffffff814611f0  (1 of 6)  [THREAD_INFO: ffffffff8142c000]
         CPU: 0
       STATE: TASK_RUNNING (ACTIVE)
     WARNING: panic task not found

crash> disassemble xen_timer_resume
Dump of assembler code for function xen_timer_resume:
0xffffffff8100de32 <xen_timer_resume+0>:    push   %rbx
0xffffffff8100de33 <xen_timer_resume+1>:    callq  0xffffffff8102cd7e <pvclock_resume>
                                                                      ^^^^^^^^^^^^^^
...
crash> disassemble pvclock_resume
Dump of assembler code for function pvclock_resume:
0xffffffff8102cd7e <pvclock_resume+0>:      movq   $0x0,0x5a42cf(%rip)        # 0xffffffff815d1058
0xffffffff8102cd89 <pvclock_resume+11>:     retq

crash> x/x 0xffffffff815d1058
0xffffffff815d1058:     0x000001a4254e0867

crash> p/x last_value
$17 = {
  counter = 0x1a4254e0867
}

crash> ps | grep -v IN
   PID    PPID  CPU       TASK        ST  %MEM     VSZ    RSS  COMM
>     0      0   0  ffffffff814611f0  RU   0.0       0      0  [swapper]
>     0      0   1  ffff88007ff50e20  RU   0.0       0      0  [swapper]
>     0      0   2  ffff88007ff51530  RU   0.0       0      0  [swapper]
>     0      0   3  ffff88007ff51c40  RU   0.0       0      0  [swapper]
>     0      0   4  ffff88007ff52350  RU   0.0       0      0  [swapper]
>     0      0   5  ffff88007ff52a60  RU   0.0       0      0  [swapper]
     21      2   0  ffff88007ffab170  UN   0.0       0      0  [events/0]
     26      2   5  ffff88007ffad4c0  UN   0.0       0      0  [events/5]
     32      2   5  ffff88007f418000  UN   0.0       0      0  [xenwatch]

crash> bt 0 21 26 32
PID: 0      TASK: ffffffff814611f0  CPU: 0   COMMAND: "swapper"
 #0 [ffffffff8142df70] xen_safe_halt at ffffffff8100dcbf
 #1 [ffffffff8142df78] xen_idle at ffffffff8100be63
 #2 [ffffffff8142df90] cpu_idle at ffffffff8100fe97

PID: 0      TASK: ffff88007ff50e20  CPU: 1   COMMAND: "swapper"
 #0 [ffff88007ff5de50] schedule at ffffffff812fb2a7
 #1 [ffff88007ff5de68] xen_force_evtchn_callback at ffffffff8100dc41
 #2 [ffff88007ff5de70] check_events at ffffffff8100e252
 #3 [ffff88007ff5dec8] tick_nohz_stop_sched_tick at ffffffff81070d4e
 #4 [ffff88007ff5df28] cpu_idle at ffffffff8100fe97

PID: 0      TASK: ffff88007ff51530  CPU: 2   COMMAND: "swapper"
 #0 [ffff88007ff5fe50] schedule at ffffffff812fb2a7
 #1 [ffff88007ff5fe68] xen_force_evtchn_callback at ffffffff8100dc41
 #2 [ffff88007ff5fe70] check_events at ffffffff8100e252
 #3 [ffff88007ff5fec8] tick_nohz_stop_sched_tick at ffffffff81070d4e
 #4 [ffff88007ff5ff28] cpu_idle at ffffffff8100fe97

PID: 0      TASK: ffff88007ff51c40  CPU: 3   COMMAND: "swapper"
 #0 [ffff88007ff69e50] schedule at ffffffff812fb2a7
 #1 [ffff88007ff69e68] xen_force_evtchn_callback at ffffffff8100dc41
 #2 [ffff88007ff69e70] check_events at ffffffff8100e252
 #3 [ffff88007ff69ec8] tick_nohz_stop_sched_tick at ffffffff81070d4e
 #4 [ffff88007ff69f28] cpu_idle at ffffffff8100fe97

PID: 0      TASK: ffff88007ff52350  CPU: 4   COMMAND: "swapper"
 #0 [ffff88007ff6be50] schedule at ffffffff812fb2a7
 #1 [ffff88007ff6be68] xen_force_evtchn_callback at ffffffff8100dc41
 #2 [ffff88007ff6be70] check_events at ffffffff8100e252
 #3 [ffff88007ff6bec8] tick_nohz_stop_sched_tick at ffffffff81070d4e
 #4 [ffff88007ff6bf28] cpu_idle at ffffffff8100fe97

PID: 0      TASK: ffff88007ff52a60  CPU: 5   COMMAND: "swapper"
 #0 [ffff88007ff6de50] schedule at ffffffff812fb2a7
 #1 [ffff88007ff6de68] xen_force_evtchn_callback at ffffffff8100dc41
 #2 [ffff88007ff6de70] check_events at ffffffff8100e252
 #3 [ffff88007ff6dec8] tick_nohz_stop_sched_tick at ffffffff81070d4e
 #4 [ffff88007ff6df28] cpu_idle at ffffffff8100fe97

PID: 21     TASK: ffff88007ffab170  CPU: 0   COMMAND: "events/0"
 #0 [ffff88007ffc3b90] schedule at ffffffff812fb2a7
 #1 [ffff88007ffc3c68] schedule_timeout at ffffffff812fb6dd
 #2 [ffff88007ffc3ce8] wait_for_common at ffffffff812fb594
 #3 [ffff88007ffc3d78] synchronize_sched at ffffffff8106307b
 #4 [ffff88007ffc3db8] dev_deactivate at ffffffff81262d5f
 #5 [ffff88007ffc3de8] __linkwatch_run_queue at ffffffff8125a8ea
 #6 [ffff88007ffc3e28] linkwatch_event at ffffffff8125a954
 #7 [ffff88007ffc3e38] worker_thread at ffffffff8106195f
 #8 [ffff88007ffc3ee8] kthread at ffffffff81064cc5
 #9 [ffff88007ffc3f48] kernel_thread at ffffffff81011baa

PID: 26     TASK: ffff88007ffad4c0  CPU: 5   COMMAND: "events/5"
 #0 [ffff88007ffd5cc0] schedule at ffffffff812fb2a7
 #1 [ffff88007ffd5d98] __mutex_lock_common at ffffffff812fbb3b
 #2 [ffff88007ffd5e08] mutex_lock at ffffffff812fbc63
 #3 [ffff88007ffd5e28] linkwatch_event at ffffffff8125a93d
 #4 [ffff88007ffd5e38] worker_thread at ffffffff8106195f
 #5 [ffff88007ffd5ee8] kthread at ffffffff81064cc5
 #6 [ffff88007ffd5f48] kernel_thread at ffffffff81011baa

PID: 32     TASK: ffff88007f418000  CPU: 5   COMMAND: "xenwatch"
 #0 [ffff88007f413c60] schedule at ffffffff812fb2a7
 #1 [ffff88007f413d38] __mutex_lock_common at ffffffff812fbb3b
 #2 [ffff88007f413da8] mutex_lock at ffffffff812fbc63
 #3 [ffff88007f413dc8] netif_notify_peers at ffffffff8126316d
 #4 [ffff88007f413dd8] backend_changed at ffffffffa000a1b0
 #5 [ffff88007f413e78] xenwatch_thread at ffffffff811f1628
 #6 [ffff88007f413ee8] kthread at ffffffff81064cc5
 #7 [ffff88007f413f48] kernel_thread at ffffffff81011baa

-Timo
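P.S. For reference, the pvclock_resume disassembly above looks like what I'd expect from the upstream fix that added pvclock_resume(), i.e. roughly the following (quoted from memory of arch/x86/kernel/pvclock.c, so treat it as a sketch rather than the exact Debian source):

  /* shared between all pvclock readers; reset on resume */
  static atomic64_t last_value = ATOMIC64_INIT(0);

  void pvclock_resume(void)
  {
          /* drop the stale pre-resume value so the monotonicity clamp
             in pvclock_clocksource_read() cannot hold time back */
          atomic64_set(&last_value, 0);
  }

so the "movq $0x0,..." at pvclock_resume+0 is the store that clears last_value at 0xffffffff815d1058.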