On Sat, Nov 17, 2012 at 11:29 PM, Adam Vande More <amvandem...@gmail.com> wrote:
> On Sat, Nov 17, 2012 at 1:19 PM, Alex Chistyakov <alexcl...@gmail.com>
> wrote:
>>
>> Okay, the situation has changed radically after the upgrade to 4.2.4:
>> VBoxHeadless does not consume 100% CPU anymore, but now I have another
>> problem:
>>
>> --- 192.168.221.11 ping statistics ---
>> 677 packets transmitted, 677 packets received, 0.0% packet loss
>> round-trip min/avg/max/stddev = 0.233/4997.489/52552.511/10627.598 ms
>>
>> These are the results of pinging the guest from the host.
>>
>> A common pattern looks like this:
>>
>> 64 bytes from 192.168.221.11: icmp_seq=63 ttl=64 time=21.168 ms
>> 64 bytes from 192.168.221.11: icmp_seq=64 ttl=64 time=1.914 ms
>> 64 bytes from 192.168.221.11: icmp_seq=65 ttl=64 time=49.005 ms
>> 64 bytes from 192.168.221.11: icmp_seq=66 ttl=64 time=7.190 ms
>> 64 bytes from 192.168.221.11: icmp_seq=67 ttl=64 time=56.000 ms
>> 64 bytes from 192.168.221.11: icmp_seq=68 ttl=64 time=0.276 ms
>> 64 bytes from 192.168.221.11: icmp_seq=69 ttl=64 time=0.817 ms
>> 64 bytes from 192.168.221.11: icmp_seq=70 ttl=64 time=12.177 ms
>> 64 bytes from 192.168.221.11: icmp_seq=71 ttl=64 time=11.181 ms
>> 64 bytes from 192.168.221.11: icmp_seq=72 ttl=64 time=19790.362 ms
>> 64 bytes from 192.168.221.11: icmp_seq=73 ttl=64 time=18789.374 ms
>> 64 bytes from 192.168.221.11: icmp_seq=74 ttl=64 time=17788.379 ms
>>
>> I also get a lot of "soft lockup - CPU#0 stuck for 22s!" Linux
>> kernel messages on the guest.
>> Since this is a periodic problem, I guess the best way to track it down
>> is to get thread stack dump samples with gdb when the lockup occurs,
>> but unfortunately I am not familiar with the FreeBSD flavour of gdb; it
>> seems to be quite different. I will try anyway.
>
> Does 'sysctl kern.eventtimer.periodic=1' help?
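(On the stack-dump question quoted above: on a FreeBSD host you may not need gdb at all, since procstat(1) from the base system can print the kernel stack of every thread in a process. A minimal sketch, assuming the process name "VBoxHeadless" matches and the VM is running:)

```shell
# Sketch only: sample kernel thread stacks of the VM process without gdb.
# The process name passed to pgrep is an assumption.
pid=$(pgrep -n VBoxHeadless || true)
if [ -n "$pid" ]; then
    procstat -kk "$pid"    # kernel stack trace for each thread in the process
else
    echo "VBoxHeadless is not running"
fi
```

(Running this a few times while the guest is stalled should show where the threads are stuck.)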
No, it does not. I also tried to boot the guest with highres=off and nohz=off and got steady 100% CPU consumption immediately. The sampling result on the host is:

46.04% [11023] cpu_search_highest @ /boot/kernel/kernel
 81.93% [9031]   cpu_search_highest
  59.55% [5378]    sched_idletd
   100.0% [5378]     fork_exit
  40.45% [3653]    cpu_search_highest
   100.0% [3653]     sched_idletd
 17.92% [1975]   sched_idletd
  100.0% [1975]    fork_exit
 00.15% [17]     fork_exit

This looks like a lot of extra rescheduling, and I wonder whether setting CPU affinity could help with this.

Thanks,

--
SY, Alex
_______________________________________________
freebsd-emulation@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-emulation
To unsubscribe, send any mail to "freebsd-emulation-unsubscr...@freebsd.org"
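P.S. The affinity experiment could look something like the sketch below, using cpuset(1) on the FreeBSD host. The process name and the choice of CPU 0 are assumptions, not a tested recipe:

```shell
# Hypothetical sketch: pin all threads of the VM process to a single CPU
# so the scheduler stops migrating them between cores.
pid=$(pgrep -n VBoxHeadless || true)
if [ -n "$pid" ]; then
    cpuset -l 0 -p "$pid"    # restrict the existing process to CPU 0
else
    echo "VBoxHeadless is not running"
fi
```

If that calms cpu_search_highest down, it would point at scheduler migration overhead rather than the VM itself.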