On Thu, Sep 4, 2014 at 1:46 PM, Kirill Batuzov <batuz...@ispras.ru> wrote:
> On Wed, 3 Sep 2014, Andrey Korolyov wrote:
>
>> Given 2.1 and isa-serial output, set up as ttyS0 for the guest VM
>> with a 9600 baud rate.
>>
>> The test case is quite simple: display as much data as possible over
>> the serial console and do not hang the system. While qemu-1.1 works
>> perfectly, only complaining about lost interrupts (a known bug in the
>> guest kernel being used), 2.1 just hangs after some seconds, eating
>> up all of the available CPU quota.
>>
>> The test case is 'while true; do dmesg; done' in the serial console.
>> I'd like to ask that this bug be considered very serious: the VM goes
>> completely unresponsive in a matter of tens of seconds, and there are
>> plenty of side attacks that can produce enough printk() output to
>> ttyS0 (with a serial console set up, as the default settings of
>> almost any distro provide) in such a way that message suppression
>> does not kick in, so the VM can be DoSed by an unprivileged user.
>>
>
> I tried to reproduce the described behaviour with Aboriginal Linux and
> QEMU 2.1.0, but without luck.
>
> The configurations I tried:
>
> qemu-system-i386 -cpu pentium3 -no-reboot -kernel bzImage -hda hda.sqf \
>     -append "root=/dev/hda rw init=/sbin/init.sh panic=1 console=ttyS0 HOST=i686"
>
> qemu-system-i386 -cpu pentium3 -no-reboot -kernel bzImage -hda hda.sqf \
>     -append "root=/dev/hda rw init=/bin/ash panic=1 console=ttyS0,9600 HOST=i686"
>
> With all that output, the system did not hang. In particular, I could
> always switch to the QEMU monitor and stop the VM from there.
>
> Can you give an exact QEMU command line which leads to the bug?
>
> --
> Kirill
Thanks. The launch string can be borrowed from the attachment here:
http://lists.nongnu.org/archive/html/qemu-devel/2014-09/msg00482.html
(the same VM is under test). By "hang" I mean that the VM stops
answering ICMP echo requests, which is the watermark past which I count
an issue as serious. I just tested again: the ceiling does not reach
the full available CPU quota *every* time, but settles at a seemingly
random count of cores, from 3 to 9 in my series of tests, against a
quota limit of 12. The VM becomes unresponsive within seconds, with
consumption climbing core by core for about half a minute and then
stabilizing. The guest kernel args are console=tty0 console=ttyS0,9600n8.
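
For reference, a minimal sketch of the reproduction setup. This is my
own reconstruction, not the exact launch string from the attachment
above: the binary name, memory size, disk image, and the use of
-serial pty are assumptions, while the -append arguments match the
guest args mentioned in this thread.

    # Host side: default ISA serial port exposed as a pty (a sketch,
    # not the real launch string; see the attachment linked above).
    qemu-system-x86_64 -enable-kvm -m 1024 \
        -kernel bzImage \
        -append "root=/dev/hda rw console=tty0 console=ttyS0,9600n8" \
        -hda guest.img \
        -serial pty

    # Guest side, in the serial console, as an unprivileged user:
    while true; do dmesg; done

The loop is the same one from the original report; the only point is
to keep the 9600-baud line saturated with console output.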