On Fri, Jun 07, 2019 at 12:26:48AM -0300, Eduardo Habkost wrote:
> On Fri, Feb 01, 2019 at 11:10:31AM -0500, Cleber Rosa wrote:
> > 
> > On 1/31/19 4:26 PM, Cleber Rosa wrote:
> > > 
> > > On 1/31/19 3:21 PM, Cleber Rosa wrote:
> > >> 
> > >> On 1/31/19 3:02 PM, Wainer dos Santos Moschetta wrote:
> > >>> 
> > >>> On 01/17/2019 04:56 PM, Cleber Rosa wrote:
> > >>>> Just like the previous tests, boots a Linux kernel on an aarch64 target
> > >>>> using the virt machine.
> > >>>> 
> > >>>> One special option added is the CPU type, given that the kernel
> > >>>> selected fails to boot on the virt machine's default CPU (cortex-a15).
> > >>>> 
> > >>>> Signed-off-by: Cleber Rosa <cr...@redhat.com>
> > >>>> ---
> > >>>>  .travis.yml                            |  2 +-
> > >>>>  tests/acceptance/boot_linux_console.py | 20 ++++++++++++++++++++
> > >>>>  2 files changed, 21 insertions(+), 1 deletion(-)
> > >>>> 
> > >>>> diff --git a/.travis.yml b/.travis.yml
> > >>>> index 54100eea5a..595e8c0b6c 100644
> > >>>> --- a/.travis.yml
> > >>>> +++ b/.travis.yml
> > >>>> @@ -187,7 +187,7 @@ matrix:
> > >>>>      # Acceptance (Functional) tests
> > >>>>      - env:
> > >>>> -        - CONFIG="--python=/usr/bin/python3 --target-list=x86_64-softmmu,mips-softmmu,mips64el-softmmu,ppc64-softmmu"
> > >>>> +        - CONFIG="--python=/usr/bin/python3 --target-list=x86_64-softmmu,mips-softmmu,mips64el-softmmu,ppc64-softmmu,aarch64-softmmu"
> > >>>>          - TEST_CMD="make check-acceptance"
> > >>>>        addons:
> > >>>>          apt:
> > >>>> diff --git a/tests/acceptance/boot_linux_console.py b/tests/acceptance/boot_linux_console.py
> > >>>> index f3ccd23a7a..107700b517 100644
> > >>>> --- a/tests/acceptance/boot_linux_console.py
> > >>>> +++ b/tests/acceptance/boot_linux_console.py
> > >>>> @@ -138,3 +138,23 @@ class BootLinuxConsole(Test):
> > >>>>          self.vm.launch()
> > >>>>          console_pattern = 'Kernel command line: %s' % kernel_command_line
> > >>>>          self.wait_for_console_pattern(console_pattern)
> > >>>> +
> > >>>> +    def test_aarch64_virt(self):
> > >>> 
> > >>> That test case fails on my system (Fedora 29 x86_64). Avocado seems
> > >>> unable to kill the VM, so it reaches the timeout.
> > >>> 
> > >>> I compiled QEMU with the default configuration:
> > >>> 
> > >>> $ configure --python=/usr/bin/python3 --target-list=x86_64-softmmu,mips-softmmu,mips64el-softmmu,ppc64-softmmu,aarch64-softmmu
> > >>> 
> > >>> Follows a snippet of Avocado's job.log file:
> > >>> ----
> > >>> 2019-01-31 14:41:34,912 test L0602 INFO | START 07-/root/src/qemu/tests/acceptance/boot_linux_console.py:BootLinuxConsole.test_aarch64_virt
> > >>> 2019-01-31 14:41:34,912 test L0298 DEBUG| DATA (filename=output.expected) => NOT FOUND (data sources: variant, test, file)
> > >>> 2019-01-31 14:41:34,913 parameters L0146 DEBUG| PARAMS (key=arch, path=*, default=aarch64) => 'aarch64'
> > >>> 2019-01-31 14:41:34,913 parameters L0146 DEBUG| PARAMS (key=qemu_bin, path=*, default=aarch64-softmmu/qemu-system-aarch64) => 'aarch64-softmmu/qemu-system-aarch64'
> > >>> 2019-01-31 14:41:34,915 download L0070 INFO | Fetching https://sjc.edge.kernel.org/fedora-buffet/fedora/linux/releases/29/Server/aarch64/os/images/pxeboot/vmlinuz -> /var/lib/avocado/data/cache/by_name/vmlinuz.3upct2pr
> > >>> 2019-01-31 14:41:35,490 download L0054 DEBUG| Retrieved URL "https://sjc.edge.kernel.org/fedora-buffet/fedora/linux/releases/29/Server/aarch64/os/images/pxeboot/vmlinuz": content-length 8623423, date: "Thu, 31 Jan 2019 19:41:35 GMT", last-modified: "Sun, 21 Oct 2018 00:43:09 GMT"
> > >>> 2019-01-31 14:41:41,765 qemu L0317 DEBUG| VM launch command: 'aarch64-softmmu/qemu-system-aarch64 -chardev socket,id=mon,path=/var/tmp/tmpizirkcud/qemu-32609-monitor.sock -mon chardev=mon,mode=control -display none -vga none -machine virt -chardev socket,id=console,path=/var/tmp/tmpizirkcud/qemu-32609-console.sock,server,nowait -serial chardev:console -cpu cortex-a53 -kernel /var/lib/avocado/data/cache/by_name/vmlinuz -append console=ttyAMA0'
> > >>> 2019-01-31 14:41:41,779 qmp L0167 DEBUG| >>> {'execute': 'qmp_capabilities'}
> > >>> 2019-01-31 14:41:41,931 qmp L0175 DEBUG| <<< {'return': {}}
> > >>> 2019-01-31 14:41:42,830 boot_linux_conso L0041 DEBUG| [    0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd034]
> > >>> 
> > >>> (...)
> > >>> 
> > >>> 2019-01-31 14:41:42,833 boot_linux_conso L0041 DEBUG| [    0.000000] Policy zone: DMA32
> > >>> 2019-01-31 14:41:42,833 boot_linux_conso L0041 DEBUG| [    0.000000] Kernel command line: console=ttyAMA0
> > >>> 2019-01-31 14:41:42,833 qmp L0167 DEBUG| >>> {'execute': 'quit'}
> > >> 
> > >> Here, a QMP response like "<<< {'return': {}}" would be expected.
> > >> 
> > >> Since I cannot reproduce this on my system (or on the Travis CI jobs
> > >> I've sent), can you tell me on top of which commit you applied these
> > >> patches?
> > > 
> > > I spoke too soon:
> > > 
> > > https://travis-ci.org/clebergnu/qemu/jobs/487121425#L3033
> > > 
> > > This looks like a recent regression, and I'm guessing it's not on the
> > > test's side. I'll try to bisect it and let you know.
> > 
> > On a fresh environment, I am able to reproduce this on roughly one out
> > of every two runs. When I hit it, I attached GDB to the process, and
> > the backtrace shows:
> > 
> > [Thread debugging using libthread_db enabled]
> > Using host libthread_db library "/lib64/libthread_db.so.1".
> > warning: Loadable section ".note.gnu.property" outside of ELF segments
> > warning: Loadable section ".note.gnu.property" outside of ELF segments
> > __lll_lock_wait () at ../sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:103
> > 103     2:      movl    %edx, %eax
> > (gdb) bt
> > #0  __lll_lock_wait () at ../sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:103
> > #1  0x00007fc6ba1a2e09 in __GI___pthread_mutex_lock (mutex=mutex@entry=0x5615a233d020 <qemu_global_mutex>) at ../nptl/pthread_mutex_lock.c:80
> > #2  0x00005615a1bb7593 in qemu_mutex_lock_impl (mutex=0x5615a233d020 <qemu_global_mutex>, file=0x5615a1db2d4c "util/main-loop.c", line=236) at util/qemu-thread-posix.c:66
> > #3  0x00005615a171125e in qemu_mutex_lock_iothread_impl (file=file@entry=0x5615a1db2d4c "util/main-loop.c", line=line@entry=236) at /home/cleber/src/qemu/cpus.c:1849
> > #4  0x00005615a1bb415d in os_host_main_loop_wait (timeout=<optimized out>) at util/main-loop.c:236
> > #5  main_loop_wait (nonblocking=<optimized out>) at util/main-loop.c:497
> > #6  0x00005615a18fdd39 in main_loop () at vl.c:1928
> > #7  0x00005615a16c9ee9 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4665
> 
> Tip: run "thread apply all bt" so you can get a backtrace of all
> threads.
> 
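To make the state in the backtrace above concrete: the main loop thread is blocked in qemu_mutex_lock_iothread() because another thread holds qemu_global_mutex while itself being blocked on something else. The generic hazard can be sketched in a few lines of Python (purely illustrative, not QEMU code; all names here are made up):

```python
import threading
import time

global_lock = threading.Lock()   # stands in for qemu_global_mutex
stall = threading.Event()        # stands in for the stuck operation

def vcpu_thread():
    # A thread takes the global lock, then blocks on a slow or stuck
    # operation while still holding it.
    with global_lock:
        stall.wait()

t = threading.Thread(target=vcpu_thread, daemon=True)
t.start()
time.sleep(0.1)  # give the thread time to grab the lock

# The "main loop" now tries to take the same lock, as in
# os_host_main_loop_wait() -> qemu_mutex_lock_iothread() above:
blocked = not global_lock.acquire(timeout=0.5)
print("main loop stuck waiting for the lock:", blocked)    # True

stall.set()  # unblock the holder; it releases the lock on exit
t.join()
recovered = global_lock.acquire(timeout=0.5)
print("lock available after the holder unblocks:", recovered)  # True
```

Nothing here is specific to aarch64; any target whose VCPU thread can block while holding the global lock would show the same main-loop backtrace.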
Nice, thanks!

> > Running it with `taskset -c 1` prevents this issue from happening,
> > which, AFAICT, contributes even further towards this being a QEMU race
> > condition.
> > 
> > I'm CC'ing Peter and Claudio (listed maintainers of aarch64), as this
> > seems to be limited to that target. Any tips on what to do here?
> 
> I am hitting this on Travis, too, and I finally could reproduce it
> locally.
> 
> The guest is still writing to the serial console, but nobody is
> reading the data on the other side. A VCPU thread is stuck inside
> the EAGAIN/nanosleep loop at qemu_chr_write_buffer(), holding the
> QEMU global lock.

This is awesome, because it really looks like a perfect explanation for
the behavior/reproducer/workaround that I described here:

  https://bugs.launchpad.net/qemu/+bug/1829779

> Thread 4 (Thread 0x7f2e45fff700 (LWP 6461)):
> #0  0x00007f2e4ec03500 in nanosleep () at /lib64/libpthread.so.0
> #1  0x00007f2e4fb229d7 in g_usleep () at /lib64/libglib-2.0.so.0
> #2  0x0000559a4e7ca4c9 in qemu_chr_write_buffer (s=s@entry=0x559a502d0ac0, buf=buf@entry=0x7f2e45ffdd90 "7", len=1, offset=offset@entry=0x7f2e45ffdd60, write_all=true) at chardev/char.c:115
> #3  0x0000559a4e7ca78f in qemu_chr_write (s=0x559a502d0ac0, buf=buf@entry=0x7f2e45ffdd90 "7", len=len@entry=1, write_all=write_all@entry=true) at chardev/char.c:148
> #4  0x0000559a4e7cc7e2 in qemu_chr_fe_write_all (be=be@entry=0x559a504b4c50, buf=buf@entry=0x7f2e45ffdd90 "7", len=len@entry=1) at chardev/char-fe.c:53
> #5  0x0000559a4e58f320 in pl011_write (opaque=0x559a504b47d0, offset=0, value=55, size=<optimized out>) at hw/char/pl011.c:183
> #6  0x0000559a4e325121 in memory_region_write_accessor (mr=0x559a504b4ae0, addr=0, value=<optimized out>, size=2, shift=<optimized out>, mask=<optimized out>, attrs=...) at /home/ehabkost/rh/proj/virt/qemu/memory.c:503
> #7  0x0000559a4e322cd6 in access_with_adjusted_size (addr=addr@entry=0, value=value@entry=0x7f2e45ffded8, size=size@entry=2, access_size_min=<optimized out>, access_size_max=<optimized out>, access_fn=access_fn@entry=0x559a4e3250a0 <memory_region_write_accessor>, mr=0x559a504b4ae0, attrs=...) at /home/ehabkost/rh/proj/virt/qemu/memory.c:569
> #8  0x0000559a4e32763f in memory_region_dispatch_write (mr=mr@entry=0x559a504b4ae0, addr=addr@entry=0, data=<optimized out>, data@entry=55, size=size@entry=2, attrs=...) at /home/ehabkost/rh/proj/virt/qemu/memory.c:1497
> #9  0x0000559a4e338708 in io_writex (env=env@entry=0x559a503d5620, mmu_idx=mmu_idx@entry=1, val=val@entry=55, addr=addr@entry=18446462598867529728, retaddr=139836732143069, size=2, iotlbentry=<optimized out>, iotlbentry=<optimized out>) at /home/ehabkost/rh/proj/virt/qemu/accel/tcg/cputlb.c:945
> #10 0x0000559a4e33d203 in store_helper (big_endian=false, size=2, retaddr=<optimized out>, oi=<optimized out>, val=55, addr=18446462598867529728, env=0x559a503d5620) at /home/ehabkost/rh/proj/virt/qemu/accel/tcg/cputlb.c:1544
> #11 0x0000559a4e33d203 in helper_le_stw_mmu (env=0x559a503d5620, addr=18446462598867529728, val=55, oi=<optimized out>, retaddr=139836732143069) at /home/ehabkost/rh/proj/virt/qemu/accel/tcg/cputlb.c:1636
> #12 0x00007f2e46bef5dd in code_gen_buffer ()
> #13 0x0000559a4e352381 in cpu_tb_exec (itb=<optimized out>, cpu=0x559a503d5620) at /home/ehabkost/rh/proj/virt/qemu/accel/tcg/cpu-exec.c:171
> #14 0x0000559a4e352381 in cpu_loop_exec_tb (tb_exit=<synthetic pointer>, last_tb=<synthetic pointer>, tb=<optimized out>, cpu=0x559a503d5620) at /home/ehabkost/rh/proj/virt/qemu/accel/tcg/cpu-exec.c:618
> #15 0x0000559a4e352381 in cpu_exec (cpu=cpu@entry=0x559a503cd360) at /home/ehabkost/rh/proj/virt/qemu/accel/tcg/cpu-exec.c:729
> #16 0x0000559a4e30ea0f in tcg_cpu_exec (cpu=0x559a503cd360) at /home/ehabkost/rh/proj/virt/qemu/cpus.c:1434
> #17 0x0000559a4e310b6b in qemu_tcg_cpu_thread_fn (arg=arg@entry=0x559a503cd360) at /home/ehabkost/rh/proj/virt/qemu/cpus.c:1743
> #18 0x0000559a4e83669a in qemu_thread_start (args=<optimized out>) at util/qemu-thread-posix.c:502
> #19 0x00007f2e4ebf958e in start_thread () at /lib64/libpthread.so.0
> #20 0x00007f2e4eb266f3 in clone () at /lib64/libc.so.6
> 
> For reference, this is the QEMU command line:
> 
> aarch64-softmmu/qemu-system-aarch64 -chardev socket,id=mon,path=/var/tmp/tmpxnkcjvf0/qemu-6453-monitor.sock -mon chardev=mon,mode=control -display none -vga none -machine virt -chardev socket,id=console,path=/var/tmp/tmpxnkcjvf0/qemu-6453-console.sock,server,nowait -serial chardev:console -cpu cortex-a53 -kernel /home/ehabkost/rh/proj/virt/qemu/tests/venv/data/cache/by_location/e959d0e1dd72e77653e218e666198db1f3d0c213/vmlinuz -append printk.time=0 console=ttyAMA0
> 
> -- 
> Eduardo

Besides the command line, I hope it's also easy to use the following
reproducer:

  https://github.com/clebergnu/qemu/commit/c778e28c24030c4a36548b714293b319f4bf18df

It should apply cleanly to QEMU master, and instructions on how to run it
are documented in the commit message.

Thanks,
- Cleber.
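For readers trying to picture the hang without running the reproducer: the failure mode described above is a writer retrying on EAGAIN forever because the peer has stopped draining the socket. The same situation can be staged outside QEMU in a few lines of Python (illustrative only; buffer sizes and the retry bound are arbitrary, and the real qemu_chr_write_buffer() retry loop is unbounded):

```python
import socket
import time

# A socketpair standing in for the console chardev: "guest" is the side
# QEMU writes to, "console" is the side the test should be reading.
console, guest = socket.socketpair()
guest.setblocking(False)
console.setblocking(False)

# With nobody reading, fill the kernel buffers until writes hit EAGAIN --
# the condition that keeps qemu_chr_write_buffer() in its retry loop.
sent = 0
while True:
    try:
        sent += guest.send(b"x" * 4096)
    except BlockingIOError:
        break

def write_all(sock, data, max_tries=5):
    """Retry-on-EAGAIN loop, similar in spirit to qemu_chr_write_buffer()'s
    g_usleep()/retry, but bounded so this demo terminates."""
    offset, tries = 0, 0
    while offset < len(data) and tries < max_tries:
        try:
            offset += sock.send(data[offset:])
        except BlockingIOError:
            tries += 1
            time.sleep(0.01)
    return offset

stuck = write_all(guest, b"hello")   # no reader: no progress at all
while True:                          # now drain the console side...
    try:
        console.recv(65536)
    except BlockingIOError:
        break
done = write_all(guest, b"hello")    # ...and the write completes
print("bytes written while unread:", stuck)    # 0
print("bytes written after draining:", done)   # 5
```

This mirrors the test's timeline: Avocado stops reading the console socket once it sends QMP 'quit', the PL011 write path blocks exactly like the first write_all() call, and because it blocks while holding the global lock, the main loop can never process the 'quit' command.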