On Mon, Mar 21, 2016 at 02:45:07PM +0800, kernel test robot wrote:
> FYI, we noticed the below changes on
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git rcu/dev
> commit 5b3e3964dba5f5a3210ca931d523c1e1f3119b31 ("rcutorture: Add RCU 
> grace-period performance tests")
> 
> As shown below, the message "torture_init_begin: refusing rcu init: spin_lock 
> running" appeared with your commit.

This is the expected result if you try to run two torture tests at the
same time.  You can run only one of locktorture, rcutorture, or rcuperf
at any given time.

                                                        Thanx, Paul

> [    3.310757] spin_lock-torture:--- Start of test [debug]: nwriters_stress=4 
> nreaders_stress=0 stat_interval=60 verbose=1 shuffle_interval=3 stutter=5 
> shutdown_secs=0 onoff_interval=0 onoff_holdoff=0
> [    3.318722] spin_lock-torture: Creating torture_shuffle task
> [    3.350213] spin_lock-torture: Creating torture_stutter task
> [    3.353000] spin_lock-torture: torture_shuffle task started
> [    3.355562] spin_lock-torture: Creating lock_torture_writer task
> [    3.358373] spin_lock-torture: torture_stutter task started
> [    3.361060] spin_lock-torture: lock_torture_writer task started
> [    3.370011] spin_lock-torture: Creating lock_torture_writer task
> [    3.372856] spin_lock-torture: Creating lock_torture_writer task
> [    3.375817] spin_lock-torture: lock_torture_writer task started
> [    3.378697] spin_lock-torture: Creating lock_torture_writer task
> [    3.380049] spin_lock-torture: lock_torture_writer task started
> [    3.410169] spin_lock-torture: Creating lock_torture_stats task
> [    3.413129] spin_lock-torture: lock_torture_writer task started
> [    3.420137] torture_init_begin: refusing rcu init: spin_lock running
> 
> [    3.430064] spin_lock-torture: lock_torture_stats task started
> [    3.441101] futex hash table entries: 16 (order: -1, 2048 bytes)
> [    3.443791] audit: initializing netlink subsys (disabled)
> [    3.446329] audit: type=2000 audit(1458435960.381:1): initialized
> [    3.470185] zbud: loaded
> 
> 
> FYI, raw QEMU command line is:
> 
>       qemu-system-x86_64 -enable-kvm -cpu Nehalem -kernel 
> /pkg/linux/x86_64-randconfig-i0-201612/gcc-5/5b3e3964dba5f5a3210ca931d523c1e1f3119b31/vmlinuz-4.5.0-rc1-00035-g5b3e396
>  -append 'root=/dev/ram0 user=lkp 
> job=/lkp/scheduled/vm-intel12-yocto-x86_64-6/bisect_boot-1-yocto-minimal-x86_64.cgz-x86_64-randconfig-i0-201612-5b3e3964dba5f5a3210ca931d523c1e1f3119b31-20160320-8459-1gazcic-1.yaml
>  ARCH=x86_64 kconfig=x86_64-randconfig-i0-201612 
> branch=linux-devel/devel-spot-201603200631 
> commit=5b3e3964dba5f5a3210ca931d523c1e1f3119b31 
> BOOT_IMAGE=/pkg/linux/x86_64-randconfig-i0-201612/gcc-5/5b3e3964dba5f5a3210ca931d523c1e1f3119b31/vmlinuz-4.5.0-rc1-00035-g5b3e396
>  max_uptime=600 
> RESULT_ROOT=/result/boot/1/vm-intel12-yocto-x86_64/yocto-minimal-x86_64.cgz/x86_64-randconfig-i0-201612/gcc-5/5b3e3964dba5f5a3210ca931d523c1e1f3119b31/0
>  LKP_SERVER=inn earlyprintk=ttyS0,115200 systemd.log_level=err debug 
> apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 
> softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 
> prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal rw 
> ip=::::vm-intel12-yocto-x86_64-6::dhcp drbd.minor_count=8'  -initrd 
> /fs/KVM/initrd-vm-intel12-yocto-x86_64-6 -m 320 -smp 2 -device 
> e1000,netdev=net0 -netdev user,id=net0 -boot order=nc -no-reboot -watchdog 
> i6300esb -rtc base=localtime -drive 
> file=/fs/KVM/disk0-vm-intel12-yocto-x86_64-6,media=disk,if=virtio -drive 
> file=/fs/KVM/disk1-vm-intel12-yocto-x86_64-6,media=disk,if=virtio -pidfile 
> /dev/shm/kboot/pid-vm-intel12-yocto-x86_64-6 -serial 
> file:/dev/shm/kboot/serial-vm-intel12-yocto-x86_64-6 -daemonize -display none 
> -monitor null 
> 
> Thanks,
> Xiaolong Ye.
