--
You received this bug notification because you are a member of Canonical
Platform QA Team, which is subscribed to ubuntu-kernel-tests.
https://bugs.launchpad.net/bugs/2098916

Title:
  ubuntu_zfs_stress caused Oops (kernel NPD) on s390x-vm

Status in ubuntu-kernel-tests:
  New

Bug description:
  Found on 2025.01.13/bionic/linux-hwe-5.4/5.4.0-208.228~18.04.1
  (openstack/s390x-vm). Seems to be flaky.

  17:53:34 DEBUG| [stdout] ================================================================================
  17:53:34 DEBUG| [stdout]
  17:53:34 DEBUG| [stdout] --------------------------------------------------------------------------------
  17:53:34 DEBUG| [stdout] ZFS options: compression=gzip-1
  17:53:34 DEBUG| [stdout] Stress test: /home/ubuntu/autotest/client/tmp/ubuntu_zfs_stress/src/stress-ng/stress-ng --verify --times --metrics-brief --syslog --keep-name -t 5s --hdd 4 --hdd-opts sync,wr-rnd,rd-rnd,fadv-willneed,fadv-rnd --link 4 --symlink 4 --lockf 4 --seek 4 --aio 4 --aio-requests 32 --dentry 4 --dir 4 --dentry-order stride --fallocate 4 --fstat 4 --dentries 65536 --io 1 --lease 4 --mmap 0 --mmap-file --mmap-async --open 4 --rename 4 --hdd-bytes 128M --fallocate-bytes 128M --chdir 4 --chmod 4 --filename 4 --rename 4 --mmap-bytes 128M --hdd-write-size 512 --ionice-class besteffort --ionice-level 0
  17:53:34 DEBUG| [stdout] VDEV path:
  17:53:34 DEBUG| [stdout] Mount point: /testpool/test
  17:53:34 DEBUG| [stdout] Date: Wed Feb 19 17:53:34 UTC 2025
  17:53:34 DEBUG| [stdout] Host: kt-b-lhwe54-gen-5-4-u-zfs-stress-s390x-kvm
  17:53:34 DEBUG| [stdout] Kernel: 5.4.0-208-generic #228~18.04.1-Ubuntu SMP Sat Feb 8 00:55:29 UTC 2025
  17:53:34 DEBUG| [stdout] Machine: kt-b-lhwe54-gen-5-4-u-zfs-stress-s390x-kvm s390x s390x
  17:53:34 DEBUG| [stdout] CPUs online: 2
  17:53:34 DEBUG| [stdout] CPUs total: 2
  17:53:34 DEBUG| [stdout] Page size: 4096
  17:53:34 DEBUG| [stdout] Pages avail: 1985435
  17:53:34 DEBUG| [stdout] Pages total: 2056660
  17:53:34 DEBUG| [stdout] --------------------------------------------------------------------------------
  17:53:34 DEBUG| [stdout]
  17:53:34 DEBUG| [stdout] zfs set compression=gzip-1 testpool/test
  17:53:34 DEBUG| [stdout]
  17:53:34 DEBUG| [stdout] stress-ng: info: [27246] aio stressor will be skipped, it is not implemented on this system: s390x Linux 5.4.0-208-generic gcc 7.5.0 (built without aio.h)
  17:53:34 DEBUG| [stdout] stress-ng: info: [27246] setting to a 5 secs run per stressor
  17:53:34 DEBUG| [stdout] stress-ng: info: [27246] dispatching hogs: 4 hdd, 4 link, 4 symlink, 4 lockf, 4 seek, 4 dentry, 4 dir, 4 fallocate, 4 fstat, 1 io, 4 lease, 2 mmap, 4 open, 4 rename, 4 chdir, 4 chmod, 4 filename, 4 rename
  17:53:34 DEBUG| [stdout] stress-ng: info: [27246] note: /proc/sys/kernel/sched_autogroup_enabled is 1 and this can impact scheduling throughput for processes not attached to a tty. Setting this to 0 may improve performance metrics
  17:53:34 DEBUG| [stdout] stress-ng: info: [27302] open: using a maximum of 1048576 file descriptors
  17:53:34 DEBUG| [stdout] stress-ng: info: [27295] io: this is a legacy I/O sync stressor, consider using iomix instead
  17:53:35 DEBUG| [stdout] stress-ng: fail: [27277] seek: write failed, errno=5 (Input/output error), filesystem type: zfs (14334 blocks available)
  17:53:36 DEBUG| [stdout] stress-ng: fail: [27275] seek: write failed, errno=5 (Input/output error), filesystem type: zfs (14334 blocks available)
  17:53:36 DEBUG| [stdout] stress-ng: fail: [27274] seek: read failed, errno=5 (Input/output error), filesystem type: zfs (14334 blocks available)
  17:53:37 DEBUG| [stdout] Found kernel warning and/or call trace:
  17:53:37 DEBUG| [stdout]
  17:53:37 DEBUG| [stdout] [ 181.561304] TESTING: --verify --times --metrics-brief --syslog --keep-name -t 5s --hdd 4 --hdd-opts sync,wr-rnd,rd-rnd,fadv-willneed,fadv-rnd --link 4 --symlink 4 --lockf 4 --seek 4 --aio 4 --aio-requests 32 --dentry 4 --dir 4 --dentry-order stride --fallocate 4 --fstat 4 --dentries 65536 --io 1 --lease 4 --mmap 0 --mmap-file --mmap-async --open 4 --rename 4 --hdd-bytes 128M --fallocate-bytes 128M --chdir 4 --chmod 4 --filename 4 --rename 4 --mmap-bytes 128M --hdd-write-size 512 --ionice-class besteffort --ionice-level 0
  17:53:37 DEBUG| [stdout] [ 181.757863] ubuntu_zfs_stre (23783): drop_caches: 1
  17:53:37 DEBUG| [stdout] [ 181.762949] ubuntu_zfs_stre (23783): drop_caches: 2
  17:53:37 DEBUG| [stdout] [ 181.764170] ubuntu_zfs_stre (23783): drop_caches: 3
  17:53:37 DEBUG| [stdout] [ 181.780661] ZFS options: compression=gzip-1
  17:53:37 DEBUG| [stdout] [ 181.780710] Stress test: /home/ubuntu/autotest/client/tmp/ubuntu_zfs_stress/src/stress-ng/stress-ng --verify --times --metrics-brief --syslog --keep-name -t 5s --hdd 4 --hdd-opts sync,wr-rnd,rd-rnd,fadv-willneed,fadv-rnd --link 4 --symlink 4 --lockf 4 --seek 4 --aio 4 --aio-requests 32 --dentry 4 --dir 4 --dentry-order stride --fallocate 4 --fstat 4 --dentries 65536 --io 1 --lease 4 --mmap 0 --mmap-file --mmap-async --open 4 --rename 4 --hdd-bytes 128M --fallocate-bytes 128M --chdir 4 --chmod 4 --filename 4 --rename 4 --mmap-bytes 128M --hdd-write-size 512 --ionice-class besteffort --ionice-level 0
  17:53:37 DEBUG| [stdout] [ 181.780719] Mount point: /testpool/test
  17:53:37 DEBUG| [stdout] [ 184.029282] Unable to handle kernel pointer dereference in virtual kernel address space
  17:53:37 DEBUG| [stdout] [ 184.029287] Failing address: 0000000000000000 TEID: 0000000000000483
  17:53:37 DEBUG| [stdout] [ 184.029287] Fault in home space mode while using kernel ASCE.
  17:53:37 DEBUG| [stdout] [ 184.029288] AS:0000000162db0007 R3:00000001fffd8007 S:00000001fffdc000 P:000000000000003d
  17:53:37 DEBUG| [stdout] [ 184.029369] Oops: 0004 ilc:3 [#1] SMP
  17:53:37 DEBUG| [stdout] [ 184.029371] Modules linked in: zfs(PO) zunicode(PO) zlua(PO) zcommon(PO) znvpair(PO) zavl(PO) icp(PO) spl(O) binfmt_misc virtio_rng s390_trng vfio_ccw vfio_mdev mdev vfio_iommu_type1 vfio sch_fq_codel ib_iser rdma_cm iw_cm ib_cm ib_core iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi ip_tables x_tables btrfs zstd_compress zlib_deflate raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq libcrc32c raid1 raid0 linear pkey zcrypt crc32_vx_s390 virtio_blk ghash_s390 prng aes_s390 des_s390 libdes sha3_512_s390 sha3_256_s390 sha512_s390 sha256_s390 sha1_s390 sha_common virtio_net net_failover failover
  17:53:37 DEBUG| [stdout] [ 184.029400] CPU: 0 PID: 27276 Comm: stress-ng Tainted: P O 5.4.0-208-generic #228~18.04.1-Ubuntu
  17:53:37 DEBUG| [stdout] [ 184.029401] Hardware name: IBM 8561 LT1 400 (KVM/Linux)
  17:53:37 DEBUG| [stdout] [ 184.029402] Krnl PSW : 0704e00180000000 0000000162334200 (mutex_lock+0x10/0x28)
  17:53:37 DEBUG| [stdout] [ 184.029409] R:0 T:1 IO:1 EX:1 Key:0 M:1 W:0 P:0 AS:3 CC:2 PM:0 RI:0 EA:3
  17:53:37 DEBUG| [stdout] [ 184.029410] Krnl GPRS: 00000001e3b67800 0000000000000000 0000000000000010 00000001b90d4888
  17:53:37 DEBUG| [stdout] [ 184.029410] 00000001b0dc5500 0000000000399a50 0000000000000019 0000000000000038
  17:53:37 DEBUG| [stdout] [ 184.029411] 0000000000000010 00000001ba6b5560 0000000000000000 0000000000000000
  17:53:37 DEBUG| [stdout] [ 184.029412] 00000001b0dc5500 0000000000000030 000003ff80662744 000003e0048ff7a0
  17:53:37 DEBUG| [stdout] [ 184.029418] Krnl Code: 00000001623341f0: c00400000000 brcl 0,00000001623341f0
  17:53:37 DEBUG| [stdout] 00000001623341f6: a7190000 lghi %r1,0
  17:53:37 DEBUG| [stdout] #00000001623341fa: e34003400004 lg %r4,832
  17:53:37 DEBUG| [stdout] >0000000162334200: eb1420000030 csg %r1,%r4,0(%r2)
  17:53:37 DEBUG| [stdout] 0000000162334206: ec160006007c cgij %r1,0,6,0000000162334212
  17:53:37 DEBUG| [stdout] 000000016233420c: 07fe bcr 15,%r14
  17:53:37 DEBUG| [stdout] 000000016233420e: 47000700 bc 0,1792
  17:53:37 DEBUG| [stdout] 0000000162334212: c0f4ffffffe7 brcl 15,00000001623341e0
  17:53:37 DEBUG| [stdout] [ 184.029426] Call Trace:
  17:53:37 DEBUG| [stdout] [ 184.029428] ([<00000001bef2f400>] 0x1bef2f400)
  17:53:37 DEBUG| [stdout] [ 184.029605] [<000003ff8066d4d8>] dbuf_dirty+0x6e8/0x830 [zfs]
  17:53:37 DEBUG| [stdout] [ 184.029629] [<000003ff806784fe>] dmu_write_uio_dnode+0xa6/0x150 [zfs]
  17:53:37 DEBUG| [stdout] [ 184.029653] [<000003ff80678624>] dmu_write_uio_dbuf+0x7c/0xa0 [zfs]
  17:53:37 DEBUG| [stdout] [ 184.029700] [<000003ff80762808>] zfs_write+0x930/0xc88 [zfs]
  17:53:37 DEBUG| [stdout] [ 184.029747] [<000003ff80781eee>] zpl_write_common_iovec+0xb6/0x128 [zfs]
  17:53:37 DEBUG| [stdout] [ 184.029793] [<000003ff80782826>] zpl_iter_write_common+0xa6/0xd0 [zfs]
  17:53:37 DEBUG| [stdout] [ 184.029839] [<000003ff807828c2>] zpl_iter_write+0x72/0xa8 [zfs]
  17:53:37 DEBUG| [stdout] [ 184.029842] [<0000000161cffa3a>] new_sync_write+0x11a/0x1b8
  17:53:37 DEBUG| [stdout] [ 184.029843] [<0000000161d004d2>] vfs_write+0x152/0x1e0
  17:53:37 DEBUG| [stdout] [ 184.029844] [<0000000161d01d4c>] ksys_write+0xac/0xe0
  17:53:37 DEBUG| [stdout] [ 184.029846] [<000000016233718c>] system_call+0xd8/0x2c8
  17:53:37 DEBUG| [stdout] [ 184.029846] Last Breaking-Event-Address:
  17:53:37 DEBUG| [stdout] [ 184.029866] [<000003ff8064715c>] 0x3ff8064715c
  17:53:37 DEBUG| [stdout] [ 184.029868] ---[ end trace 269ae123ecbec9fd ]---
  17:55:35 DEBUG| [stdout] stress-ng: warn: [27300] cannot terminate process 27419, gave up after 120 seconds
  17:55:35 DEBUG| [stdout] stress-ng: warn: [27301] cannot terminate process 27416, gave up after 120 seconds

  WARNING: sut-test timed out after 180 minutes and was killed!
  WARNING: The test may have hung or the timeout may need to be increased.
  sut-test TEST SYSTEM FAILURE DETECTED
  Test results file '/home/openstack/workspace/bionic-linux-hwe-5.4-generic-s390x.kvm-5.4.0-ubuntu_zfs_stress/kernel-results.xml' not found.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu-kernel-tests/+bug/2098916/+subscriptions