Hello Colin, yes, this is still an open issue:

Linux wopr 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
Apr 22 19:10:03 wopr zed[12576]: eid=8352 class=history_event pool_guid=0xB3B099B638F02EEF
Apr 22 19:10:03 wopr kernel: VERIFY(size != 0) failed
Apr 22 19:10:03 wopr kernel: PANIC at range_tree.c:304:range_tree_find_impl()
Apr 22 19:10:03 wopr kernel: Showing stack for process 12577
Apr 22 19:10:03 wopr kernel: CPU: 8 PID: 12577 Comm: receive_writer Tainted: P O 4.15.0-91-generic #92-Ubuntu
Apr 22 19:10:03 wopr kernel: Hardware name: Supermicro SSG-6038R-E1CR16L/X10DRH-iT, BIOS 2.0 12/17/2015
Apr 22 19:10:03 wopr kernel: Call Trace:
Apr 22 19:10:03 wopr kernel: dump_stack+0x6d/0x8e
Apr 22 19:10:03 wopr kernel: spl_dumpstack+0x42/0x50 [spl]
Apr 22 19:10:03 wopr kernel: spl_panic+0xc8/0x110 [spl]
Apr 22 19:10:03 wopr kernel: ? __switch_to_asm+0x41/0x70
Apr 22 19:10:03 wopr kernel: ? abd_iter_map+0xa/0x90 [zfs]
Apr 22 19:10:03 wopr kernel: ? dbuf_dirty+0x43d/0x850 [zfs]
Apr 22 19:10:03 wopr kernel: ? getrawmonotonic64+0x43/0xd0
Apr 22 19:10:03 wopr kernel: ? getrawmonotonic64+0x43/0xd0
Apr 22 19:10:03 wopr kernel: ? dmu_zfetch+0x49a/0x500 [zfs]
Apr 22 19:10:03 wopr kernel: ? getrawmonotonic64+0x43/0xd0
Apr 22 19:10:03 wopr kernel: ? dmu_zfetch+0x49a/0x500 [zfs]
Apr 22 19:10:03 wopr kernel: ? mutex_lock+0x12/0x40
Apr 22 19:10:03 wopr kernel: ? dbuf_rele_and_unlock+0x1a8/0x4b0 [zfs]
Apr 22 19:10:03 wopr kernel: range_tree_find_impl+0x88/0x90 [zfs]
Apr 22 19:10:03 wopr kernel: ? spl_kmem_zalloc+0xdc/0x1a0 [spl]
Apr 22 19:10:03 wopr kernel: range_tree_clear+0x4f/0x60 [zfs]
Apr 22 19:10:03 wopr kernel: dnode_free_range+0x11f/0x5a0 [zfs]
Apr 22 19:10:03 wopr kernel: dmu_object_free+0x53/0x90 [zfs]
Apr 22 19:10:03 wopr kernel: dmu_free_long_object+0x9f/0xc0 [zfs]
Apr 22 19:10:03 wopr kernel: receive_freeobjects.isra.12+0x7a/0x100 [zfs]
Apr 22 19:10:03 wopr kernel: receive_writer_thread+0x6d2/0xa60 [zfs]
Apr 22 19:10:03 wopr kernel: ? set_curr_task_fair+0x2b/0x60
Apr 22 19:10:03 wopr kernel: ? spl_kmem_free+0x33/0x40 [spl]
Apr 22 19:10:03 wopr kernel: ? kfree+0x165/0x180
Apr 22 19:10:03 wopr kernel: ? receive_free.isra.13+0xc0/0xc0 [zfs]
Apr 22 19:10:03 wopr kernel: thread_generic_wrapper+0x74/0x90 [spl]
Apr 22 19:10:03 wopr kernel: kthread+0x121/0x140
Apr 22 19:10:03 wopr kernel: ? __thread_exit+0x20/0x20 [spl]
Apr 22 19:10:03 wopr kernel: ? kthread_create_worker_on_cpu+0x70/0x70
Apr 22 19:10:03 wopr kernel: ret_from_fork+0x35/0x40
Apr 22 19:12:56 wopr kernel: INFO: task txg_quiesce:2265 blocked for more than 120 seconds.
Apr 22 19:12:56 wopr kernel: Tainted: P O 4.15.0-91-generic #92-Ubuntu
Apr 22 19:12:56 wopr kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr 22 19:12:56 wopr kernel: txg_quiesce D 0 2265 2 0x80000000
Apr 22 19:12:56 wopr kernel: Call Trace:
Apr 22 19:12:56 wopr kernel: __schedule+0x24e/0x880
Apr 22 19:12:56 wopr kernel: schedule+0x2c/0x80
Apr 22 19:12:56 wopr kernel: cv_wait_common+0x11e/0x140 [spl]
Apr 22 19:12:56 wopr kernel: ? wait_woken+0x80/0x80
Apr 22 19:12:56 wopr kernel: __cv_wait+0x15/0x20 [spl]
Apr 22 19:12:56 wopr kernel: txg_quiesce_thread+0x2cb/0x3d0 [zfs]
Apr 22 19:12:56 wopr kernel: ? txg_delay+0x1b0/0x1b0 [zfs]
Apr 22 19:12:56 wopr kernel: thread_generic_wrapper+0x74/0x90 [spl]
Apr 22 19:12:56 wopr kernel: kthread+0x121/0x140
Apr 22 19:12:56 wopr kernel: ? __thread_exit+0x20/0x20 [spl]
Apr 22 19:12:56 wopr kernel: ? kthread_create_worker_on_cpu+0x70/0x70
Apr 22 19:12:56 wopr kernel: ret_from_fork+0x35/0x40
Apr 22 19:12:56 wopr kernel: INFO: task zfs:12482 blocked for more than 120 seconds.
Apr 22 19:12:56 wopr kernel: Tainted: P O 4.15.0-91-generic #92-Ubuntu
Apr 22 19:12:56 wopr kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr 22 19:12:56 wopr kernel: zfs D 0 12482 12479 0x80000080
Apr 22 19:12:56 wopr kernel: Call Trace:
Apr 22 19:12:56 wopr kernel: __schedule+0x24e/0x880
Apr 22 19:12:56 wopr kernel: schedule+0x2c/0x80
Apr 22 19:12:56 wopr kernel: cv_wait_common+0x11e/0x140 [spl]
Apr 22 19:12:56 wopr kernel: ? wait_woken+0x80/0x80
Apr 22 19:12:56 wopr kernel: __cv_wait+0x15/0x20 [spl]
Apr 22 19:12:56 wopr kernel: dmu_recv_stream+0xa51/0xef0 [zfs]
Apr 22 19:12:56 wopr kernel: zfs_ioc_recv_impl+0x306/0x1100 [zfs]
Apr 22 19:12:56 wopr kernel: ? dbuf_rele+0x36/0x40 [zfs]
Apr 22 19:12:56 wopr kernel: zfs_ioc_recv_new+0x33d/0x410 [zfs]
Apr 22 19:12:56 wopr kernel: ? spl_kmem_alloc_impl+0xe5/0x1a0 [spl]
Apr 22 19:12:56 wopr kernel: ? spl_vmem_alloc+0x19/0x20 [spl]
Apr 22 19:12:56 wopr kernel: ? nv_alloc_sleep_spl+0x1f/0x30 [znvpair]
Apr 22 19:12:56 wopr kernel: ? nv_mem_zalloc.isra.0+0x2e/0x40 [znvpair]
Apr 22 19:12:56 wopr kernel: ? nvlist_xalloc.part.2+0x50/0xb0 [znvpair]
Apr 22 19:12:56 wopr kernel: zfsdev_ioctl+0x451/0x610 [zfs]
Apr 22 19:12:56 wopr kernel: do_vfs_ioctl+0xa8/0x630
Apr 22 19:12:56 wopr kernel: ? __audit_syscall_entry+0xbc/0x110
Apr 22 19:12:56 wopr kernel: ? syscall_trace_enter+0x1da/0x2d0
Apr 22 19:12:56 wopr kernel: SyS_ioctl+0x79/0x90
Apr 22 19:12:56 wopr kernel: do_syscall_64+0x73/0x130
Apr 22 19:12:56 wopr kernel: entry_SYSCALL_64_after_hwframe+0x3d/0xa2
Apr 22 19:12:56 wopr kernel: RIP: 0033:0x7f3c5a2d55d7
Apr 22 19:12:56 wopr kernel: RSP: 002b:00007ffcf28d05d8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Apr 22 19:12:56 wopr kernel: RAX: ffffffffffffffda RBX: 0000000000005a46 RCX: 00007f3c5a2d55d7
Apr 22 19:12:56 wopr kernel: RDX: 00007ffcf28d05f0 RSI: 0000000000005a46 RDI: 0000000000000006
Apr 22 19:12:56 wopr kernel: RBP: 00007ffcf28d05f0 R08: 00007f3c5a5aae20 R09: 0000000000000000
Apr 22 19:12:56 wopr kernel: R10: 000055c7fedf4010 R11: 0000000000000246 R12: 00007ffcf28d3c20
Apr 22 19:12:56 wopr kernel: R13: 0000000000000006 R14: 000055c7fedfbf10 R15: 000000000000000c
Apr 22 19:12:56 wopr kernel: INFO: task receive_writer:12577 blocked for more than 120 seconds.
Apr 22 19:12:56 wopr kernel: Tainted: P O 4.15.0-91-generic #92-Ubuntu
Apr 22 19:12:56 wopr kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr 22 19:12:56 wopr kernel: receive_writer D 0 12577 2 0x80000080
Apr 22 19:12:56 wopr kernel: Call Trace:
Apr 22 19:12:56 wopr kernel: __schedule+0x24e/0x880
Apr 22 19:12:56 wopr kernel: schedule+0x2c/0x80
Apr 22 19:12:56 wopr kernel: spl_panic+0xfa/0x110 [spl]
Apr 22 19:12:56 wopr kernel: ? abd_iter_map+0xa/0x90 [zfs]
Apr 22 19:12:56 wopr kernel: ? dbuf_dirty+0x43d/0x850 [zfs]
Apr 22 19:12:56 wopr kernel: ? getrawmonotonic64+0x43/0xd0
Apr 22 19:12:56 wopr kernel: ? getrawmonotonic64+0x43/0xd0
Apr 22 19:12:56 wopr kernel: ? dmu_zfetch+0x49a/0x500 [zfs]
Apr 22 19:12:56 wopr kernel: ? getrawmonotonic64+0x43/0xd0
Apr 22 19:12:56 wopr kernel: ? dmu_zfetch+0x49a/0x500 [zfs]
Apr 22 19:12:56 wopr kernel: ? mutex_lock+0x12/0x40
Apr 22 19:12:56 wopr kernel: ? dbuf_rele_and_unlock+0x1a8/0x4b0 [zfs]
Apr 22 19:12:56 wopr kernel: range_tree_find_impl+0x88/0x90 [zfs]
Apr 22 19:12:56 wopr kernel: ? spl_kmem_zalloc+0xdc/0x1a0 [spl]
Apr 22 19:12:56 wopr kernel: range_tree_clear+0x4f/0x60 [zfs]
Apr 22 19:12:56 wopr kernel: dnode_free_range+0x11f/0x5a0 [zfs]
Apr 22 19:12:56 wopr kernel: dmu_object_free+0x53/0x90 [zfs]
Apr 22 19:12:56 wopr kernel: dmu_free_long_object+0x9f/0xc0 [zfs]
Apr 22 19:12:56 wopr kernel: receive_freeobjects.isra.12+0x7a/0x100 [zfs]
Apr 22 19:12:56 wopr kernel: receive_writer_thread+0x6d2/0xa60 [zfs]
Apr 22 19:12:56 wopr kernel: ? set_curr_task_fair+0x2b/0x60
Apr 22 19:12:56 wopr kernel: ? spl_kmem_free+0x33/0x40 [spl]
Apr 22 19:12:56 wopr kernel: ? kfree+0x165/0x180
Apr 22 19:12:56 wopr kernel: ? receive_free.isra.13+0xc0/0xc0 [zfs]
Apr 22 19:12:56 wopr kernel: thread_generic_wrapper+0x74/0x90 [spl]
Apr 22 19:12:56 wopr kernel: kthread+0x121/0x140
Apr 22 19:12:56 wopr kernel: ? __thread_exit+0x20/0x20 [spl]
Apr 22 19:12:56 wopr kernel: ? kthread_create_worker_on_cpu+0x70/0x70
Apr 22 19:12:56 wopr kernel: ret_from_fork+0x35/0x40

And the syncoid output:

# SSH_AUTH_SOCK=/tmp/ssh-EAphgaS9vNJE/agent.3449 SSH_AGENT_PID=3484 syncoid --recursive --skip-parent --create-bookmark --recvoptions="u" rpool syncoid@wopr:srv/backups/millbarge/rpool

Sending incremental rpool/ROOT@autosnap_2020-01-28_21:00:02_hourly ... syncoid_millbarge_2020-04-23:01:55:31 (~ 26 KB):
27.7KiB 0:00:00 [ 220KiB/s] [==========================>] 103%
Resuming interrupted zfs send/receive from rpool/ROOT/ubuntu to srv/backups/millbarge/rpool/ROOT/ubuntu (~ UNKNOWN remaining):
cannot resume send: 'rpool/ROOT/ubuntu@autosnap_2020-01-25_00:00:01_daily' used in the initial send no longer exists
cannot receive: failed to read from stream
WARN: resetting partially receive state because the snapshot source no longer exists
Sending incremental rpool/ROOT/ubuntu@before_backup ... syncoid_millbarge_2020-04-23:01:55:52 (~ 28.6 GB):
cannot restore to srv/backups/millbarge/rpool/ROOT/ubuntu@autosnap_2020-02-01_00:00:01_monthly: destination already exists
] 3% ETA 0:04:25
mbuffer: error: outputThread: error writing to <stdout> at offset 0x3b9e0000: Broken pipe
] 4% ETA 0:04:43
mbuffer: warning: error during output to <stdout>: Broken pipe
1.16GiB 0:00:12 [94.3MiB/s] [====>                     ] 4%
CRITICAL ERROR: zfs send -I 'rpool/ROOT/ubuntu'@'before_backup' 'rpool/ROOT/ubuntu'@'syncoid_millbarge_2020-04-23:01:55:52' | pv -s 30731359288 | lzop | mbuffer -q -s 128k -m 16M 2>/dev/null | ssh -S /tmp/syncoid-syncoid-syncoid@wopr-1587606930 syncoid@wopr ' mbuffer -q -s 128k -m 16M 2>/dev/null | lzop -dfc | sudo zfs receive -u -s -F '"'"'srv/backups/millbarge/rpool/ROOT/ubuntu'"'"' 2>&1' failed: 256 at /usr/sbin/syncoid line 786.
Sending incremental rpool/home@autosnap_2020-01-28_21:00:02_hourly ... syncoid_millbarge_2020-04-23:01:56:19 (~ 56 KB):
36.1KiB 0:00:00 [ 294KiB/s] [================>         ] 63%
Sending incremental rpool/home/root#'autosnap_2020-01-28_21:30:02_frequently' ... autosnap_2020-02-01_00:00:01_monthly (~ UNKNOWN):
677KiB 0:00:00 [43.1MiB/s] [ <=> ]
Sending incremental rpool/home/root@autosnap_2020-02-01_00:00:01_monthly ... syncoid_millbarge_2020-04-23:01:56:28 (~ 844.4 MB):
850MiB 0:00:11 [75.7MiB/s] [==========================>] 100%
Sending incremental rpool/home/sarnold#'autosnap_2020-01-28_21:30:02_frequently' ... autosnap_2020-02-01_00:00:01_monthly (~ UNKNOWN):
2.83GiB 0:00:29 [96.8MiB/s] [ <=> ]
Sending incremental rpool/home/sarnold@autosnap_2020-02-01_00:00:01_monthly ... syncoid_millbarge_2020-04-23:01:56:56 (~ 45.7 GB):
49.5GiB 0:09:33 [88.3MiB/s] [==========================>] 108%
Sending incremental rpool/swap@autosnap_2020-01-28_21:00:02_hourly ... syncoid_millbarge_2020-04-23:02:07:05 (~ 184.9 MB):
193MiB 0:00:00 [ 282MiB/s] [==========================>] 104%
Sending incremental rpool/tmp@autosnap_2020-01-28_21:00:02_hourly ... syncoid_millbarge_2020-04-23:02:07:24 (~ 13.6 MB):
14.1MiB 0:00:00 [40.0MiB/s] [==========================>] 103%
Sending incremental rpool/usr#'autosnap_2020-01-28_21:30:02_frequently' ... autosnap_2020-02-01_00:00:01_monthly (~ UNKNOWN):
624 B 0:00:00 [90.9KiB/s] [ <=> ]
Sending incremental rpool/usr@autosnap_2020-02-01_00:00:01_monthly ... syncoid_millbarge_2020-04-23:02:07:39 (~ 49 KB):
50.9KiB 0:00:00 [ 190KiB/s] [==========================>] 101%
Sending incremental rpool/usr/local#'autosnap_2020-01-28_21:30:02_frequently' ... autosnap_2020-02-01_00:00:01_monthly (~ UNKNOWN):
6.79MiB 0:00:00 [ 241MiB/s] [ <=> ]
Sending incremental rpool/usr/local@autosnap_2020-02-01_00:00:01_monthly ... syncoid_millbarge_2020-04-23:02:07:58 (~ 2.0 MB):
1.60MiB 0:00:00 [4.69MiB/s] [====================>     ] 79%
Sending incremental rpool/var@autosnap_2020-01-28_21:00:02_hourly ... syncoid_millbarge_2020-04-23:02:08:20 (~ 26 KB):
27.7KiB 0:00:00 [ 171KiB/s] [==========================>] 103%
Sending incremental rpool/var/cache@autosnap_2020-01-28_21:00:02_hourly ... syncoid_millbarge_2020-04-23:02:08:33 (~ 1013.1 MB):
1013MiB 0:00:18 [55.4MiB/s] [==========================>] 100%
Sending incremental rpool/var/lib@autosnap_2020-01-28_21:00:02_hourly ... syncoid_millbarge_2020-04-23:02:08:54 (~ 26 KB):
27.7KiB 0:00:00 [ 125KiB/s] [==========================>] 103%
Sending incremental rpool/var/lib/AccountsService@autosnap_2020-01-28_21:00:02_hourly ... syncoid_millbarge_2020-04-23:02:09:06 (~ 56 KB):
36.1KiB 0:00:00 [ 232KiB/s] [================>         ] 63%
Sending incremental rpool/var/lib/docker@autosnap_2020-01-28_21:00:02_hourly ... syncoid_millbarge_2020-04-23:02:09:18 (~ 54 KB):
34.9KiB 0:00:00 [ 230KiB/s] [================>         ] 64%
INFO: Sending oldest full snapshot rpool/var/lib/lxd@syncoid_millbarge_2020-04-23:02:09:30 (~ 42 KB) to new target filesystem:
45.1KiB 0:00:00 [3.96MiB/s] [==========================>] 105%
INFO: Sending oldest full snapshot rpool/var/lib/lxd/containers@syncoid_millbarge_2020-04-23:02:09:30 (~ 42 KB) to new target filesystem:
45.1KiB 0:00:00 [5.74MiB/s] [==========================>] 105%
INFO: Sending oldest full snapshot rpool/var/lib/lxd/custom@syncoid_millbarge_2020-04-23:02:09:31 (~ 42 KB) to new target filesystem:
45.1KiB 0:00:00 [3.95MiB/s] [==========================>] 105%
INFO: Sending oldest full snapshot rpool/var/lib/lxd/deleted@syncoid_millbarge_2020-04-23:02:09:31 (~ 42 KB) to new target filesystem:
45.1KiB 0:00:00 [4.32MiB/s] [==========================>] 105%
INFO: Sending oldest full snapshot rpool/var/lib/lxd/deleted/containers@syncoid_millbarge_2020-04-23:02:09:31 (~ 42 KB) to new target filesystem:
45.1KiB 0:00:00 [4.37MiB/s] [==========================>] 105%
INFO: Sending oldest full snapshot rpool/var/lib/lxd/deleted/custom@syncoid_millbarge_2020-04-23:02:09:32 (~ 42 KB) to new target filesystem:
45.1KiB 0:00:00 [4.86MiB/s] [==========================>] 105%
INFO: Sending oldest full snapshot rpool/var/lib/lxd/deleted/images@syncoid_millbarge_2020-04-23:02:09:32 (~ 42 KB) to new target filesystem:
45.1KiB 0:00:00 [5.40MiB/s] [==========================>] 105%
INFO: Sending oldest full snapshot rpool/var/lib/lxd/deleted/virtual-machines@syncoid_millbarge_2020-04-23:02:09:33 (~ 42 KB) to new target filesystem:
45.1KiB 0:00:00 [5.02MiB/s] [==========================>] 105%
INFO: Sending oldest full snapshot rpool/var/lib/lxd/images@syncoid_millbarge_2020-04-23:02:09:33 (~ 42 KB) to new target filesystem:
45.1KiB 0:00:00 [5.09MiB/s] [==========================>] 105%
INFO: Sending oldest full snapshot rpool/var/lib/lxd/virtual-machines@syncoid_millbarge_2020-04-23:02:09:33 (~ 42 KB) to new target filesystem:
45.1KiB 0:00:00 [5.25MiB/s] [==========================>] 105%
Sending incremental rpool/var/lib/nfs@autosnap_2020-01-28_21:00:02_hourly ... syncoid_millbarge_2020-04-23:02:09:34 (~ 54 KB):
34.9KiB 0:00:00 [ 236KiB/s] [================>         ] 64%
Sending incremental rpool/var/lib/schroot@autosnap_2020-01-28_21:00:02_hourly ... syncoid_millbarge_2020-04-23:02:09:46 (~ 394.5 MB):
402MiB 0:00:11 [34.8MiB/s] [==========================>] 101%
INFO: Sending oldest full snapshot rpool/var/lib/schroot/chroots@syncoid_millbarge_2020-04-23:02:10:02 (~ 42 KB) to new target filesystem:
45.1KiB 0:00:00 [4.89MiB/s] [==========================>] 105%
Resuming interrupted zfs send/receive from rpool/var/log to srv/backups/millbarge/rpool/var/log (~ 57.9 MB remaining):
58.2MiB 0:00:00 [ 318MiB/s] [==========================>] 100%

--
You received this bug notification because you are a member of Kernel Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1861235

Title: zfs recv PANIC at range_tree.c:304:range_tree_find_impl()
Status in Linux: Unknown
Status in zfs-linux package in Ubuntu: Incomplete
Status in zfs-linux source package in Bionic: New

Bug description:
Same as bug 1861228 but with a newer kernel installed.

[ 790.702566] VERIFY(size != 0) failed
[ 790.702590] PANIC at range_tree.c:304:range_tree_find_impl()
[ 790.702611] Showing stack for process 28685
[ 790.702614] CPU: 17 PID: 28685 Comm: receive_writer Tainted: P O 4.15.0-76-generic #86-Ubuntu
[ 790.702615] Hardware name: Supermicro SSG-6038R-E1CR16L/X10DRH-iT, BIOS 2.0 12/17/2015
[ 790.702616] Call Trace:
[ 790.702626] dump_stack+0x6d/0x8e
[ 790.702637] spl_dumpstack+0x42/0x50 [spl]
[ 790.702640] spl_panic+0xc8/0x110 [spl]
[ 790.702645] ? __switch_to_asm+0x41/0x70
[ 790.702714] ? arc_prune_task+0x1a/0x40 [zfs]
[ 790.702740] ? dbuf_dirty+0x43d/0x850 [zfs]
[ 790.702745] ? getrawmonotonic64+0x43/0xd0
[ 790.702746] ? getrawmonotonic64+0x43/0xd0
[ 790.702775] ? dmu_zfetch+0x49a/0x500 [zfs]
[ 790.702778] ? getrawmonotonic64+0x43/0xd0
[ 790.702805] ? dmu_zfetch+0x49a/0x500 [zfs]
[ 790.702807] ? mutex_lock+0x12/0x40
[ 790.702833] ? dbuf_rele_and_unlock+0x1a8/0x4b0 [zfs]
[ 790.702866] range_tree_find_impl+0x88/0x90 [zfs]
[ 790.702870] ? spl_kmem_zalloc+0xdc/0x1a0 [spl]
[ 790.702902] range_tree_clear+0x4f/0x60 [zfs]
[ 790.702930] dnode_free_range+0x11f/0x5a0 [zfs]
[ 790.702957] dmu_object_free+0x53/0x90 [zfs]
[ 790.702983] dmu_free_long_object+0x9f/0xc0 [zfs]
[ 790.703010] receive_freeobjects.isra.12+0x7a/0x100 [zfs]
[ 790.703036] receive_writer_thread+0x6d2/0xa60 [zfs]
[ 790.703040] ? set_curr_task_fair+0x2b/0x60
[ 790.703043] ? spl_kmem_free+0x33/0x40 [spl]
[ 790.703048] ? kfree+0x165/0x180
[ 790.703073] ? receive_free.isra.13+0xc0/0xc0 [zfs]
[ 790.703078] thread_generic_wrapper+0x74/0x90 [spl]
[ 790.703081] kthread+0x121/0x140
[ 790.703084] ? __thread_exit+0x20/0x20 [spl]
[ 790.703085] ? kthread_create_worker_on_cpu+0x70/0x70
[ 790.703088] ret_from_fork+0x35/0x40
[ 967.636923] INFO: task txg_quiesce:14810 blocked for more than 120 seconds.
[ 967.636979] Tainted: P O 4.15.0-76-generic #86-Ubuntu
[ 967.637024] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 967.637076] txg_quiesce D 0 14810 2 0x80000000
[ 967.637080] Call Trace:
[ 967.637089] __schedule+0x24e/0x880
[ 967.637092] schedule+0x2c/0x80
[ 967.637106] cv_wait_common+0x11e/0x140 [spl]
[ 967.637114] ? wait_woken+0x80/0x80
[ 967.637122] __cv_wait+0x15/0x20 [spl]
[ 967.637210] txg_quiesce_thread+0x2cb/0x3d0 [zfs]
[ 967.637278] ? txg_delay+0x1b0/0x1b0 [zfs]
[ 967.637286] thread_generic_wrapper+0x74/0x90 [spl]
[ 967.637291] kthread+0x121/0x140
[ 967.637297] ? __thread_exit+0x20/0x20 [spl]
[ 967.637299] ? kthread_create_worker_on_cpu+0x70/0x70
[ 967.637304] ret_from_fork+0x35/0x40
[ 967.637326] INFO: task zfs:28590 blocked for more than 120 seconds.
[ 967.637371] Tainted: P O 4.15.0-76-generic #86-Ubuntu
[ 967.637416] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 967.637467] zfs D 0 28590 28587 0x80000080
[ 967.637470] Call Trace:
[ 967.637474] __schedule+0x24e/0x880
[ 967.637477] schedule+0x2c/0x80
[ 967.637486] cv_wait_common+0x11e/0x140 [spl]
[ 967.637491] ? wait_woken+0x80/0x80
[ 967.637498] __cv_wait+0x15/0x20 [spl]
[ 967.637554] dmu_recv_stream+0xa51/0xef0 [zfs]
[ 967.637630] zfs_ioc_recv_impl+0x306/0x1100 [zfs]
[ 967.637679] ? dbuf_read+0x34a/0x920 [zfs]
[ 967.637725] ? dbuf_rele+0x36/0x40 [zfs]
[ 967.637728] ? _cond_resched+0x19/0x40
[ 967.637798] zfs_ioc_recv_new+0x33d/0x410 [zfs]
[ 967.637809] ? spl_kmem_alloc_impl+0xe5/0x1a0 [spl]
[ 967.637816] ? spl_vmem_alloc+0x19/0x20 [spl]
[ 967.637828] ? nv_alloc_sleep_spl+0x1f/0x30 [znvpair]
[ 967.637834] ? nv_mem_zalloc.isra.0+0x2e/0x40 [znvpair]
[ 967.637840] ? nvlist_xalloc.part.2+0x50/0xb0 [znvpair]
[ 967.637905] zfsdev_ioctl+0x451/0x610 [zfs]
[ 967.637913] do_vfs_ioctl+0xa8/0x630
[ 967.637917] ? __audit_syscall_entry+0xbc/0x110
[ 967.637924] ? syscall_trace_enter+0x1da/0x2d0
[ 967.637927] SyS_ioctl+0x79/0x90
[ 967.637930] do_syscall_64+0x73/0x130
[ 967.637935] entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[ 967.637938] RIP: 0033:0x7fc305a905d7
[ 967.637940] RSP: 002b:00007ffc45e39618 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
[ 967.637943] RAX: ffffffffffffffda RBX: 0000000000005a46 RCX: 00007fc305a905d7
[ 967.637945] RDX: 00007ffc45e39630 RSI: 0000000000005a46 RDI: 0000000000000006
[ 967.637946] RBP: 00007ffc45e39630 R08: 00007fc305d65e20 R09: 0000000000000000
[ 967.637948] R10: 00005648313c7010 R11: 0000000000000246 R12: 00007ffc45e3cc60
[ 967.637949] R13: 0000000000000006 R14: 00005648313cef10 R15: 000000000000000c
[ 967.637960] INFO: task receive_writer:28685 blocked for more than 120 seconds.
[ 967.638010] Tainted: P O 4.15.0-76-generic #86-Ubuntu
[ 967.638055] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 967.638107] receive_writer D 0 28685 2 0x80000080
[ 967.638109] Call Trace:
[ 967.638113] __schedule+0x24e/0x880
[ 967.638117] schedule+0x2c/0x80
[ 967.638126] spl_panic+0xfa/0x110 [spl]
[ 967.638175] ? arc_prune_task+0x1a/0x40 [zfs]
[ 967.638223] ? dbuf_dirty+0x43d/0x850 [zfs]
[ 967.638229] ? getrawmonotonic64+0x43/0xd0
[ 967.638232] ? getrawmonotonic64+0x43/0xd0
[ 967.638289] ? dmu_zfetch+0x49a/0x500 [zfs]
[ 967.638293] ? getrawmonotonic64+0x43/0xd0
[ 967.638344] ? dmu_zfetch+0x49a/0x500 [zfs]
[ 967.638348] ? mutex_lock+0x12/0x40
[ 967.638395] ? dbuf_rele_and_unlock+0x1a8/0x4b0 [zfs]
[ 967.638461] range_tree_find_impl+0x88/0x90 [zfs]
[ 967.638468] ? spl_kmem_zalloc+0xdc/0x1a0 [spl]
[ 967.638530] range_tree_clear+0x4f/0x60 [zfs]
[ 967.638583] dnode_free_range+0x11f/0x5a0 [zfs]
[ 967.638635] dmu_object_free+0x53/0x90 [zfs]
[ 967.638685] dmu_free_long_object+0x9f/0xc0 [zfs]
[ 967.638738] receive_freeobjects.isra.12+0x7a/0x100 [zfs]
[ 967.638787] receive_writer_thread+0x6d2/0xa60 [zfs]
[ 967.638792] ? set_curr_task_fair+0x2b/0x60
[ 967.638800] ? spl_kmem_free+0x33/0x40 [spl]
[ 967.638805] ? kfree+0x165/0x180
[ 967.638852] ? receive_free.isra.13+0xc0/0xc0 [zfs]
[ 967.638860] thread_generic_wrapper+0x74/0x90 [spl]
[ 967.638863] kthread+0x121/0x140
[ 967.638869] ? __thread_exit+0x20/0x20 [spl]
[ 967.638872] ? kthread_create_worker_on_cpu+0x70/0x70
[ 967.638876] ret_from_fork+0x35/0x40
[ 1088.467860] INFO: task txg_quiesce:14810 blocked for more than 120 seconds.
[ 1088.467913] Tainted: P O 4.15.0-76-generic #86-Ubuntu
[ 1088.467958] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1088.468010] txg_quiesce D 0 14810 2 0x80000000
[ 1088.468013] Call Trace:
[ 1088.468020] __schedule+0x24e/0x880
[ 1088.468023] schedule+0x2c/0x80
[ 1088.468035] cv_wait_common+0x11e/0x140 [spl]
[ 1088.468039] ? wait_woken+0x80/0x80
[ 1088.468048] __cv_wait+0x15/0x20 [spl]
[ 1088.468124] txg_quiesce_thread+0x2cb/0x3d0 [zfs]
[ 1088.468193] ? txg_delay+0x1b0/0x1b0 [zfs]
[ 1088.468201] thread_generic_wrapper+0x74/0x90 [spl]
[ 1088.468204] kthread+0x121/0x140
[ 1088.468210] ? __thread_exit+0x20/0x20 [spl]
[ 1088.468213] ? kthread_create_worker_on_cpu+0x70/0x70
[ 1088.468217] ret_from_fork+0x35/0x40
[ 1088.468234] INFO: task zfs:28590 blocked for more than 120 seconds.
[ 1088.468278] Tainted: P O 4.15.0-76-generic #86-Ubuntu
[ 1088.468322] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1088.468373] zfs D 0 28590 28587 0x80000080
[ 1088.468376] Call Trace:
[ 1088.468379] __schedule+0x24e/0x880
[ 1088.468382] schedule+0x2c/0x80
[ 1088.468391] cv_wait_common+0x11e/0x140 [spl]
[ 1088.468396] ? wait_woken+0x80/0x80
[ 1088.468403] __cv_wait+0x15/0x20 [spl]
[ 1088.468459] dmu_recv_stream+0xa51/0xef0 [zfs]
[ 1088.468534] zfs_ioc_recv_impl+0x306/0x1100 [zfs]
[ 1088.468582] ? dbuf_read+0x34a/0x920 [zfs]
[ 1088.468631] ? dbuf_rele+0x36/0x40 [zfs]
[ 1088.468634] ? _cond_resched+0x19/0x40
[ 1088.468704] zfs_ioc_recv_new+0x33d/0x410 [zfs]
[ 1088.468714] ? spl_kmem_alloc_impl+0xe5/0x1a0 [spl]
[ 1088.468721] ? spl_vmem_alloc+0x19/0x20 [spl]
[ 1088.468729] ? nv_alloc_sleep_spl+0x1f/0x30 [znvpair]
[ 1088.468735] ? nv_mem_zalloc.isra.0+0x2e/0x40 [znvpair]
[ 1088.468740] ? nvlist_xalloc.part.2+0x50/0xb0 [znvpair]
[ 1088.468805] zfsdev_ioctl+0x451/0x610 [zfs]
[ 1088.468811] do_vfs_ioctl+0xa8/0x630
[ 1088.468815] ? __audit_syscall_entry+0xbc/0x110
[ 1088.468821] ? syscall_trace_enter+0x1da/0x2d0
[ 1088.468824] SyS_ioctl+0x79/0x90
[ 1088.468827] do_syscall_64+0x73/0x130
[ 1088.468831] entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[ 1088.468834] RIP: 0033:0x7fc305a905d7
[ 1088.468836] RSP: 002b:00007ffc45e39618 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
[ 1088.468839] RAX: ffffffffffffffda RBX: 0000000000005a46 RCX: 00007fc305a905d7
[ 1088.468840] RDX: 00007ffc45e39630 RSI: 0000000000005a46 RDI: 0000000000000006
[ 1088.468842] RBP: 00007ffc45e39630 R08: 00007fc305d65e20 R09: 0000000000000000
[ 1088.468843] R10: 00005648313c7010 R11: 0000000000000246 R12: 00007ffc45e3cc60
[ 1088.468845] R13: 0000000000000006 R14: 00005648313cef10 R15: 000000000000000c
[ 1088.468853] INFO: task receive_writer:28685 blocked for more than 120 seconds.
[ 1088.468903] Tainted: P O 4.15.0-76-generic #86-Ubuntu
[ 1088.468948] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1088.469000] receive_writer D 0 28685 2 0x80000080
[ 1088.469002] Call Trace:
[ 1088.469006] __schedule+0x24e/0x880
[ 1088.469009] schedule+0x2c/0x80
[ 1088.469018] spl_panic+0xfa/0x110 [spl]
[ 1088.469066] ? arc_prune_task+0x1a/0x40 [zfs]
[ 1088.469116] ? dbuf_dirty+0x43d/0x850 [zfs]
[ 1088.469120] ? getrawmonotonic64+0x43/0xd0
[ 1088.469123] ? getrawmonotonic64+0x43/0xd0
[ 1088.469178] ? dmu_zfetch+0x49a/0x500 [zfs]
[ 1088.469182] ? getrawmonotonic64+0x43/0xd0
[ 1088.469234] ? dmu_zfetch+0x49a/0x500 [zfs]
[ 1088.469237] ? mutex_lock+0x12/0x40
[ 1088.469286] ? dbuf_rele_and_unlock+0x1a8/0x4b0 [zfs]
[ 1088.469352] range_tree_find_impl+0x88/0x90 [zfs]
[ 1088.469360] ? spl_kmem_zalloc+0xdc/0x1a0 [spl]
[ 1088.469421] range_tree_clear+0x4f/0x60 [zfs]
[ 1088.469475] dnode_free_range+0x11f/0x5a0 [zfs]
[ 1088.469526] dmu_object_free+0x53/0x90 [zfs]
[ 1088.469575] dmu_free_long_object+0x9f/0xc0 [zfs]
[ 1088.469627] receive_freeobjects.isra.12+0x7a/0x100 [zfs]
[ 1088.469678] receive_writer_thread+0x6d2/0xa60 [zfs]
[ 1088.469681] ? set_curr_task_fair+0x2b/0x60
[ 1088.469689] ? spl_kmem_free+0x33/0x40 [spl]
[ 1088.469693] ? kfree+0x165/0x180
[ 1088.469740] ? receive_free.isra.13+0xc0/0xc0 [zfs]
[ 1088.469747] thread_generic_wrapper+0x74/0x90 [spl]
[ 1088.469750] kthread+0x121/0x140
[ 1088.469756] ? __thread_exit+0x20/0x20 [spl]
[ 1088.469759] ? kthread_create_worker_on_cpu+0x70/0x70
[ 1088.469763] ret_from_fork+0x35/0x40
[ 1209.298375] INFO: task txg_quiesce:14810 blocked for more than 120 seconds.
[ 1209.298429] Tainted: P O 4.15.0-76-generic #86-Ubuntu
[ 1209.298474] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1209.298526] txg_quiesce D 0 14810 2 0x80000000
[ 1209.298530] Call Trace:
[ 1209.298536] __schedule+0x24e/0x880
[ 1209.298540] schedule+0x2c/0x80
[ 1209.298551] cv_wait_common+0x11e/0x140 [spl]
[ 1209.298556] ? wait_woken+0x80/0x80
[ 1209.298564] __cv_wait+0x15/0x20 [spl]
[ 1209.298644] txg_quiesce_thread+0x2cb/0x3d0 [zfs]
[ 1209.298713] ? txg_delay+0x1b0/0x1b0 [zfs]
[ 1209.298721] thread_generic_wrapper+0x74/0x90 [spl]
[ 1209.298724] kthread+0x121/0x140
[ 1209.298730] ? __thread_exit+0x20/0x20 [spl]
[ 1209.298733] ? kthread_create_worker_on_cpu+0x70/0x70
[ 1209.298738] ret_from_fork+0x35/0x40
[ 1209.298754] INFO: task zfs:28590 blocked for more than 120 seconds.
[ 1209.298799] Tainted: P O 4.15.0-76-generic #86-Ubuntu
[ 1209.298843] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1209.298894] zfs D 0 28590 28587 0x80000080
[ 1209.298897] Call Trace:
[ 1209.298900] __schedule+0x24e/0x880
[ 1209.298903] schedule+0x2c/0x80
[ 1209.298912] cv_wait_common+0x11e/0x140 [spl]
[ 1209.298916] ? wait_woken+0x80/0x80
[ 1209.298924] __cv_wait+0x15/0x20 [spl]
[ 1209.298981] dmu_recv_stream+0xa51/0xef0 [zfs]
[ 1209.299056] zfs_ioc_recv_impl+0x306/0x1100 [zfs]
[ 1209.299104] ? dbuf_read+0x34a/0x920 [zfs]
[ 1209.299151] ? dbuf_rele+0x36/0x40 [zfs]
[ 1209.299154] ? _cond_resched+0x19/0x40
[ 1209.299223] zfs_ioc_recv_new+0x33d/0x410 [zfs]
[ 1209.299232] ? spl_kmem_alloc_impl+0xe5/0x1a0 [spl]
[ 1209.299239] ? spl_vmem_alloc+0x19/0x20 [spl]
[ 1209.299247] ? nv_alloc_sleep_spl+0x1f/0x30 [znvpair]
[ 1209.299255] ? nv_mem_zalloc.isra.0+0x2e/0x40 [znvpair]
[ 1209.299261] ? nvlist_xalloc.part.2+0x50/0xb0 [znvpair]
[ 1209.299325] zfsdev_ioctl+0x451/0x610 [zfs]
[ 1209.299331] do_vfs_ioctl+0xa8/0x630
[ 1209.299334] ? __audit_syscall_entry+0xbc/0x110
[ 1209.299339] ? syscall_trace_enter+0x1da/0x2d0
[ 1209.299341] SyS_ioctl+0x79/0x90
[ 1209.299345] do_syscall_64+0x73/0x130
[ 1209.299349] entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[ 1209.299352] RIP: 0033:0x7fc305a905d7
[ 1209.299353] RSP: 002b:00007ffc45e39618 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
[ 1209.299358] RAX: ffffffffffffffda RBX: 0000000000005a46 RCX: 00007fc305a905d7
[ 1209.299361] RDX: 00007ffc45e39630 RSI: 0000000000005a46 RDI: 0000000000000006
[ 1209.299362] RBP: 00007ffc45e39630 R08: 00007fc305d65e20 R09: 0000000000000000
[ 1209.299365] R10: 00005648313c7010 R11: 0000000000000246 R12: 00007ffc45e3cc60
[ 1209.299366] R13: 0000000000000006 R14: 00005648313cef10 R15: 000000000000000c
[ 1209.299375] INFO: task receive_writer:28685 blocked for more than 120 seconds.
[ 1209.299426] Tainted: P O 4.15.0-76-generic #86-Ubuntu
[ 1209.299471] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1209.299523] receive_writer D 0 28685 2 0x80000080
[ 1209.299525] Call Trace:
[ 1209.299529] __schedule+0x24e/0x880
[ 1209.299532] schedule+0x2c/0x80
[ 1209.299541] spl_panic+0xfa/0x110 [spl]
[ 1209.299591] ? arc_prune_task+0x1a/0x40 [zfs]
[ 1209.299638] ? dbuf_dirty+0x43d/0x850 [zfs]
[ 1209.299644] ? getrawmonotonic64+0x43/0xd0
[ 1209.299647] ? getrawmonotonic64+0x43/0xd0
[ 1209.299703] ? dmu_zfetch+0x49a/0x500 [zfs]
[ 1209.299707] ? getrawmonotonic64+0x43/0xd0
[ 1209.299759] ? dmu_zfetch+0x49a/0x500 [zfs]
[ 1209.299762] ? mutex_lock+0x12/0x40
[ 1209.299810] ? dbuf_rele_and_unlock+0x1a8/0x4b0 [zfs]
[ 1209.299875] range_tree_find_impl+0x88/0x90 [zfs]
[ 1209.299883] ? spl_kmem_zalloc+0xdc/0x1a0 [spl]
[ 1209.299944] range_tree_clear+0x4f/0x60 [zfs]
[ 1209.300000] dnode_free_range+0x11f/0x5a0 [zfs]
[ 1209.300052] dmu_object_free+0x53/0x90 [zfs]
[ 1209.300102] dmu_free_long_object+0x9f/0xc0 [zfs]
[ 1209.300153] receive_freeobjects.isra.12+0x7a/0x100 [zfs]
[ 1209.300203] receive_writer_thread+0x6d2/0xa60 [zfs]
[ 1209.300208] ? set_curr_task_fair+0x2b/0x60
[ 1209.300215] ? spl_kmem_free+0x33/0x40 [spl]
[ 1209.300219] ? kfree+0x165/0x180
[ 1209.300266] ? receive_free.isra.13+0xc0/0xc0 [zfs]
[ 1209.300273] thread_generic_wrapper+0x74/0x90 [spl]
[ 1209.300278] kthread+0x121/0x140
[ 1209.300284] ? __thread_exit+0x20/0x20 [spl]
[ 1209.300287] ? kthread_create_worker_on_cpu+0x70/0x70
[ 1209.300291] ret_from_fork+0x35/0x40
[ 1330.129221] INFO: task txg_quiesce:14810 blocked for more than 120 seconds.
[ 1330.129282] Tainted: P O 4.15.0-76-generic #86-Ubuntu
[ 1330.129328] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1330.129382] txg_quiesce D 0 14810 2 0x80000000
[ 1330.129387] Call Trace:
[ 1330.129399] __schedule+0x24e/0x880
[ 1330.129403] schedule+0x2c/0x80
[ 1330.129420] cv_wait_common+0x11e/0x140 [spl]
[ 1330.129429] ? wait_woken+0x80/0x80
[ 1330.129437] __cv_wait+0x15/0x20 [spl]
[ 1330.129547] txg_quiesce_thread+0x2cb/0x3d0 [zfs]
[ 1330.129617] ? txg_delay+0x1b0/0x1b0 [zfs]
[ 1330.129625] thread_generic_wrapper+0x74/0x90 [spl]
[ 1330.129631] kthread+0x121/0x140
[ 1330.129638] ? __thread_exit+0x20/0x20 [spl]
[ 1330.129641] ? kthread_create_worker_on_cpu+0x70/0x70
[ 1330.129646] ret_from_fork+0x35/0x40

Thanks

ProblemType: Bug
DistroRelease: Ubuntu 18.04
Package: linux-image-4.15.0-76-generic 4.15.0-76.86
ProcVersionSignature: Ubuntu 4.15.0-76.86-generic 4.15.18
Uname: Linux 4.15.0-76-generic x86_64
NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
AlsaDevices:
 total 0
 crw-rw---- 1 root audio 116, 1 Jan 28 13:31 seq
 crw-rw---- 1 root audio 116, 33 Jan 28 13:31 timer
AplayDevices: Error: [Errno 2] No such file or directory: 'aplay': 'aplay'
ApportVersion: 2.20.9-0ubuntu7.9
Architecture: amd64
ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord': 'arecord'
AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', '/dev/snd/timer'] failed with exit code 1:
Date: Tue Jan 28 15:24:10 2020
HibernationDevice: RESUME=UUID=d34df57d-ae32-4002-be2c-a25efa8678e4
InstallationDate: Installed on 2016-04-04 (1394 days ago)
InstallationMedia: Ubuntu-Server 16.04 LTS "Xenial Xerus" - Beta amd64 (20160325)
IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig': 'iwconfig'
MachineType: Supermicro SSG-6038R-E1CR16L
PciMultimedia:
ProcEnviron:
 TERM=rxvt-unicode-256color
 PATH=(custom, no user)
 XDG_RUNTIME_DIR=<set>
 LANG=en_US.UTF-8
 SHELL=/bin/bash
ProcFB: 0 astdrmfb
ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-4.15.0-76-generic root=UUID=d1ab8bd8-41d5-4142-92d6-d0926539807b ro
RelatedPackageVersions:
 linux-restricted-modules-4.15.0-76-generic N/A
 linux-backports-modules-4.15.0-76-generic N/A
 linux-firmware 1.173.14
RfKill: Error: [Errno 2] No such file or directory: 'rfkill': 'rfkill'
SourcePackage: linux
UpgradeStatus: Upgraded to bionic on 2018-08-16 (530 days ago)
dmi.bios.date: 12/17/2015
dmi.bios.vendor: American Megatrends Inc.
dmi.bios.version: 2.0
dmi.board.asset.tag: Default string
dmi.board.name: X10DRH-iT
dmi.board.vendor: Supermicro
dmi.board.version: 1.01
dmi.chassis.asset.tag: Default string
dmi.chassis.type: 1
dmi.chassis.vendor: Supermicro
dmi.chassis.version: Default string
dmi.modalias: dmi:bvnAmericanMegatrendsInc.:bvr2.0:bd12/17/2015:svnSupermicro:pnSSG-6038R-E1CR16L:pvr123456789:rvnSupermicro:rnX10DRH-iT:rvr1.01:cvnSupermicro:ct1:cvrDefaultstring:
dmi.product.family: SMC X10
dmi.product.name: SSG-6038R-E1CR16L
dmi.product.version: 123456789
dmi.sys.vendor: Supermicro

To manage notifications about this bug go to:
https://bugs.launchpad.net/linux/+bug/1861235/+subscriptions