I'd suggest trying to get a trace of where in the kernel it's blocking so that the deadlock can be found and fixed.
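For example, on illumos you can usually get the kernel stack of one of the stuck processes with mdb (a sketch; run as root, and substitute the PID of one of the hung zfs commands, e.g. 26707 from the ps output below):

    echo "0t26707::pid2proc | ::walk thread | ::findstack -v" | mdb -k

pstack 26707 will show the userland side too, but for a kernel-level deadlock the mdb -k stacks are the interesting part; posting them here would help pin down which locks are involved.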
What version of OI is this?

 - Rich

On Thu, Aug 2, 2012 at 5:19 AM, Maurilio Longo <maurilio.lo...@libero.it> wrote:
> Hans,
>
> I've seen the same problem when using zfs list with -d n, so that it just
> goes down some depth (I was using 1 as well).
>
> I solved it by doing a zfs list without -d and sorting and grepping the
> result for what I need.
>
> Slower, but it did not hang anymore.
>
> You need to reboot, though; I've not found any other way to kill the
> hanging process.
>
> Maurilio.
>
> Hans Joergensen wrote:
>> Hey,
>>
>> Somehow I've hit some kind of lock on one of my NAS boxes...
>>
>> Output from ps:
>>
>> root 26707     1  0 09:10:16 ?        0:00 /usr/sbin/zfs destroy datastore1/vmware-nfs/zfsnas4-clientstore@snap-hourly-1-201
>> root 26705     1  0 09:10:16 ?        0:00 /usr/sbin/zfs destroy datastore1/vmware-nfs/zfsnas4-clientstore@snap-hourly-1-201
>> root  2583     1  0 11:40:52 ?        0:00 /usr/sbin/zfs list -t snapshot -H -o name -d 1 -r datastore1/vmware-nfs/zfsnas4
>> root 12079     1  0 15:40:35 ?        0:00 /usr/sbin/zfs list -t snapshot -H -o name -d 1 -r datastore1/vmware-nfs/zfsnas4
>> root 26706     1  0 09:10:16 ?        0:00 /usr/sbin/zfs destroy datastore1/vmware-nfs/zfsnas4-clientstore@snap-hourly-1-201
>> root 22359     1  0 22:21:17 ?        0:00 /usr/sbin/zfs list -t snapshot -H -o name -d 1 -r datastore1/vmware-nfs/zfsnas4
>> root 26708     1  0 09:10:17 ?        0:00 /usr/sbin/zfs destroy datastore1/vmware-nfs/zfsnas4-clientstore@snap-hourly-1-201
>> root 22374     1  0 22:21:24 ?        0:00 /usr/sbin/zfs list -t snapshot -H -o name -d 1 -r datastore1/vmware-nfs/zfsnas4
>> root 16677     1  0 18:06:38 ?        0:00 /usr/sbin/zfs list -t snapshot -H -o name -d 1 -r datastore1/vmware-nfs/zfsnas4
>> root 22335     1  0 22:21:03 ?        0:00 /usr/sbin/zfs list -t snapshot -H -o name -d 1 -r datastore1/vmware-nfs/zfsnas4
>> root 27981     1  0 09:40:57 ?        0:00 /usr/sbin/zfs list -t snapshot -H -o name -d 1 -r datastore1/vmware-nfs/zfsnas4
>> root 22386     1  0 22:21:28 ?        0:00 /usr/sbin/zfs list -t snapshot -H -o name -d 1 -r datastore1/vmware-nfs/zfsnas4
>> root  7165     1  0 13:40:48 ?        0:00 /usr/sbin/zfs list -t snapshot -H -o name -d 1 -r datastore1/vmware-nfs/zfsnas4
>> root 29390     1  0 10:18:03 ?        0:00 /usr/sbin/zfs list -t snapshot -r datastore1/vmware-nfs
>> root  3637     1  0 12:06:27 ?        0:00 /usr/sbin/zfs list -t snapshot -H -o name -d 1 -r datastore1/vmware-nfs/zfsnas4
>> root 15999     1  0 17:40:43 ?        0:00 /usr/sbin/zfs list -t snapshot -H -o name -d 1 -r datastore1/vmware-nfs/zfsnas4
>> root 20089     1  0 20:40:52 ?        0:00 /usr/sbin/zfs list -t snapshot -H -o name -d 1 -r datastore1/vmware-nfs/zfsnas4
>> root 17500     1  0 18:42:03 ?        0:00 /usr/sbin/zfs list -t snapshot -r datastore1/vmware-nfs
>>
>> Any chance I can get around this without rebooting the machine? It's
>> a production system with lots of VMs on it, so that would be very
>> annoying.
>>
>> I've tried solving the problem by killing the processes that spawned
>> the zfs list and destroy commands; that's why they have 1 as parent
>> process.
>>
>> Could the lock have happened because of PIDs 26707 and 26705 running
>> at the same time?
>>
>> // Hans
>
> --
>  __________
> | | | |__|   Maurilio Longo
> |_|_|_|____| farmaconsult s.r.l.
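For reference, the workaround Maurilio describes above could look something like this (a sketch only; "pool/parent" and "pool/parent/child" are placeholders for the real dataset names):

    zfs list -t snapshot -H -o name -r pool/parent | grep '^pool/parent/child@' | sort

This walks the whole hierarchy and filters in userland instead of exercising the -d depth-limiting path that appears to trigger the hang.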