Re: [zfs-discuss] Intermittent ZFS hang

2011-01-03 Thread Paul Armstrong
I'm not sure if these still apply to B134, but it looks similar to problems caused by transaction-group issues in the past. Have you looked at the threads about setting zfs:zfs_write_limit_override, zfs:zfs_vdev_max_pending, or zfs:zfs_txg_timeout in /etc/system? Paul -- This message posted from opensolaris.org
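For reference, those tunables are set in /etc/system. The values below are placeholders to show the syntax, not recommendations; pick values from the threads that match your workload:

```
* Hypothetical example values -- tune for your workload.
* Cap on dirty data accepted per transaction group (bytes):
set zfs:zfs_write_limit_override = 0x20000000
* Limit on outstanding I/Os queued per vdev:
set zfs:zfs_vdev_max_pending = 10
* Seconds between transaction group syncs:
set zfs:zfs_txg_timeout = 5
```

A reboot is needed for /etc/system changes to take effect.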

Re: [zfs-discuss] best way to configure raidz groups

2009-12-31 Thread Paul Armstrong
Rather than hacking something like that, he could use a Disk on Module (http://en.wikipedia.org/wiki/Disk_on_module) or something like http://www.tomshardware.com/news/nanoSSD-Drive-Elecom-Japan-SATA,8538.html (which I suspect may be a DOM but I've not poked around sufficiently to see). Paul

Re: [zfs-discuss] Recovering ZFS stops after syseventconfd can't fork

2009-12-28 Thread Paul Armstrong
Alas, even moving the file out of the way and rebooting the box (to guarantee state) didn't work:

-bash-4.0# zpool import -nfFX hds1
-bash-4.0# echo $?
1

Do you need to be able to read all the labels for each disk in the array in order to recover? From zdb -l on one of the disks:
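For context, each disk in a pool carries four copies of the vdev label (two at the start of the device, two at the end), and zdb -l prints all four. A sketch of checking one disk (device path hypothetical):

```
# Prints LABEL 0 through LABEL 3; a disk where not all four
# labels are readable may be what blocks recovery.
-bash-4.0# zdb -l /dev/rdsk/c1t0d0s0
```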

Re: [zfs-discuss] Recovering ZFS stops after syseventconfd can't fork

2009-12-22 Thread Paul Armstrong
I'm surprised at the number as well. Running it again, I'm seeing it jump fairly high just before the fork errors:

bash-4.0# ps -ef | grep zfsdle | wc -l
   20930

(the next run of ps failed due to the fork error). So maybe it is running out of processes. ZFS file data from ::memstat just went do

Re: [zfs-discuss] Recovering ZFS stops after syseventconfd can't fork

2009-12-22 Thread Paul Armstrong
bash-4.0# ulimit -a
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
open files                      (-n) 256
pipe size            (512 bytes, -p) 10
stack size              (kbytes, -s) 10240
cpu time
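The system-wide process ceiling is separate from these per-shell limits; on Solaris it can be inspected directly. A sketch (commands exist on Solaris, output will vary):

```
# System-wide process limits as configured:
sysdef | grep -i processes
# Or read the kernel tunable directly via mdb:
echo "max_nprocs/D" | mdb -k
```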

[zfs-discuss] Recovering ZFS stops after syseventconfd can't fork

2009-12-22 Thread Paul Armstrong
I have a machine connected to an HDS with a corrupted pool. While running zpool import -nfFX on the pool, it spawns a large number of zfsdle processes and eventually the machine hangs for 20-30 seconds, spits out error messages zfs: [ID 346414 kern.warning] WARNING: Couldn't create process for

Re: [zfs-discuss] I/O freeze after a disk failure

2007-09-14 Thread Paul Armstrong
> Paul Kraus wrote:
> > In the ZFS case I could replace the disk and the zpool would
> > resilver automatically. I could also take the removed disk and put it
> > into the second system and have it recognize the zpool (and that it
> > was missing half of a mirror) and the data was all

[zfs-discuss] Re: Re: Rsync update to ZFS server over SSH faster than over

2007-05-22 Thread Paul Armstrong
> SSH compresses by default? I thought you had to specify -oCompression
> and/or -oCompressionLevel?

Depends on how it was compiled. Looking at the man pages for Solaris, it looks like it's turned off by default, so yes, you'd have to set -oCompression. Paul
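If compression is off in your build, it can be forced per connection without recompiling; a sketch for the rsync-over-SSH case (host and paths hypothetical):

```
# Force SSH stream compression for a single rsync run:
rsync -av -e "ssh -o Compression=yes" /data/ host:/data/
# Equivalent shorthand using ssh's -C flag:
rsync -av -e "ssh -C" /data/ host:/data/
```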

[zfs-discuss] Re: Rsync update to ZFS server over SSH faster than over

2007-05-21 Thread Paul Armstrong
Given you're not using compression for rsync, the only thing I can think of would be that SSH's stream compression is helping here.

[zfs-discuss] Re: ZFS - Use h/w raid or not? Thoughts. Considerations.

2007-05-21 Thread Paul Armstrong
There isn't a global hot spare, but you can add a hot spare to multiple pools. Paul
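A sketch of what sharing a spare looks like (pool and device names hypothetical): the same disk can be added as a spare to more than one pool, and once one pool puts it to use it becomes unavailable to the others.

```
# Add the same disk as a hot spare to two pools:
zpool add tank spare c3t0d0
zpool add backup spare c3t0d0
# Verify it appears under "spares" in both:
zpool status tank backup
```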

[zfs-discuss] Re: Need guidance on RAID 5, ZFS, and RAIDZ on home file server

2007-05-08 Thread Paul Armstrong
I'd recommend getting a second 80GB disk and mirroring your root as well: UFS+SDS for root (don't forget a live upgrade slice) and ZFS for the other disks. Probably RAID-Z, as you don't have enough disks to make 1+0 interesting. Paul
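A sketch of the ZFS side of that layout, assuming three data disks (pool and device names hypothetical):

```
# Three-disk RAID-Z pool for the non-root disks:
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0
zpool status tank
```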