I'm not sure if they still apply to B134, but it seems similar to problems
caused by transaction group issues in the past.
Have you looked at the threads involving setting zfs:zfs_write_limit_override,
zfs:zfs_vdev_max_pending or zfs:zfs_txg_timeout in /etc/system?
Paul
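For anyone digging up those threads, the tunables go in /etc/system along
these lines; the values below are placeholders rather than recommendations,
and some of these knobs have changed or disappeared between builds:

    * illustrative values only -- tune (or remove) for your own workload
    set zfs:zfs_write_limit_override = 0x20000000
    set zfs:zfs_vdev_max_pending = 10
    set zfs:zfs_txg_timeout = 5

A reboot is needed for /etc/system changes to take effect.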
Rather than hacking something like that, he could use a Disk on Module
(http://en.wikipedia.org/wiki/Disk_on_module) or something like
http://www.tomshardware.com/news/nanoSSD-Drive-Elecom-Japan-SATA,8538.html
(which I suspect may be a DOM but I've not poked around sufficiently to see).
Paul
Alas, even moving the file out of the way and rebooting the box (to guarantee
state) didn't work:
-bash-4.0# zpool import -nfFX hds1
-bash-4.0# echo $?
1
Do you need to be able to read all the labels for each disk in the array in
order to recover?
From zdb -l on one of the disks:
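As a rough way to see how many of the four labels each device can still
produce (the device paths below are illustrative, not this system's):

    for d in /dev/rdsk/c2t*d0s0; do
        echo "== $d =="
        zdb -l $d | grep -c version    # a healthy device reports 4 (labels 0-3)
    done

ZFS keeps four copies of the label (two at the front of the device, two at
the end) precisely so that losing some of them isn't immediately fatal.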
I'm surprised at the number as well.
Running it again, I'm seeing it jump fairly high just before the fork errors:
bash-4.0# ps -ef | grep zfsdle | wc -l
20930
(the next run of ps failed due to the fork error).
So maybe it is running out of processes.
ZFS file data from ::memstat just went down
bash-4.0# ulimit -a
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
open files (-n) 256
pipe size (512 bytes, -p) 10
stack size (kbytes, -s) 10240
cpu time
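To see whether it's the system-wide process table (rather than these
per-shell limits) that's being exhausted, something along these lines should
work on this vintage of Solaris; none of the numbers involved are from the
machine in question:

    sysdef | grep v_proc           # system-wide maximum number of processes (v.v_proc)
    echo "max_nprocs/D" | mdb -k   # the same limit, read from the running kernel
    ulimit -u                      # per-user process limit for the current shell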
I have a machine connected to an HDS with a corrupted pool.
While running zpool import -nfFX on the pool, it spawns a large number of
zfsdle processes and eventually the machine hangs for 20-30 seconds, spits out
error messages such as:
zfs: [ID 346414 kern.warning] WARNING: Couldn't create process for
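For reference, my reading of the flags on that import (worth double-checking
against zpool(1M) on your build):

    zpool import -nfFX hds1
      -f   force the import even if the pool looks active on another host
      -F   recovery mode: throw away the last few transaction groups to get
           back to an importable state
      -X   extreme rewind: search much further back through old txgs (slow)
      -n   dry run: report whether -F/-X would succeed without actually
           importing anything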
> Paul Kraus wrote:
> > In the ZFS case I could replace the disk and the zpool would
> > resilver automatically. I could also take the removed disk and put it
> > into the second system and have it recognize the zpool (and that it
> > was missing half of a mirror) and the data was all
> SSH compresses by default? I thought you had to specify -oCompression
> and/or -oCompressionLevel?
Depends on how it was compiled.
Looking at the man pages for Solaris, it looks like it's turned off by
default, so yes, you'd have to set -oCompression.
Paul
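For example (host and dataset names made up), turning it on explicitly when
piping a send:

    zfs send tank/data@snap | ssh -o Compression=yes backuphost zfs receive tank/data
    zfs send tank/data@snap | ssh -C backuphost zfs receive tank/data   # -C is the short form

Note that in OpenSSH the CompressionLevel option only applies to protocol 1;
with protocol 2 you get the default zlib level.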
Given you're not using compression for rsync, the only thing I can think of
would be that the stream compression of SSH is helping here.
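A quick way to compare the two (paths and host are hypothetical):

    rsync -az /data/ backuphost:/data/              # -z: rsync compresses the stream itself
    rsync -a -e 'ssh -C' /data/ backuphost:/data/   # no -z: rely on SSH's stream compression

If the second one is just as fast, the win really is coming from compression
of the transport rather than anything rsync-specific.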
There isn't a global hot spare as such, but you can add the same disk as a
hot spare to multiple pools.
Paul
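Something like this (device name made up) attaches the same disk as a spare
to two pools, which is about as close to a global spare as you get:

    zpool add tank spare c3t8d0
    zpool add backup spare c3t8d0
    zpool status tank    # the spare shows as AVAIL until one of the pools pulls it in

Once a pool actually starts using the shared spare, it's no longer available
to the other one.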
I'd recommend getting a second 80GB disk and mirroring your root as well.
UFS+SDS for root (don't forget a live upgrade slice) and ZFS for the other
disks.
Probably RAID-Z, as you don't have enough disks for 1+0 to be interesting.
Paul
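For example, with four drives left over for data (controller/target numbers
made up):

    zpool create tank raidz c1t1d0 c1t2d0 c1t3d0 c1t4d0
    zpool status tank

That gives you one disk's worth of parity; with more disks, mirrored pairs
(the ZFS equivalent of 1+0) start to look more attractive for random I/O.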