Hi Paul,
I opened 6914208 to cover the sysevent/zfsdle problem.
If the system crashed due to a power failure and the disk labels for
this pool were corrupted, then I think you will need to follow the steps
to get the disks relabeled correctly. You might review some previous
postings by Victor.
Alas, even moving the file out of the way and rebooting the box (to guarantee
state) didn't work:
-bash-4.0# zpool import -nfFX hds1
-bash-4.0# echo $?
1
Do you need to be able to read all the labels for each disk in the array in
order to recover?
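(For reference: ZFS keeps four copies of the label on each vdev, two at the front of the device and two at the end, and zdb -l dumps them. A sketch for checking every disk in turn; the device names below are placeholders, not the disks from this pool:)

```shell
# Dump the ZFS labels on each disk; device names are illustrative only
for d in c1t0d0s0 c1t1d0s0; do
    echo "== $d =="
    zdb -l "/dev/rdsk/$d" 2>/dev/null || true   # zdb ships with Solaris/illumos ZFS
done
```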
From zdb -l on one of the disks:
Paul Armstrong wrote:
I'm surprised at the number as well.
Running it again, I'm seeing it jump fairly high just before the fork errors:
bash-4.0# ps -ef | grep zfsdle | wc -l
20930
(the next run of ps failed due to the fork error).
So maybe it is running out of processes.
ZFS file data from ::memstat just went do
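(One aside on the counting pipeline above: grep also matches its own ps entry, so the count is off by one. A self-excluding variant that samples the count a few times; the loop is a sketch added here, not from the thread:)

```shell
# Sample the zfsdle process count; '[z]fsdle' keeps grep from matching itself
for i in 1 2 3; do
    ps -ef | grep -c '[z]fsdle'
    sleep 1
done
```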
bash-4.0# ulimit -a
core file size   (blocks, -c) unlimited
data seg size    (kbytes, -d) unlimited
file size        (blocks, -f) unlimited
open files       (-n) 256
pipe size        (512 bytes, -p) 10
stack size       (kbytes, -s) 10240
cpu time
Something over 8000 sounds vaguely like the default maximum process count.
What does 'ulimit -a' show?
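(If the shell supports it, bash can also report the per-user process cap directly, which is quicker than scanning the full ulimit -a listing. A check using standard bash/POSIX commands; the output is not from this system:)

```shell
ulimit -u        # max user processes for the current shell (bash builtin)
ps -ef | wc -l   # rough count of processes currently on the system
```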
I don't know why you're seeing so many zfsdle processes, though; it sounds like a
bug to me.
--
This message posted from opensolaris.org
I have a machine connected to an HDS with a corrupted pool.
While running zpool import -nfFX on the pool, the import spawns a large number of
zfsdle processes; eventually the machine hangs for 20-30 seconds and spits out
error messages:
zfs: [ID 346414 kern.warning] WARNING: Couldn't create process for