For my latest test I set up a stripe of two mirrors with one hot spare
like so:
zpool create -f -m /export/zmir zmir mirror c0t0d0 c3t2d0 \
    mirror c3t3d0 c3t4d0 spare c3t1d0
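After creating it, my quick sanity check is just zpool status (a sketch of
what I expect to see; exact formatting may differ):

# zpool status zmir

which should list both mirror vdevs with their disks ONLINE, plus c3t1d0
under "spares" as AVAIL.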
I spun down c3t2d0 and c3t4d0 simultaneously, and while the system kept
running (my tar over NFS barely hiccuped), the zpool
For the record, this happened with a new filesystem. I didn't
muck about with an old filesystem while it was still mounted;
I created a new one, mounted it, and then accidentally exported
it.
> > Except that it doesn't:
> >
> > # mount /dev/dsk/c1t1d0s0 /mnt
> > # share /mnt
> > # umount /mnt
> >
By mistake, I just exported my test filesystem while it was up
and being served via NFS, causing my tar over NFS to start
throwing stale file handle errors.
Should I file this as a bug, or should I just "not do that" :->
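A minimal repro sketch of what happened (the dataset name is made up, and
I'm reading "exported" here as zpool export):

# zfs create zmir/test
# zfs set sharenfs=on zmir/test
  (client mounts the share and starts the tar)
# zpool export zmir

The export succeeded even though the filesystem was shared and in use, and
the client's tar immediately started getting stale NFS file handles.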
Ko,
This worked.
I've restarted my testing, but this time I've been fdisking each drive
before I add it to the pool, and so far the system is behaving as expected
when I spin a drive down, i.e., the hot spare gets used automatically.
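The wipe step itself is nothing fancy; per drive it's roughly this (device
name is just an example), run before the drive goes back into the pool:

# fdisk -B /dev/rdsk/c3t2d0p0

i.e., non-interactively lay down the default single whole-disk Solaris
partition, which is enough to clobber whatever fdisk table was there before.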
This makes me wonder if it's possible to ensure that the forced
addition of
> You are likely hitting:
>
>   6397052 unmounting datasets should process /etc/mnttab instead of
>   traverse DSL
>
> Which was fixed in build 46 of Nevada. In the meantime, you can remove
> /etc/zfs/zpool.cache manually and reboot, which will remove all your
> pools (which you can then re-import).
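In case it helps anyone else, my understanding of that workaround boils
down to the following (it forgets every pool's config, so only on a test
box):

# rm /etc/zfs/zpool.cache
# init 6
# zpool import          (after the reboot, list the pools found on disk)
# zpool import zmir     (re-import the ones you still want)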
Nevermind:
# zfs destroy [EMAIL PROTECTED]:28
cannot open '[EMAIL PROTECTED]:28': I/O error
Jim
Here's the truss output:
402:ioctl(3, ZFS_IOC_POOL_LOG_HISTORY, 0x080427B8) = 0
402:ioctl(3, ZFS_IOC_OBJSET_STATS, 0x0804192C) = 0
402:ioctl(3, ZFS_IOC_DATASET_LIST_NEXT, 0x0804243C) = 0
402:ioctl(3, ZFS_IOC_DATASET_LIST_NEXT, 0x0804243C) Err#3 ESRCH
402:ioctl(3, ZFS_IOC_
BTW, I'm also unable to export the pool -- same error.
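A trace like the one above can be captured with something along these lines
(the exact command being traced is a guess on my part; -f is what gives the
pid-prefixed lines):

# truss -f -t ioctl -o /tmp/zpool.truss zpool export zmir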
Jim
Ok, so I'm planning on wiping my test pool that seems to have problems
with non-spare disks being marked as spares, but I can't destroy it:
# zpool destroy -f zmir
cannot iterate filesystems: I/O error
Anyone know how I can nuke this for good?
Jim
> If those are the original path ids, and you didn't
> move the disks on the bus? Why is the is_spare flag
Well, I'm not sure, but these drives were set as spares in another pool
I deleted -- should I have done something to the drives (fdisk?) before
reusing them?
The rest of the options are
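One way to see what the on-disk labels still claim (e.g. whether a device is
carrying spare state left over from the old pool) is to dump them with zdb;
the slice name below is just an example:

# zdb -l /dev/dsk/c3t1d0s0

That prints the four vdev labels, including the pool guid and the vdev
config the device thinks it belongs to.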
Here's the output of zdb:
zmir
    version=3
    name='zmir'
    state=0
    txg=770
    pool_guid=5904723747772934703
    vdev_tree
        type='root'
        id=0
        guid=5904723747772934703
        children[0]
                type='mirror'
                id=0
                guid=1506718
> Anyone have any thoughts on this? I'd really like to
> be able to build a nice ZFS box for file service but if
> a hardware failure can corrupt a disk pool I'll have to
> try to find another solution, I'm afraid.
Sorry, I worded this poorly -- if the loss of a disk in a mirror
can corrupt the pool
> So the questions are:
>
> - is this fixable? I don't see an inum I could run find on to remove,
>   and I can't even do a zfs volinit anyway:
>
>   nextest-01# zfs volinit
>   cannot iterate filesystems: I/O error
>
> - would not enabling zil_disable have prevented this?
>
> - Sho
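For reference, zil_disable at this point is a kernel tunable rather than a
per-dataset property; as far as I know it gets flipped either in /etc/system
or live via mdb (treat this as a sketch and use with care):

  set zfs:zil_disable = 1            (in /etc/system, takes effect on reboot)

# echo zil_disable/W0t1 | mdb -kw    (flip it on a running system)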
Platform:
- old Dell workstation with an Andataco Gigaraid enclosure
  plugged into an Adaptec 39160
- Nevada b51

Current zpool config:
- one two-disk mirror with two hot spares (sketched below)
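That layout is what you'd get from something like the following; the device
names are placeholders, not my actual ones:

# zpool create tank mirror c0t0d0 c0t1d0 spare c0t2d0 c0t3d0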
In my ferocious pounding of ZFS I've managed to corrupt my data
pool. This is what I've been doing to test
>
> OK, spun down the drives again. Here's that output:
>
> http://www.cise.ufl.edu/~jfh/zfs/threads
I just realized that I changed the configuration, so that doesn't reflect
a system with spares, sorry.
However, I reinitialized the pool and spun down one of the drives, and
everything is working
I know this isn't necessarily ZFS-specific, but after a reboot I spin the
drives back up, and nothing I do (devfsadm, disks, etc.) can get them seen
again until the next reboot.
I've got some older SCSI drives in an old Andataco Gigaraid enclosure which
I thought supported hot-swap, but I seem unable to make that work.
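For reference, the usual rescan incantations I've been throwing at it are
along these lines (the controller/target names are just examples from my
box):

# devfsadm -Cv                          (rebuild and clean up /dev links)
# cfgadm -al                            (list attachment points and their
                                         occupant state)
# cfgadm -c configure c1::dsk/c1t2d0    (try to configure a specific target)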
> >> Do you have a threadlist from the node when it was hung? That would
> >> reveal some info.
> >
> > Unfortunately I don't. Do you mean the output of
> >
> >   ::threadlist -v
> >
> Yes. That would be useful.
OK, spun down the drives again. Here's that output:
http://www.cise.ufl.edu/~jfh/zfs/threads
So is there a command to make the spare get used, or do I have to
remove it as a spare and add it back if it doesn't get used
automatically?
Is this a bug to be fixed, or will this always be the case when
the disks aren't exactly the same size?
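As far as I can tell, the manual way to pull in a spare is a zpool replace
against the device that dropped out, e.g. with the names from my earlier
config:

# zpool replace zmir c3t4d0 c3t1d0

which puts the spare c3t1d0 in service for c3t4d0 without having to remove
it from the spares list and re-add it.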