Assuming I have a zpool which consists of a simple 2-disk mirror.
How do I attach a third disk (disk3) to this zpool to mirror the existing
data? Then split this mirror and remove disk0 and disk1, leaving a
single-disk zpool consisting of the new disk3, i.e. an online data migration.
Hi Matthew,
Just attach disk3 to the existing mirrored top-level vdev.
Wait for resilvering to complete.
Detach disk0 and disk1.
This will leave you with only disk3 in your pool.
You will lose ZFS redundancy and its fancy features (self-healing, ...).
# zpool create test mirror /export/disk0 /export/disk1
# zpool status
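The three steps above can be sketched as a script. This is a hedged sketch only: the pool name "tank" and device names disk0/disk1/disk3 are placeholders, and the resilver check simply greps `zpool status` output.

```shell
# Sketch of the attach -> resilver -> detach migration described above.
# Pool "tank" and devices disk0/disk1/disk3 are placeholders; substitute
# your own names before running anything.
migrate_to_new_disk() {
  pool=$1; old0=$2; old1=$3; new=$4
  # Attach the new disk to one side of the mirror, making it a 3-way mirror.
  zpool attach "$pool" "$old1" "$new" || return 1
  # Wait for resilvering to finish before removing the old disks.
  while zpool status "$pool" | grep -q 'resilver in progress'; do
    sleep 30
  done
  # Detach both original disks, leaving a single-disk pool on the new device.
  zpool detach "$pool" "$old0" && zpool detach "$pool" "$old1"
}

# Run only on a system that actually has ZFS:
if command -v zpool >/dev/null 2>&1; then
  migrate_to_new_disk tank disk0 disk1 disk3
fi
```

Detaching before the resilver completes would leave disk3 with an incomplete copy, so the wait loop is the step that must not be skipped.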
I once installed ZFS on my home Sun Blade 100 and it worked fine running
Solaris 10. I reinstalled the Solaris 10 09 version and created a zpool
which is not visible using the Java control panel. When I attempt to run
the Java control panel to manage the ZFS system I receive an
Blake wrote:
You need to use 'installgrub' to get the right boot bits in place on
your new disk.
I did that, but it didn't help.
I ran:
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c4t1d0s0
Is it OK to run this before resilvering has completed?
Do I need to change the disk boo
Bob Doolittle wrote:
Blake wrote:
You need to use 'installgrub' to get the right boot bits in place on
your new disk.
I did that, but it didn't help.
I ran:
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c4t1d0s0
Is it OK to run this before resilvering has completed?
You need
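The ordering the thread is asking about can be sketched as: confirm the resilver has finished, then put the boot blocks on the new disk. The stage paths and device name come from the thread itself; the pool name "rpool" is an assumption, and the resilver check is a simple grep over `zpool status` output.

```shell
# Returns success once the named pool reports no resilver in progress.
resilver_done() {
  ! zpool status "$1" | grep -q 'resilver in progress'
}

# Only install the boot blocks after the new disk holds a complete copy.
# Pool name "rpool" is assumed; adjust it and the device for your system.
if command -v zpool >/dev/null 2>&1 && resilver_done rpool; then
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c4t1d0s0
fi
```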
Hello !
I have a machine that started to panic on boot (see panic message
below). I think it panics when it imports the pool (5 x 2 mirror).
Are there any ways to recover from that ?
Some history info: that machine was upgraded a couple of days ago from
snv78 to snv110. This morning zpool was up
assertion failures are bugs. Please file one at http://bugs.opensolaris.org
You may need to try another version of the OS, which may not have
the bug.
-- richard
Cyril Plisko wrote:
Hello !
I have a machine that started to panic on boot (see panic message
below). I think it panics when it imp
On Thu, Mar 26, 2009 at 8:45 PM, Richard Elling
wrote:
> assertion failures are bugs.
Yup, I know that.
> Please file one at http://bugs.opensolaris.org
Just did.
> You may need to try another version of the OS, which may not have
> the bug.
Well, I kinda guessed that. I hoped, maybe wrongly
Hi Francis,
Thanks for confirming. That did the trick. I kept thinking I had to mirror
at the highest level (zpool), then split. I actually did it in one less
step than you mention by using replace instead of attach then detach but
what you said is 100% correct.
zpool replace /root/zfs/disk0 /r
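The replace-based variant mentioned above can be sketched as follows: `zpool replace` attaches the new device alongside the old one, resilvers, and then drops the old device automatically, so only one explicit detach remains. The pool name "tank" and the file-backed vdev paths are illustrative only.

```shell
# Sketch of the one-step-shorter migration using replace instead of
# attach-then-detach. Pool "tank" and the /root/zfs paths are placeholders.
replace_and_shrink() {
  pool=$1; old0=$2; old1=$3; new=$4
  # Swap the new device in for the first old one; ZFS resilvers and then
  # detaches the replaced device on its own.
  zpool replace "$pool" "$old0" "$new" || return 1
  # Wait until the temporary "replacing" vdev disappears from status output.
  while zpool status "$pool" | grep -q 'replacing'; do sleep 30; done
  # Detach the remaining old disk, leaving a single-disk pool.
  zpool detach "$pool" "$old1"
}

if command -v zpool >/dev/null 2>&1; then
  replace_and_shrink tank /root/zfs/disk0 /root/zfs/disk1 /root/zfs/disk3
fi
```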
Note that c4t1d0s0 is my *new* disk, not my old. I presume that's the
right one to target with installgrub?
Thanks,
Bob
Bob Doolittle wrote:
Blake wrote:
You need to use 'installgrub' to get the right boot bits in place on
your new disk.
I did that, but it didn't help.
I ran:
installgr
Hi there,
Is there a way to get as much data as possible off an existing slightly
corrupted zpool? I have a 2 disk stripe which I'm moving to new storage. I
will be moving it to a ZFS Mirror, however at the moment I'm having problems
with ZFS panicking the system during a send | recv.
I don't k
2009/3/27 Matthew Angelo :
> Doing an $( ls -lR | grep -i "IO Error" ) returns roughly 10-15 files which
> are affected.
If ls works then tar, cpio, etc. should work.
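One way to act on that advice is a tar pipe that keeps going past unreadable files, logging errors instead of aborting the whole transfer the way a `zfs send` does on the first bad block. The paths in the usage example are placeholders.

```shell
# Copy whatever is readable from the damaged pool into the new one.
# tar skips past files it cannot read and reports them on stderr,
# so one bad file does not abort the whole transfer.
salvage_copy() {
  src=$1; dst=$2; errlog=$3
  (cd "$src" && tar cf - . 2>"$errlog") | (cd "$dst" && tar xf -)
  # Summarise the casualties:
  echo "unreadable entries logged: $(grep -c . "$errlog")"
}
```

Usage would be something like `salvage_copy /oldpool /newpool/recovered /tmp/salvage.log`, then inspect the log to see which of the 10-15 affected files were lost.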
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/ma