[zfs-discuss] Attaching a mirror to a mirror

2009-03-26 Thread Matthew Angelo
Assuming I have a zpool which consists of a simple 2 disk mirror.

How do I attach a third disk (disk3) to this zpool to mirror the existing
data?  Then split the mirror and remove disk0 and disk1, leaving a single-disk
zpool that consists of the new disk3; in other words, online data migration.



[root]# zpool status -v
  pool: apps
 state: ONLINE
 scrub: none requested
config:

NAME STATE READ WRITE CKSUM
apps ONLINE   0 0 0
  mirror ONLINE   0 0 0
/root/zfs/disk0  ONLINE   0 0 0
/root/zfs/disk1  ONLINE   0 0 0

errors: No known data errors


The use case here is that we've implemented new storage.   The new (third) LUN
is RAID10 on a Hitachi SAN, while the existing mirror is on local SAS disks.
Back in the VxVM world, this would be done by mirroring the disk group (dg) and
then splitting the mirror.   I understand we are moving from a ZFS mirror to a
single-disk stripe.

Thanks
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Attaching a mirror to a mirror

2009-03-26 Thread Francois Napoleoni

Hi Matthew,

Just attach disk3 to the existing mirrored top-level vdev,
wait for resilvering to complete,
then detach disk0 and disk1.
This will leave you with only disk3 in your pool.
You will lose ZFS's redundancy features (self-healing, ...).

# zpool create test mirror /export/disk0 /export/disk1
# zpool status
  pool: test
 state: ONLINE
 scrub: none requested
config:

NAME   STATE READ WRITE CKSUM
test   ONLINE   0 0 0
  mirror   ONLINE   0 0 0
/export/disk0  ONLINE   0 0 0
/export/disk1  ONLINE   0 0 0

errors: No known data errors
# zpool attach test /export/disk1 /export/disk3
# zpool status
  pool: test
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Thu Mar 26 
19:55:24 2009

config:

NAME   STATE READ WRITE CKSUM
test   ONLINE   0 0 0
  mirror   ONLINE   0 0 0
/export/disk0  ONLINE   0 0 0
/export/disk1  ONLINE   0 0 0
/export/disk3  ONLINE   0 0 0  71.5K resilvered
#  zpool detach test /export/disk0
#  zpool detach test /export/disk1
# zpool status
  pool: test
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Thu Mar 26 
19:55:24 2009

config:

NAME STATE READ WRITE CKSUM
test ONLINE   0 0 0
  /export/disk3  ONLINE   0 0 0  71.5K resilvered

errors: No known data errors

F.




[zfs-discuss] Zfs using java

2009-03-26 Thread Howard Huntley

I once installed ZFS on my home Sun Blade 100 and it worked fine running
Solaris 10. I reinstalled the Solaris 10 09 release and created a zpool,
but the pool is not visible from the Java control panel. When I attempt to
run the Java control panel to manage the ZFS system, I receive an error
message stating "Launch Error: no application is registered with this Sun
Java console, or I have no rights to use any applications that are
registered; see my sys admin." Can anyone tell me how to get this
straightened out? I have been fooling around with it for some time now.

Is anyone in Jacksonville, Florida?
-- 
Howard Huntley Jr. MCP, MCSE
Micro-Computer Systems Specialist





Re: [zfs-discuss] How to increase rpool size in a VM?

2009-03-26 Thread Bob Doolittle

Blake wrote:

You need to use 'installgrub' to get the right boot bits in place on
your new disk.


I did that, but it didn't help.
I ran:
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c4t1d0s0

Is it OK to run this before resilvering has completed?

Do I need to change the disk boot order after doing the detach? Copy the 
boot partition via dd?


-Bob


The manpage for installgrub is pretty helpful.

On Wed, Mar 25, 2009 at 6:18 PM, Bob Doolittle  wrote:
  

Hi,

I have a build 109 system installed in a VM, and my rpool capacity is
getting close to full.

Since it's a VM, I can easily increase the size of the disk, or add another,
larger disk to the VM.

What's the easiest strategy for increasing my capacity?

I tried adding a 2nd larger disk, did a zpool attach, waited for resilvering
to complete, did a zpool detach of the 1st disk, but then it seemed it
couldn't find my grub menu... I couldn't figure out a way to simply add a
2nd disk to the rpool, it seems like it's limited to a single device.

Suggestions?

Please keep me on the reply list, I'm not subscribed to this list currently.

Thanks,
 Bob



Re: [zfs-discuss] How to increase rpool size in a VM?

2009-03-26 Thread bob netherton

Bob Doolittle wrote:

Blake wrote:

You need to use 'installgrub' to get the right boot bits in place on
your new disk.


I did that, but it didn't help.
I ran:
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c4t1d0s0

Is it OK to run this before resilvering has completed?



You need to install GRUB in the master boot record (MBR). 


# installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c4t1d0s0

And yes, it is safe to do this while the resilvering is happening.   The
master boot record is outside the block range of your pool.

Changing the boot order shouldn't be necessary (that's what findroot is
supposed to take care of).   It should only be needed if the new disk wasn't
seen by the BIOS in the first place, or for some reason isn't selected as
part of the normal BIOS boot sequence.
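
Putting the thread's advice together, the whole swap might be sketched as
follows. This is illustrative only: NEW is the disk being added (c4t1d0s0 in
Bob's mails), while OLD is a placeholder I've invented for the original rpool
disk; substitute your own device names.

```shell
OLD=c4t0d0s0   # placeholder: your current rpool disk
NEW=c4t1d0s0   # the new, larger disk from this thread

zpool attach rpool $OLD $NEW
# -m writes stage1 to the master boot record as well; safe during resilver
installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/$NEW
zpool status rpool            # wait until it reports "resilver completed"
zpool detach rpool $OLD       # then drop the old, smaller disk
```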


Bob


[zfs-discuss] ZFS crashes on boot

2009-03-26 Thread Cyril Plisko
Hello !

I have a machine that started to panic on boot (see panic message
below). I think it panics when it imports the pool (5 x 2 mirror).
Is there any way to recover from that?

Some history: the machine was upgraded a couple of days ago from
snv78 to snv110. This morning the zpool was upgraded to v14, and a scrub
was run to verify data health. After 3 or 4 hours the scrub was stopped
(the I/O impact was considered too high for the moment). A short time
after that, someone rebooted it because it felt sluggish (I hope that
person never gets root access again!). On reboot the machine panicked.
I had another boot disk with a fresh b110, so I booted from it, only to
see it panic again on zpool import.

So, any ideas how to get this pool imported? This organization uses
Linux everywhere except its file servers, which exist because of ZFS. It
would be a pity to let them lose their trust.

Here is the panic.

panic[cpu2]/thread=ff000c697c60: assertion failed: 0 ==
zap_remove_int(mos, ds_prev->ds_phys->ds_next_clones_obj, obj, tx),
file: ../../common/fs/zfs/dsl_dataset.c, line: 1493

ff000c6978d0 genunix:assfail+7e ()
ff000c697a50 zfs:dsl_dataset_destroy_sync+84b ()
ff000c697aa0 zfs:dsl_sync_task_group_sync+eb ()
ff000c697b10 zfs:dsl_pool_sync+112 ()
ff000c697ba0 zfs:spa_sync+32a ()
ff000c697c40 zfs:txg_sync_thread+265 ()
ff000c697c50 unix:thread_start+8 ()



-- 
Regards,
Cyril
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS crashes on boot

2009-03-26 Thread Richard Elling

Assertion failures are bugs.  Please file one at http://bugs.opensolaris.org.
You may need to try another version of the OS, one that does not have the bug.
-- richard



Re: [zfs-discuss] ZFS crashes on boot

2009-03-26 Thread Cyril Plisko
On Thu, Mar 26, 2009 at 8:45 PM, Richard Elling
 wrote:
> assertion failures are bugs.

Yup, I know that.

>  Please file one at http://bugs.opensolaris.org

Just did.

> You may need to try another version of the OS, which may not have
> the bug.

Well, I kind of guessed that. I hoped, maybe wrongly, to hear something
more concrete... Tough luck, I guess.



-- 
Regards,
Cyril


Re: [zfs-discuss] Attaching a mirror to a mirror

2009-03-26 Thread Matthew Angelo
Hi Francois,
Thanks for confirming. That did the trick. I kept thinking I had to mirror
at the highest level (the zpool) and then split. I actually did it in one
step fewer than you mention, using replace instead of attach-then-detach,
but what you said is 100% correct.

zpool replace apps /root/zfs/disk0 /root/zfs/disk3
zpool detach apps /root/zfs/disk1

Thanks again!




Re: [zfs-discuss] How to increase rpool size in a VM?

2009-03-26 Thread Bob Doolittle
Note that c4t1d0s0 is my *new* disk, not my old. I presume that's the 
right one to target with installgrub?


Thanks,
  Bob



[zfs-discuss] Recovering data from a corrupted zpool

2009-03-26 Thread Matthew Angelo
Hi there,

Is there a way to get as much data as possible off an existing, slightly
corrupted zpool?  I have a 2-disk stripe which I'm moving to new storage.  I
will be moving it to a ZFS mirror, but at the moment I'm having problems
with ZFS panicking the system during a send | recv.

I don't know exactly how much data is valid.  Everything appears to run as
expected and applications aren't crashing.

Running ls -lR and grepping for "I/O error" returns roughly 10-15 affected
files.   Luckily, the files ls is flagging aren't super critical.

Is it possible to tell ZFS to do an emergency "copy as much valid data off
this file system as you can"?

I've tried disabling checksums on the corrupted source zpool.   But even
so, once ZFS runs into an error the zpool is FAULTED, the kernel panics,
and the system crashes.   Is it possible to tell the zpool to ignore
errors and continue without faulting the pool?
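
On the "continue without faulting" part, one knob that may be worth a look
is the pool's failmode property. Hedged: failmode governs how the pool
reacts to catastrophic device failure, and it may not prevent a panic caused
by metadata corruption like this; it also requires a pool version recent
enough to have the property.

```shell
zpool get failmode apps           # default is "wait"
zpool set failmode=continue apps  # return EIO to applications instead of suspending the pool
```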

We have a backup of the data, which is two months old.  Would it be
possible to bring this backup online and sync as much as possible between
the two volumes?  Could this just be an rsync job?

Thanks



[root]# zpool status -v apps
  pool: apps
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
appsONLINE   0 0 120
  c1t1d0ONLINE   0 0 60
  c1t2d0ONLINE   0 0 0
  c1t3d0ONLINE   0 0 60

errors: Permanent errors have been detected in the following files:

apps:<0x0>
<0x1d2>:<0x0>


Re: [zfs-discuss] Recovering data from a corrupted zpool

2009-03-26 Thread Fajar A. Nugraha
2009/3/27 Matthew Angelo :
> Doing an $( ls -lR | grep -i "IO Error" ) returns roughly 10-15 files which
> are affected.

If ls works then tar, cpio, etc. should work.
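
Fajar's point, sketched with throwaway directories standing in for the real
mountpoints (the /tmp paths below are demo placeholders, not the actual
pool): tar warns about each unreadable file but keeps archiving the rest, so
the readable majority still gets copied.

```shell
# Demo stand-ins for the corrupted pool's mountpoint and the new mirror.
SRC=$(mktemp -d)
DST=$(mktemp -d)
mkdir -p "$SRC/app"
echo "survivor" > "$SRC/app/good.txt"

# tar prints a warning for each file it cannot read but continues with the
# rest, unlike cp, which may abort the whole copy on the first I/O error.
( cd "$SRC" && tar cf - . ) | ( cd "$DST" && tar xf - )

cat "$DST/app/good.txt"
```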