Re: [zfs-discuss] Can this be done?

2009-04-07 Thread Michael Shadle
On Wed, Apr 1, 2009 at 3:19 AM, Michael Shadle  wrote:
> I'm going to try to move one of my disks off my rpool tomorrow (since
> it's a mirror) to a different controller.
>
> According to what I've heard before, ZFS should automagically
> recognize this new location and have no problem, right?

I have now seen firsthand how nicely ZFS locates a disk after it has moved across different controllers/ports. The exception is rpool (ZFS boot): moving those disks is a huge PITA.


Now quick question - if I have a raidz2 named 'tank' already I can
expand the pool by doing:

zpool attach tank raidz2 device1 device2 device3 ... device7

It will make 'tank' larger and each group of disks (vdev? or zdev?)
will be dual parity. It won't create a mirror, will it?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can this be done?

2009-04-07 Thread Bob Friesenhahn

On Tue, 7 Apr 2009, Michael Shadle wrote:


Now quick question - if I have a raidz2 named 'tank' already I can
expand the pool by doing:

zpool attach tank raidz2 device1 device2 device3 ... device7

It will make 'tank' larger and each group of disks (vdev? or zdev?)
will be dual parity. It won't create a mirror, will it?


No.  The two vdevs will be load shared rather than creating a mirror. 
This should double your multi-user performance.
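
Note that the subcommand which actually grows a pool with a second top-level vdev is 'zpool add', not 'zpool attach' (attach turns a single device into a mirror, or adds another side to an existing mirror). A minimal sketch, with made-up device names:

   zpool add tank raidz2 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0
   zpool status tank      # should now show two raidz2 top-level vdevs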


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can this be done?

2009-04-07 Thread Scott Lawson



Michael Shadle wrote:

On Wed, Apr 1, 2009 at 3:19 AM, Michael Shadle  wrote:
  

I'm going to try to move one of my disks off my rpool tomorrow (since
it's a mirror) to a different controller.

According to what I've heard before, ZFS should automagically
recognize this new location and have no problem, right?



I have now seen firsthand how nicely ZFS locates a disk after it has moved across different controllers/ports. The exception is rpool (ZFS boot): moving those disks is a huge PITA.


Now quick question - if I have a raidz2 named 'tank' already I can
expand the pool by doing:

zpool attach tank raidz2 device1 device2 device3 ... device7

It will make 'tank' larger and each group of disks (vdev? or zdev?)
  
You cannot expand a RAIDZ or RAIDZ2 at all. You must back up the data and destroy the pool if you wish to alter the number of disks in a single RAIDZ or RAIDZ2 stripe.

You may, however, add an additional RAIDZ or RAIDZ2 vdev to an existing storage pool.
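
For example, growing an existing pool by another five-disk RAIDZ would look something like this (the pool and device names here are only placeholders):

   zpool add -n mypool raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0   # dry run, shows the resulting layout
   zpool add mypool raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0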

Your pool would look something like the output below if you add additional RAIDZs. This is output from a J4500 with 48 x 1TB drives and multiple RAIDZs in a single pool, yielding ~30TB or so.


  pool: nbupool
 state: ONLINE
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        nbupool      ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            c2t2d0   ONLINE       0     0     0
            c2t3d0   ONLINE       0     0     0
            c2t4d0   ONLINE       0     0     0
            c2t5d0   ONLINE       0     0     0
            c2t6d0   ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            c2t7d0   ONLINE       0     0     0
            c2t8d0   ONLINE       0     0     0
            c2t9d0   ONLINE       0     0     0
            c2t10d0  ONLINE       0     0     0
            c2t11d0  ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            c2t12d0  ONLINE       0     0     0
            c2t13d0  ONLINE       0     0     0
            c2t14d0  ONLINE       0     0     0
            c2t15d0  ONLINE       0     0     0
            c2t16d0  ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            c2t17d0  ONLINE       0     0     0
            c2t18d0  ONLINE       0     0     0
            c2t19d0  ONLINE       0     0     0
            c2t20d0  ONLINE       0     0     0
            c2t21d0  ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            c2t22d0  ONLINE       0     0     0
            c2t23d0  ONLINE       0     0     0
            c2t24d0  ONLINE       0     0     0
            c2t25d0  ONLINE       0     0     0
            c2t26d0  ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            c2t27d0  ONLINE       0     0     0
            c2t28d0  ONLINE       0     0     0
            c2t29d0  ONLINE       0     0     0
            c2t30d0  ONLINE       0     0     0
            c2t31d0  ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            c2t32d0  ONLINE       0     0     0
            c2t33d0  ONLINE       0     0     0
            c2t34d0  ONLINE       0     0     0
            c2t35d0  ONLINE       0     0     0
            c2t36d0  ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            c2t37d0  ONLINE       0     0     0
            c2t38d0  ONLINE       0     0     0
            c2t39d0  ONLINE       0     0     0
            c2t40d0  ONLINE       0     0     0
            c2t41d0  ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            c2t42d0  ONLINE       0     0     0
            c2t43d0  ONLINE       0     0     0
            c2t44d0  ONLINE       0     0     0
            c2t45d0  ONLINE       0     0     0
            c2t46d0  ONLINE       0     0     0
        spares
          c2t47d0    AVAIL
          c2t48d0    AVAIL
          c2t49d0    AVAIL


will be dual parity. It won't create a mirror, will it?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can this be done?

2009-04-07 Thread David Champion
* On 07 Apr 2009, Michael Shadle wrote: 
> 
> Now quick question - if I have a raidz2 named 'tank' already I can
> expand the pool by doing:
> 
> zpool attach tank raidz2 device1 device2 device3 ... device7
> 
> It will make 'tank' larger and each group of disks (vdev? or zdev?)
> will be dual parity. It won't create a mirror, will it?

That's correct.

Anything you're unsure about, you can test.  Just create a zpool using
files instead of devices:

for i in 1 2 3 4; do
    mkfile 256m /tmp/file$i
done
zpool create testpool raidz /tmp/file1 /tmp/file2 /tmp/file3 /tmp/file4

...and experiment on that.  No data risk this way.
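
For instance, to see what adding a second raidz vdev does (the file names and sizes are of course just illustrative):

   for i in 5 6 7 8; do mkfile 256m /tmp/file$i; done
   zpool add testpool raidz /tmp/file5 /tmp/file6 /tmp/file7 /tmp/file8
   zpool status testpool                     # two raidz vdevs, load-shared, not mirrored
   zpool destroy testpool; rm /tmp/file?     # clean up afterwards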

-- 
 -D.    d...@uchicago.edu    NSIT    University of Chicago
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can this be done?

2009-04-07 Thread Michael Shadle
On Tue, Apr 7, 2009 at 5:22 PM, Bob Friesenhahn
 wrote:

> No.  The two vdevs will be load shared rather than creating a mirror. This
> should double your multi-user performance.

Cool - now a followup -

When I attach this new raidz2, will ZFS auto "rebalance" data between
the two, or will it keep the other one empty and do some sort of load
balancing between the two for future writes only?

Is there a way (perhaps a scrub? or something?) to get the data spread
around to both?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS data loss

2009-04-07 Thread Victor Galis

Hi,

I have lost a ZFS volume and I am hoping to get some help to recover the 
information (a couple of months' worth of work :( ).


I have been using ZFS for more than 6 months on this project. Yesterday 
I ran a "zpool status" command; the system froze and rebooted. When it 
came back, the disks were not available.


See below the output of "zpool status", "format" and "zdb". 


Do you think there is something that can be done to recover lost data?

Thanks,
  Vic


v490vic-00f3 # zpool status
 pool: SAP
state: UNAVAIL
status: One or more devices could not be opened.  There are insufficient
   replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
  see: http://www.sun.com/msg/ZFS-8000-D3
scrub: none requested
config:

        NAME                       STATE     READ WRITE CKSUM
        SAP                        UNAVAIL      0     0     0  insufficient replicas
          mirror                   UNAVAIL      0     0     0  corrupted data
            c6t40d0                ONLINE       0     0     0  cannot open
            c6t226000C0FFA05F5Ed0  UNAVAIL      0     0     0  cannot open
          mirror                   UNAVAIL      0     0     0  insufficient replicas
            c6t216000C0FF805F5Ed0  UNAVAIL      0     0     0  cannot open
            c6t42d0                UNAVAIL      0     0     0  cannot open
v490vic-00f3 # format
Searching for disks...done

c6t42d0: configured with capacity of 272.44GB
c6t118d0: configured with capacity of 679.97GB
c6t118d31: configured with capacity of 16.00MB
c6t119d0: configured with capacity of 679.97GB
c6t119d31: configured with capacity of 16.00MB
c6t216000C0FF87A646d0: configured with capacity of 272.44GB
c6t216000C0FF805F5Ed0: configured with capacity of 272.44GB
c6t226000C0FFA05F5Ed0: configured with capacity of 272.44GB


AVAILABLE DISK SELECTIONS:
  0. c1t0d0 
 /p...@9,60/SUNW,q...@2/f...@0,0/s...@w210c503b5b09,0
  1. c1t1d0 
 /p...@9,60/SUNW,q...@2/f...@0,0/s...@w210c503b783f,0
  2. c6t40d0 
 /p...@8,60/SUNW,q...@2/f...@0,0/s...@w216000c0ff87a7a9,0
  3. c6t42d0 
 /p...@8,60/SUNW,q...@2/f...@0,0/s...@w226000c0ffa7a7a9,0
  4. c6t118d0 
 /p...@8,60/SUNW,q...@2/f...@0,0/s...@w202500a0b85a4cc8,0
  5. c6t118d31 
 /p...@8,60/SUNW,q...@2/f...@0,0/s...@w202500a0b85a4cc8,1f
  6. c6t119d0 
 /p...@8,60/SUNW,q...@2/f...@0,0/s...@w202400a0b85a4cc8,0
  7. c6t119d31 
 /p...@8,60/SUNW,q...@2/f...@0,0/s...@w202400a0b85a4cc8,1f
  8. c6t216000C0FF87A646d0 hd 127 sec 127>

 /p...@8,60/SUNW,q...@2/f...@0,0/s...@w216000c0ff87a646,0
  9. c6t216000C0FF805F5Ed0 hd 127 sec 127>

 /p...@8,60/SUNW,q...@2/f...@0,0/s...@w216000c0ff805f5e,0
 10. c6t226000C0FFA05F5Ed0 hd 127 sec 127>

 /p...@8,60/SUNW,q...@2/f...@0,0/s...@w226000c0ffa05f5e,0
 11. c6t226000C0FFA7A646d0 hd 127 sec 127>


v490vic-00f3 # zdb
SAP
    version=4
    name='SAP'
    state=0
    txg=2589278
    pool_guid=5644167510038135831
    vdev_tree
        type='root'
        id=0
        guid=5644167510038135831
        children[0]
            type='mirror'
            id=0
            guid=14615540212911926254
            whole_disk=0
            metaslab_array=17
            metaslab_shift=31
            ashift=9
            asize=292543528960
            children[0]
                type='disk'
                id=0
                guid=3588260184558145093
                path='/dev/dsk/c6t216000C0FF87A7A9d0s0'
                devid='id1,s...@n600c0ff007a7a96be8e22900/a'
                whole_disk=1
                DTL=197
            children[1]
                type='disk'
                id=1
                guid=4026862278207380809
                path='/dev/dsk/c6t226000C0FFA05F5Ed0s0'
                devid='id1,s...@n600c0ff0005f5e76605a4900/a'
                whole_disk=1
                DTL=196
        children[1]
            type='mirror'
            id=1
            guid=13122299688569793580
            whole_disk=0
            metaslab_array=14
            metaslab_shift=31
            ashift=9
            asize=292543528960
            children[0]
                type='disk'
                id=0
                guid=1343811411216678
                path='/dev/dsk/c6t216000C0FF805F5Ed0s0'
                devid='id1,s...@n600c0ff0005f5e137feb5500/a'
                whole_disk=1
                DTL=198
            children[1]
                type='disk'
                id=1
                guid=7841227132682983069
                path='/dev/dsk/c6t226000C0FFA7A7A9d0s0'
                devid='id1,s...@n600c0f

Re: [zfs-discuss] Can this be done?

2009-04-07 Thread Scott Lawson



Michael Shadle wrote:

On Tue, Apr 7, 2009 at 5:22 PM, Bob Friesenhahn
 wrote:

  

No.  The two vdevs will be load shared rather than creating a mirror. This
should double your multi-user performance.



Cool - now a followup -

When I attach this new raidz2, will ZFS auto "rebalance" data between
the two, or will it keep the other one empty and do some sort of load
balancing between the two for future writes only?
  
Future writes only, as far as I am aware. You will, however, potentially get 
increased IO.

(The total increase will depend on controller layouts, etc.)

Is there a way (perhaps a scrub? or something?) to get the data spread
around to both?
  
No. You could back up and restore, though. (Or, if you have a small number of 
big files, you could, I guess, copy them around inside the pool to get them 
"rebalanced".)
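
Purely as a sketch, assuming a dataset and file names that are made up here (and enough free space for the temporary copy):

   cp -p /tank/data/bigfile /tank/data/bigfile.tmp
   mv /tank/data/bigfile.tmp /tank/data/bigfile    # the rewritten copy is spread across both vdevs
   zpool iostat -v tank                            # check how the vdevs are filling up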


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  


--
___


Scott Lawson
Systems Architect
Manukau Institute of Technology
Information Communication Technology Services Private Bag 94006 Manukau
City Auckland New Zealand

Phone  : +64 09 968 7611
Fax: +64 09 968 7641
Mobile : +64 27 568 7611

mailto:sc...@manukau.ac.nz

http://www.manukau.ac.nz




perl -e 'print
$i=pack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);'

 


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Importing zpool after one side of mirror was destroyed

2009-04-07 Thread Christopher West

Hello all,

I have a customer asking the following question.  Does anyone have any 
suggestions? I have searched SunSolve
and have not had any luck finding documentation to assist this customer. 
Any assistance would be greatly appreciated!


Thank you in advance,
Chris

-Customer's problem statement
I'm unable to import a mirrored ZFS pool.  The system was shut down with 
a functional mirrored ZFS pool using c1t0d0s6 and c1t1d0s6.  Solaris was 
then re-installed from DVD media using c1t0d0 as the root disk.  The 
root disk was repartitioned during install, which destroyed slice 6 
formerly used as a Zpool mirror component.  I need to recover data from 
the mirrored Zpool using the one remaining mirror.


Based on this behavior, I'm also concerned that a system with a mirrored 
ZFS root disk could be difficult to recover.  Example:  Server X boots 
from mirrored zpool "rpool".  Server X is shutdown for maintenance, and 
1 of the 2 disks in the mirrored Zpool fails when the server is powered 
back on.  Now the mirrored zpool is unable to be imported and the server 
fails to boot.  What is the recovery procedure?


# zpool import
  pool: local
    id: xxx
 state: UNAVAIL
status: The pool is formatted using an older on-disk version.
action: The pool cannot be imported due to damaged devices or data.
config:

        local         UNAVAIL  insufficient replicas
          mirror      UNAVAIL  corrupted data
            c1d0s6    ONLINE
            c1t1d0s6  ONLINE

# zpool import -o failmode=continue -f local
cannot import 'local': invalid vdev configuration

# zdb -l /dev/rdsk/c1t1d0s6
--------------------------------------------
LABEL 0
--------------------------------------------

    version=4
    name='local'
    state=0
    txg=4
    pool_guid=x
    top_guid=xx
    guid=x
    vdev_tree
        type='mirror'
        id=0
        guid=
        metaslab_array=14
        metaslab_shift=31
        ashift=9
        asize=
        children[0]
            type='disk'
            id=0
            guid=
            path='/dev/dsk/c1d0s6'
            devid='id1,c...@xxx/g'
            whole_disk=0
        children[1]
            type='disk'
            id=1
            guid=
            path='/dev/dsk/c2d0s6'
            devid='id1,c...@xxx/g'
            whole_disk=0

<  --- The remaining 3 labels on this disk are also valid --- >

--
Christopher West, OS Administration
Email: christopher.w...@sun.com
Phone: 1-800-USA-4SUN
My Working Hours : 8am-5pm MT, Monday thru Friday
My Manager : Michael Ventimiglia 
===
TO REACH THE NEXT AVAILABLE ENGINEER:
1. Call 1-800-USA-4SUN choose opt 2 and enter your SR number.
2. Wait for my voice mail message to begin.
3. Press "0" during my message to reach the next available engineer.
4. You will hear hold music until the next engineer answers.

Submit, check and update tickets at http://www.sun.com/osc

This email may contain confidential and privileged material for the sole
use of the intended recipient. Any review or distribution by others is
strictly prohibited. If you are not the intended recipient please
contact the sender and delete all copies.



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Importing zpool after one side of mirror was destroyed

2009-04-07 Thread Geoff Shipman
Chris,

I believe the trouble stems from the fact that both devices in the pool
are seen as available, even though, as the customer noted, one side of the
mirror was destroyed by the install.

If the untouched device were alone, the pool should be listed as being in a
degraded state, since with a mirror the pool should survive a single lost
device. The pool should then at least import and allow access to the data.

I set up a mirrored ZFS root system on which one of the devices takes longer
to probe at boot than the other, so it is listed as unavailable and the root
pool is listed as degraded, but the system has never failed to boot because
of this.

I originally set up my ZFS root partitions incorrectly by trying to assign
the entire disk (c#t#d#) to the zpool. This worked while I was migrating
from UFS to ZFS, but after getting both devices listed as entire disks I
could not boot the system, because EFI labeling was in play and an
EFI-labeled disk currently cannot be booted from. I then booted to the
Update 6 media, imported the pool, and detached one of the devices, then
used format to relabel the detached device with an SMI label, assigning
slice 1 the entire disk. I then imported the single-device pool again and
attached the new SMI-labeled device, which resilvered from the original
device. I added the ZFS boot blocks via the installboot command.
After the resilver finished I detached the remaining EFI-labeled device and
used format to create the SMI label, assigning the single slice the entire
disk. I then attached that device to the pool, again added the ZFS boot
blocks to the new device, and waited for the resilver to complete. Then I
tested by booting the system, and everything came back as expected.
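
For reference, the boot-block step was along these lines (SPARC shown; on x86 the equivalent is installgrub, and the device name is just an example):

   installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0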

So managing a mirrored ZFS root pool was not that difficult. Making sure the
correct device was detached, fixed, and attached later was the toughest part.

So in the customer's case, you hope that the pool in its current
configuration does not get imported, as the resilver would most likely
finish wiping the good data. Since the customer has the OS loaded on c1t0d0,
boot from OS media with that device removed so it is not found, issue
"zpool import", and the pool should be listed as degraded with the expected
device listed as unavailable. Then issue "zpool import -f -R /mnt <poolname>"
and verify the data still exists. Then issue "zpool detach <poolname>
<missing-device>" so you no longer have a mirror, and export the pool.

At this point, boot from disk into single-user mode and move the
/etc/zfs/zpool.cache file to a different name. Finish the boot into
multi-user mode and issue "zpool import"; the pool should be seen and
available. Then finish with "zpool import <poolname>", and if you want to
make a mirror again, issue "zpool attach <poolname> <existing-device>
<new-device>".

This is what I believe would get you out of the trouble seen.
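
Roughly, as a command sketch using the pool and device names from the customer's output (<new-device> being whatever replaces the destroyed slice):

   # booted from OS media, with c1t0d0 removed so it is not found
   zpool import                        # 'local' should now show as DEGRADED
   zpool import -f -R /mnt local       # verify the data is still there
   zpool detach local c1t0d0s6         # drop the destroyed half of the mirror
   zpool export local

   # boot from disk to single user and set the cache file aside
   mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.old

   # after booting multi user
   zpool import                        # the pool should be visible
   zpool import local
   zpool attach local c1t1d0s6 <new-device>    # optional: re-mirror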

Geoff


On Tue, 2009-04-07 at 13:59, Christopher West wrote:
> Hello all,
> 
> I have a customer asking the following question.  Does anyone have any
> suggestions? I have searched SunSolve
> and have not had any luck finding documentation to assist this
> customer. 
> Any assistance would be greatly appreciated!
> 
> Thank you in advance,
> Chris
> 
> -Customer's problem statement
> I'm unable to import a mirrored ZFS pool.  The system was shut down
> with 
> a functional mirrored ZFS pool using c1t0d0s6 and c1t1d0s6.  Solaris
> was 
> then re-installed from DVD media using c1t0d0 as the root disk.  The 
> root disk was repartitioned during install, which destroyed slice 6 
> formerly used as a Zpool mirror component.  I need to recover data
> from 
> the mirrored Zpool using the one remaining mirror.
> 
> Based on this behavior, I'm also concerned that a system with a
> mirrored 
> ZFS root disk could be difficult to recover.  Example:  Server X boots
> from mirrored zpool "rpool".  Server X is shutdown for maintenance,
> and 
> 1 of the 2 disks in the mirrored Zpool fails when the server is
> powered 
> back on.  Now the mirrored zpool is unable to be imported and the
> server 
> fails to boot.  What is the recovery procedure?
> 
> # zpool import
>   pool: local
>   id: xxx
>  state: UNAVAIL
> status: The pool is formatted using an older on-disk version.
> action: The pool cannot be imported due to damaged devices or data.
> config:
> 
>local   UNAVAIL  insufficient replicas
>   mirror  UNAVAIL  corrupted data
>  c1d0s6 ONLINE
>  c1t1d0s6  ONLINE
> 
> # zpool import -o failmode=continue -f local
> cannot import 'local': invalid vdev configuration
> 
> # zdb -l /dev/rdsk/c1t1d0s6
> -
> LABEL 0 
> -
>version=4
>name='local'
>state=0
>txg=4
>pool_guid=x
>top_guid=xx
>guid=x
>vdev_tree
>  type='mirror'
>  id=0
>  guid=
>