Re: [zfs-discuss] S10U6 and x4500 thumper sata controller

2008-11-01 Thread Mertol Ozyoney
I also need this information.
Thanks a lot for keeping me in the loop as well.

Sent from a mobile device

Mertol Ozyoney

On 31.Eki.2008, at 13:59, "Paul B. Henson" <[EMAIL PROTECTED]> wrote:

>
> S10U6 was released this morning (whoo-hooo!), and I was wondering if
> someone in the know could verify that it contains all the
> fixes/patches/IDRs for the x4500 sata problems?
>
> Thanks...
>
> --
> Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
> Operating Systems and Network Analyst  |  [EMAIL PROTECTED]
> California State Polytechnic University  |  Pomona CA 91768


[zfs-discuss] Help with simple (?) reconfigure of zpool

2008-11-01 Thread Robert Rodriguez
Hello all.  I was hoping to get some advice on how to do this without moving or 
losing my data.  I have 6 drives in two raidz1 vdevs in a pool.  I have 2 new 
1TB drives that I would like to add to that pool and replace 3 of the smaller 
drives.  I'd like to end up with 5 1TB drives in a single raidz1 vdev in the 
same pool.  I realize that copying the data somewhere else and then simply 
rebuilding the pool in the proper config would be the simplest method, but I 
have no place to put that data.  Any ideas / tricks / or even 'you shouldn't 
configure it that way' would be appreciated.

Current pool:
# zpool status -v
  pool: mp
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Fri Oct  3 09:09:37 2008
config:

        NAME        STATE     READ WRITE CKSUM
        mp          ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c7t2d0  ONLINE       0     0     0
            c7t0d0  ONLINE       0     0     0
            c7t1d0  ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c4d0    ONLINE       0     0     0
            c5d0    ONLINE       0     0     0
            c6d0    ONLINE       0     0     0

errors: No known data errors

All the drives on c7 are 1TB drives.  The drives on c4, c5 and c6, are 320G, 
400G and 400G respectively.  I have 2 new 1TB drives I'd ideally like to add to 
the first vdev (although from everything I've read that is not possible).  So, 
here is where I'd like to end up:

zpool status -v
  pool: mp
        NAME        STATE     READ WRITE CKSUM
        mp          ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c7t2d0  ONLINE       0     0     0
            c7t0d0  ONLINE       0     0     0
            c7t1d0  ONLINE       0     0     0
            c7t3d0  ONLINE       0     0     0
            c7t4d0  ONLINE       0     0     0

With all of those drives at 1TB, that would effectively give 4TB of storage 
with one drive used for parity (or so I assume).


Re: [zfs-discuss] Help with simple (?) reconfigure of zpool

2008-11-01 Thread Will Murnane
On Sat, Nov 1, 2008 at 16:52, Robert Rodriguez <[EMAIL PROTECTED]> wrote:
> I have 6 drives in two raidz1 vdevs in a pool.  I have 2 new 1TB drives that 
> I would like to add to that pool and replace 3 of the smaller drives.  I'd 
> like to end up with 5 1TB drives in a single raidz1 vdev in the same pool.
No can do.  ZFS cannot remove vdevs (yet; this is planned as one of a
series of changes collectively known as BP rewrite), so you cannot go
from having two raidz vdevs down to one.  Your best bet is to replace
the smaller disks one at a time, or copy your data somewhere else and
then back onto a new pool.  Or wait for the BP rewrite changes to make
it out.
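
If you go the replace-one-at-a-time route, it would look roughly like this
(the new device name is a placeholder, and the extra capacity only becomes
usable once every member of that vdev has been upgraded -- on older releases
an export/import of the pool may also be needed before it shows up):

  # swap the 320G c4d0 for one of the new 1TB drives, then wait for the resilver
  zpool replace mp c4d0 c8d0
  zpool status mp     # repeat for c5d0 and c6d0 once the resilver completes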

>  I realize that copying the data somewhere else and then simply rebuilding 
> the pool in the proper config would be the simplest method, but I have no 
> place to put that data.
Disks are cheap these days, and they give you a place to put backups.
As expensive as more disks sound, they're the simplest solution to
this problem.  Either buy one more disk and replace the raidz vdev
with a larger one, or buy three more, build your ideal pool, and copy
things over.  Then use the remaining disks to make a backup of the
things that are important to you.

> With all of those drives at 1TB, that would effectively give 4TB of storage 
> with one drive used for parity (or so I assume).
That configuration would have that capacity, yes.

Will


Re: [zfs-discuss] Help with simple (?) reconfigure of zpool

2008-11-01 Thread Bob Friesenhahn
On Sat, 1 Nov 2008, Robert Rodriguez wrote:

> up with 5 1TB drives in a single raidz1 vdev in the same pool.  I 
> realize that copying the data somewhere else and then simply 
> rebuilding the pool in the proper config would be the simplest 
> method, but I have no place to put that data.  Any ideas / tricks / 
> or even 'you shouldn't configure it that way' would be appreciated.

You need a place to put the data.  If you have enough time to tolerate 
the slow 12MB/second throughput of USB 2.0 drives, then they can be 
your friend.  1TB USB external drives are not terribly expensive these 
days.  With enough of them, you can use them as a temporary storage 
area for all your data.  Of course these drives are not to be trusted 
so you should purchase enough of them that you can build a redundant 
temporary storage pool.
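
As a concrete (hypothetical) example, three 1TB USB drives could be turned
into a redundant parking pool along these lines -- device names are made up,
and zfs send -R needs a reasonably recent zfs, otherwise send each filesystem
separately:

  zpool create parking raidz c8t0d0 c9t0d0 c10t0d0
  zfs snapshot -r mp@parked
  zfs send -R mp@parked | zfs receive parking/mp
  zpool scrub parking     # verify the copy before touching the original pool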

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/



Re: [zfs-discuss] questions on zfs backups

2008-11-01 Thread David Magda
On Oct 31, 2008, at 13:13, Richard Elling wrote:

> Paul Kraus wrote:
>>
>> Is there a ufsdump equivalent for ZFS ? For home use I really don't
>> want to have to buy a NetBackup license.
>
> No, and it is unlikely to happen.  To some degree, ufsdump existed
> because of deficiencies in other copy applications, which have been  
> largely fixed over the past 25+ years.

How about stabilization of the 'zfs send' stream's format?

Also, I'm surprised Jörg Schilling hasn't chimed in yet suggesting star. :)




Re: [zfs-discuss] Help with simple (?) reconfigure of zpool

2008-11-01 Thread Ross
As others have said, you're kind of stuck.  I'd wait until you have another 1TB 
drive, then replace all three drives in your second raidz1 vdev.  That really is 
the easiest way to add capacity.

However, with 6 drives you would be much better off with a single raid-z2 pool, 
and it's probably easier to change the configuration sooner rather than later, 
before you accumulate any more data.  You really need to find a couple of TB of 
storage to hold things temporarily while you shuffle the pool around.

Now this is risky if you don't have backups, but one possible approach might be 
(a rough command sketch follows the list):
- Take one of the 1TB drives out of your raid-z1 vdev
- Use your three 1TB drives, plus two sparse 1TB files, to create a 5-drive raid-z2
- Disconnect the sparse files.  You now have a 3TB raid-z2 volume in a degraded 
state
- Use zfs send / receive to migrate your data over
- Destroy your original pool and use zpool replace to add those drives to the 
new pool in place of the sparse files
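
A rough command sketch of those steps, assuming the two new drives show up as
c7t3d0 and c7t4d0 and that /var/tmp has a little free space for the sparse
files -- every name here is a placeholder, so map it onto your own system
before typing anything:

  zpool export mp                       # so one of its 1TB members can be reused
  mkfile -n 932g /var/tmp/sp1 /var/tmp/sp2
  zpool create -f newpool raidz2 c7t2d0 c7t3d0 c7t4d0 /var/tmp/sp1 /var/tmp/sp2
  zpool import mp                       # comes back without c7t2d0, i.e. degraded
  zpool offline newpool /var/tmp/sp1    # keep real data off the file vdevs
  zpool offline newpool /var/tmp/sp2
  # copy the data over, destroy mp, then zpool replace the files with the freed drives

(If zpool refuses to offline the second file, leaving it online is usually fine
too: it can only grow as large as the free space where it lives.)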

Of course, this would be even better if you could get that extra 1TB drive now. 
 That would give you an end result of a 6 drive raid-z2 volume.  The only 
danger is that there's a real risk of data loss if you don't have backups, but 
if you did have backups, you wouldn't need such a complicated process to move 
your data...


Re: [zfs-discuss] Help with simple (?) reconfigure of zpool

2008-11-01 Thread Ross
PS.  Without backups, single parity raid really isn't something I'd recommend 
with 1TB drives.  Personally I'd take backups of any critical files and start 
migrating as soon as I could.


Re: [zfs-discuss] questions on zfs backups

2008-11-01 Thread Ross
Is there anything better than star?  That was what I planned to use.  Simple, 
cheap, and compatible with just about anything :)


Re: [zfs-discuss] Help with simple (?) reconfigure of zpool

2008-11-01 Thread Marc Bevand
Ross  googlemail.com> writes:
> Now this is risky if you don't have backups, but one possible approach might be:
> - Take one of the 1TB drives off your raid-z pool
> - Use your 3 1TB drives, plus two sparse 1TB files and create a 5 drive raid-z2
> - disconnect the sparse files.  You now have a 3TB raid-z2 volume in a degraded state
> - use zfs send / receive to migrate your data over
> - destroy your original pool and use zpool replace to add those drives to the new pool in place of the sparse files

This would work but it would give the original poster a raidz2 with only 3TB 
of usable space when he really wants a 4TB raidz1.

Fortunately, Robert, a similar procedure exists to end up with exactly the 
pool config you want, without requiring any other temporary drives. Before I go 
further, let me tell you there is a real risk of losing your data, because the 
procedure I describe below uses temporary striped pools (equivalent to raid0) 
to copy data around, and as you know raid0 is the least reliable raid 
mechanism. Also, the procedure involves lots of manual steps.

So, let me first represent your current pool config in compact form using 
drive names describing their capacity:
  pool (2.6TB usable):  raidz a-1t b-1t c-1t  raidz d-320g e-400g f-400g

Export the 1st pool, create a 2nd temporary striped pool made of your 2 new 
drives plus f-400g, reimport the 1st pool (f-400g should show up as missing in 
the 1st one):
  1st pool (2.6TB usable):  raidz a-1t b-1t c-1t  raidz d-320g e-400g 

  2nd pool (2.4TB usable):  g-1t h-1t f-400g

Copy your data to the 2nd pool, destroy the 1st one and create a 3rd temporary 
striped pool made of the 2 smallest drives:
  1st pool (destroyed): (unused drives: a-1t b-1t c-1t)
  2nd pool (2.4TB usable):  g-1t h-1t f-400g
  3rd pool (0.7TB usable):  d-320g e-400g

Create 2 sparse files x-1t and y-1t of 1 TB each on the 3rd pool ("mkfile -n 
932g x-1t y-1t", 1TB is about 932GiB), and recreate the 1st pool with a raidz 
vdev made of 3 physical 1TB drives and the 2 sparse files:
  1st pool (4.0TB usable(*)):  raidz a-1t b-1t c-1t x-1t y-1t
  2nd pool (2.4TB usable): g-1t h-1t f-400g
  3rd pool (0.7TB usable): d-320g e-400g

(*) 4.0TB virtually; in practice the sparse files won't be able to allocate 
1TB of disk blocks because they are backed by the 3rd pool which is much 
smaller.

Offline one of the sparse files ("zpool offline") of the 1st pool to prevent 
at least one of them from allocating disk blocks:
  1st pool (4.0TB usable(**)):  raidz a-1t b-1t c-1t x-1t 
  2nd pool (2.4TB usable):  g-1t h-1t f-400g
  3rd pool (0.7TB usable):  d-320g e-400g

(**) At that point x-1t can grow to at least 0.7 TB because it is the only 
consumer of disk blocks on the 3rd pool; which means the 1st pool can now hold 
at least 0.7*4 = 2.8 TB in practice.

Now you should be able to copy all your data from the 2nd pool back to the 1st 
one. When done, destroy the 2nd pool:
  1st pool (4.0TB usable):  raidz a-1t b-1t c-1t x-1t 
  2nd pool (destroyed): (unused drives: g-1t h-1t f-400g)
  3rd pool (0.7TB usable):  d-320g e-400g

Finally, replace x-1t and the offlined y-1t with g-1t and h-1t 
("zpool replace"):
  1st pool (4.0TB usable):  raidz a-1t b-1t c-1t g-1t h-1t
  2nd pool (destroyed): (unused drives: f-400g)
  3rd pool (0.7TB usable):  d-320g e-400g

And destroy the 3rd pool.
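
Put in command form -- device names guessed from the zpool status you posted,
with the two new drives assumed to appear as c7t3d0 and c7t4d0; treat it as a
sketch and re-check every name against your own hardware first:

  zpool export mp
  zpool create -f tmp2 c7t3d0 c7t4d0 c6d0      # striped 2nd pool: g-1t h-1t f-400g
  zpool import mp                              # now degraded, f-400g missing
  # ... copy everything from mp to tmp2 and verify it ...
  zpool destroy mp
  zpool create -f tmp3 c4d0 c5d0               # striped 3rd pool: d-320g e-400g
  mkfile -n 932g /tmp3/x-1t /tmp3/y-1t
  zpool create -f mp raidz c7t2d0 c7t0d0 c7t1d0 /tmp3/x-1t /tmp3/y-1t
  zpool offline mp /tmp3/y-1t
  # ... copy everything back from tmp2 to mp and verify again ...
  zpool destroy tmp2
  zpool replace mp /tmp3/x-1t c7t3d0           # -f may be needed, since these disks
  zpool replace mp /tmp3/y-1t c7t4d0           # belonged to the destroyed tmp2
  zpool destroy tmp3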

-marc




Re: [zfs-discuss] questions on zfs backups

2008-11-01 Thread Carson Gaspar
On 11/1/2008 11:28 AM, David Magda wrote:
> On Oct 31, 2008, at 13:13, Richard Elling wrote:
>
>> Paul Kraus wrote:
>>> Is there a ufsdump equivalent for ZFS ? For home use I really don't
>>> want to have to buy a NetBackup license.
>> No, and it is unlikely to happen.  To some degree, ufsdump existed
>> because of deficiencies in other copy applications, which have been
>> largely fixed over the past 25+ years.
>
> How about stabilization of the 'zfs send' stream's format?
>
> Also I'm surprised Jorg Schilly hasn't chimed yet suggesting star. :)

We need _some_ backup format that will preserve ZFS ACLs. star doesn't 
(yet), nor does any other version of tar that I know of. Sun's cpio 
didn't use to (I haven't tested it recently, anyone know for sure?), and 
if it does now, it's still cpio *shudder*. Even rsync doesn't support 
ZFS ACLs (yet).

It would be _really_ nice if someone familiar with the rather badly 
documented ZFS ACL APIs would contribute code to give us some working 
options. I suspect star and rsync are the least work, as they already 
have ACL frameworks. And if Sun's cpio doesn't have ZFS ACL support yet, 
that really needs to happen.

I actually took a look at doing this for rsync, but I didn't have enough 
time to learn the API by trial-and-error.

-- 
Carson


Re: [zfs-discuss] questions on zfs backups

2008-11-01 Thread Carson Gaspar
On 11/1/2008 4:06 PM, Carson Gaspar wrote:
...
> (yet), nor does any other version of tar that I know of. Sun's cpio
> didn't use to (I haven't tested it recently, anyone know for sure?), and
> if it does now, it's still cpio *shudder*. Even rsync doesn't support
...
> have ACL frameworks. And if Sun's cpio doesn't have ZFS ACL support yet,
> that really needs to happen.

OK, I just tested with Sol 10 U5 (fully patched as of 2 weeks ago), and 
Sun's cpio _does_ preserve ZFS ACLs. So does Sun's cp, FYI. Although 
cp's man page is unclear about how to preserve both ACLs and extended 
attributes.

So for now I'm forced to grit my teeth and recommend Sun's cpio for full 
backups. Just make sure you include the options -P (for ACLs) and -@ 
(for extended attributes) if you want a full backup. You probably also 
want to test what restores look like from a non-Sun cpio (I haven't done 
this).
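
For anyone wanting to try it, a bare-bones example -- the paths are
placeholders, and the exact option behaviour is worth checking against
cpio(1) on your release:

  # back up a tree, keeping ACLs (-P) and extended attributes (-@)
  cd /export/home && find . -depth -print | cpio -o -P -@ -O /backup/home.cpio
  # restore it elsewhere
  cd /restore/home && cpio -i -d -m -P -@ -I /backup/home.cpio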

-- 
Carson


Re: [zfs-discuss] ZFS Pool can't be imported on fresh install with one broken vdev

2008-11-01 Thread Christian Walther
Nobody?
Should I open a bug report?

Maybe some more info: the port the broken disk was attached to is now used by 
the new boot device containing a fresh zpool. I didn't want to lose all the 
information from the old root pool, so I decided to use another disk 
temporarily.
Is it possible that the zpool import fails not because there's a device 
missing, but because the port of this missing device is in use by another disk 
belonging to a different pool?


Re: [zfs-discuss] questions on zfs backups

2008-11-01 Thread Ian Collins
Carson Gaspar wrote:
> On 11/1/2008 4:06 PM, Carson Gaspar wrote:
> ...
>   
>> (yet), nor does any other version of tar that I know of. Sun's cpio
>> didn't use to (I haven't tested it recently, anyone know for sure?), and
>> if it does now, it's still cpio *shudder*. Even rsync doesn't support
>> 
> ...
>   
>> have ACL frameworks. And if Sun's cpio doesn't have ZFS ACL support yet,
>> that really needs to happen.
>> 
>
> OK, I just tested with Sol 10 U5 (fully patched as of 2 weeks ago), and 
> Sun's cpio _does_ preserve ZFS ACLs. So does Sun's cp, FYI. Although 
> cp's man page is unclear about how to preserve both ACLs and extended 
> attributes.
>
>   
So does tar.

-- 
Ian.



Re: [zfs-discuss] questions on zfs backups

2008-11-01 Thread Joerg Schilling
Carson Gaspar <[EMAIL PROTECTED]> wrote:

> > Also I'm surprised Jorg Schilly hasn't chimed yet suggesting star. :)
>
> We need _some_ backup format that will preserve ZFS ACLs. star doesn't 
> (yet), nor does any other version of tar that I know of. Sun's cpio 
> didn't use to (I haven't tested it recently, anyone know for sure?), and 
> if it does now, it's still cpio *shudder*. Even rsync doesn't support 
> ZFS ACLs (yet).

I tried to discuss the archive format for ACLs with people from Sun.
It seems that no one is interested.

When I had time to implement ZFS ACL support, the implementation of Sun's 
libsec was broken.

If the archive format used by Sun tar and Sun cpio does not include numerical 
user/group ids in addition to the names, the usability of backups made with 
these programs is extremely limited.


I am currently working on cdrtools-3.0-final. Once this is ready, I have more 
time to add ZFS ACL support to star.

Jörg

-- 
 EMail:[EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin
   [EMAIL PROTECTED](uni)  
   [EMAIL PROTECTED] (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily


[zfs-discuss] luactivate: how to fix things from boot:net?

2008-11-01 Thread Vincent Fox
So after tinkering with lucreate and luactivate I now have several boot 
environments but the active one is unfortunately not bootable.

How can I access the luactivate command from boot:net?

boot net:dhcp -s
mkdir /tmp/mnt
zpool import -R /tmp/mnt rpool

I poke around in /tmp/mnt but do not find usr/sbin under there.

This is on Sparc.  I suppose I could manually edit
/tmp/mnt/rpool/boot/menu.lst

It just seems inelegant even if correct.  Why doesn't the net boot image 
include the lu commands?  This is very frustrating to repair.


Re: [zfs-discuss] Help with simple (?) reconfigure of zpool

2008-11-01 Thread Robert Rodriguez
Thank you for all of the thoughtful replies everyone.

Tomorrow I will attempt the procedure you so kindly outlined Marc.

A couple of follow-up questions: have you done anything similar before?  Can you 
assess the risk involved here?  Does the fact that the pool is currently at 90% 
usage change this in any way?  I'm just looking for a little reassurance, but I 
do see the genius in the procedure here.


Re: [zfs-discuss] questions on zfs backups

2008-11-01 Thread Ian Collins
Joerg Schilling wrote:
> Carson Gaspar <[EMAIL PROTECTED]> wrote:
>
>   
>>> Also I'm surprised Jorg Schilly hasn't chimed yet suggesting star. :)
>>>   
>> We need _some_ backup format that will preserve ZFS ACLs. star doesn't 
>> (yet), nor does any other version of tar that I know of. Sun's cpio 
>> didn't use to (I haven't tested it recently, anyone know for sure?), and 
>> if it does now, it's still cpio *shudder*. Even rsync doesn't support 
>> ZFS ACLs (yet).
>> 
>
> I tried to discuss the archive format for ACLs with people from Sun.
> It seems that noone is interested.
>
> When I has time to implement ZFS ACL support, the implementation of Sun's 
> libsec was broken. 
>
> If the archive format used by Sun tar and Sun cpio does not include numerical 
> user/group ids in addition to the names, the usability of backups made with 
> these programs is extemely limited.
>
>   
It does:

user:icollins:r-:---:allow:1005,user:ian:r-:---:allow:100

-- 
Ian.



Re: [zfs-discuss] luactivate: how to fix things from boot:net?

2008-11-01 Thread Ian Collins
Vincent Fox wrote:
> So after tinkering with lucreate and luactivate I now have several boot 
> environments but the active one is unfortunately not bootable.
>
> How can I access the luactivate command from boot:net?
>
>   
install-discuss would be a better place to ask.

-- 
Ian.



Re: [zfs-discuss] Help with simple (?) reconfigure of zpool

2008-11-01 Thread Marc Bevand
Robert Rodriguez  comcast.net> writes:
> 
> A couple of follow up question, have you done anything similar before?

I have done similar manipulations to experiment with ZFS
(using files instead of drives).

> Can you assess the risk involved here?

If any one of your 8 drives dies during the procedure, you are going
to lose some data, plain and simple. I would especially be worried
about the 2 brand-new drives that were just bought. For the existing
drives, you are probably the best person to estimate the probability
of them dying, as you know their history (have they been running 24/7
for 1-2 years with periodic scrubs and not a single problem? Then they
are probably OK).

IMHO you can reduce the risk a lot by scrubbing everything (see the two 
commands after this list):
- before you start, scrub your existing pool (pool #1)
- scrub pool #2 after copying data to it and before destroying pool #1
- scrub pool #1 (made of sparse files) and pool #3 (backing the sparse
  files) after copying from pool #2 to #1
- rescrub pool #1 after replacing the sparse files with real drives
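
Each scrub pass is just (pool names as they appear in zpool status):

  zpool scrub mp
  zpool status -v mp    # re-run until it reports "scrub completed ... with 0 errors"

and the same two commands against the temporary pools at each stage.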

> Does the fact that the pool is currently at 90% usage change this
> in any way.

Nope.

-marc



Re: [zfs-discuss] luactivate: how to fix things from boot:net?

2008-11-01 Thread Vincent Fox
Thanks, I have restated the question over there.

Just thought this was a ZFS question since I am doing Sparc ZFS root mirrors.