Re: [zfs-discuss] Proposal: multiple copies of user data

2006-09-20 Thread Wout Mertens

Just a "me too" mail:

On 13 Sep 2006, at 08:30, Richard Elling wrote:


Is this use of "slightly" based upon disk failure modes?  That is, when
disks fail do they tend to get isolated areas of badness compared to
complete loss?  I would suggest that complete loss should include
someone tripping over the power cord to the external array that houses
the disk.


The field data I have says that complete disk failures are the exception.


It's the same here. In our 100-laptop population over the last 2 years,
we had 2 dead drives and 10 or so with I/O errors.



BTW, this feature will be very welcome on my laptop!  I can't wait :-)


I, too, would love having two copies of my important data on my laptop
drive. Laptop drives are small enough as they are; there's no point in
storing the OS, tmp and swap files twice as well.


So if ditto-data blocks aren't hard to implement, they would be  
welcome. Otherwise there's still the mirror-split-your-drive approach.
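
For illustration, a sketch of how such a per-filesystem setting might be
used if the proposal lands as a "copies" property - the property name and
syntax here are my assumption, not shipping behaviour:

# duplicate user data on the single laptop disk, but not the OS, tmp or swap
zfs create rpool/export/home
zfs set copies=2 rpool/export/home     # each data block written twice
zfs get copies rpool/export/home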


Wout.


Re: [zfs-discuss] ztune

2006-09-20 Thread Robert Milkowski
Hello Roch,

Tuesday, September 19, 2006, 5:00:22 PM, you wrote:

R> Tuning is generally evil, so be extra cautious with this;
R> with time, as we understand the beast, we'll get rid of
R> such things:
R> 
R> http://blogs.sun.com/roch/entry/tuning_the_knobs

And changing the vdev prefetch size can make a HUGE difference - at least
it did here. Generally, I'd guess, the larger the working set and the more
random the read I/O, the more benefit there is from lowering it.
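
For reference, a sketch of how the vdev prefetch size could be lowered on
Solaris of that vintage via /etc/system - this assumes the
zfs_vdev_cache_bshift tunable; check the name against your kernel and
Roch's script before relying on it:

* shrink vdev prefetch from the 64 KB default (2^16) to 8 KB (2^13);
* tends to help large, random-read working sets
set zfs:zfs_vdev_cache_bshift = 13

(A reboot is needed for /etc/system changes to take effect.)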

Roch - thank you for that script.

-- 
Best regards,
 Robert                          mailto:[EMAIL PROTECTED]
                                   http://milek.blogspot.com



[zfs-discuss] Some questions about how to organize ZFS-based filestorage

2006-09-20 Thread Sergey
Hi all,

I am trying to organize our small (and only) filestorage using ZFS, and
trying to think ZFS-style :)

So I have a SF X4100 (2 x dual-core AMD Opteron 280, 4 GB of RAM, Solaris 10
x86 06/06 64-bit kernel + updates), a Sun Fibre Channel HBA card (Qlogic-based)
and an Apple Xraid 7 TB (2 RAID controllers with 7 x 500 GB ATA disks per
controller). Two internal SAS drives are in RAID1 mode using the built-in LSI
controller.

The Xraid is configured as follows: 6 disks in HW RAID 5 plus one spare
disk per controller.

So I have :

# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
   0. c0t2d0 
  /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci1000,[EMAIL 
PROTECTED]/[EMAIL PROTECTED],0
   1. c4t600039317312d0 
  /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci1077,[EMAIL 
PROTECTED]/[EMAIL PROTECTED],0/[EMAIL PROTECTED],0
   2. c5t60003931742Bd0 
  /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci1077,[EMAIL 
PROTECTED],1/[EMAIL PROTECTED],0/[EMAIL PROTECTED],0


I need a place to keep multiple builds of our products (a huge number of small
files). This will take about 2 TB, so it's quite logical to give it the whole
of "1." or "2." from the output above. What would be the best block size to
supply to the "zfs create" command to get the most from a filesystem that
holds a huge number of small files?

The other pool will host users' home directories, project files and other files.

Now I am thinking of creating two separate ZFS pools, with "1." and "2." as
the only physical devices (one in each pool).

Or would I be better off creating one ZFS pool that includes both "1." and "2."?

Later on I will use NFS to share this filestorage between Linux, Solaris, 
OpenSolaris and MacOSX hosts.
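
A sketch of the two-pool layout described above, using the LUNs from the
format listing (names are just illustrative; this is only one possible layout):

# one pool per Xraid controller LUN
zpool create builds c4t600039317312d0
zpool create tank   c5t60003931742Bd0
zfs create builds/products
zfs create tank/home

# ...or a single pool striped across both LUNs:
# zpool create tank c4t600039317312d0 c5t60003931742Bd0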
 
 


Re: [zfs-discuss] Some questions about how to organize ZFS-based filestorage

2006-09-20 Thread James C. McPherson

Sergey wrote:

I am trying to organize our small (and only) filestorage using ZFS, and
trying to think ZFS-style :)

..

AVAILABLE DISK SELECTIONS: 0. c0t2d0 
 /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci1000,[EMAIL 
PROTECTED]/[EMAIL PROTECTED],0 1. c4t600039317312d0
 
/[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci1077,[EMAIL PROTECTED]/[EMAIL PROTECTED],0/[EMAIL PROTECTED],0 2.
c5t60003931742Bd0  
/[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci1077,[EMAIL PROTECTED],1/[EMAIL PROTECTED],0/[EMAIL PROTECTED],0



I need a place to keep multiple builds of our products (a huge number of
small files). This will take about 2 TB, so it's quite logical to give it
the whole of "1." or "2." from the output above. What would be the best
block size to supply to the "zfs create" command to get the most from a
filesystem that holds a huge number of small files?


You're not quite thinking ZFS-style yet. With ZFS you do not have to
worry about block sizes unless you want to - the filesystem handles
that for you.
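
If you do want a knob later, the per-filesystem recordsize property is it.
A small sketch with hypothetical pool/filesystem names - the 128K default
is usually fine, so measure before changing it:

zfs create tank/builds
zfs set recordsize=8k tank/builds    # can suit many-small-file workloads
zfs get recordsize tank/builds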


cheers,
James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
  http://www.jmcp.homeunix.com/blog
Find me on LinkedIn @ http://www.linkedin.com/in/jamescmcpherson


[zfs-discuss] is there any way to merge pools and zfs file systems together?

2006-09-20 Thread Krzys
Weird question, but I have two separate pools with a ZFS file system on each
of them, and I wanted to see if there is a way to merge them together. Or do
I have to dump the content to tape (or some other location), destroy both
pools, make one big pool out of the two, create ZFS on it, and recover the
data?

Thanks for any help or info.

Chris



Re: [zfs-discuss] is there any way to merge pools and zfs file systems together?

2006-09-20 Thread George Wilson

Chris,

You could use 'zfs send/recv' to migrate one of the filesystems to the pool
you want to keep. You will need to make sure that the properties for that
filesystem are correct after migrating it over. Then you can destroy the
pool you just migrated off of and add those disks to the pool you migrated
to. This will save you from having to destroy everything and restore.
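
A minimal sketch of that sequence, with hypothetical pool and device names
(poolA is kept, poolB is folded into it):

# snapshot and migrate the filesystem from poolB into poolA
zfs snapshot poolB/data@move
zfs send poolB/data@move | zfs recv poolA/data
# re-check properties (mountpoint, sharenfs, quotas, ...) on poolA/data,
# then retire poolB and hand its disk(s) to poolA
zpool destroy poolB
zpool add poolA c2t1d0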


Thanks,
George

Krzys wrote:
Weird question, but I have two separate pools with a ZFS file system on
each of them, and I wanted to see if there is a way to merge them together.
Or do I have to dump the content to tape (or some other location), destroy
both pools, make one big pool out of the two, create ZFS on it, and recover
the data?

Thanks for any help or info.

Chris



Re: [zfs-discuss] is there any way to merge pools and zfs file systems together?

2006-09-20 Thread Krzys
Great, yeah, that's an option, but I don't have enough space, so I guess I
will have to copy those files off the system and then do it - or at least
for one pool, from what I can see - and then add another drive to my pool...
that will work slightly faster :)


thanks.

Chris


On Wed, 20 Sep 2006, George Wilson wrote:


Chris,

You could use 'zfs send/recv' to migrate one of the filesystems to the pool
you want to keep. You will need to make sure that the properties for that
filesystem are correct after migrating it over. Then you can destroy the pool
you just migrated off of and add those disks to the pool you migrated to.
This will save you from having to destroy everything and restore.


Thanks,
George

Krzys wrote:
Weird question, but I have two separate pools with a ZFS file system on
each of them, and I wanted to see if there is a way to merge them together.
Or do I have to dump the content to tape (or some other location), destroy
both pools, make one big pool out of the two, create ZFS on it, and recover
the data?

Thanks for any help or info.

Chris








[zfs-discuss] Re: Disk Layout for New Storage Server

2006-09-20 Thread Eric Hill
I really like that idea.  That indeed would provide both excellent 
reliability (the ability to lose an entire shelf) and performance (striping 
across 6 raidz2 vdevs).  Thanks for the suggestion!
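
For illustration, such a layout might be built along these lines (device
names are hypothetical; here each raidz2 vdev takes one disk from each of
six shelves, so a whole shelf can fail without losing the pool):

zpool create bigpool \
    raidz2 c2t0d0 c3t0d0 c4t0d0 c5t0d0 c6t0d0 c7t0d0 \
    raidz2 c2t1d0 c3t1d0 c4t1d0 c5t1d0 c6t1d0 c7t1d0
# ...repeat for the remaining four raidz2 vdevs, then check:
zpool status bigpool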
 
 


[zfs-discuss] Veritas NetBackup Support for ZFS

2006-09-20 Thread Bob Connelly

Hi Folks:

Is anyone aware whether or not Veritas Enterprise NetBackup supports 
ZFS? The customer is currently using NetBackup version 5.0 but is moving 
to Version 6.0.


Thanks
Bob


Re: [zfs-discuss] Veritas NetBackup Support for ZFS

2006-09-20 Thread Kristofer
ZFS is not supported by NetBackup engineering yet.  No ETA on when that
is supposedly coming out.  The files themselves appear to be backed up
(I tested this using NetBackup 6.0MP3), and can be restored, but ACL
information and such is not backed up.  Your bpbkar log might fill up
with warning messages about not being able to obtain the ACL
information, as well.

On 9/20/06, Bob Connelly <[EMAIL PROTECTED]> wrote:
Hi Folks:

Is anyone aware whether or not Veritas Enterprise NetBackup supports
ZFS? The customer is currently using NetBackup version 5.0 but is moving
to Version 6.0.

Thanks
Bob



Re: [zfs-discuss] Veritas NetBackup Support for ZFS

2006-09-20 Thread Jeff A. Earickson

Hi,

I am using Netbackup 6.0 MP3 on several ZFS systems just fine.  I
think that NBU won't back up some exotic ACLs of ZFS, but if you
are using ZFS like other filesystems (UFS, etc) then there aren't
any issues.

Jeff Earickson
Colby College

On Wed, 20 Sep 2006, Bob Connelly wrote:


Date: Wed, 20 Sep 2006 19:44:53 -0400
From: Bob Connelly <[EMAIL PROTECTED]>
To: zfs-discuss@opensolaris.org, Robert Connelly <[EMAIL PROTECTED]>
Subject: [zfs-discuss] Veritas NetBackup Support for ZFS

Hi Folks:

Is anyone aware whether or not Veritas Enterprise NetBackup supports ZFS? The 
customer is currently using NetBackup version 5.0 but is moving to Version 
6.0.


Thanks
Bob


Re: [zfs-discuss] zfs scrub question

2006-09-20 Thread Wee Yeh Tan

Peter,

I'll first check /var/adm/messages to see if there are any problems
with the following disks:

c10t600A0B800011730E66F444C5EE7Ed0
c10t600A0B800011730E66F644C5EE96d0
c10t600A0B800011652EE5CF44C5EEA7d0
c10t600A0B800011730E66F844C5EEBAd0

The checksum errors seem to be concentrated around these.
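
A quick way to check those, as a sketch (standard Solaris commands; the
device name is one of the ones quoted above):

# per-device soft/hard/transport error counters
iostat -En c10t600A0B800011730E66F444C5EE7Ed0
# kernel/driver retry messages
grep c10t600A0B800011730E66F444C5EE7Ed0 /var/adm/messages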

--
Just me,
Wire ...


On 9/20/06, Peter Wilk <[EMAIL PROTECTED]> wrote:

All,

I have a customer (IHAC) who called in an issue with the following description:

I have a system which has two ZFS storage pools.  One of the pools is on
hardware which is having problems, so I wanted to start the system with
only one of the two ZFS storage pools.  How do I NOT mount the second ZFS
storage pool?

engineer response:
ZFS has a number of commands for this. If you want to make it so the
system does not use the pool, you can offline the pool until you have a
chance to repair it. From the ZFS manual:

zpool offline <pool> <device>
zpool offline myzfspool c1t1d0

However, note you may not be able to offline it if it is the only device
in the pool, in which case you would have to add another device so that
data can be transferred until the bad drive is replaced; otherwise there
would be data loss.

You may want to check the status with:

zpool status <pool>

Depending on what you find here, you may be able to remove the bad
device and replace it or you may have to try and back up the data,
destroy the pool and recreate it on the new device. If you reference the
ZFS Administration Manual, the full information is listed on pages 135-140.
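
As a sketch, replacing a failing device in place looks roughly like this
(placeholder names):

zpool replace <pool> <old-device> <new-device>
zpool status <pool>        # watch the resilver complete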


customer response:
Since not all the disks for that pool were available, I ended up
exporting the entire zpool.  At that point I could bring up the system
with the zpool that was operational.

After we got the storage subsystem fixed, we brought the second zpool
back online with zpool import.
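
In command form, what the customer describes is roughly (pool name is
hypothetical):

zpool export badpool     # detach the ailing pool so the system boots without it
# ...repair the storage subsystem, then:
zpool import badpool
zpool scrub badpool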

Since we had problems with the storage, we ran zpool scrub on the pool.
Checksum errors were found on a single device in each of the raidz
groups.  I have been told that ZFS will correct these errors.  After
zpool scrub ran to completion, we cleared the errors, and we are now in
the process of running it again.  There are several hours to go, but it
has already flagged additional checksum errors.  I would have thought
the original run of zpool scrub would have fixed these.


I don't fully understand ZFS yet and am just learning of this zpool scrub
command. I believe zpool scrub is similar to an fsck. It appears that zpool
scrub is not resolving the issue... any suggestion would be helpful.

Thanks

Peter

Please respond to me directly, as I may not be on this alias.



The customer was told to run the following commands, and I believe it did not
clear up his issue... see below:

zpool status -v (should list the status and what errors it found)
zpool scrub (one more time to see if more errors are found)
zpool status -v (this should show us an after picture)

Sorry I left that out, yes, you would want to run a zpool clear before
the scrub. You may also want to output a zpool status after the clear to
make sure the count cleared out. When you run the commands, you may just
want to do a script session to capture everything.
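
One way to capture such a session, as a sketch (pool name hypothetical):

script /var/tmp/zpool-scrub.log
zpool status -v tank
zpool clear tank
zpool scrub tank
zpool status -v tank
exit                      # ends the script(1) capture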

Latest email from the customer:

After
I am attaching some output files which show "zpool scrub" running
multiple times and catching checksum errors each time.  Remember, I have
now run zpool scrub about 3-4 times.




=
 __
/_/\
   /_\\ \Peter Wilk -  OS/Security Support
  /_\ \\ /   Sun Microsystems
 /_/ \/ / /  1 Network Drive,  P.O Box 4004
/_/ /   \//\ Burlington, Massachusetts 01803-0904
\_\//\   / / 1-800-USA-4SUN, opt 1, opt 1,#
 \_/ / /\ /  Email: [EMAIL PROTECTED]
  \_/ \\ \   =
   \_\ \\
\_\/

=
