Re: [zfs-discuss] HAMMER

2007-10-17 Thread Robert Milkowski
Hello Dave,

Tuesday, October 16, 2007, 9:17:30 PM, you wrote:

DJ> you mean c9n ? ;)

DJ> does anyone actually *use* compression ?  i'd like to see a poll on how many
DJ> people are using (or would use) compression on production systems that are
DJ> larger than your little department catch-all dumping ground server.  i mean,
DJ> unless you had some NDMP interface directly to ZFS, daily tape backups for
DJ> any large system will likely be an exercise in futility unless the systems
DJ> are largely just archive servers, at which point it's probably smarter to
DJ> perform backups less often, coinciding with the workflow of migrating 
DJ> archive data to it.  otherwise wouldn't the system just plain get pounded?

LDAP servers with several dozen million accounts?
Why? First you get about a 2:1 compression ratio with lzjb, and you also
get better performance.
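
A minimal sketch for anyone who wants to verify this on their own data (the
dataset name here is hypothetical):

  zfs set compression=on tank/ldap    # "on" selects the default lzjb algorithm
  zfs get compressratio tank/ldap     # achieved ratio for data written so far

Note that compression only applies to blocks written after it is enabled, so
compressratio starts out at 1.00x on existing data.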


-- 
Best regards,
 Robert Milkowski    mailto:[EMAIL PROTECTED]
                     http://milek.blogspot.com



Re: [zfs-discuss] practicality of zfs send/receive for failover

2007-10-17 Thread Robert Milkowski
Hello Matthew,

Wednesday, October 17, 2007, 1:46:02 AM, you wrote:

MA> Richard Elling wrote:
>> Paul B. Henson wrote:
>>> On Fri, 12 Oct 2007, Paul B. Henson wrote:
>>>
>>>> I've read a number of threads and blog posts discussing zfs send/receive
>>>> and its applicability in such an implementation, but I'm curious if
>>>> anyone has actually done something like that in practice, and if so how
>>>> well it worked.
>>> So I didn't hear from anyone on this thread actually running such an
>>> implementation in production? Could someone maybe comment on a theoretical
>>> level :) whether this would be realistic for multiple terabytes, or if I
>>> should just give up on it?
>> 
>> It should be more reasonable to use ZFS send/recv than a dumb volume
>> block copy.  It should be on the same order of goodness as rsync-style
>> copying.  I use send/recv quite often, but my wife doesn't have a TByte
>> of pictures (yet :-)

MA> Incremental zfs send/recv is actually orders of magnitude "more goodness"
MA> than rsync (due to much faster finding of changed files).

MA> I know of customers who are using send|ssh|recv to replicate entire thumpers
MA> across the country, in production.  I'm sure they'll speak up here if/when
MA> they find this thread...

I know of such an environment too, though it replicates just across a server room :)
Is it perfect? No... but compared to "legacy" backup it's still much
better in terms of performance and much worse in terms of
manageability.
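
As a rough sketch of the kind of replication being discussed (pool, snapshot
and host names are hypothetical, and this assumes an initial full send of
@yesterday has already been received on the remote side):

  zfs snapshot tank/fs@today
  zfs send -i tank/fs@yesterday tank/fs@today | \
      ssh remotehost zfs receive tank/fs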

-- 
Best regards,
 Robert Milkowski    mailto:[EMAIL PROTECTED]
                     http://milek.blogspot.com



Re: [zfs-discuss] nfs-ownership

2007-10-17 Thread Claus Guttesen
> Is the mount using NFSv4?  If so, there is likely a misguided
> mapping of the user/groups between the client and server.
>
> While not including BSD info, there is a little bit on
> NFSv4 user/group mappings at this blog:
> http://blogs.sun.com/nfsv4

It defaults to NFS ver. 3. As a side note, Samba is running on this
machine as well, and the Windows share is able to read and write to
the sub-partitions (under home/user).

-- 
regards
Claus

When lenity and cruelty play for a kingdom,
the gentlest gamester is the soonest winner.

Shakespeare


Re: [zfs-discuss] HAMMER

2007-10-17 Thread Dave Johnson
From: "Robert Milkowski" <[EMAIL PROTECTED]>
> LDAP servers with several dozen million accounts?
> Why? First you get about a 2:1 compression ratio with lzjb, and you also
> get better performance.

a busy ldap server certainly seems a good fit for compression, but when i 
said "large" i meant large as in bytes and numbers of files :)

seriously, is anyone out there using zfs for large "storage" servers?  you 
know, the same usage that 90% of the storage sold in the world is used for ? 
(yes, i pulled that figure out of my *ss ;)

are my concerns invalid with the current implementation of zfs with 
compression?  is the compression so lightweight that it can be decompressed 
as fast as the disks can stream uncompressed backup data to tape while the 
server is still servicing clients?  the days of "nightly" backups seem long 
gone in the space I've been working in for the last several years... backups run 
almost 'round the clock it seems on our biggest systems (15-30 TB and 
150-300 million files, which may be small by the standards of others of you out 
there.)

what really got my eyes rolling about c9n and prompted my question was all 
this talk about gzip compression and other even heavier-weight compression 
algorithms.  lzjb is relatively lightweight but i could still see it being a 
bottleneck in a 'weekly full backups' scenario unless you had a very new 
system with kilowatts of cpu to spare.  gzip?  puh-lease.  bzip and lzma?  
someone has *got* to be joking.  i see these as ideal candidates for AVS 
scenarios where the application never requires full dumps to tape, but on a 
typical storage server?  the compression would be ideal but would also make 
it impossible to back up in any reasonable "window".

back to my postulation, if it is correct: what about some NDMP interface to 
ZFS?  it seems a more than natural candidate.  in this scenario, 
compression would be a boon since the blocks would already be in a 
compressed state.  I'd imagine this fitting into the 'zfs send' codebase 
somewhere.

thoughts (on either c9n and/or 'zfs send ndmp') ?

-=dave



[zfs-discuss] Upgrade from B62 ZFS Boot/Root to B70b

2007-10-17 Thread Brian Hechinger
How painful is this going to be?  Completely?

-brian
-- 
"Perl can be fast and elegant as much as J2EE can be fast and elegant.
In the hands of a skilled artisan, it can and does happen; it's just
that most of the shit out there is built by people who'd be better
suited to making sure that my burger is cooked thoroughly."  -- Jonathan 
Patschke


Re: [zfs-discuss] nfs-ownership

2007-10-17 Thread Paul Kraus
On 10/16/07, Claus Guttesen <[EMAIL PROTECTED]> wrote:

> I have created some zfs-partitions. First I create the
> home/user-partitions. Beneath that I create additional partitions.
> Then I have done a chown -R for that user. These partitions are shared
> using sharenfs=on. The owner- and group-id is 1009.
>
> These partitions are visible as the user assigned above. But when I
> mount the home/user partition from a FreeBSD client, only the
> top-partition has the proper uid- and gid-assignment. The partitions
> beneath are assigned to root/wheel (uid 0 and gid 0 on FreeBSD).
>
> Am I doing something wrong?

Did you mount both the parent and all the children on the client ?

-- 
Paul Kraus


Re: [zfs-discuss] nfs-ownership

2007-10-17 Thread Claus Guttesen
> > I have created some zfs-partitions. First I create the
> > home/user-partitions. Beneath that I create additional partitions.
> > Then I have done a chown -R for that user. These partitions are shared
> > using sharenfs=on. The owner- and group-id is 1009.
> >
> > These partitions are visible as the user assigned above. But when I
> > mount the home/user partition from a FreeBSD client, only the
> > top-partition has the proper uid- and gid-assignment. The partitions
> > beneath are assigned to root/wheel (uid 0 and gid 0 on FreeBSD).
> >
> > Am I doing something wrong?
>
> Did you mount both the parent and all the children on the client ?

No, I just assumed that the sub-partitions would inherit the same
uid/gid as the parent. I have done a chown -R.

That would be a neat feature, because I have some fairly large
partitions hosted on another server (Solaris 9 on SPARC and vxfs)
which shares disks via NFS. Every time I create a new partition I must
create and mount the partitions on each webserver.

Not that I have many webservers, but it would be nice to create a
/data-partition, then an image-partition below that and then a, b, c,
d, e, f etc. (/data/image/a).

Then all I would have to do is mount the image-partition, and I wouldn't
have to worry about mounting anything other than the image-partition on the
webserver.

-- 
regards
Claus

When lenity and cruelty play for a kingdom,
the gentlest gamester is the soonest winner.

Shakespeare


[zfs-discuss] Home fileserver with solaris 10 and zfs

2007-10-17 Thread Sandro
hi

I am currently running a linux box as my fileserver at home.
It's got eight 250 gig sata2 drives connected to two sata pci controllers and 
configured as one big raid5 with linux software raid.
Linux is (and solaris will be) installed on two separate mirrored disks. 

I've been playing around with solaris 10 and zfs at work and I'm pretty excited 
about it and now I'd like to migrate my fileserver to solaris and zfs.

Now I just wanted to hear your opinion on how to configure these eight disks. 
Should I use raidz or raidz2 and how many pools, block size and so on?

Most of my files are between 200 and 800 megs, and there are about 100,000 files in total.

The new solaris box will have 2 gigs of mem and an AMD dualcore cpu around 2 GHz.

Any suggestions, tips or experiences ?

Thanks in advance

regards
 
 


Re: [zfs-discuss] nfs-ownership

2007-10-17 Thread Paul Kraus
On 10/17/07, Claus Guttesen <[EMAIL PROTECTED]> wrote:

> > Did you mount both the parent and all the children on the client ?
>
> No, I just assumed that the sub-partitions would inherit the same
> uid/gid as the parent. I have done a chown -R.

  Ahhh, the issue is not permissions, but how the NFS server
sees the various directories to share. Each dataset in the zpool is
seen as a separate FS from the OS perspective; each is a separate NFS
share. In which case each has to be mounted separately on the NFS
client.
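
A minimal sketch of what that looks like from a client, assuming the datasets
are shared as /export/home/user and its children (server, dataset and mount
point names are hypothetical); each child needs its own mount:

  mount -t nfs server:/export/home/user       /home/user
  mount -t nfs server:/export/home/user/mail  /home/user/mail
  mount -t nfs server:/export/home/user/src   /home/user/src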

-- 
Paul Kraus


Re: [zfs-discuss] nfs-ownership

2007-10-17 Thread Claus Guttesen
> > > Did you mount both the parent and all the children on the client ?
> >
> > No, I just assumed that the sub-partitions would inherit the same
> > uid/gid as the parent. I have done a chown -R.
>
>   Ahhh, the issue is not permissions, but how the NFS server
> sees the various directories to share. Each dataset in the zpool is
> seen as a separate FS from the OS perspective; each is a separate NFS
> share. In which case each has to be mounted separately on the NFS
> client.

Thank you for the clarification. When mounting the same partitions
from a Windows client I get r/w access to both the parent and
child partitions.

Will it be possible to implement such a feature in NFS?

-- 
regards
Claus

When lenity and cruelty play for a kingdom,
the gentlest gamester is the soonest winner.

Shakespeare


Re: [zfs-discuss] HAMMER

2007-10-17 Thread Carisdad
Dave Johnson wrote:
> From: "Robert Milkowski" <[EMAIL PROTECTED]>
>   
>> LDAP servers with several dozen million accounts?
>> Why? First you get about a 2:1 compression ratio with lzjb, and you also
>> get better performance.
>> 
>
> a busy ldap server certainly seems a good fit for compression, but when i 
> said "large" i meant large as in bytes and numbers of files :)
>
> seriously, is anyone out there using zfs for large "storage" servers?  you 
> know, the same usage that 90% of the storage sold in the world is used for ? 
> (yes, i pulled that figure out of my *ss ;)

We're using ZFS compression on NetBackup Disk Cache Media Servers.  I 
have 3 media servers with 42TB usable each, with compression enabled.  I 
had to wait for Sol10 U4 to run compression because these are T2000s 
and there was a problem where zfs was using only 1 compression thread per 
pool, which made it too slow.  But after U4, I have no problem handling 
bursts of nearly 2Gbit/s of backup streams coming in over the network while 
still spooling to a pair of 30MByte/s tape drives on each server.

-Andy


Re: [zfs-discuss] HAMMER

2007-10-17 Thread Jonathan Loran


We are using zfs compression across 5 zpools, about 45TB of data on 
iSCSI storage.  I/O is very fast, with small fractional CPU usage (seat 
of the pants metrics here, sorry).  We have one other large 10TB volume 
for nearline Networker backups, and that one isn't compressed.  We 
already compress these data on the backup client, and there wasn't any 
more compression to be had on the zpool, so it isn't worth it there. 

There's no doubt that heavier-weight compression would be a problem as 
you say.  One thing that would be ultra cool on the backup pool would be 
to have post-write compression.  After backups are done, the backup 
server sits more or less idle.  It would be cool to do a compress-on-scrub 
operation that could do some really high-level compression.  Then we 
could zfs send | ssh remote | zfs receive to an off-site location with far 
less network bandwidth, not to mention the remote storage could be 
really small.  Data Domain (www.datadomain.com) does block-level 
checksumming to save files as linked lists of common blocks.  They get 
very high compression ratios (in our tests about 6:1, but with more 
frequent full backups, more like 20:1).  Then off-site transfers go that 
much faster.


Jon


-- 
Jonathan Loran
IT Manager
Space Sciences Laboratory, UC Berkeley

Re: [zfs-discuss] HAMMER

2007-10-17 Thread Richard Elling
Jonathan Loran wrote:
> 
> We are using zfs compression across 5 zpools, about 45TB of data on 
> iSCSI storage.  I/O is very fast, with small fractional CPU usage (seat 
> of the pants metrics here, sorry).  We have one other large 10TB volume 
> for nearline Networker backups, and that one isn't compressed.  We 
> already compress these data on the backup client, and there wasn't any 
> more compression to be had on the zpool, so it isn't worth it there. 

cool.

> There's no doubt that heavier-weight compression would be a problem as 
> you say.  One thing that would be ultra cool on the backup pool would be 
> to have post-write compression.  After backups are done, the backup 
> server sits more or less idle.  It would be cool to do a compress-on-scrub 
> operation that could do some really high-level compression.  Then we 
> could zfs send | ssh remote | zfs receive to an off-site location with far 
> less network bandwidth, not to mention the remote storage could be 
> really small.  Data Domain (www.datadomain.com) does block-level 
> checksumming to save files as linked lists of common blocks.  They get 
> very high compression ratios (in our tests about 6:1, but with more 
> frequent full backups, more like 20:1).  Then off-site transfers go that 
> much faster.

Do not assume that a compressed file system will send compressed.  IIRC, it
does not.

But since UNIX is a land of pipe dreams, you can always compress anyway :-)
zfs send ... | compress | ssh ... | uncompress | zfs receive ...

  -- richard


Re: [zfs-discuss] HAMMER

2007-10-17 Thread Jonathan Loran


Richard Elling wrote:
> Jonathan Loran wrote:
...

> Do not assume that a compressed file system will send compressed.  
> IIRC, it
> does not.
Let's say it were possible to detect remote compression support; 
couldn't we send it compressed?  With higher compression rates, wouldn't 
that be smart?  The Internet is not the land of infinite bandwidth that 
we often think it is.

>
> But since UNIX is a land of pipe dreams, you can always compress 
> anyway :-)
> zfs send ... | compress | ssh ... | uncompress | zfs receive ...
>
Gosh, how obvious is that, eh?   Thanks Richard.

>  -- richard

Jon

-- 
Jonathan Loran
IT Manager
Space Sciences Laboratory, UC Berkeley
(510) 643-5146   [EMAIL PROTECTED]




Re: [zfs-discuss] practicality of zfs send/receive for failover

2007-10-17 Thread Paul B. Henson
On Tue, 16 Oct 2007, Matthew Ahrens wrote:

> I know of customers who are using send|ssh|recv to replicate entire
> thumpers across the country, in production.  I'm sure they'll speak up
> here if/when they find this thread...

Ah, that's who I'd like to hear from :)... Thanks for the secondhand
information though...


-- 
Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst  |  [EMAIL PROTECTED]
California State Polytechnic University  |  Pomona CA 91768


Re: [zfs-discuss] HAMMER

2007-10-17 Thread Tim Spriggs
Jonathan Loran wrote:
> Richard Elling wrote:
>> Do not assume that a compressed file system will send compressed.
>> IIRC, it does not.
>
> Let's say it were possible to detect remote compression support;
> couldn't we send it compressed?  With higher compression rates, wouldn't
> that be smart?  The Internet is not the land of infinite bandwidth that
> we often think it is.
>
>> But since UNIX is a land of pipe dreams, you can always compress
>> anyway :-)
>> zfs send ... | compress | ssh ... | uncompress | zfs receive ...
>
> Gosh, how obvious is that, eh?   Thanks Richard.
>
> Jon

even better is the ability of ssh to compress the stream for you via ssh 
-C :)
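
For example (dataset, snapshot and host names hypothetical):

  zfs send tank/fs@snap | ssh -C remotehost zfs receive tank/fs_copy

ssh -C uses zlib compression on the whole connection, so it helps most on
slow links and can actually slow things down on a fast LAN.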



[zfs-discuss] Lack of physical memory evidences

2007-10-17 Thread Dmitry Degrave
In the pre-ZFS era, we had observable parameters like scan rate and anonymous 
page-in/-out counters to detect situations where a system is running short of 
physical memory. With ZFS, it's difficult to use those parameters to recognize 
such situations. Does anyone have an idea of what we can use for the same 
purpose now?
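
For context, the ZFS-side counters most people end up looking at (a rough
sketch; the arcstats kstat should exist on Solaris 10 U4 and recent Nevada
builds):

  kstat -p zfs:0:arcstats:size zfs:0:arcstats:c   # current ARC size and target
  echo ::memstat | mdb -k                         # kernel/anon/free page breakdown

A shrinking ARC target ("c") together with a small free column from ::memstat
is a reasonable hint that the box is under real memory pressure rather than
just full of ARC.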

Thanks in advance,
Dmeetry
 
 


Re: [zfs-discuss] Adding my own compression to zfs

2007-10-17 Thread roland
We're at $300 now - a friend of mine just added another $100.
 
 


[zfs-discuss] df command in ZFS?

2007-10-17 Thread David Runyon
I was presenting to a customer at the EBC yesterday, and one of the 
people at the meeting said using df with ZFS really drives him crazy (no, 
that's all the detail I have).  Any ideas/suggestions?

-- 
David Runyon
Disk Sales Specialist

Sun Microsystems, Inc.
4040 Palm Drive
Santa Clara, CA 95054 US
Mobile 925 323-1211
Email [EMAIL PROTECTED]




Re: [zfs-discuss] df command in ZFS?

2007-10-17 Thread MC
I asked this recently, but haven't done anything else about it:
http://www.opensolaris.org/jive/thread.jspa?messageID=155583#155583
 
 


Re: [zfs-discuss] df command in ZFS?

2007-10-17 Thread Mike Gerdts
On 10/17/07, David Runyon <[EMAIL PROTECTED]> wrote:
> I was presenting to a customer at the EBC yesterday, and one of the
> people at the meeting said using df in ZFS really drives him crazy (no,
> that's all the detail I have).  Any ideas/suggestions?

I suspect that this is related to the notion that file systems are
cheap and that the traditional notion of quotas is replaced by cheap file
systems.  This means that a system with 1000 users that
previously had a small number of file systems now has over 1000 file
systems.  What used to be relatively simple output from df now turns
into 40+ screens[1] in a default-sized terminal window.

1.  If you are in this situation, there is a good chance that the
formatting of df causes line folding or wrapping that doubles the
number of lines to 80+ screens of df output.
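
In that situation most people end up reaching for the ZFS-aware tools instead
of df (pool name hypothetical):

  zfs list -r tank    # per-dataset used/available/referenced, one line each
  zpool list          # one line per pool: size, used, available

zpool list in particular gives the one-line-per-pool summary that df users
are often really after.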

-- 
Mike Gerdts
http://mgerdts.blogspot.com/


[zfs-discuss] GRUB + zpool version mismatches

2007-10-17 Thread Jason King
Apparently with zfs boot, if the zpool is at a version GRUB doesn't
recognize, it merely ignores any zfs entries in menu.lst and
instead boots the first entry it thinks it can boot.  I ran
into this myself due to some boneheaded mistakes while doing a very
manual zfs / install at the summit.

Shouldn't it at least spit out a warning?  If so, I have no issues
filing a bug, but wanted to bounce it off those more knowledgeable in
this area than I am.
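
A quick way to see where a pool stands relative to the bits you are running
(it won't tell you what the GRUB in your boot archive understands, but it at
least shows whether a pool has been upgraded past the installed software):

  zpool upgrade -v    # on-disk versions this zpool implementation supports
  zpool upgrade       # lists any pools formatted with an older version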


[zfs-discuss] Fracture Clone Into FS

2007-10-17 Thread Jason J. W. Williams
Hey Guys,

It's not possible yet to fracture a snapshot or clone into a
self-standing filesystem, is it?  Basically, I'd like to fracture a
snapshot/clone into its own FS so I can roll back past that snapshot in
the original filesystem and still keep that data.

Thank you in advance.

Best Regards,
Jason


Re: [zfs-discuss] Home fileserver with solaris 10 and zfs

2007-10-17 Thread Ian Collins
Sandro wrote:
> hi
>
> I am currently running a linux box as my fileserver at home.
> It's got eight 250 gig sata2 drives connected to two sata pci controllers and 
> configured as one big raid5 with linux software raid.
> Linux is (and solaris will be) installed on two separate mirrored disks. 
>
> I've been playing around with solaris 10 and zfs at work and I'm pretty 
> excited about it and now I'd like to migrate my fileserver to solaris and zfs.
>
> Now I just wanted to hear your opinion on how to configure these eight disks. 
> Should I use raidz or raidz2 and how many pools, block size and so on?
>
> Most of my files are between 200 and 800 megs, and there are about 100,000 files in total.
>
> The new solaris box will have 2 gigs of mem and an AMD dualcore cpu around 
> 2 GHz.
>
> Any suggestions, tips or experiences ?
>
>   
Have a look back at the many similar threads in this list; the
general answer is "it depends".  There are many possibilities with 8
drives, each offering different performance/reliability trade-offs.
Ian


Re: [zfs-discuss] ZFS+NFS on storedge 6120 (sun t4)

2007-10-17 Thread Joel Miller
Ok...got a break from the 25xx release...
Trying to catch up so...sorry for the late response...

The 6120 firmware does not support the Cache Sync command at all...

You could try using a smaller blocksize setting on the array to attempt to 
reduce the number of read/modify/writes that you will incur...

It also can be important to understand how zfs attempts to make aligned 
transactions as well, since a single 128k write that starts at the beginning 
of a RAID stripe is guaranteed to do a full-stripe write vs. 2 
read/modify/write stripes.

I have considered making an unsupported firmware that turns it into a caching 
JBOD...I just have not had any "infinite spare time"

-Joel
 
 


[zfs-discuss] characterizing I/O on a per zvol basis.

2007-10-17 Thread Nathan Kroenert
Hey all -

Time for my silly question of the day, and before I bust out vi and 
dtrace...

Is there a simple, existing way I can observe the read / write / IOPS on 
a per-zvol basis?

If not, is there interest in having one?

Cheers!

Nathan.
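
For reference, zpool iostat only reports per-pool and per-vdev numbers today.
A rough, untested DTrace sketch of the sort of thing "busting out dtrace"
might produce (zvol_strategy and the buf fields are assumptions based on the
zvol code of this era, so treat it as a starting point only):

  dtrace -n 'fbt:zfs:zvol_strategy:entry
  {
      @iops[getminor(args[0]->b_edev)]  = count();
      @bytes[getminor(args[0]->b_edev)] = sum(args[0]->b_bcount);
  }'

The aggregation key is the zvol's minor number, which still has to be mapped
back to a name under /dev/zvol/.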


Re: [zfs-discuss] Fracture Clone Into FS

2007-10-17 Thread Bill Moore
I may not be understanding your usage case correctly, so bear with me.

Here is what I understand your request to be.  Time is increasing from
left to right.

A -- B -- C -- D -- E
 \
  - F -- G

Where E and G are writable filesystems and the others are snapshots.

I think you're saying that you want to, for example, keep G and roll E
back to A, keeping A, B, F, and G.

If that's correct, I think you can just clone A (getting H), promote H,
then delete C, D, and E.  That would leave you with:

A -- H
\
 -- B -- F -- G

Is that anything at all like what you're after?
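
In zfs commands that recipe looks roughly like this (pool, filesystem and
snapshot names are hypothetical, and the destroys assume nothing else
depends on those snapshots):

  zfs clone tank/fs@A tank/fs_rolledback   # "H": a new writable head as of A
  zfs promote tank/fs_rolledback           # @A and anything older move to the clone
  zfs destroy tank/fs@D                    # drop the newer snapshots you no longer want
  zfs destroy tank/fs@C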


--Bill



Re: [zfs-discuss] nfs-ownership

2007-10-17 Thread Spencer Shepler

On Oct 17, 2007, at 11:25 AM, Claus Guttesen wrote:

>>>> Did you mount both the parent and all the children on the client ?
>>>
>>> No, I just assumed that the sub-partitions would inherit the same
>>> uid/gid as the parent. I have done a chown -R.
>>
>>   Ahhh, the issue is not permissions, but how the NFS server
>> sees the various directories to share. Each dataset in the zpool is
>> seen as a separate FS from the OS perspective; each is a separate NFS
>> share. In which case each has to be mounted separately on the NFS
>> client.
>
> Thank you for the clarification. When mounting the same partitions
> from  a windows-client I get r/w access to both the parent- and
> child-partition.
>
> Will it be possible to implement such a feature in nfs?

NFSv4 allows the client visibility into the shared filesystems
at the server.  It is up to the client to "mount" or access
those individual filesystems.  The Solaris client is being updated
with this functionality (we have named it mirror-mounts); I don't know
about the BSD client's ability to do the same.

Spencer
