Re: [zfs-discuss] Self-tuning recordsize

2006-10-13 Thread Nicolas Williams
On Fri, Oct 13, 2006 at 09:22:53PM -0700, Erblichs wrote: > For extremely large files (25 to 100GBs), that are accessed > sequentially for both read & write, I would expect 64k or 128k. Large files accessed sequentially don't need any special heuristic for record size determination:

Re: [zfs-discuss] Self-tuning recordsize

2006-10-13 Thread Erblichs
Group, I am not sure I agree with the 8k size. Since "recordsize" is based on the size of filesystem blocks for large files, my first consideration is what will be the max size of the file object. For extremely large files (25 to 100GBs), that are accessed

Re: [zfs-discuss] fsflush and zfs

2006-10-13 Thread Neil Perrin
ZFS ignores the fsflush. Here's a snippet of the code in zfs_sync(): /* * SYNC_ATTR is used by fsflush() to force old filesystems like UFS * to sync metadata, which they would otherwise cache indefinitely. * Semantically, the only requirement is that the sync be
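The preview cuts the comment short. For context, the surrounding logic in zfs_vfsops.c reads roughly as follows (a sketch reconstructed from memory of the OpenSolaris source, not a verbatim quote):

    static int
    zfs_sync(vfs_t *vfsp, short flag, cred_t *cr)
    {
            /*
             * SYNC_ATTR is used by fsflush() to force old filesystems like UFS
             * to sync metadata, which they would otherwise cache indefinitely.
             * Semantically, the only requirement is that the sync be initiated;
             * the DMU syncs out transaction groups on its own frequent
             * schedule, so there is nothing further to do here.
             */
            if (flag & SYNC_ATTR)
                    return (0);

            /* ... an explicit sync(2) call falls through and waits ... */
    }

The net effect: fsflush's periodic SYNC_ATTR calls are no-ops for ZFS.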

Re: [zfs-discuss] zfs and zones

2006-10-13 Thread Mike Gerdts
On 10/13/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote: Using ZFS for a zone's root is currently planned to be supported in Solaris 10 update 5, but we are working on moving it up to update 4. Are there any areas where the community can help with this? Would code or "me too!" support calls help

[zfs-discuss] zfs_vfsops.c : zfs_vfsinit() : line 1179: Src inspection

2006-10-13 Thread Erblichs
Group, If there is a bad vfs ops template, why wouldn't you just return(error) versus trying to create the vnode ops template? My suggestion: after the cmn_err(), return(error); Mitchell Erblich -

Re: [zfs-discuss] Re: [nfs-discuss] Re: Re: NFS Performance and Tar

2006-10-13 Thread Roch
The high order bit here is that write(); write(); fsync(); can be executed using a single I/O latency (during the fsync), whereas using O_*DSYNC will require 2 I/O latencies (one for each write). -r Neil Perrin writes: > As far as zfs performance is concerned,
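To make the latency argument concrete, here is a minimal C sketch of the two patterns (paths and buffer names are illustrative only; error handling omitted for brevity):

    #include <fcntl.h>
    #include <unistd.h>

    void
    two_writes(const char *buf1, size_t n1, const char *buf2, size_t n2)
    {
            /* Pattern 1: buffered writes, one synchronous wait at the end. */
            int fd = open("/tank/f1", O_WRONLY | O_CREAT, 0644);
            (void) write(fd, buf1, n1);   /* returns once cached */
            (void) write(fd, buf2, n2);   /* returns once cached */
            (void) fsync(fd);             /* one I/O wait covers both writes */
            (void) close(fd);

            /* Pattern 2: O_DSYNC makes every write wait for stable storage. */
            int fd2 = open("/tank/f2", O_WRONLY | O_CREAT | O_DSYNC, 0644);
            (void) write(fd2, buf1, n1);  /* first I/O wait */
            (void) write(fd2, buf2, n2);  /* second I/O wait */
            (void) close(fd2);
    }

With the ZIL, ZFS can satisfy the fsync() in pattern 1 with a single log write, which is Roch's point.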

Re: [zfs-discuss] ZFS Usability issue : improve means of finding ZFS<->physdevice(s) mapping

2006-10-13 Thread Matthew Ahrens
Robert Milkowski wrote: Hello Noel, Friday, October 13, 2006, 11:22:06 PM, you wrote: ND> I don't understand why you can't use 'zpool status'? That will show ND> the pools and the physical devices in each and is also a pretty basic ND> command. Examples are given in the sysadmin docs and man

Re[2]: [zfs-discuss] ZFS Usability issue : improve means of finding ZFS<->physdevice(s) mapping

2006-10-13 Thread Robert Milkowski
Hello Noel, Friday, October 13, 2006, 11:22:06 PM, you wrote: ND> I don't understand why you can't use 'zpool status'? That will show ND> the pools and the physical devices in each and is also a pretty basic ND> command. Examples are given in the sysadmin docs and manpages for ND> ZFS on the

Re: [zfs-discuss] Zfs Performance with millions of small files in Sendmail messaging environment]

2006-10-13 Thread Robert Milkowski
Hello Ramneek, Friday, October 13, 2006, 6:07:22 PM, you wrote: RS> Hello Experts RS> Would appreciate if somebody can comment on sendmail environment on RS> solaris 10. RS> How will Zfs perform if one has millions of files in sendmail message RS> store directory under zfs filesystem compared to

Re[2]: [zfs-discuss] Thumper and ZFS

2006-10-13 Thread Robert Milkowski
Hello Matthew, Friday, October 13, 2006, 5:37:45 PM, you wrote: MA> Robert Milkowski wrote: >> Hello Richard, >> >> Friday, October 13, 2006, 8:05:18 AM, you wrote: >> >> REP> Do you want data availability, data retention, space, or performance? >> >> data availability, space, performance >>

Re: [zfs-discuss] ZFS Usability issue : improve means of finding ZFS<->physdevice(s) mapping

2006-10-13 Thread Noel Dellofano
I don't understand why you can't use 'zpool status'? That will show the pools and the physical devices in each and is also a pretty basic command. Examples are given in the sysadmin docs and manpages for ZFS on the opensolaris ZFS community page. I realize it's not quite the same command
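For readers who haven't tried it, the mapping Noel means looks like this (pool name and devices are hypothetical, output shape from memory):

    # zpool status tank
      pool: tank
     state: ONLINE
     config:

            NAME        STATE     READ WRITE CKSUM
            tank        ONLINE       0     0     0
              mirror    ONLINE       0     0     0
                c1t0d0  ONLINE       0     0     0
                c1t1d0  ONLINE       0     0     0

The usability complaint stands, though: df shows the dataset, not the devices, so you have to know to go from the pool name in df to zpool status.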

[zfs-discuss] ZFS Usability issue : improve means of finding ZFS<->physdevice(s) mapping

2006-10-13 Thread Bruce Chapman
ZFS is supposed to be much easier to use than UFS. For creating a filesystem, I agree it is, as I could do that easily without a man page. However, I found it rather surprising that I could not see the physical device(s) a zfs filesystem was attached to using either the "df" command (that shows ph

Re: [zfs-discuss] Self-tuning recordsize

2006-10-13 Thread Nicolas Williams
On Fri, Oct 13, 2006 at 08:30:27AM -0700, Matthew Ahrens wrote: > Jeremy Teo wrote: > >Would it be worthwhile to implement heuristics to auto-tune > >'recordsize', or would that not be worth the effort? > > It would be really great to automatically select the proper recordsize > for each file! H

[zfs-discuss] no tool to get "expected" disk usage reports

2006-10-13 Thread Dennis Clarke
- Original Message - Subject: no tool to get "expected" disk usage reports From: "Dennis Clarke" <[EMAIL PROTECTED]> Date: Fri, October 13, 2006 14:29 To: zfs-discuss@opensolaris.org

[zfs-discuss] Re: Self-tuning recordsize

2006-10-13 Thread Anton B. Rang
One technique would be to keep a histogram of read & write sizes. Presumably one would want to do this only during a “tuning phase” after the file was first created, or when access patterns change. (A shift to smaller record sizes can be detected by a large proportion of write operations which
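A sketch of how such a histogram might look in C (all names and thresholds are invented for illustration; this is not ZFS code):

    #include <stddef.h>
    #include <stdint.h>

    #define MIN_SHIFT 9                  /* 512-byte floor */
    #define MAX_SHIFT 17                 /* 128K ceiling   */
    #define NBUCKETS  (MAX_SHIFT - MIN_SHIFT + 1)

    typedef struct ws_hist {
            uint64_t count[NBUCKETS];    /* one bucket per power of two */
    } ws_hist_t;

    /* Bucket each observed I/O size by the power of two that covers it. */
    static void
    ws_record(ws_hist_t *h, size_t len)
    {
            int shift = MIN_SHIFT;

            while (shift < MAX_SHIFT && ((size_t)1 << shift) < len)
                    shift++;
            h->count[shift - MIN_SHIFT]++;
    }

    /* Suggest the recordsize matching the most common bucket. */
    static size_t
    ws_suggest(const ws_hist_t *h)
    {
            int i, best = NBUCKETS - 1;  /* default to 128K */
            uint64_t most = 0;

            for (i = 0; i < NBUCKETS; i++) {
                    if (h->count[i] > most) {
                            most = h->count[i];
                            best = i;
                    }
            }
            return ((size_t)1 << (best + MIN_SHIFT));
    }

Detecting the "shift to smaller records" Anton mentions would then be a matter of comparing recent buckets against the lifetime distribution.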

[zfs-discuss] Re: [nfs-discuss] Re: Re: NFS Performance and Tar

2006-10-13 Thread Anton B. Rang
For what it's worth, close-to-open consistency was added to Linux NFS in the 2.4.20 kernel (late 2002 timeframe). This might be the source of some of the confusion.

Re: [zfs-discuss] A versioning FS

2006-10-13 Thread Nicolas Williams
On Fri, Oct 13, 2006 at 11:03:51AM +0200, Joerg Schilling wrote: > Nicolas Williams <[EMAIL PROTECTED]> wrote: > > > On Wed, Oct 11, 2006 at 08:24:13PM +0200, Joerg Schilling wrote: > > > Before we start defining the first official functionality for this Sun > > > feature, > > > we should define

[zfs-discuss] Zfs Performance with millions of small files in Sendmail messaging environment]

2006-10-13 Thread Ramneek Sethi
Hello Experts, would appreciate it if somebody can comment on a sendmail environment on Solaris 10. How will ZFS perform if one has millions of files in a sendmail message store directory under a zfs filesystem compared to UFS or VxFS. -- Thanks & Regards,

[zfs-discuss] Re: Re[2]: Thumper and ZFS

2006-10-13 Thread Anton B. Rang
I don't think controllers really fail that often, but anything that increases redundancy is likely to be an improvement. I would hope that the controllers used in Thumper mostly keep their channels independent from a PCI point of view, so that your pools don’t collide. (It does mean that each r

Re: [zfs-discuss] Thumper and ZFS

2006-10-13 Thread Matthew Ahrens
Robert Milkowski wrote: Hello Richard, Friday, October 13, 2006, 8:05:18 AM, you wrote: REP> Do you want data availability, data retention, space, or performance? data availability, space, performance However we're talking about quite a lot of small IOs (r+w). Then you should seriously cons

Re: [zfs-discuss] zfs and zones

2006-10-13 Thread Matthew Ahrens
Roshan Perera wrote: Hi Jeff & Robert, Thanks for the reply. Your interpretation is correct and the answer spot on. This is going to be at a VIP client's QA/production environment and their first introduction to Solaris 10, zones and zfs. Anything unsupported is not allowed. Hence I may have to wait for the fi

Re: [zfs-discuss] Self-tuning recordsize

2006-10-13 Thread Matthew Ahrens
Jeremy Teo wrote: Would it be worthwhile to implement heuristics to auto-tune 'recordsize', or would that not be worth the effort? It would be really great to automatically select the proper recordsize for each file! How do you suggest doing so? --matt

[zfs-discuss] Re: Re: zfs/raid configuration question for an

2006-10-13 Thread Anton B. Rang
Most ZFS improvements should be available through patches. Some may require moving to a future update (for instance, ZFS booting, which may have other implications throughout the system). On most systems, you won’t see a lot of difference between hardware and software mirroring. The benefit of

Re: [zfs-discuss] Where is the ZFS configuration data stored?

2006-10-13 Thread Darren Dunham
> Does it matter if the /dev names of the partitions change (i.e. from > /dev/dsk/c2t2250CC611005d3s0 to another machine not using Sun hba > drivers with a different/shorter name??) It should not. As long as all the disks are visible and ZFS can read the labels, it should be able to impor
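In practice the move is just an export and an import (hostnames and pool name hypothetical):

    oldhost# zpool export tank
    newhost# zpool import tank

zpool import scans the attached disks for ZFS labels, so the pool comes up even if every device has a different c#t#d# name on the new machine.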

[zfs-discuss] Self-tuning recordsize

2006-10-13 Thread Jeremy Teo
Would it be worthwhile to implement heuristics to auto-tune 'recordsize', or would that not be worth the effort? -- Regards, Jeremy

Re: [zfs-discuss] Where is the ZFS configuration data stored?

2006-10-13 Thread Keith Clay
Does it matter if the /dev names of the partitions change (i.e. from /dev/dsk/c2t2250CC611005d3s0 to another machine not using Sun hba drivers with a different/shorter name??) thanks keith If the file does not exist then ZFS will not attempt to open any pools at boot. You must issu

Re: [nfs-discuss] Re: [zfs-discuss] Re: NFS Performance and Tar

2006-10-13 Thread Spencer Shepler
On Fri, Joerg Schilling wrote: > Spencer Shepler <[EMAIL PROTECTED]> wrote: > > > Sorry, the code in Solaris would behave as I described. Upon the > > application closing the file, modified data is written to the server. > > The client waits for completion of those writes. If there is an error,

Re: [nfs-discuss] Re: [zfs-discuss] Re: NFS Performance and Tar

2006-10-13 Thread Joerg Schilling
Jeff Victor <[EMAIL PROTECTED]> wrote: > >>Your wording did not match the reality, which is why I wrote this. > >>You wrote that upon close() the client will first do something similar > >>to > >>fsync on that file. The problem is that this is done asynchronously and the > >>close()

Re: [nfs-discuss] Re: [zfs-discuss] Re: NFS Performance and Tar

2006-10-13 Thread Joerg Schilling
Spencer Shepler <[EMAIL PROTECTED]> wrote: > Sorry, the code in Solaris would behave as I described. Upon the > application closing the file, modified data is written to the server. > The client waits for completion of those writes. If there is an error, > it is returned to the caller of close(
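A minimal C sketch of why a careful application checks close() on NFS (helper name invented; error handling beyond the close is trimmed):

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int
    copy_out(const char *path, const char *buf, size_t len)
    {
            int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);

            if (fd < 0)
                    return (-1);
            if (write(fd, buf, len) < 0) {  /* may merely cache on an NFS client */
                    (void) close(fd);
                    return (-1);
            }
            if (close(fd) < 0) {            /* flush-on-close happens here, so   */
                    (void) fprintf(stderr,  /* deferred write errors surface now */
                        "close: %s\n", strerror(errno));
                    return (-1);
            }
            return (0);
    }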

Re: [zfs-discuss] zfs and zones

2006-10-13 Thread Roshan Perera
Hi Jeff & Robert, Thanks for the reply. Your interpretation is correct and the answer spot on. This is going to be at a VIP client's QA/production environment and their first introduction to Solaris 10, zones and zfs. Anything unsupported is not allowed. Hence I may have to wait for the fix. Do you know roughl

Re: [nfs-discuss] Re: [zfs-discuss] Re: NFS Performance and Tar

2006-10-13 Thread Spencer Shepler
On Fri, Jeff Victor wrote: > Spencer Shepler wrote: > >On Fri, Joerg Schilling wrote: > > > >>>This doesn't change the fact that upon close() the NFS client will > >>>write data back to the server. This is done to meet the > >>>close-to-open semantics of NFS. > >> > >>Your wording did not match wi

Re: [nfs-discuss] Re: [zfs-discuss] Re: NFS Performance and Tar

2006-10-13 Thread Jeff Victor
Spencer Shepler wrote: On Fri, Joerg Schilling wrote: This doesn't change the fact that upon close() the NFS client will write data back to the server. This is done to meet the close-to-open semantics of NFS. Your wording did not match the reality, which is why I wrote this. You did

Re: [nfs-discuss] Re: [zfs-discuss] Re: NFS Performance and Tar

2006-10-13 Thread Spencer Shepler
On Fri, Joerg Schilling wrote: > Spencer Shepler <[EMAIL PROTECTED]> wrote: > > > I didn't comment on the error conditions that can occur during > > the writing of data upon close(). What you describe is the preferred > > method of obtaining any errors that occur during the writing of data. > > T

Re: [zfs-discuss] zfs and zones

2006-10-13 Thread Jeff Victor
Roshan Perera wrote: Hi, Sorry if this has been raised before. Question: Is it possible to 1. Solaris 10 OS partitions to be SDS and have a single partition on that same disk (without SDS) to be a ZFS slice. Yes. 2. Partition the zfs slice for many partitions and each partition to hold a zon
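Mechanically, the layout Roshan describes would be one dataset per zone, something like this (pool and zone names hypothetical; note the support caveats for zone roots on ZFS discussed elsewhere in this digest):

    # zpool create zonepool c0t0d0s7
    # zfs create zonepool/zone1
    # zonecfg -z zone1
    zonecfg:zone1> create
    zonecfg:zone1> set zonepath=/zonepool/zone1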

Re: [zfs-discuss] zfs and zones

2006-10-13 Thread Robert Milkowski
Hello Roshan, Friday, October 13, 2006, 1:12:12 PM, you wrote: RP> Hi, RP> Sorry if this has been raised before. RP> Question: Is it possible to RP> 1. Solaris 10 OS partitions to be SDS and have a single partition RP> on that same disk (without SDS) to be a ZFS slice. Yes. RP> 2. Partition th

[zfs-discuss] zfs and zones

2006-10-13 Thread Roshan Perera
Hi, Sorry if this has been raised before. Question: Is it possible to 1. Solaris 10 OS partitions to be SDS and have a single partition on that same disk (without SDS) to be a ZFS slice. 2. Partition the zfs slice for many partitions and each partition to hold a zone. Idea is to create many non-

Re[2]: [zfs-discuss] Thumper and ZFS

2006-10-13 Thread Robert Milkowski
Hello Richard, Friday, October 13, 2006, 8:05:18 AM, you wrote: REP> Do you want data availability, data retention, space, or performance? data availability, space, performance However we're talking about quite a lot of small IOs (r+w). The real question was what do you think about creating ea

Re: [nfs-discuss] Re: [zfs-discuss] Re: NFS Performance and Tar

2006-10-13 Thread Joerg Schilling
Spencer Shepler <[EMAIL PROTECTED]> wrote: > I didn't comment on the error conditions that can occur during > the writing of data upon close(). What you describe is the preferred > method of obtaining any errors that occur during the writing of data. > This occurs because the NFS client is writin

Re: [zfs-discuss] A versioning FS

2006-10-13 Thread Joerg Schilling
Nicolas Williams <[EMAIL PROTECTED]> wrote: > On Wed, Oct 11, 2006 at 08:24:13PM +0200, Joerg Schilling wrote: > > Before we start defining the first official functionality for this Sun > > feature, > > we should define a mapping for Mac OS, FreeBSD and Linux. It may make > > sense to > > def

[zfs-discuss] Re: ZFS ACLs and Samba

2006-10-13 Thread Jiri Sasek
ZFS/NFSv4 introduced a new acl model (see acl(2)) in Nevada (OpenSolaris) and Solaris 10u2. There is no compatibility bridge between the GETACL/SETACL/GETACLCNT and ACE_GETACL/ACE_SETACL/ACE_GETACLCNT functions of the acl(2) syscall. Because this is Solaris specific (samba.org defines its internal acl
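A hedged C sketch of coping with both models via acl(2) (the acl(2)/pathconf(2) interfaces are standard Solaris 10, but the helper itself is invented for illustration and error handling is omitted):

    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/acl.h>

    /* Fetch a file's ACL using whichever model the filesystem supports. */
    void *
    fetch_acl(const char *path, int *cntp)
    {
            if (pathconf(path, _PC_ACL_ENABLED) == _ACL_ACE_ENABLED) {
                    /* ZFS/NFSv4: ace_t entries via the ACE_* commands. */
                    int n = acl((char *)path, ACE_GETACLCNT, 0, NULL);
                    ace_t *aces = malloc(n * sizeof (ace_t));

                    (void) acl((char *)path, ACE_GETACL, n, aces);
                    *cntp = n;
                    return (aces);
            } else {
                    /* UFS/POSIX-draft: aclent_t entries via GETACL. */
                    int n = acl((char *)path, GETACLCNT, 0, NULL);
                    aclent_t *ents = malloc(n * sizeof (aclent_t));

                    (void) acl((char *)path, GETACL, n, ents);
                    *cntp = n;
                    return (ents);
            }
    }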

[zfs-discuss] Re: Re: zfs/raid configuration question for an

2006-10-13 Thread mete
Since it is embedded into Solaris now, further improvements will be available by patching and we don't have to migrate to another update, right? Two more things: 1) I know the benefits of ZFS, but I wonder if having ufs+zfs instead of only ufs on single-disk mirroring (I mean there are two disk