Re: [zfs-discuss] CR6894234 -- improved sgid directory compatibility with non-Solaris NFS clients

2009-10-29 Thread Paul B. Henson
On Thu, 29 Oct 2009 casper@sun.com wrote: > Do you have the complete NFS trace output? My reading of the source code > says that the file will be created with the proper gid so I am actually > believing that the client "over corrects" the attributes after creating > the file/directory. I dug

Re: [zfs-discuss] FW: File level cloning

2009-10-29 Thread Robert Milkowski
create a dedicated zfs zvol or filesystem for each file representing your virtual machine. Then if you need to clone a VM you clone its zvol or the filesystem. Jeffry Molanus wrote: I'm not doing anything yet; I just wondered if ZFS provides any methods to do file level cloning instead of comp
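
A minimal sketch of the per-VM workflow Robert describes, assuming a pool named tank and one filesystem per virtual machine (the dataset names are illustrative only):

    # one dataset per VM, holding that VM's disk image(s)
    zfs create tank/vm/guest01

    # point-in-time snapshot of the VM's dataset
    zfs snapshot tank/vm/guest01@gold

    # space-efficient writable copy for a new VM
    zfs clone tank/vm/guest01@gold tank/vm/guest02

The same pattern works with zvols (zfs create -V 20g tank/vm/guest01) when the hypervisor wants a block device instead of a file in a filesystem.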

Re: [zfs-discuss] CR6894234 -- improved sgid directory compatibility with non-Solaris NFS clients

2009-10-29 Thread Paul B. Henson
On Thu, 29 Oct 2009 casper@sun.com wrote: > Do you have the complete NFS trace output? My reading of the source code > says that the file will be created with the proper gid so I am actually > believing that the client "over corrects" the attributes after creating > the file/directory. Yes,

Re: [zfs-discuss] Dumb idea?

2009-10-29 Thread C. Bergström
Miles Nordin wrote: "pt" == Peter Tribble writes: pt> Does it make sense to fold this sort of intelligence into the pt> filesystem, or is it really an application-level task? in general it seems all the time app writers want to access hundreds of thousands of files by uni

[zfs-discuss] internal scrub keeps restarting resilvering?

2009-10-29 Thread Jeremy Kitchen
After several days of trying to get a 1.5TB drive to resilver and it continually restarting, I eliminated all of the snapshot-taking facilities which were enabled and
2009-10-29.14:58:41 [internal pool scrub done txg:567780] complete=0
2009-10-29.14:58:41 [internal pool scrub txg:567780] func
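
A hedged sketch of how the restarts and the snapshot services can be checked, assuming a pool named tank and the standard SMF auto-snapshot instances (names vary slightly between releases):

    # the internal log records every scrub/resilver start, restart and completion
    zpool history -i tank | grep scrub

    # list, and temporarily disable, the periodic snapshot services while resilvering
    svcs | grep auto-snapshot
    svcadm disable svc:/system/filesystem/zfs/auto-snapshot:frequent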

[zfs-discuss] Difficulty testing an SSD as a ZIL

2009-10-29 Thread Scott Meilicke
Hi all, I received my SSD, and wanted to test it out using fake zpools with files as backing stores before attaching it to my production pool. However, when I exported the test pool and imported, I get an error. Here is what I did: I created a file to use as a backing store for my new pool: mkf
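
A rough reconstruction of that kind of test, with illustrative file, pool and device names; note that a pool built on plain files normally has to be imported with an explicit search directory, since zpool import only scans /dev/dsk by default:

    # file-backed test pool
    mkfile 1g /var/tmp/vdev0
    zpool create testpool /var/tmp/vdev0

    # attach the SSD as a separate intent log (device name is a placeholder)
    zpool add testpool log c2t0d0

    # export, then import pointing at the directory holding the backing file
    zpool export testpool
    zpool import -d /var/tmp testpool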

Re: [zfs-discuss] Dumb idea?

2009-10-29 Thread Miles Nordin
> "pt" == Peter Tribble writes: pt> Does it make sense to fold this sort of intelligence into the pt> filesystem, or is it really an application-level task? in general it seems all the time app writers want to access hundreds of thousands of files by unique id rather than filename, a

Re: [zfs-discuss] sub-optimal ZFS performance

2009-10-29 Thread David Magda
On Oct 29, 2009, at 15:08, Henrik Johansson wrote: On Oct 29, 2009, at 5:23 PM, Bob Friesenhahn wrote: On Thu, 29 Oct 2009, Orvar Korvar wrote: So the solution is to never get more than 90% full disk space, för fan? Right. While UFS created artificial limits to keep the filesystem from

Re: [zfs-discuss] CR6894234 -- improved sgid directory compatibility with non-Solaris NFS clients

2009-10-29 Thread Casper Dik
>I posted a little while back about a problem we are having where when a >new directory gets created over NFS on a Solaris NFS server from a Linux >NFS client, the new directory group ownership is that of the primary group >of the process, even if the parent directory has the sgid bit set and is
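
Casper's reply (quoted in the follow-ups above) asks for the complete NFS trace; one way such a trace is commonly captured on the Solaris server, sketched here with placeholder interface and client names:

    # capture the NFS RPC traffic between server and the Linux client to a file
    snoop -d e1000g0 -o /tmp/nfs-mkdir.cap host linuxclient and rpc nfs

    # decode it verbosely later to see the attributes sent with MKDIR/SETATTR
    snoop -i /tmp/nfs-mkdir.cap -v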

Re: [zfs-discuss] Dumb idea?

2009-10-29 Thread Peter Tribble
On Sat, Oct 24, 2009 at 12:12 PM, Orvar Korvar wrote: > Would this be possible to implement on top of ZFS? Maybe it is a dumb idea, I > don't know. What do you think, and how to improve this? > > Assume all files are put in the zpool, helter skelter. And then you can > create arbitrary different filt

[zfs-discuss] CR6894234 -- improved sgid directory compatibility with non-Solaris NFS clients

2009-10-29 Thread Paul B. Henson
I posted a little while back about a problem we are having where when a new directory gets created over NFS on a Solaris NFS server from a Linux NFS client, the new directory group ownership is that of the primary group of the process, even if the parent directory has the sgid bit set and is owned
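
For reference, the behaviour being asked about, sketched locally with an illustrative directory and group: on a POSIX sgid directory, new entries inherit the directory's group rather than the creating process's primary group.

    # on the server: project directory owned by the shared group, sgid bit set
    chgrp projgrp /export/projects/demo
    chmod g+s /export/projects/demo

    # a directory created here should come out group-owned by projgrp,
    # not by the creating user's primary group
    mkdir /export/projects/demo/subdir
    ls -ld /export/projects/demo/subdir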

Re: [zfs-discuss] sub-optimal ZFS performance

2009-10-29 Thread Rob Logan
> So the solution is to never get more than 90% full disk space while that's true, it's not Henrik's main discovery. Henrik points out that 1/4 of the ARC is used for metadata, and sometimes that's not enough.. if echo "::arc" | mdb -k | egrep ^size isn't reaching echo "::arc" | mdb -k | egrep "^
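
The checks Rob refers to, sketched with mdb; the meta fields are printed by ::arc on OpenSolaris builds of this era, and the /etc/system tunable name is given as an assumption to verify against your release:

    # overall ARC size versus the portion used for metadata
    echo "::arc" | mdb -k | egrep "^size|arc_meta_used|arc_meta_limit"

    # if arc_meta_used keeps pressing against arc_meta_limit, the limit can be
    # raised in /etc/system (value in bytes), e.g.:
    #   set zfs:zfs_arc_meta_limit = 0x40000000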

Re: [zfs-discuss] sub-optimal ZFS performance

2009-10-29 Thread Henrik Johansson
On Oct 29, 2009, at 5:23 PM, Bob Friesenhahn wrote: On Thu, 29 Oct 2009, Orvar Korvar wrote: So the solution is to never get more than 90% full disk space, för fan? Right. While UFS created artificial limits to keep the filesystem from getting so full that it became sluggish and "sick",

Re: [zfs-discuss] adding new disk to pool

2009-10-29 Thread Cindy Swearingen
Daniel, What is the actual size of c1d1? > I notice that the size of the first partition is wildly inaccurate. If format doesn't understand the disk, then ZFS won't either. Do you have some kind of intervening software like EMC PowerPath or are these disks under some virtualization control? If
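
A couple of quick ways to answer the size question, using the device name from the thread and assuming it carries a normal SMI (VTOC) label:

    # vendor, model and capacity as the driver reports them
    iostat -En c1d1

    # sector count and partition map as format sees them
    prtvtoc /dev/rdsk/c1d1s2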

Re: [zfs-discuss] adding new disk to pool

2009-10-29 Thread Daniel
Yes I am trying to create a non-redundant pool of two disks. The output of format -> partition for c0d0:
Current partition table (original):
Total disk sectors available: 976743646 + 16384 (reserved sectors)
Part  Tag  Flag  First Sector  Size  Last Sector
  0   usr

Re: [zfs-discuss] adding new disk to pool

2009-10-29 Thread Cindy Swearingen
I might need to see the format-->partition output for both c0d0 and c1d1. But in the meantime, you could try this: # zpool create tank2 c1d1 # zpool destroy tank2 # zpool add tank c1d1 Adding the c1d1 disk to the tank pool will create a non-redundant pool of two disks. Is this what you had in

Re: [zfs-discuss] sub-optimal ZFS performance

2009-10-29 Thread Bob Friesenhahn
On Thu, 29 Oct 2009, Orvar Korvar wrote: So the solution is to never get more than 90% full disk space, för fan? Right. While UFS created artificial limits to keep the filesystem from getting so full that it became sluggish and "sick", ZFS does not seem to include those protections. Don't
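
One common way to enforce that headroom, sketched with illustrative numbers: a reservation on an otherwise empty, unmounted dataset keeps the last slice of the pool from ever being handed out.

    # pool of roughly 1 TB; hold back ~10% so it cannot be filled past ~90%
    zfs create -o reservation=100G -o mountpoint=none tank/headroom

    # keep an eye on how close the pool is to the line
    zpool list tank
    zfs list -o name,used,avail,reservation tank tank/headroom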

Re: [zfs-discuss] adding new disk to pool

2009-10-29 Thread Daniel
Here is the output of zpool status and format.
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:
        NAME     STATE     READ WRITE CKSUM
        tank     ONLINE       0     0     0
          c0d0   ONLINE       0     0     0
errors: No known data errors

Re: [zfs-discuss] adding new disk to pool

2009-10-29 Thread Cindy Swearingen
Hi Dan, Could you provide a bit more information, such as: 1. zpool status output for tank 2. the format entries for c0d0 and c1d1 Thanks, Cindy - Original Message - From: Daniel Date: Thursday, October 29, 2009 9:59 am Subject: [zfs-discuss] adding new disk to pool To: zfs-discuss@o

[zfs-discuss] adding new disk to pool

2009-10-29 Thread Daniel
Hi, I just installed 2 new disks in my Solaris box and would like to add them to my zfs pool. After installing the disks I run # zpool add -n tank c1d1 and I get: would update 'tank' to the following configuration: tank c0d0 c1d1 Which is what I want, however when I o
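
For completeness, a sketch of the dry run and the real add; extending a pool with a new top-level vdev is permanent, so the -n preview is worth keeping:

    # preview only: shows the resulting layout without changing the pool
    zpool add -n tank c1d1

    # actually extend the pool, then confirm
    zpool add tank c1d1
    zpool status tank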

Re: [zfs-discuss] sub-optimal ZFS performance

2009-10-29 Thread Orvar Korvar
So the solution is to never get more than 90% full disk space, för fan?

Re: [zfs-discuss] S10U8 msg/ZFS-8000-9P

2009-10-29 Thread Andrew Gabriel
Lasse Osterild wrote: Hi, Seems either Solaris or SunSolve is in need of an update. pool: dataPool state: DEGRADED status: One or more devices has experienced an unrecoverable error. An attempt was made to correct the error. Applications are unaffected. action: Determine if the device needs to

[zfs-discuss] S10U8 msg/ZFS-8000-9P

2009-10-29 Thread Lasse Osterild
Hi, Seems either Solaris or SunSolve is in need of an update. pool: dataPool state: DEGRADED status: One or more devices has experienced an unrecoverable error. An attempt was made to correct the error. Applications are unaffected. action: Determine if the device needs to be repla
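
The usual follow-up for a ZFS-8000-9P report, sketched with the pool name from the message and a placeholder device name:

    # identify the device that took the corrected unrecoverable errors
    zpool status -v dataPool

    # inspect the underlying ereports to judge whether the disk is going bad
    fmdump -eV | more

    # if it looks transient, clear the error counters and watch for a recurrence
    zpool clear dataPool c1t2d0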

Re: [zfs-discuss] ZFS create hanging on door call?

2009-10-29 Thread Miles Benson
Hi, Did anyone ever get to the bottom of this? After enabling smb, I'm now seeing this behaviour - zfs create just hangs. Thanks Miles
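
A hedged sketch of how this kind of hang is usually narrowed down; the door call is typically into an SMF, idmap or smb daemon, so the stuck process's stack and those services are the first things to look at:

    # where is the hanging zfs command blocked?
    pstack $(pgrep -xn zfs)

    # are the CIFS-related services it may be waiting on healthy?
    svcs -xv smb/server idmap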