Re: [zfs-discuss] Inode (dnode) numbers (Re: rename(2) (mv(1)) between ZFS filesystems in the same zpool)

2008-01-02 Thread Darren Reed
Nicolas Williams wrote: > On Mon, Dec 31, 2007 at 07:20:30PM +1100, Darren Reed wrote: > >> Frank Hofmann wrote: >> >>> http://www.opengroup.org/onlinepubs/009695399/functions/rename.html >>> >>> ERRORS >>> The rename() function shall fail if: >>> [ ... ] >>> [EXDEV] >>>

Re: [zfs-discuss] Inode (dnode) numbers (Re: rename(2) (mv(1)) between ZFS filesystems in the same zpool)

2008-01-02 Thread Wee Yeh Tan
On Jan 3, 2008 12:32 AM, Nicolas Williams <[EMAIL PROTECTED]> wrote: > Oof, I see this has been discussed since (and, actually, IIRC it was > discussed a long time ago too). > > Anyways, IMO, this requires a new syscall or syscalls: > > xdevrename(2) > xdevcopy(2) > > and then mv(1) can do:

Re: [zfs-discuss] What are the dates ls shows on a snapshot?

2008-01-02 Thread eric kustarz
On Dec 23, 2007, at 7:53 PM, David Dyer-Bennet wrote: > Just out of curiosity, what are the dates ls -l shows on a snapshot? > Looks like they might be the pool creation date. The ctime and mtime are from the file system creation date. The atime is the current time. See: http://src.opensolar
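
A quick way to see the behaviour eric describes is to stat(2) a snapshot's root directory and print the three timestamps. A minimal sketch, assuming a hypothetical snapshot path:

    #include <stdio.h>
    #include <sys/stat.h>
    #include <time.h>

    int
    main(void)
    {
        struct stat sb;
        /* Hypothetical path - substitute one of your own snapshots. */
        const char *snap = "/tank/home/.zfs/snapshot/mysnap";

        if (stat(snap, &sb) != 0) {
            perror("stat");
            return (1);
        }
        /* ctime and mtime should show the filesystem creation time. */
        printf("mtime: %s", ctime(&sb.st_mtime));
        printf("ctime: %s", ctime(&sb.st_ctime));
        /* atime should show (roughly) the current time. */
        printf("atime: %s", ctime(&sb.st_atime));
        return (0);
    }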

Re: [zfs-discuss] [zones-discuss] ZFS shared /home between zones

2008-01-02 Thread James C. McPherson
Bob Scheifler wrote: > James C. McPherson wrote: >> You can definitely loopback mount the same fs into multiple >> zones, and as far as I can see you don't have the multiple-writer >> issues that otherwise require Qfs to solve - since you're operating >> within just one kernel instance. > > Is the

Re: [zfs-discuss] hot spare and resilvering problem

2008-01-02 Thread Eric Ham
On Dec 25, 2007 3:19 AM, Maciej Olchowik <[EMAIL PROTECTED]> wrote: > Hi Folks, > > I have a 3510 disk array connected to a T2000 server running: > SunOS 5.10 Generic_118833-33 sun4v sparc SUNW,Sun-Fire-T200 > 12 disks (300G each) are exported from the array and ZFS is used > to manage them (raidz with on

Re: [zfs-discuss] Help! ZFS pool is UNAVAILABLE

2008-01-02 Thread Aaron Berland
Hi Joe, Thanks for trying. I can't even get the pool online because there are 2 corrupt drives according to zpool status. Yours and the other gentlemen's insights have been very helpful, however! I lucked out and realized that I did have copies of 90% of my data, so I am just going to destro

Re: [zfs-discuss] Help! ZFS pool is UNAVAILABLE

2008-01-02 Thread Richard Elling
Moore, Joe wrote: > I AM NOT A ZFS DEVELOPER. These suggestions "should" work, but there > may be other people who have better ideas. > > Aaron Berland wrote: > >> Basically, I have a 3 drive raidz array on internal Seagate >> drives. running build 64nv. I purchased 3 add'l USB drives >> with

Re: [zfs-discuss] Help! ZFS pool is UNAVAILABLE

2008-01-02 Thread Moore, Joe
I AM NOT A ZFS DEVELOPER. These suggestions "should" work, but there may be other people who have better ideas. Aaron Berland wrote: > Basically, I have a 3 drive raidz array on internal Seagate > drives. running build 64nv. I purchased 3 add'l USB drives > with the intention of mirroring and t

Re: [zfs-discuss] Inode (dnode) numbers (Re: rename(2) (mv(1)) between ZFS filesystems in the same zpool)

2008-01-02 Thread Nicolas Williams
Oof, I see this has been discussed since (and, actually, IIRC it was discussed a long time ago too). Anyways, IMO, this requires a new syscall or syscalls: xdevrename(2) xdevcopy(2) and then mv(1) can do: if (rename(old, new) != 0) { if (xdevrename(old, new) != 0
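
For the curious, here is a fuller sketch of the fallback chain Nicolas outlines, with the caveat that xdevrename(2) and xdevcopy(2) are hypothetical syscalls that do not exist today; only the plain rename(2)/EXDEV part reflects current behaviour:

    #include <errno.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Hypothetical: atomic rename across datasets in one pool. */
    extern int xdevrename(const char *old, const char *new);
    /* Hypothetical: kernel-assisted copy across datasets. */
    extern int xdevcopy(const char *old, const char *new);

    static int
    move_file(const char *old, const char *new)
    {
        if (rename(old, new) == 0)
            return (0);
        if (errno != EXDEV)
            return (-1);
        /* Same pool, different dataset: try the cheap atomic move. */
        if (xdevrename(old, new) == 0)
            return (0);
        /* Last resort: copy the data, then unlink the source. */
        if (xdevcopy(old, new) != 0)
            return (-1);
        return (unlink(old));
    }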

[zfs-discuss] Inode (dnode) numbers (Re: rename(2) (mv(1)) between ZFS filesystems in the same zpool)

2008-01-02 Thread Nicolas Williams
On Mon, Dec 31, 2007 at 07:20:30PM +1100, Darren Reed wrote: > Frank Hofmann wrote: > > http://www.opengroup.org/onlinepubs/009695399/functions/rename.html > > > > ERRORS > > The rename() function shall fail if: > > [ ... ] > > [EXDEV] > > [CX] The links named by new and old ar
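
For contrast with the xdevrename(2) idea above, this is roughly what mv(1) is left doing today once rename(2) fails with EXDEV: a full userland copy followed by an unlink. A simplified sketch (error cleanup and metadata preservation omitted):

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    static int
    mv_across_filesystems(const char *old, const char *new)
    {
        char buf[8192];
        ssize_t n;
        int in, out;

        if (rename(old, new) == 0)
            return (0);     /* same filesystem, done atomically */
        if (errno != EXDEV)
            return (-1);

        /* Cross-filesystem: copy the bytes by hand. */
        if ((in = open(old, O_RDONLY)) < 0)
            return (-1);
        if ((out = open(new, O_WRONLY | O_CREAT | O_TRUNC, 0600)) < 0) {
            (void) close(in);
            return (-1);
        }
        while ((n = read(in, buf, sizeof (buf))) > 0) {
            if (write(out, buf, n) != n)
                break;
        }
        (void) close(in);
        (void) close(out);
        /* Only remove the source once the copy has completed. */
        return (n == 0 ? unlink(old) : -1);
    }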

Re: [zfs-discuss] Adding to zpool: would failure of one device destroy all data?

2008-01-02 Thread Wee Yeh Tan
Your data will be striped across both vdevs after you add the 2nd vdev. In any case, the failure of either vdev in the stripe will result in the loss of the entire pool. I'm not sure, however, if there is any way to recover any data from the surviving vdevs. On 1/2/08, Austin <[EMAIL PROTECTED]> wrote: > I didn't

[zfs-discuss] Adding to zpool: would failure of one device destroy all data?

2008-01-02 Thread Austin
I didn't find any clear answer in the documentation, so here it goes: I've got a 4-device RAIDZ array in a pool. I then add another RAIDZ array to the pool. If one of the arrays fails, would all the data on the array be lost, or would it be like disc spanning, and only the data on the failed a

Re: [zfs-discuss] [zones-discuss] ZFS shared /home between zones

2008-01-02 Thread Bob Scheifler
James C. McPherson wrote: > You can definitely loopback mount the same fs into multiple > zones, and as far as I can see you don't have the multiple-writer > issues that otherwise require Qfs to solve - since you're operating > within just one kernel instance. Is there any significant performance

Re: [zfs-discuss] Setting a dataset create time only property at pool creation time.

2008-01-02 Thread Darren J Moffat
Kalpak Shah wrote: > Hi > > I faced a similar problem when I was adding a property for per-dataset dnode > sizes. I got around it by adding a ZPOOL_PROP_DNODE_SIZE and adding the > dataset property in dsl_dataset_stats(). That way the root dataset gets the > property too. I am not very sure if

Re: [zfs-discuss] zpool panic need help

2008-01-02 Thread Felix Thommen
Hi again, in the meantime I upgraded to s10u4 including the recommended patches. Then I tried again to import the zpool, with the same behaviour. The stack dump is exactly the same as in the previous message. To complete the label print: # zdb -lv /dev/rdsk/c2t0d0s0 LABEL

Re: [zfs-discuss] Setting a dataset create time only property at pool creation time.

2008-01-02 Thread Kalpak Shah
Hi I faced a similar problem when I was adding a property for per-dataset dnode sizes. I got around it by adding a ZPOOL_PROP_DNODE_SIZE and adding the dataset property in dsl_dataset_stats(). That way the root dataset gets the property too. I am not very sure if this is the cleanest solution o

[zfs-discuss] Setting a dataset create time only property at pool creation time.

2008-01-02 Thread Darren J Moffat
Our test engineer for the ZFS Crypto project discovered that it isn't possible to enable encryption on the "top" filesystem in a pool - the one that gets created by default. The intent here is that the default top-level filesystem gets the encryption property, not the pool itself (because the la

Re: [zfs-discuss] rename(2) (mv(1)) between ZFS filesystems in the same zpool

2008-01-02 Thread Wee Yeh Tan
On Jan 2, 2008 11:46 AM, Darren Reed <[EMAIL PROTECTED]> wrote: > [EMAIL PROTECTED] wrote: > > ... > > That's a sad situation for backup utilities, by the way - a backup > > tool would have no way of finding out that file X on fs A already > > existed as file Z on fs B. So what? If the file got co