[zfs-discuss] case 37962758 - zfs can't destroy Sol10U4

2007-10-16 Thread Renato Ferreira de Castro - Sun Microsystems - Gland Switzerland
Hardware Platform: Sun Fire T2000
SunOS webz2.unige.ch 5.10 Generic_120011-14 sun4v sparc SUNW,Sun-Fire-T200
OBP 4.26.1 2007/04/02 16:26
SUNWzfskr VERSION:  11.10.0,REV=2006.05.18.02.15
SUNWzfsr   VERSION:  11.10.0,REV=2006.05.18.02.15
SUNWzfsu   VERSION:  11.10.0,REV=2006.05.18.02.15
/net/kromo.swiss.sun.com/export/cases/explorer.844814d0.webz2.unige.ch-2007.10.15.16.00

Dear All,

My customer can't destroy a dataset in ZFS; this was not a problem in Sol10U3.

Has anyone already faced this kind of issue?

Regards
Nato


What he tried to do:
--- 
- remount and unmount manually, then try to destroy:
# mount -F zfs zpool_dokeos1/dokeos1/home /mnt
# umount /mnt
# zfs destroy dokeos1_pool/dokeos1/home
cannot destroy 'dokeos1_pool/dokeos1/home': dataset is busy

The file system is not mounted:
---
# mount | grep dokeos1/home
(no output)
 
There are no clone dependencies:
# zfs get origin | grep -v 'origin-'
NAME  PROPERTY  VALUE  SOURCE
 
The dependent snapshots have been destroyed OK:
-
# zfs list -r dokeos1_pool/dokeos1/home
NAME                        USED  AVAIL  REFER  MOUNTPOINT
dokeos1_pool/dokeos1/home  33.5K  29.8G  33.5K  legacy
 
Other commands fail with the same problem:

# zfs rename dokeos1_pool/dokeos1/home dokeos1_pool/dokeos1/home_old
cannot rename 'dokeos1_pool/dokeos1/home': dataset is busy
 
# zpool export dokeos1_pool
cannot export 'dokeos1_pool': pool is busy

He can create and destroy other file systems without problems:
---
# zfs create -o mountpoint=legacy -o snapdir=visible dokeos1_pool/dokeos1/test
# zfs destroy dokeos1_pool/dokeos1/test
(no problem)

- hide the snapshot dirs:
--
# zfs set snapdir=hidden dokeos1_pool/dokeos1/home
# zfs destroy dokeos1_pool/dokeos1/home
cannot destroy 'dokeos1_pool/dokeos1/home': dataset is busy

Here is the zpool history:
---
# zpool history dokeos1_pool  
History for 'dokeos1_pool':
2007-10-12.18:17:24 zpool create -f -m none dokeos1_pool c4t4849544143484920443630303630353430323336d0
2007-10-12.18:18:01 zfs create -o mountpoint -o snapdir=visible dokeos1_pool/dokeos1
2007-10-12.18:18:01 zfs create -o mountpoint -o snapdir=visible dokeos1_pool/dokeos1/home
2007-10-12.20:31:59 zfs receive dokeos1_pool/dokeos1/new
2007-10-12.21:00:02 zfs snapshot -r [EMAIL PROTECTED]:21
2007-10-12.22:00:03 zfs snapshot -r [EMAIL PROTECTED]:22
2007-10-12.23:00:02 zfs snapshot -r [EMAIL PROTECTED]:23
2007-10-13.00:00:03 zfs snapshot -r [EMAIL PROTECTED]:00
2007-10-13.01:00:02 zfs snapshot -r [EMAIL PROTECTED]:01
2007-10-13.02:00:03 zfs snapshot -r [EMAIL PROTECTED]:02
2007-10-13.02:33:28 zfs snapshot -r [EMAIL PROTECTED]
2007-10-13.02:33:34 zfs snapshot -r [EMAIL PROTECTED]:Saturday
2007-10-13.03:00:03 zfs snapshot -r [EMAIL PROTECTED]:03
2007-10-13.03:28:27 zfs destroy [EMAIL PROTECTED]
2007-10-13.03:28:27 zfs destroy dokeos1_pool/[EMAIL PROTECTED]
2007-10-13.03:28:27 zfs destroy dokeos1_pool/dokeos1/[EMAIL PROTECTED]
2007-10-13.03:28:27 zfs destroy dokeos1_pool/dokeos1/[EMAIL PROTECTED]
2007-10-13.04:00:03 zfs snapshot -r [EMAIL PROTECTED]:04
2007-10-13.05:00:03 zfs snapshot -r [EMAIL PROTECTED]:05
2007-10-13.06:00:03 zfs snapshot -r [EMAIL PROTECTED]:06
2007-10-13.07:00:03 zfs snapshot -r [EMAIL PROTECTED]:07
2007-10-13.08:00:04 zfs snapshot -r [EMAIL PROTECTED]:08
2007-10-13.09:00:03 zfs snapshot -r [EMAIL PROTECTED]:09
2007-10-13.10:00:03 zfs snapshot -r [EMAIL PROTECTED]:10
2007-10-13.11:00:03 zfs snapshot -r [EMAIL PROTECTED]:11
2007-10-13.12:00:03 zfs snapshot -r [EMAIL PROTECTED]:12
2007-10-13.13:00:03 zfs snapshot -r [EMAIL PROTECTED]:13
2007-10-13.14:00:04 zfs snapshot -r [EMAIL PROTECTED]:14
2007-10-13.15:00:04 zfs snapshot -r [EMAIL PROTECTED]:15
2007-10-13.16:00:04 zfs snapshot -r [EMAIL PROTECTED]:16
2007-10-13.17:00:07 zfs snapshot -r [EMAIL PROTECTED]:17
2007-10-13.18:00:05 zfs snapshot -r [EMAIL PROTECTED]:18
2007-10-13.19:00:04 zfs snapshot -r [EMAIL PROTECTED]:19
2007-10-13.20:00:05 zfs snapshot -r [EMAIL PROTECTED]:20
2007-10-13.21:00:03 zfs destroy [EMAIL PROTECTED]:21
2007-10-13.21:00:04 zfs destroy dokeos1_pool/[EMAIL PROTECTED]:21
2007-10-13.21:00:04 zfs destroy dokeos1_pool/dokeos1/[EMAIL PROTECTED]:21
2007-10-13.21:00:04 zfs destroy dokeos1_pool/dokeos1/[EMAIL PROTECTED]:21
2007-10-13.21:00:04 zfs snapshot -r [EMAIL PROTECTED]:21
2007-10-13.22:00:04 zfs destroy [EMAIL PROTECTED]:22
2007-10-13.22:00:04 zfs destroy dokeos1_pool/[EMAIL PROTECTED]:22
2007-10-13.22:00:05 zfs destroy dokeos1_pool/dokeos1/[EMAIL PROTECTED]:22
2007-10-13.22:00:05 zfs destroy dokeos1_pool/dokeos1/[EMAIL PROTECTED]:22
2007-10-13.22:00:05 zfs snapshot -r [EMAIL PROTECTED]:22
2007-10-13.23:

[zfs-discuss] zfs won't import a pool automatically at boot

2007-10-16 Thread Michael Goff
Hi,

When jumpstarting s10x_u4_fcs onto a machine, I have a postinstall script which 
does:

zpool create tank c1d0s7 c2d0s7 c3d0s7 c4d0s7
zfs create tank/data
zfs set mountpoint=/data tank/data
zpool export -f tank

When jumpstart finishes and the node reboots, the pool is not imported 
automatically. I have to do:

zpool import tank

for it to show up. Then on subsequent reboots it imports and mounts 
automatically. What can I do to get it to mount automatically the first time? 
When I didn't have the zpool export I would get a message that I needed to use 
zpool import -f because it wasn't exported properly from another machine. So it 
looks like the state of the pool created during the jumpstart install was lost.

BTW, I love using zfs commands to manage filesystems. They are so easy and 
intuitive!

thanks,
Mike
 
 


Re: [zfs-discuss] case 37962758 - zfs can't destroy Sol10U4

2007-10-16 Thread Dick Davies
On 16/10/2007, Renato Ferreira de Castro - Sun Microsystems - Gland Switzerland wrote:
> What he try to do :
> ---
> - re-mount and umount manually, then try to destroy.
> # mount -F zfs zpool_dokeos1/dokeos1/home /mnt
> # umount /mnt
> # zfs destroy dokeos1_pool/dokeos1/home
> cannot destroy 'dokeos1_pool/dokeos1/home': dataset is busy
>
> The file system is not mounted:

I had the same thing on s10u3. Try

zfs mount dokeos1_pool/dokeos1/home
zfs umount dokeos1_pool/dokeos1/home
zfs destroy dokeos1_pool/dokeos1/home

-- 
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/


Re: [zfs-discuss] zfs won't import a pool automatically at boot

2007-10-16 Thread Dick Davies
On 16/10/2007, Michael Goff <[EMAIL PROTECTED]> wrote:
> Hi,
>
> When jumpstarting s10x_u4_fcs onto a machine, I have a postinstall script 
> which does:
>
> zpool create tank c1d0s7 c2d0s7 c3d0s7 c4d0s7
> zfs create tank/data
> zfs set mountpoint=/data tank/data
> zpool export -f tank

Try without the '-f' ?


-- 
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/


Re: [zfs-discuss] zfs won't import a pool automatically at boot

2007-10-16 Thread dudekula mastan
Hi Mike,

After rebooting, a UNIX machine (HP-UX/Linux/Solaris) will mount (or import) 
only the file systems that were mounted (or imported) before the reboot.

In your case the ZFS file system tank/data was exported (or unmounted) before 
the reboot. That's the reason the zpool is not imported automatically after 
the reboot.

This is neither a problem nor a bug; the ZFS developers designed the import 
and export commands to behave this way.

This is not specific to ZFS: no file system will mount everything that happens 
to be available. Any UNIX machine will mount only the file systems that were 
mounted before the reboot.

-Masthan D

Michael Goff <[EMAIL PROTECTED]> wrote:

  Hi,

When jumpstarting s10x_u4_fcs onto a machine, I have a postinstall script which 
does:

zpool create tank c1d0s7 c2d0s7 c3d0s7 c4d0s7
zfs create tank/data
zfs set mountpoint=/data tank/data
zpool export -f tank

When jumpstart finishes and the node reboots, the pool is not imported 
automatically. I have to do:

zpool import tank

for it to show up. Then on subsequent reboots it imports and mounts 
automatically. What I can I do to get it to mount automatically the first time? 
When I didn't have the zpool export I would get an message that I needed to use 
zpool import -f because it wasn't exported properly from another machine. So it 
looks like the state of the pool created during the jumpstart install was lost.

BTW, I love using zfs commands to manage filesystems. They are so easy and 
intuitive!

thanks,
Mike




   


Re: [zfs-discuss] zfs won't import a pool automatically at boot

2007-10-16 Thread Sanjeev Bagewadi
Michael,

If you don't call "zpool export -f tank", it should work.
However, it would help to understand why you are using that command after 
creating the zpool.

Can you avoid exporting after the creation?

Regards,
Sanjeev


Michael Goff wrote:
> Hi,
>
> When jumpstarting s10x_u4_fcs onto a machine, I have a postinstall script 
> which does:
>
> zpool create tank c1d0s7 c2d0s7 c3d0s7 c4d0s7
> zfs create tank/data
> zfs set mountpoint=/data tank/data
> zpool export -f tank
>
> When jumpstart finishes and the node reboots, the pool is not imported 
> automatically. I have to do:
>
> zpool import tank
>
> for it to show up. Then on subsequent reboots it imports and mounts 
> automatically. What I can I do to get it to mount automatically the first 
> time? When I didn't have the zpool export I would get an message that I 
> needed to use zpool import -f because it wasn't exported properly from 
> another machine. So it looks like the state of the pool created during the 
> jumpstart install was lost.
>
> BTW, I love using zfs commands to manage filesystems. They are so easy and 
> intuitive!
>
> thanks,
> Mike
>  
>  


-- 
Solaris Revenue Products Engineering,
India Engineering Center,
Sun Microsystems India Pvt Ltd.
Tel:x27521 +91 80 669 27521



Re: [zfs-discuss] zfs won't import a pool automatically at boot

2007-10-16 Thread Robert Milkowski
Hello Sanjeev,

Tuesday, October 16, 2007, 10:14:01 AM, you wrote:

SB> Michael,

SB> If you don't call "zpool export -f tank" it should work.
SB> However, it would be necessary to understand why you are using the above
SB> command after creation of the zpool.

SB> Can you avoid exporting after the creation ?


It won't help: during jumpstart, /etc is not the same /etc as the one that
will be used after he boots.

Before you export the pool, put this in your finish script:

cp -p /etc/zfs/zpool.cache /a/etc/zfs/

Then export the pool. That should do the trick.
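
(A minimal finish-script sketch of the whole sequence, just for illustration;
it assumes the install image mounts the target root at /a and reuses Mike's
device names:)

#!/bin/sh
# Build the pool and dataset from within the jumpstart environment.
zpool create -f tank c1d0s7 c2d0s7 c3d0s7 c4d0s7
zfs create tank/data
zfs set mountpoint=/data tank/data
# Copy the pool state into the installed root so the pool is
# imported automatically on first boot.
cp -p /etc/zfs/zpool.cache /a/etc/zfs/zpool.cache
# Only now is it safe to export.
zpool export tank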

-- 
Best regards,
 Robert Milkowski                 mailto:[EMAIL PROTECTED]
                                  http://milek.blogspot.com








Re: [zfs-discuss] zfs won't import a pool automatically at boot

2007-10-16 Thread Michael Goff
Great, thanks Robert. That's what I was looking for. I was thinking that 
I would have to transfer the state somehow from the temporary jumpstart 
environment to /a so that it would be persistent. I'll test it out tomorrow.

Sanjeev, when I did not have the zpool export, it still did not import 
automatically upon reboot after the jumpstart. And when I imported it 
manually, it gave an error. So that's why I added the export.

Mike

Robert Milkowski wrote:
> Hello Sanjeev,
> 
> Tuesday, October 16, 2007, 10:14:01 AM, you wrote:
> 
> SB> Michael,
> 
> SB> If you don't call "zpool export -f tank" it should work.
> SB> However, it would be necessary to understand why you are using the above
> SB> command after creation of the zpool.
> 
> SB> Can you avoid exporting after the creation ?
> 
> 
> It won't help during jumpstart as /etc is not the same one as after he
> will boot.
> 
> Before you export a pool put in your finish script:
> 
> cp -p /etc/zfs/zpool.cache /a/etc/zfs/
> 
> Then export a pool. It should do the trick.
> 


Re: [zfs-discuss] zfs won't import a pool automatically at boot

2007-10-16 Thread Sanjeev Bagewadi
Thanks Robert ! I missed that part.

-- Sanjeev.

Michael Goff wrote:
> Great, thanks Robert. That's what I was looking for. I was thinking 
> that I would have to transfer the state somehow from the temporary 
> jumpstart environment to /a so that it would be persistent. I'll test 
> it out tomorrow.
>
> Sanjeev, when I did not have the zpool export, it still did not import 
> automatically upon reboot after the jumpstart. And when I imported it 
> manually, if gave an error. So that's why I added the export.
>
> Mike
>
> Robert Milkowski wrote:
>> Hello Sanjeev,
>>
>> Tuesday, October 16, 2007, 10:14:01 AM, you wrote:
>>
>> SB> Michael,
>>
>> SB> If you don't call "zpool export -f tank" it should work.
>> SB> However, it would be necessary to understand why you are using 
>> the above
>> SB> command after creation of the zpool.
>>
>> SB> Can you avoid exporting after the creation ?
>>
>>
>> It won't help during jumpstart as /etc is not the same one as after he
>> will boot.
>>
>> Before you export a pool put in your finish script:
>>
>> cp -p /etc/zfs/zpool.cache /a/etc/zfs/
>>
>> Then export a pool. It should do the trick.
>>


-- 
Solaris Revenue Products Engineering,
India Engineering Center,
Sun Microsystems India Pvt Ltd.
Tel:x27521 +91 80 669 27521



Re: [zfs-discuss] enlarge a mirrored pool

2007-10-16 Thread Ivan Wang
> 
> Would the bootloader have issues here? On x86 I would imagine that you
> would have to reload grub, would a similar thing need to be done on SPARC?
>

Yeah, that's also what I'm thinking; apparently a ZFS mirror doesn't take care 
of the boot sector. So as of now, estimating the size of a ZFS root pool is 
still required; better not to go with a carefree grow-as-needed mindset.
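
(For what it's worth, a hedged sketch of re-installing the boot code on a
replaced mirror half; the device name is a placeholder, and the ZFS-aware
installboot/installgrub invocations below only exist on releases that
actually support booting from ZFS:)

On SPARC:
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0

On x86:
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0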

Ivan.
 
> 
> Ivan Wang wrote:
> >>> Erik Trimble wrote:
> >>> After both drives are replaced, you will automatically see the
> >>> additional space.
> >>>
> >> I believe currently after the last replace an import/export sequence
> >> is needed to force zfs to see the increased size.
> >>
> >
> > What if root fs is also in this pool? will there be any limitation for a
> > pool containing /?
> >
> > Thanks,
> > Ivan.
> >
> >> Neil.


[zfs-discuss] nfs-ownership

2007-10-16 Thread Claus Guttesen
Hi.

I have created some zfs-partitions. First I create the
home/user-partitions. Beneath those I create additional partitions.
Then I do a chown -R for that user. These partitions are shared
using sharenfs=on. The owner- and group-id is 1009.

These partitions are visible as the user assigned above. But when I
mount the home/user partition from a FreeBSD client, only the
top partition has the proper uid- and gid-assignment. The partitions
beneath are assigned to root/wheel (uid 0 and gid 0 on FreeBSD).

Am I doing something wrong?

From the nfs-client:

ls -l spool
drwxr-xr-x  181 print  print  181 16 oct 21:00 2007-10-16
drwxr-xr-x    2 root   wheel    2 11 oct 11:07 c8

From the nfs-server:
ls -l spool
drwxr-xr-x 185 print print 185 Oct 16 21:10 2007-10-16
drwxr-xr-x   6 print print   6 Oct 13 17:10 c8

The folder 2007-10-16 is a regular folder below the nfs-mounted
partition, c8 is a zfs-partition.

-- 
regards
Claus

When lenity and cruelty play for a kingdom,
the gentlest gamester is the soonest winner.

Shakespeare


Re: [zfs-discuss] HAMMER

2007-10-16 Thread roland
and what about compression?

:D
 
 


Re: [zfs-discuss] Lose ability to open terminal window after adding line to dfstab

2007-10-16 Thread Josh Fisher
For anyone who is interested, the solution to this issue was to set the zfs 
mountpoint of the dataset being shared to legacy. This enables the proper 
sharing of the dataset to a client and the ability to open a terminal window in 
the zone sharing out the dataset. Does anyone know why this fixed the issue? 
The sharing was working properly before the mount point was changed to legacy. 
The only problem was that a user could not open a terminal window in the zone 
sharing a dataset. How did setting the mount point to legacy enable the window 
to be opened? Thanks.
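
(For reference, a hedged sketch of one common way a legacy-mounted dataset
ends up inside a zone; the dataset and zone names are made up and this is
not necessarily the configuration described above:)

# zfs set mountpoint=legacy tank/share
# zonecfg -z myzone
zonecfg:myzone> add fs
zonecfg:myzone:fs> set dir=/export/share
zonecfg:myzone:fs> set special=tank/share
zonecfg:myzone:fs> set type=zfs
zonecfg:myzone:fs> end
zonecfg:myzone> commit
zonecfg:myzone> exit

With a legacy mountpoint, ZFS no longer tries to mount the dataset itself, so
the zone configuration (or a vfstab entry, or a manual mount -F zfs) decides
when and where it gets mounted.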

Josh
 
 


Re: [zfs-discuss] HAMMER

2007-10-16 Thread Dave Johnson
you mean c9n ? ;)

does anyone actually *use* compression ?  i'd like to see a poll on how many 
people are using (or would use) compression on production systems that are 
larger than your little department catch-all dumping ground server.  i mean, 
unless you had some NDMP interface directly to ZFS, daily tape backups for 
any large system will likely be an exercise in futility unless the systems 
are largely just archive servers, at which point it's probably smarter to 
perform backups less often, coinciding with the workflow of migrating 
archive data to it.  otherwise wouldn't the system just plain get pounded?

-=dave

- Original Message - 
From: "roland" <[EMAIL PROTECTED]>
To: 
Sent: Tuesday, October 16, 2007 12:44 PM
Subject: Re: [zfs-discuss] HAMMER


> and what about compression?
>
> :D
>
>



Re: [zfs-discuss] HAMMER

2007-10-16 Thread Jonathan Loran


We use compression on almost all of our zpools.  We see very little if 
any I/O slowdown because of this, and you get free disk space. In fact, 
I believe read I/O gets a boost from this, since decompression is cheap 
compared to normal disk I/O.
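
(For anyone who wants to try it, a quick sketch; the dataset name is made up,
and only blocks written after the change get compressed:)

# zfs set compression=on tank/data
# zfs get compression,compressratio tank/data

The compressratio property shows how much space the data on that dataset is
actually saving.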


Jon

Dave Johnson wrote:

you mean c9n ? ;)

does anyone actually *use* compression ?  i'd like to see a poll on how many 
people are using (or would use) compression on production systems that are 
larger than your little department catch-all dumping ground server.  i mean, 
unless you had some NDMP interface directly to ZFS, daily tape backups for 
any large system will likely be an excersize in futility unless the systems 
are largely just archive servers, at which point it's probably smarter to 
perform backups less often, coinciding with the workflow of migrating 
archive data to it.  otherwise wouldn't the system just plain get pounded?


-=dave

- Original Message - 
From: "roland" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, October 16, 2007 12:44 PM
Subject: Re: [zfs-discuss] HAMMER


  

and what about compression?

:D




--


- _/ _/  /   - Jonathan Loran -   -
-/  /   /IT Manager   -
-  _  /   _  / / Space Sciences Laboratory, UC Berkeley
-/  / /  (510) 643-5146 [EMAIL PROTECTED]
- __/__/__/   AST:7731^29u18e3






Re: [zfs-discuss] HAMMER

2007-10-16 Thread Bryan Allen

On Oct 16, 2007, at 4:36 PM, Jonathan Loran wrote:

>
> We use compression on almost all of our zpools.  We see very little  
> if any I/O slowdown because of this, and you get free disk space.  
> In fact, I believe read I/O gets a boost from this, since  
> decompression is cheap compared to normal disk I/O.

Same here. For our workload (many writes, relatively few reads), we  
saw at least a 5x increase in performance (says a developer  
offhandedly when I ask him) when we enabled compression. I was  
expecting a boost, but I recall being surprised by how much quicker  
it was.

I have not enabled it everywhere, just in specific places where disk  
I/O is being contended for, and CPU is in abundance.
--
bda
cyberpunk is dead. long live cyberpunk.
http://bda.mirrorshades.net/


Re: [zfs-discuss] nfs-ownership

2007-10-16 Thread Spencer Shepler

Claus,

Is the mount using NFSv4?  If so, there is likely a misguided
mapping of the users/groups between the client and server.

While not including BSD info, there is a little bit on
NFSv4 user/group mappings at this blog:
http://blogs.sun.com/nfsv4
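
(On the Solaris 10 side, a hedged sketch of where the mapping domain lives;
how the FreeBSD client handles unmapped ids is an assumption on my part:)

# grep NFSMAPID_DOMAIN /etc/default/nfs
# svcadm restart svc:/network/nfs/mapid

If that domain doesn't match what the client expects, ids that can't be
mapped tend to show up with the wrong owner on the client.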

Spencer

On Oct 16, 2007, at 2:11 PM, Claus Guttesen wrote:

> Hi.
>
> I have created some zfs-partitions. First I create the
> home/user-partitions. Beneath that I create additional partitions.
> Then I have do a chown -R for that user. These partitions are shared
> using the sharenfs=on. The owner- and group-id is 1009.
>
> These partitions are visible as the user assigned above. But when I
> mount the home/user partition from a FreeBSD-client, only the
> top-partiton has the proper uid- and guid-assignment. The partitons
> beneath are assigned to the root/wheel (uid 0 and gid 0 on FreeBSD).
>
> Am I doing something wrong
>
>> From nfs-client:
>
> ls -l spool
> drwxr-xr-x  181 print  print  181 16 oct 21:00 2007-10-16
> drwxr-xr-x2 rootwheel 2 11 oct 11:07 c8
>
>> From nfs-server:
> ls -l spool
> drwxr-xr-x 185 print print 185 Oct 16 21:10 2007-10-16
> drwxr-xr-x   6 print print   6 Oct 13 17:10 c8
>
> The folder 2007-10-16 is a regular folder below the nfs-mounted
> partition, c8 is a zfs-partition.
>
> -- 
> regards
> Claus
>
> When lenity and cruelty play for a kingdom,
> the gentlest gamester is the soonest winner.
>
> Shakespeare



Re: [zfs-discuss] practicality of zfs send/receive for failover

2007-10-16 Thread Paul B. Henson
On Fri, 12 Oct 2007, Matthew Ahrens wrote:

> You can use delegated administration ("zfs allow someone send pool/fs").
> This is in snv_69.  RBAC is much more coarse-grained, but you could use
> it too.

Out of curiosity, what kind of things are going to be added via patches to
S10u4 vs things that are going to need to wait for u5? I keep finding cool
stuff I want that's not in u4 yet, and I'm not really very patient ;).

> You can do "zfs recv -F", which will discard any changes made since the
> most recent snapshot, in order to perform the receive.  This is in snv_48
> and s10u4.

Excellent, at least that one is in the production release already.
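
(A hedged sketch of the delegated-administration form quoted above, for
snv_69 or later; the user and dataset names are made up:)

# zfs allow backupuser send,snapshot tank/home
# zfs allow tank/home

The second command simply prints the permissions that have been delegated
on that dataset.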

Thanks...


-- 
Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst  |  [EMAIL PROTECTED]
California State Polytechnic University  |  Pomona CA 91768


Re: [zfs-discuss] practicality of zfs send/receive for failover

2007-10-16 Thread Paul B. Henson
On Fri, 12 Oct 2007, Paul B. Henson wrote:

> I've read a number of threads and blog posts discussing zfs send/receive
> and its applicability in such an implementation, but I'm curious if
> anyone has actually done something like that in practice, and if so how
> well it worked.

So I didn't hear from anyone on this thread actually running such an
implementation in production? Could someone maybe comment on a theoretical
level :) whether this would be realistic for multiple terabytes, or if I
should just give up on it?

Thanks...


-- 
Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst  |  [EMAIL PROTECTED]
California State Polytechnic University  |  Pomona CA 91768


Re: [zfs-discuss] practicality of zfs send/receive for failover

2007-10-16 Thread Richard Elling
Paul B. Henson wrote:
> On Fri, 12 Oct 2007, Paul B. Henson wrote:
> 
>> I've read a number of threads and blog posts discussing zfs send/receive
>> and its applicability in such an implementation, but I'm curious if
>> anyone has actually done something like that in practice, and if so how
>> well it worked.
> 
> So I didn't hear from anyone on this thread actually running such an
> implementation in production? Could someone maybe comment on a theoretical
> level :) whether this would be realistic for multiple terabytes, or if I
> should just give up on it?

It should be more reasonable to use ZFS send/recv than a dumb volume
block copy.  It should be on the same order of goodness as rsync-style
copying.  I use send/recv quite often, but my wife doesn't have a TByte
of pictures (yet :-)
  -- richard


Re: [zfs-discuss] practicality of zfs send/receive for failover

2007-10-16 Thread Matthew Ahrens
Richard Elling wrote:
> Paul B. Henson wrote:
>> On Fri, 12 Oct 2007, Paul B. Henson wrote:
>>
>>> I've read a number of threads and blog posts discussing zfs send/receive
>>> and its applicability in such an implementation, but I'm curious if
>>> anyone has actually done something like that in practice, and if so how
>>> well it worked.
>> So I didn't hear from anyone on this thread actually running such an
>> implementation in production? Could someone maybe comment on a theoretical
>> level :) whether this would be realistic for multiple terabytes, or if I
>> should just give up on it?
> 
> It should be more reasonable to use ZFS send/recv than a dumb volume
> block copy.  It should be on the same order of goodness as rsync-style
> copying.  I use send/recv quite often, but my wife doesn't have a TByte
> of pictures (yet :-)

Incremental zfs send/recv is actually orders of magnitude "more goodness" 
than rsync (due to much faster finding of changed files).

I know of customers who are using send|ssh|recv to replicate entire thumpers 
across the country, in production.  I'm sure they'll speak up here if/when 
they find this thread...
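
(For reference, a minimal sketch of such a pipeline; host and dataset names
are made up:)

# zfs snapshot tank/fs@base
# zfs send tank/fs@base | ssh backuphost zfs recv tank/fs

and on each subsequent run only the changed blocks go over the wire:

# zfs snapshot tank/fs@today
# zfs send -i tank/fs@base tank/fs@today | ssh backuphost zfs recv -F tank/fs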

--matt