Re: [zfs-discuss] lots of zil_clean threads

2009-09-22 Thread Nils Goroll
I should add that I have quite a lot of datasets. And maybe I should also add that I'm still running an old zpool version in order to keep the ability to boot snv_98:

    aggis:~$ zpool upgrade
    This system is currently running ZFS pool version 14.
    The following pools are out of date, and can b[...]
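
For scale, the dataset count can be checked quickly (standard zfs(1M) usage; a sketch, run on the system in question):

    # count filesystems and volumes; add -t snapshot to count snapshots
    zfs list -H -o name | wc -l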

Re: [zfs-discuss] zfs bug

2009-09-22 Thread Trevor Pretty
Of course I meant 2009.06 :-)

Trevor Pretty wrote:
BTW, reading your bug, I assumed you meant:

    zfs set mountpoint=/home/pool tank
    ln -s /dev/null /home/pool

I then tried on OpenSolaris 2008.11:

    r...@norton:~# zfs set mountpoint=
    r...@norton:~# zfs set mountpoint=[...]

Re: [zfs-discuss] zfs bug

2009-09-22 Thread Jeremy Kister
On 9/22/2009 11:17 PM, Trevor Pretty wrote:
> zfs set mountpoint=/home/pool tank
> ln -s /dev/null /home/pool

Ahha, I dumbed down the process too much (trying to make it simple to reproduce). The key is in the /Auto/pool snippet that I put in the CR, but switched to /dev/null in the reproduce[...]
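
For reference, the variant this message says actually triggers the problem replaces /dev/null with the automounter-style path from the CR (a sketch; /Auto/pool is taken from the CR, and the ordering follows the quoted repro):

    # /Auto/pool is the symlink target named in the CR
    zfs set mountpoint=/home/pool tank
    ln -s /Auto/pool /home/pool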

Re: [zfs-discuss] zfs bug

2009-09-22 Thread Trevor Pretty
BTW, reading your bug, I assumed you meant:

    zfs set mountpoint=/home/pool tank
    ln -s /dev/null /home/pool

I then tried on OpenSolaris 2008.11:

    r...@norton:~# zfs set mountpoint=
    r...@norton:~# zfs set mountpoint=/home/pool tank
    r...@norton:~# zpool export tank
    r...@norton:~# rm -r /home/p[...]
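
Gathered into one sequence, the visible part of the test (a sketch reconstructed from the quoted commands; the target of the truncated rm is assumed to be /home/pool, and the message continues beyond what the digest shows):

    zfs set mountpoint=/home/pool tank
    zpool export tank
    # assumed target; the path is cut off in the digest
    rm -r /home/pool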

Re: [zfs-discuss] zfs bug

2009-09-22 Thread Trevor Pretty
Jeremy, you sure?

http://bugs.opensolaris.org/view_bug.do%3Bjsessionid=32d28f683e21e4b5c35832c2e707?bug_id=6883885

BTW: I only found this by hunting for one of my bugs (6428437) and changing the URL! I think the searching is broken - but using bugster has always been a black art, even when[...]

[zfs-discuss] zfs bug

2009-09-22 Thread Jeremy Kister
I entered CR 6883885 at bugs.opensolaris.org. Someone closed it - not reproducible. Where do I find more information, like which planet's gravitational properties affect the zfs source code?

-- Jeremy Kister http://jeremy.kister.net./

Re: [zfs-discuss] What does 128-bit mean

2009-09-22 Thread Trevor Pretty
http://blogs.sun.com/bonwick/entry/128_bit_storage_are_you

Trevor Pretty wrote:
> http://en.wikipedia.org/wiki/ZFS

Shu Wu wrote:
> Hi pals, I'm now looking into the ZFS source and have been puzzled about 128-bit. It's announced that ZFS is a 128-bit file system. But what does 128-bit m[...]

Re: [zfs-discuss] What does 128-bit mean

2009-09-22 Thread Trevor Pretty
http://en.wikipedia.org/wiki/ZFS

Shu Wu wrote:
> Hi pals, I'm now looking into the ZFS source and have been puzzled about 128-bit. It's announced that ZFS is a 128-bit file system. But what does 128-bit mean? Does that mean the addressing capability is 2^128? But in the source, 'zp_size' (in 'stru[...]

[zfs-discuss] What does 128-bit mean

2009-09-22 Thread Shu Wu
Hi pals, I'm now looking into the ZFS source and have been puzzled about 128-bit. It's announced that ZFS is a 128-bit file system. But what does 128-bit mean? Does that mean the addressing capability is 2^128? But in the source, 'zp_size' (in 'struct znode_phys'), the file size in bytes, is defined a[...]
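
For the scale the "128-bit" label implies (the links in the replies cover the background), plain bc arithmetic - not ZFS code - shows the theoretical 2^128-byte capacity; zp_size itself is a 64-bit field, so individual file sizes are limited to 2^64 bytes:

    echo '2^128' | bc
    # prints 340282366920938463463374607431768211456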

Re: [zfs-discuss] ZFS file disk usage

2009-09-22 Thread Andrew Deason
On Tue, 22 Sep 2009 13:26:59 -0400 Richard Elling wrote:
> > That seems to differ quite a bit from what I've seen; perhaps I am
> > misunderstanding... is the "+ 1 block" of a different size than the
> > recordsize? With recordsize=1k:
> >
> > $ ls -ls foo
> > 2261 -rw-r--r-- 1 root root[...]
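
The experiment behind those numbers can be repeated like this (a sketch; the dataset and file names are hypothetical, and the allocated-block counts will vary):

    # hypothetical dataset with the recordsize under discussion
    zfs create -o recordsize=1k tank/rstest
    dd if=/dev/urandom of=/tank/rstest/foo bs=1k count=1000
    # first field of ls -ls is allocated space in 512-byte blocks
    ls -ls /tank/rstest/foo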

Re: [zfs-discuss] Persistent errors - do I believe?

2009-09-22 Thread Chris Murray
I've had an interesting time with this over the past few days... After the resilver completed, I had the message "no known data errors" in a zpool status. I guess the title of my post should have been "how permanent are permanent errors?". Now, I don't know whether the action of completing the[...]
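
One way to re-check after the resilver (a sketch; the pool name is hypothetical):

    zpool scrub tank
    # after the scrub completes: lists any permanent errors,
    # or reports "No known data errors"
    zpool status -v tank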

Re: [zfs-discuss] If you have ZFS in production, willing to share some details (with me)?

2009-09-22 Thread Jeremy Kister
On 9/22/2009 1:55 PM, Jeremy Kister wrote:
> (b) 2 of them have 268GB raw: 26 HP 300GB SCA disks with mirroring + 2 hot spares

28 * 300G = 8.2T. Not 268G. "Math class is tough!"

-- Jeremy Kister http://jeremy.kister.net./
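
Spelling out the correction - 28 disks at 300 GB, converted to binary terabytes:

    echo 'scale=1; 28 * 300 / 1024' | bc
    # prints 8.2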

Re: [zfs-discuss] If you have ZFS in production, willing to share some details (with me)?

2009-09-22 Thread Jeremy Kister
On 9/18/2009 1:51 PM, Steffen Weiberle wrote:
> # of systems

6, not including dozens of zfs root.

> amount of storage

(a) 2 of them have 96TB raw: 46 WD SATA 2TB disks in two raidz2 pools + 2 hot spares. Each raidz2 pool is on its own shelf on its own PCIx controller.
(b) 2 of them have[...]

Re: [zfs-discuss] URGENT: very high busy and average service time with ZFS and USP1100

2009-09-22 Thread Richard Elling
Comment below...

On Sep 22, 2009, at 9:57 AM, Jim Mauro wrote:
> Cross-posting to zfs-discuss. This does not need to be on the confidential alias. It's a performance query - there's nothing confidential in here. Other folks post performance queries to zfs-discuss. Forget %b - it's useless.[...]
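
Since the complaint is service time rather than bandwidth, a per-vdev breakdown can also help here (standard zpool(1M) usage; the pool name is hypothetical):

    # per-vdev operations and bandwidth, sampled every 5 seconds
    zpool iostat -v tank 5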

Re: [zfs-discuss] ZFS file disk usage

2009-09-22 Thread Richard Elling
On Sep 22, 2009, at 8:07 AM, Andrew Deason wrote:
> On Mon, 21 Sep 2009 18:20:53 -0400 Richard Elling wrote:
> > On Sep 21, 2009, at 2:43 PM, Andrew Deason wrote:
> > > On Mon, 21 Sep 2009 17:13:26 -0400 Richard Elling wrote:
> > > > You don't know the max overhead for the file before it is allocated. You c[...]
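
The overhead being estimated depends on dataset properties, which can be inspected directly (standard zfs(1M) usage; the dataset name is hypothetical):

    # copies, recordsize, and compression all change on-disk usage
    zfs get recordsize,copies,compression tank/fs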

Re: [zfs-discuss] ZFS Recv slow with high CPU

2009-09-22 Thread Matthew Ahrens
Tristan Ball wrote:
> OK, thanks for that. From reading the RFE, it sounds like having a faster machine on the receive side will be enough to alleviate the problem in the short term?

That's correct. --matt

[zfs-discuss] rpool import when another rpool already mounted ?

2009-09-22 Thread andy
Hi, I've a situation that I can't find any answers to after searching docs etc. I'm testing a DR process of installing Solaris onto a ZFS mirror using rpool. Then I am breaking the rpool mirror, recreating the non-live half as newrpool and restoring my backup to the non-live mirror disk via[...]
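
Importing a second root pool alongside the live one generally needs an altroot so its mountpoints don't collide with the running system, and the pool can be renamed at import time (a sketch; the pool names follow the post, the altroot /a is hypothetical):

    # import the backup's rpool under /a, renaming it to newrpool
    zpool import -f -R /a rpool newrpool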

Re: [zfs-discuss] URGENT: very high busy and average service time with ZFS and USP1100

2009-09-22 Thread Jim Mauro
Cross-posting to zfs-discuss. This does not need to be on the confidential alias. It's a performance query - there's nothing confidential in here. Other folks post performance queries to zfs-discuss.

Forget %b - it's useless. It's not the bandwidth that's hurting you, it's the IOPS. One of t[...]
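
For readers following along, per-device IOPS and service times are what iostat's extended output shows (standard iostat(1M) usage on Solaris):

    # watch r/s + w/s for IOPS and asvc_t (active service time, ms)
    # rather than %b
    iostat -xn 5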

Re: [zfs-discuss] If you have ZFS in production, willing to share some details (with me)?

2009-09-22 Thread Steffen Weiberle
On 09/18/09 14:34, Jeremy Kister wrote:
> On 9/18/2009 1:51 PM, Steffen Weiberle wrote:
> > I am trying to compile some deployment scenarios of ZFS.
> > # of systems
> do zfs root count? or only big pools?

Non-root is more interesting to me. However, if you are sharing the root pool with your data, w[...]

Re: [zfs-discuss] ZFS file disk usage

2009-09-22 Thread Andrew Deason
On Mon, 21 Sep 2009 18:20:53 -0400 Richard Elling wrote:
> On Sep 21, 2009, at 2:43 PM, Andrew Deason wrote:
> > On Mon, 21 Sep 2009 17:13:26 -0400 Richard Elling wrote:
> >> You don't know the max overhead for the file before it is
> >> allocated. You could guess at a max of 3x size[...]

Re: [zfs-discuss] Migrate from iscsitgt to comstar?

2009-09-22 Thread Peter Cudhea
CC'ing to storage-discuss, where this topic also came up recently. By default for most backing stores, COMSTAR will put its disk metadata in the first 64K of the backing store, as you say. So if you take a backing store disk that is in use as an iscsitgt LUN and then run "sbdadm create-lu /pa[...]
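
The command under discussion, for context (a sketch; the zvol path is hypothetical). The point of the thread is that sbd's metadata lands in the first 64K of the backing store, which would overwrite data an existing iscsitgt LUN keeps at offset 0:

    # stamps sbd metadata over the start of the backing store
    sbdadm create-lu /dev/zvol/rdsk/tank/iscsivol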

Re: [zfs-discuss] ZFS Recv slow with high CPU

2009-09-22 Thread Tristan Ball
OK, thanks for that. From reading the RFE, it sounds like having a faster machine on the receive side will be enough to alleviate the problem in the short term? The hardware I'm using at the moment is quite old, and not particularly fast - although this is the first out & out performance lim[...]
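
For context, the kind of replication pipeline under discussion (a sketch; the host, dataset, and snapshot names are hypothetical):

    # incremental send from snap1 to snap2, received on another machine
    zfs send -i tank/fs@snap1 tank/fs@snap2 | ssh backuphost zfs recv -d backup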

Re: [zfs-discuss] lots of zil_clean threads

2009-09-22 Thread Nils Goroll
Hi Neil and all, thank you very much for looking into this.

> So I don't know what's going on. What is the typical call stack for
> those zil_clean() threads?

I'd say they are all blocking on their respective CVs:

    ff0009066c60 fbc2c0300 0 60 ff01d25e1180 PC:[...]
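
One way to pull exactly those stacks from the live kernel (a sketch; ::stacks is a standard mdb dcmd, run as root):

    # show unique kernel thread stacks that include zil_clean
    echo "::stacks -c zil_clean" | mdb -k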