Re: [zfs-discuss] question about ZFS performance for webserving/java

2006-06-02 Thread Roch Bourbonnais - Performance Engineering
You propose ((2-way mirrored) x RAID-Z (3+1)). That gives you 3 data disks' worth of capacity, and you'd have to lose both disks in each of two mirrors (4 disks total) to lose data. For the random read load you describe, I would expect the per-device cache to work nicely; that is, file blocks stored at some given ...
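
ZFS itself cannot nest a mirror on top of a raidz vdev, so a layout like this would presumably put the 2-way mirroring below ZFS (in the array or in SVM) and build the RAID-Z (3+1) from four mirrored LUNs. A minimal sketch, assuming the array exports four 2-way mirrored LUNs under the hypothetical names c2t0d0 through c2t3d0:

  # RAID-Z (3+1) over four array-side 2-way mirrored LUNs
  zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0

This yields 3 data disks' worth of usable space; losing data takes two whole mirrored LUNs failing, i.e. both disks of one mirror plus both disks of a second.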

Re: Re[4]: [zfs-discuss] question about ZFS performance for webserving/java

2006-06-02 Thread Rainer Orth
Robert Milkowski <[EMAIL PROTECTED]> writes: > So it can look like: [...] > c0t2d0s1 SVM mirror, SWAP; s1 size = sizeof(/ + /var + /opt) You can avoid this by swapping to a zvol, though at the moment this requires a fix for CR 6405330. ...
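
A minimal sketch of the zvol swap setup, assuming a pool named tank (the volume name and size here are hypothetical):

  # create a 2 GB zvol and add it as swap space
  zfs create -V 2g tank/swapvol
  swap -a /dev/zvol/dsk/tank/swapvol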

Re: [zfs-discuss] Re: 3510 configuration for ZFS

2006-06-02 Thread Roch Bourbonnais - Performance Engineering
Tao Chen writes: > Hello Robert, > > On 6/1/06, Robert Milkowski <[EMAIL PROTECTED]> wrote: > > Hello Anton, > > > > Thursday, June 1, 2006, 5:27:24 PM, you wrote: > > > > ABR> What about small random writes? Won't those also require reading > > ABR> from all disks in RAID-Z to read the
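
One way to check how RAID-Z spreads a workload across its member disks is to watch per-device activity while the load runs; a sketch, assuming a pool named tank:

  # per-vdev and per-disk I/O, sampled every second
  zpool iostat -v tank 1

If every disk in a raidz group shows a read for each application-level read, the blocks are striped across the whole group.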

Re: [zfs-discuss] question about ZFS performance for webserving/java

2006-06-02 Thread Gavin Maltby
On 06/02/06 10:09, Rainer Orth wrote: Robert Milkowski <[EMAIL PROTECTED]> writes: So it can look like: [...] c0t2d0s1 SVM mirror, SWAP; s1 size = sizeof(/ + /var + /opt) You can avoid this by swapping to a zvol ...

Re: [zfs-discuss] question about ZFS performance for webserving/java

2006-06-02 Thread Rainer Orth
Gavin Maltby writes: > > You can avoid this by swapping to a zvol, though at the moment this > > requires a fix for CR 6405330. Unfortunately, since one cannot yet dump to > > a zvol, one needs a dedicated dump device in this case ;-( Dedicated dump devices are *always* best, so this is no ...
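
Setting up the dedicated dump device is a one-liner; a sketch, assuming a spare slice is available (the device name here is hypothetical):

  # point the system dump device at a dedicated slice
  dumpadm -d /dev/dsk/c0t2d0s1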

Re: [zfs-discuss] question about ZFS performance for webserving/java

2006-06-02 Thread Darren J Moffat
Rainer Orth wrote: > Gavin Maltby writes: >> You can avoid this by swapping to a zvol, though at the moment this >> requires a fix for CR 6405330. Unfortunately, since one cannot yet dump to >> a zvol, one needs a dedicated dump device in this case ;-( > Dedicated dump devices are *always* best, so this ...

Re: [zfs-discuss] question about ZFS performance for webserving/java

2006-06-02 Thread Rainer Orth
Darren J Moffat writes: > >>> You can avoid this by swapping to a zvol, though at the moment this > >>> requires a fix for CR 6405330. Unfortunately, since one cannot yet dump to > >>> a zvol, one needs a dedicated dump device in this case ;-( > >> Dedicated dump devices are *always* best, ...

Re: [zfs-discuss] Re: [osol-discuss] Re: I wish Sun would open-source "QFS"... / was: Re: Re: Distributed File System for Solaris

2006-06-02 Thread Roch
Anton Rang writes: > On May 31, 2006, at 8:56 AM, Roch Bourbonnais - Performance > Engineering wrote: > > > I'm not taking a stance on this, but if I keep a controller > > full of 128K I/Os and assuming they are targeting > > contiguous physical blocks, how different is that ...

[zfs-discuss] [Probably a bug] zfs disk got UNAVAIL state and cannot be repaired.

2006-06-02 Thread axa
Hello All: I have a 16-disk SATA disk array in a JBOD configuration, attached to an LSI FC HBA card. I use 2 raidz groups, each built from 8 disks. zpool status result as follows: === zpool status === NAME STATE READ WRITE CKSUM pool ...

Re: [zfs-discuss] [Probably a bug] zfs disk got UNAVAIL state and cannot be repaired.

2006-06-02 Thread Eric Schrock
On Fri, Jun 02, 2006 at 10:42:08AM -0700, axa wrote: > Hello All: > I have a 16-disk SATA disk array in a JBOD configuration, attached to an > LSI FC HBA card. I use 2 raidz groups, each built from 8 disks. zpool > status result as follows: > > === zpool status === > > NAME ...
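
For reference, the usual recovery path for an UNAVAIL device is to bring it back online, or replace it and let ZFS resilver; a sketch with hypothetical pool and device names:

  # if the disk only disappeared temporarily (cabling, HBA reset):
  zpool online pool c1t5d0

  # if the disk is actually dead, swap in a new one and resilver:
  zpool replace pool c1t5d0 c1t6d0
  zpool status pool    # watch resilver progress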

[zfs-discuss] disk write cache, redux

2006-06-02 Thread Philip Brown
hi folks... I've just been exposed to zfs directly, since I'm trying it out on "a certain 48-drive box with 4 cpus" :-) I read in the archives the recent "hard drive write cache" thread, in which someone at Sun made the claim that zfs takes advantage of the disk write cache, selectively enabling ...

Re: [zfs-discuss] disk write cache, redux

2006-06-02 Thread Bill Moore
On Fri, Jun 02, 2006 at 12:42:53PM -0700, Philip Brown wrote: > hi folks... > I've just been exposed to zfs directly, since I'm trying it out on > "a certain 48-drive box with 4 cpus" :-) > > I read in the archives the recent "hard drive write cache" thread, > in which someone at Sun made the claim ...
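
Whether the write cache actually got enabled on a given drive can be checked from format's expert mode; a sketch (the cache submenu is per the Solaris format(1M) utility; which disk to pick is site-specific):

  # run format -e, select the disk, then navigate:
  #   cache -> write_cache -> display
  format -e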

[zfs-discuss] zfs going out to lunch

2006-06-02 Thread Joe Little
I've been writing via tar to a pool some stuff from backup, around 500GB. It's taken quite a while, as the tar is being read from NFS. My ZFS partition in this case is a RAIDZ 3-disk job using 3 400GB SATA drives (sil3124 card). Every once in a while, a "df" stalls, and during that time my I/Os go flat ...
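
The bursty-then-flat pattern is easy to confirm by sampling pool throughput while the tar runs; a sketch, assuming the pool is named tank:

  # one-second samples; long stretches of near-zero writes between
  # large bursts line up with the "df" stalls described above
  zpool iostat tank 1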

[zfs-discuss] Overview (rollup) of recent activity on zfs-discuss

2006-06-02 Thread Eric Boutilier
For background on what this is, see: http://www.opensolaris.org/jive/message.jspa?messageID=24416#24416 http://www.opensolaris.org/jive/message.jspa?messageID=25200#25200 = zfs-discuss 05/16 - 05/31 = Threads or announcements origin