Re: [zfs-discuss] Re: Re[2]: Re: 3510 HW RAID vs 3510 JBOD ZFS SOFTWARE RAID

2006-08-08 Thread Luke Lonergan
Doug, On 8/8/06 10:15 AM, "Doug Scott" <[EMAIL PROTECTED]> wrote: > I don't think there is much chance of achieving anywhere near 350MB/s. > That is a hell of a lot of IO/s for 6 disks+raid(5/Z)+shared fibre. While you > can always get very good results from a single disk IO, your percentage > gai

Re: [zfs-discuss] Re: ZFS/Thumper experiences

2006-08-08 Thread Luke Lonergan
Jochen, On 8/8/06 10:47 AM, "Jochen M. Kaiser" <[EMAIL PROTECTED]> wrote: > I really appreciate such information, could you please give us some additional > insight regarding your statement, that "[you] tried to drive ZFS to its limit, > [...] > found that the results were less consistent or pre

Re: [zfs-discuss] 3510 HW RAID vs 3510 JBOD ZFS SOFTWARE RAID

2006-08-08 Thread Torrey McMahon
I read through the entire thread, I think, and have some comments. * There are still some "granny smith" to "Macintosh" comparisons going on. Different OS revs, it looks like different server types, and I can't tell about the HBAs, links or the LUNs being tested. * Before you test

Re: [zfs-discuss] 3510 HW RAID vs 3510 JBOD ZFS SOFTWARE RAID

2006-08-08 Thread Torrey McMahon
Robert Milkowski wrote: Hello Richard, Monday, August 7, 2006, 6:54:37 PM, you wrote: RE> Hi Robert, thanks for the data. RE> Please clarify one thing for me. RE> In the case of the HW raid, was there just one LUN? Or was it 12 LUNs? Just one lun which was built on 3510 from 12 luns in raid-1

Re[2]: [zfs-discuss] Re: ZFS RAID10

2006-08-08 Thread Robert Milkowski
Hello Matthew, Tuesday, August 8, 2006, 8:08:39 PM, you wrote: MA> On Tue, Aug 08, 2006 at 10:42:41AM -0700, Robert Milkowski wrote: >> filebench in varmail by default creates 16 threads - I confirm it >> with prstat, 16 threads are created and running. MA> Ah, OK. Looking at these results, i

Re: [zfs-discuss] Re: Lots of seeks?

2006-08-08 Thread Spencer Shepler
On Tue, Anton B. Rang wrote: > So while I'm feeling optimistic :-) we really ought to be able to do this in > two I/O operations. If we have, say, 500K of data to write (including all of > the metadata), we should be able to allocate a contiguous 500K block on disk > and write that with a single

[zfs-discuss] Re: Lots of seeks?

2006-08-08 Thread Anton B. Rang
So while I'm feeling optimistic :-) we really ought to be able to do this in two I/O operations. If we have, say, 500K of data to write (including all of the metadata), we should be able to allocate a contiguous 500K block on disk and write that with a single operation. Then we update the überbl

Re: [zfs-discuss] SPEC SFS97 benchmark of ZFS,UFS,VxFS

2006-08-08 Thread eric kustarz
Leon Koll wrote: On 8/8/06, eric kustarz <[EMAIL PROTECTED]> wrote: Leon Koll wrote: > I performed a SPEC SFS97 benchmark on Solaris 10u2/Sparc with 4 64GB > LUNs, connected via FC SAN. > The filesystems that were created on LUNS: UFS,VxFS,ZFS. > Unfortunately the ZFS test couldn't complete b

Re: [zfs-discuss] Re: ZFS RAID10

2006-08-08 Thread Matthew Ahrens
On Tue, Aug 08, 2006 at 10:42:41AM -0700, Robert Milkowski wrote: > filebench in varmail by default creates 16 threads - I confirm it > with prstat, 16 threads are created and running. Ah, OK. Looking at these results, it doesn't seem to be CPU bound, and the disks are not fully utilized either

[zfs-discuss] Re: ZFS/Thumper experiences

2006-08-08 Thread Jochen M. Kaiser
Hello, I really appreciate such information, could you please give us some additional insight regarding your statement, that "[you] tried to drive ZFS to its limit, [...] found that the results were less consistent or predictable". Especially when taking a closer look at the upcoming rdbms+thum

[zfs-discuss] Re: ZFS RAID10

2006-08-08 Thread Robert Milkowski
bash-3.00# zpool status zfs_raid10_32disks pool: zfs_raid10_32disks state: ONLINE scrub: none requested config: NAME STATE READ WRITE CKSUM zfs_raid10_32disks ONLINE 0 0 0 mirror ONLINE 0 0 0 c3t16d0 ONLINE
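
A 32-disk RAID10 pool of this shape is built from two-way mirror vdevs striped together. A minimal sketch of the corresponding create command, using placeholder device names (only c3t16d0 appears in the output above):

  # Striped mirrors ("RAID10"); each "mirror" keyword starts a new two-way vdev.
  zpool create zfs_raid10_32disks \
      mirror c3t16d0 c3t17d0 \
      mirror c3t18d0 c3t19d0
      # ...repeat for the remaining mirror pairs, 16 in total for 32 disks

  # Verify layout and health.
  zpool status zfs_raid10_32disks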

[zfs-discuss] Re: ZFS RAID10

2006-08-08 Thread Robert Milkowski
filebench in varmail by default creates 16 threads - I confirm it with prstat, 16 threads are created and running. bash-3.00# lockstat -kgIW sleep 60|less Profiling interrupt: 23308 events in 60.059 seconds (388 events/sec) Count genr cuml rcnt nsec Hottest CPU+PIL Caller -

Re: [zfs-discuss] Re: ZFS RAID10

2006-08-08 Thread Robert Milkowski
Hello Doug, Tuesday, August 8, 2006, 7:28:07 PM, you wrote: DS> Looks like somewhere between the CPU and your disks you have a limitation of <9500 ops/sec. DS> How did you connect 32 disks to your v440? Some 3510 JBODs connected directly over FC. -- Best regards, Robert

Re: [zfs-discuss] ZFS RAID10

2006-08-08 Thread Matthew Ahrens
On Tue, Aug 08, 2006 at 09:54:16AM -0700, Robert Milkowski wrote: > Hi. > > snv_44, v440 > > filebench/varmail results for ZFS RAID10 with 6 disks and 32 disks. > What is surprising is that the results for both cases are almost the same! > > > > 6 disks: > >IO Summary: 566997 ops

Re[2]: [zfs-discuss] Re: 3510 HW RAID vs 3510 JBOD ZFS SOFTWARE RAID

2006-08-08 Thread Robert Milkowski
Hello Matthew, Tuesday, August 8, 2006, 7:25:17 PM, you wrote: MA> On Tue, Aug 08, 2006 at 06:11:09PM +0200, Robert Milkowski wrote: >> filebench/singlestreamread v440 >> >> 1. UFS, noatime, HW RAID5 6 disks, S10U2 >> 70MB/s >> >> 2. ZFS, atime=off, HW RAID5 6 disks, S10U2 (the same lun as

[zfs-discuss] Re: ZFS RAID10

2006-08-08 Thread Doug Scott
Looks like somewhere between the CPU and your disks you have a limitation of <9500 ops/sec. How did you connect 32 disks to your v440? Doug > Hi. > > snv_44, v440 > filebench/varmail results for ZFS RAID10 with 6 disks > and 32 disks. > What is surprising is that the results for both cases > a

Re: [zfs-discuss] Re: 3510 HW RAID vs 3510 JBOD ZFS SOFTWARE RAID

2006-08-08 Thread Matthew Ahrens
On Tue, Aug 08, 2006 at 06:11:09PM +0200, Robert Milkowski wrote: > filebench/singlestreamread v440 > > 1. UFS, noatime, HW RAID5 6 disks, S10U2 > 70MB/s > > 2. ZFS, atime=off, HW RAID5 6 disks, S10U2 (the same lun as in #1) > 87MB/s > > 3. ZFS, atime=off, SW RAID-Z 6 disks, S10U2 >

[zfs-discuss] Lots of seeks?

2006-08-08 Thread Anton B. Rang
I moved my main workspaces over to ZFS a while ago and noticed that my disk got really noisy (yes, one of those subjective measurements). It sounded like the head was being bounced around a lot at the end of each transaction group. Today I grabbed the iosnoop dtrace script (from
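
A quick way to see the seek pattern behind that noise is the DTrace io provider that iosnoop wraps; a minimal sketch (not the exact script from the post):

  # Print process, device, starting block and size for every physical I/O;
  # large jumps in the block column between consecutive writes mean head seeks.
  dtrace -qn 'io:::start {
      printf("%s %s %d %d\n", execname, args[1]->dev_statname,
          args[0]->b_blkno, args[0]->b_bcount); }'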

[zfs-discuss] Re: Re[2]: Re: 3510 HW RAID vs 3510 JBOD ZFS SOFTWARE RAID

2006-08-08 Thread Doug Scott
> Robert, > > On 8/8/06 9:11 AM, "Robert Milkowski" > <[EMAIL PROTECTED]> wrote: > > > 1. UFS, noatime, HW RAID5 6 disks, S10U2 > > 70MB/s > > 2. ZFS, atime=off, HW RAID5 6 disks, S10U2 (the > same lun as in #1) > > 87MB/s > > 3. ZFS, atime=off, SW RAID-Z 6 disks, S10U2 > > 130MB/s

[zfs-discuss] ZFS RAID10

2006-08-08 Thread Robert Milkowski
Hi. snv_44, v440 filebench/varmail results for ZFS RAID10 with 6 disks and 32 disks. What is surprising is that the results for both cases are almost the same! 6 disks: IO Summary: 566997 ops 9373.6 ops/s, (1442/1442 r/w) 45.7mb/s, 299us cpu/op, 5.1ms latency IO Summary:
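
For reference, the varmail numbers above come from an interactive filebench session along these lines (the target directory and run time below are illustrative, not the original settings):

  filebench
  filebench> load varmail
  filebench> set $dir=/zfs_raid10_32disks
  filebench> run 60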

Re: Re[4]: [zfs-discuss] Re: 3510 HW RAID vs 3510 JBOD ZFS SOFTWARE RAID

2006-08-08 Thread Luke Lonergan
Robert, > LL> Most of my ZFS experiments have been with RAID10, but there were some > LL> massive improvements to seq I/O with the fixes I mentioned - I'd expect > that > LL> this shows that they aren't in snv44. > > So where did you get those fixes? From the fine people who implemented them!

Re: [zfs-discuss] Re: 3510 HW RAID vs 3510 JBOD ZFS SOFTWARE RAID

2006-08-08 Thread Mark Maybee
Luke Lonergan wrote: Robert, On 8/8/06 9:11 AM, "Robert Milkowski" <[EMAIL PROTECTED]> wrote: 1. UFS, noatime, HW RAID5 6 disks, S10U2 70MB/s 2. ZFS, atime=off, HW RAID5 6 disks, S10U2 (the same lun as in #1) 87MB/s 3. ZFS, atime=off, SW RAID-Z 6 disks, S10U2 130MB/s 4. ZFS, atime

Re: [zfs-discuss] DTrace IO provider and oracle

2006-08-08 Thread Tao Chen
On 8/8/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote: Hello, Solaris 10 GA + latest recommended patches: while running dtrace: bash-3.00# dtrace -n 'io:::start [EMAIL PROTECTED], args[2]->fi_pathname] = count();}' ... oracle

Re[4]: [zfs-discuss] Re: 3510 HW RAID vs 3510 JBOD ZFS SOFTWARE RAID

2006-08-08 Thread Robert Milkowski
Hello Luke, Tuesday, August 8, 2006, 6:18:39 PM, you wrote: LL> Robert, LL> On 8/8/06 9:11 AM, "Robert Milkowski" <[EMAIL PROTECTED]> wrote: >> 1. UFS, noatime, HW RAID5 6 disks, S10U2 >> 70MB/s >> 2. ZFS, atime=off, HW RAID5 6 disks, S10U2 (the same lun as in #1) >> 87MB/s >> 3. ZFS,

Re: Re[2]: [zfs-discuss] Re: 3510 HW RAID vs 3510 JBOD ZFS SOFTWARE RAID

2006-08-08 Thread Luke Lonergan
Robert, On 8/8/06 9:11 AM, "Robert Milkowski" <[EMAIL PROTECTED]> wrote: > 1. UFS, noatime, HW RAID5 6 disks, S10U2 > 70MB/s > 2. ZFS, atime=off, HW RAID5 6 disks, S10U2 (the same lun as in #1) > 87MB/s > 3. ZFS, atime=off, SW RAID-Z 6 disks, S10U2 > 130MB/s > 4. ZFS, atime=off, SW

Re[2]: [zfs-discuss] Re: 3510 HW RAID vs 3510 JBOD ZFS SOFTWARE RAID

2006-08-08 Thread Robert Milkowski
Hello Luke, Tuesday, August 8, 2006, 4:48:38 PM, you wrote: LL> Does snv44 have the ZFS fixes to the I/O scheduler, the ARC and the prefetch logic? LL> These are great results for random I/O, I wonder how the sequential I/O looks? LL> Of course you'll not get great results for sequential I/O

Re: [zfs-discuss] Apple Time Machine

2006-08-08 Thread Frank Cusack
On August 8, 2006 3:04:09 PM +0930 Darren J Moffat <[EMAIL PROTECTED]> wrote: Adam Leventhal wrote: When a file is modified, the kernel fires off an event which a user-land daemon listens for. Every so often, the user-land daemon does something like a snapshot of the affected portions of the fil

Re: [zfs-discuss] SPEC SFS97 benchmark of ZFS,UFS,VxFS

2006-08-08 Thread Leon Koll
On 8/8/06, eric kustarz <[EMAIL PROTECTED]> wrote: Leon Koll wrote: > I performed a SPEC SFS97 benchmark on Solaris 10u2/Sparc with 4 64GB > LUNs, connected via FC SAN. > The filesystems that were created on LUNS: UFS,VxFS,ZFS. > Unfortunately the ZFS test couldn't complete because the box was h

Re: [zfs-discuss] Re: ZFS + /var/log + Single-User

2006-08-08 Thread Robert Milkowski
Hello Pierre, Tuesday, August 8, 2006, 4:51:20 PM, you wrote: PK> Thanks for your answer Eric! PK> I don't see any problem mounting a filesystem under 'legacy' PK> options as long as I can have the freedom of ZFS features by being PK> able to add/remove/play around with disks really! PK> I teste

[zfs-discuss] Re: ZFS + /var/log + Single-User

2006-08-08 Thread Pierre Klovsjo
Thanks for your answer Eric! I don't see any problem mounting a filesystem under 'legacy' options as long as I can have the freedom of ZFS features by being able to add/remove/play around with disks really! I tested the 'zfs mount -a' and of course my /var/log/test became visible and my
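
A legacy-mounted dataset is controlled from /etc/vfstab instead of its ZFS mountpoint property, so it can be brought up early in single-user; a minimal sketch with a hypothetical dataset name:

  # Hand mount control over to vfstab:
  zfs set mountpoint=legacy tank/varlog

  # /etc/vfstab entry (the fsck device and pass are '-' for ZFS):
  tank/varlog  -  /var/log  zfs  -  yes  -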

Re[2]: [zfs-discuss] zil_disable

2006-08-08 Thread Robert Milkowski
Hello Neil, Tuesday, August 8, 2006, 3:54:31 PM, you wrote: NP> Robert Milkowski wrote: >> Hello Neil, >> >> Monday, August 7, 2006, 6:40:01 PM, you wrote: >> >> NP> Not quite, zil_disable is inspected on file system mounts. >> >> I guess you right that umount/mount will suffice - I just hadn'

RE: [zfs-discuss] Re: 3510 HW RAID vs 3510 JBOD ZFS SOFTWARE RAID

2006-08-08 Thread Luke Lonergan
Does snv44 have the ZFS fixes to the I/O scheduler, the ARC and the prefetch logic? These are great results for random I/O, I wonder how the sequential I/O looks? Of course you'll not get great results for sequential I/O on the 3510 :-) - Luke Sent from my GoodLink synchronized handheld (www.g

Re: [zfs-discuss] DTrace IO provider and oracle

2006-08-08 Thread Robert Milkowski
Hello przemolicc, Tuesday, August 8, 2006, 3:54:26 PM, you wrote: ppf> Hello, ppf> Solaris 10 GA + latest recommended patches: ppf> while running dtrace: ppf> bash-3.00# dtrace -n 'io:::start [EMAIL PROTECTED], ppf> args[2]->fi_pathname] = count();}' ppf> ... ppf> vim

[zfs-discuss] Re: 3510 HW RAID vs 3510 JBOD ZFS SOFTWARE RAID

2006-08-08 Thread Robert Milkowski
Hi. This time some RAID5/RAID-Z benchmarks. This time I connected 3510 head unit with one link to the same server as 3510 JBODs are connected (using second link). snv_44 is used, server is v440. I also tried changing max pending IO requests for HW raid5 lun and checked with DTrace that larger
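
The "max pending IO requests" change probably refers to the FC disk driver's max_throttle tunable; a sketch of the usual ways to set it, assuming the ssd driver and an illustrative value (the exact driver and number depend on the HBA stack and are not given in the post):

  # Persistent, takes effect at the next boot; add to /etc/system:
  set ssd:ssd_max_throttle = 64

  # Or change the live kernel with mdb (0t marks a decimal value):
  echo "ssd_max_throttle/W0t64" | mdb -kw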

Re: [zfs-discuss] zil_disable

2006-08-08 Thread Neil Perrin
Robert Milkowski wrote: Hello Eric, Monday, August 7, 2006, 6:29:45 PM, you wrote: ES> Robert - ES> This isn't surprising (either the switch or the results). Our long term ES> fix for tweaking this knob is: ES> 6280630 zil synchronicity ES> Which would add 'zfs set sync' as a per-dataset op

Re: [zfs-discuss] zil_disable

2006-08-08 Thread Neil Perrin
Robert Milkowski wrote: Hello Neil, Monday, August 7, 2006, 6:40:01 PM, you wrote: NP> Not quite, zil_disable is inspected on file system mounts. I guess you're right that umount/mount will suffice - I just hadn't time to check it and export/import worked. Anyway, is there a way for file systems

[zfs-discuss] DTrace IO provider and oracle

2006-08-08 Thread przemolicc
Hello, Solaris 10 GA + latest recommended patches: while running dtrace: bash-3.00# dtrace -n 'io:::start [EMAIL PROTECTED], args[2]->fi_pathname] = count();}' ... vim /zones/obsdb3/root/opt/sfw/bin/vim 296 tnslsnr
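
The aggregation body above was mangled by the archive's address obfuscation; a typical form of such an io-provider one-liner (an assumption, not a recovery of the original text) is:

  # Count physical I/Os by issuing process and target file path:
  dtrace -n 'io:::start { @[execname, args[2]->fi_pathname] = count(); }'

which matches the per-process, per-path counts shown in the output.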

Re: [zfs-discuss] Querying ZFS version?

2006-08-08 Thread Luke Scharf
Darren Reed wrote: On Solaris, pkginfo -l SUNWzfsr would give you a package version for that part of ZFS, and "modinfo | grep zfs" will tell you something about the kernel module rev. No such luck. Modinfo doesn't show the ZFS module as loaded; that's probably because I'm not running anythi

Re: [zfs-discuss] Querying ZFS version?

2006-08-08 Thread Luke Scharf
George Wilson wrote: Luke, You can run 'zpool upgrade' to see what on-disk version you are capable of running. If you have the latest features then you should be running version 3: hadji-2# zpool upgrade This system is currently running ZFS version 3. Unfortunately this won

Re: [zfs-discuss] Querying ZFS version?

2006-08-08 Thread George Wilson
Luke, You can run 'zpool upgrade' to see what on-disk version you are capable of running. If you have the latest features then you should be running version 3: hadji-2# zpool upgrade This system is currently running ZFS version 3. Unfortunately this won't tell you if you are running the late
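
The three checks discussed in this thread, side by side:

  # On-disk pool format supported by the running bits:
  zpool upgrade

  # Revision of the ZFS user-land package:
  pkginfo -l SUNWzfsr

  # Kernel module revision (only shown once the zfs module has loaded):
  modinfo | grep zfs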

Re: [zfs-discuss] Querying ZFS version?

2006-08-08 Thread Darren Reed
Luke Scharf wrote: Although regular Solaris is good for what I'm doing at work, I prefer apt-get or yum for package management for a desktop. So, I've been playing with Nexenta / GnuSolaris -- which appears to be the open-sourced Solaris kernel and low-level system utilities with Debian pack

[zfs-discuss] Querying ZFS version?

2006-08-08 Thread Luke Scharf
Although regular Solaris is good for what I'm doing at work, I prefer apt-get or yum for package management for a desktop. So, I've been playing with Nexenta / GnuSolaris -- which appears to be the open-sourced Solaris kernel and low-level system utilities with Debian package management -- and

Re[2]: [zfs-discuss] ZFS/Thumper experiences

2006-08-08 Thread Robert Milkowski
Hello David, Tuesday, August 8, 2006, 3:39:42 AM, you wrote: DJO> Thanks, interesting read. It'll be nice to see the actual DJO> results if Sun ever publishes them. You can bet I'll post some results hopefully soon :) -- Best regards, Robert mailto:[EMAIL PROTECTED

Re[2]: [zfs-discuss] 3510 HW RAID vs 3510 JBOD ZFS SOFTWARE RAID

2006-08-08 Thread Robert Milkowski
Hello Richard, Monday, August 7, 2006, 6:54:37 PM, you wrote: RE> Hi Robert, thanks for the data. RE> Please clarify one thing for me. RE> In the case of the HW raid, was there just one LUN? Or was it 12 LUNs? Just one lun which was built on 3510 from 12 luns in raid-1(0). -- Best regards,

Re[2]: [zfs-discuss] zil_disable

2006-08-08 Thread Robert Milkowski
Hello Neil, Monday, August 7, 2006, 6:40:01 PM, you wrote: NP> Not quite, zil_disable is inspected on file system mounts. I guess you're right that umount/mount will suffice - I just hadn't time to check it and export/import worked. Anyway, is there a way for file systems to make it active without

Re[2]: [zfs-discuss] zil_disable

2006-08-08 Thread Robert Milkowski
Hello Eric, Monday, August 7, 2006, 6:29:45 PM, you wrote: ES> Robert - ES> This isn't surprising (either the switch or the results). Our long term ES> fix for tweaking this knob is: ES> 6280630 zil synchronicity ES> Which would add 'zfs set sync' as a per-dataset option. A cut from the ES>
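
Until 6280630 integrates, the knob is the global zil_disable tunable, which (as noted elsewhere in the thread) is only consulted when a file system is mounted; a minimal sketch with a hypothetical dataset name:

  # Persistent; add to /etc/system and reboot:
  set zfs:zil_disable = 1

  # Or flip it on the live kernel and remount so the new value is picked up:
  echo "zil_disable/W0t1" | mdb -kw
  zfs umount tank/fs && zfs mount tank/fs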

Re: [zfs-discuss] Apple Time Machine

2006-08-08 Thread Tim Foster
Bryan Cantrill wrote: So in short (and brace yourself, because I know it will be a shock): mentions by executives in keynotes don't always accurately represent a technology. DynFS, anyone? ;) I'm shocked and stunned, and not a little amazed! I'll bet the OpenSolaris PPC guys are thrilled at