> On Mon, 17 Jul 2006, Roch wrote:
>
> > Sorry to plug my own blog but have you had a look at these ?
> >
> >   http://blogs.sun.com/roller/page/roch?entry=when_to_and_not_to (raidz)
> >   http://blogs.sun.com/roller/page/roch?entry=the_dynamics_of_zfs
> >
> > Also, my thinking is that raid-z is probably more friendly
> > when the config contains (power-of-2 + 1) disks (or + 2 for
> > raid-z2).

Yes I did, and please, plug away!! These are awesome blog entries, and I've read both of them several times. You rule! Really. I wish I could understand a bit more of your second one; it's a bit over my head, I'm afraid.

I understand that 8 disks is not optimal for a raidz set, especially for random I/O, and your blog entry is the reason for my comment to that effect near the bottom of my first post.
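(As I understand the reasoning, with the default 128K recordsize a 5-disk raidz splits each record across 4 data disks at a clean 32K apiece, while my 8 disks leave 7 data disks and 128K / 7 ≈ 18.3K each, which doesn't line up nearly as neatly. That's just my reading of the power-of-2 + 1 rule, so correct me if I've mangled it.)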

My latest RAID 50 results were much healthier, but I don't know that I'm ready to sacrifice 300GB of storage for that slight improvement - especially as ZFS can't grow the individual raidz stripes (yet...).
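For anyone following along, that "raid 50" layout was simply two raidz vdevs striped in one pool - roughly the commands below, with made-up disk names since the exact ones don't matter. The catch is that while a whole new raidz vdev can be added later with zpool add, a single disk can't be added to an existing raidz vdev, which is what I meant about not being able to grow the stripes.

  # two 4-disk raidz vdevs striped together in one pool (disk names are made up)
  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
                    raidz c1t4d0 c1t5d0 c1t6d0 c1t7d0

  # the pool can be widened later by adding another whole raidz vdev
  zpool add tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0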

>
> I think that 5 disks for a raidz is the sweet spot IMHO.  But ... YMMV
> etc. etc.
>
> FWIW: here's a datapoint from a dirty raidz system with 8Gb of RAM &
> 5 * 300Gb SATA disks:
>
> Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
>                     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> zfs0            16G 88937  99 195973  47 95536  29 75279  95 228022  27 433.9   1
>                     ------Sequential Create------ --------Random Create--------
>                     -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>               files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>                  16 31812  99 +++++ +++ +++++ +++ 28761  99 +++++ +++ +++++ +++
> zfs0,16G,88937,99,195973,47,95536,29,75279,95,228022,27,433.9,1,16,31812,99,+++++,+++,+++++,+++,28761,99,+++++,+++,+++++,+++

Here is my version with 5 disks in a single raidz:

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
5 disks 16384 62466 72.5 133768 28.0 97698 21.7 66504 88.1 241481 20.7 118.2 1.4
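For the record, that's output from the original Bonnie rather than bonnie++, invoked roughly like this (the directory and label here are from memory, so treat them as approximate):

  bonnie -d /tank -s 16384 -m "5 disks"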

Ouch, yours is much better! Can you tell me more about your setup?

> I'm *very* pleased with the current release of ZFS.  That being said,
> ZFS can be frustrating at times.  Occasionally it'll issue in excess of
> 1k I/O ops a second (IOPS) and you'll say "holy snit, look at..." - and
> then there are times you wonder why it won't issue more than ~250 IOPS.
> But, for a Rev 1 filesystem, with the technical complexity of ZFS, this
> level of performance is excellent IMHO and I expect that all kinds of
> improvements will continue to be made on the code over time.

I don't really have a point of comparison to know how well my hardware should be performing in the real world, just a gut feeling that it should be doing better, and some rather odd scaling issues. Please don't take this as ZFS bashing. I still can't stop telling everyone I know about how I can create a 2TB RAID in 3 seconds - I think ZFS is wicked cool!
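And for anyone who hasn't seen that party trick yet, it really is a single command - an 8-disk raidz like my original pool, with made-up device names here:

  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0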

This thread is twofold: 1) I'm hoping to learn more about ZFS & Solaris performance tuning by digging in and investigating. 2) I have some notion of hopefully being helpful by providing developers with some real-world data that might help in improving the code. I'm more than happy to do any testing that anyone can throw at me. I've already had an email from one person asking me to run their dtrace script while benchmarking and email back the results. This is great. I can't code, but if I can help give back in any way here - hurrah!
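I won't reproduce that person's dtrace script here, but even a trivial one-liner like the following (my own rough tool, nothing official) makes it easy to watch per-device IOPS while a benchmark runs - it prints a count of I/O operations per device each second and then resets the counters:

  dtrace -n 'io:::start { @[args[1]->dev_statname] = count(); }
             tick-1sec { printa(@); trunc(@); }'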

> Jonathan - I expect the answer to your performance expectations is that
> ZFS is-what-it-is at the moment.

Along those lines, I'll upgrade to the latest Nevada build as soon as my connection finishes downloading it. 5 CDs is very non-trivial down in this part of the world, sadly.


> Regards,
>
> Al Hopper  Logical Approach Inc, Plano, TX.

Thanks for the reply, Al.
Jonathan Wheeler
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
