On 2/1/07, Marion Hakanson <[EMAIL PROTECTED]> wrote:
> There's also the potential of too much seeking going on for the raidz pool,
> since there are 9 LUNs on top of 7 physical disk drives (though how the
> Hitachi divides/stripes those LUNs is not clear to me).

Marion,

That is the part of your setup that puzzled me.  You took the same 7-disk
raid5 set and split it into 9 LUNs.  The Hitachi likely divides the
"virtual disk" into 9 contiguous partitions, so each LUN maps back to a
different region of the 7 disks.  I speculate that ZFS thinks it is
talking to 9 different disks and spreads its writes out accordingly.
What ZFS believes are sequential writes become widely spaced writes across
the entire disk group, which sends your seek times through the roof.
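
Roughly the mapping I have in mind (a guess at how the array slices things
up; the block ranges are made up purely for illustration):

  7-disk RAID-5 group, carved into 9 contiguous slices:

    LUN 0  ->  blocks      0 .. N-1    (a region spanning all 7 disks)
    LUN 1  ->  blocks      N .. 2N-1
    ...
    LUN 8  ->  blocks     8N .. 9N-1

Every one of those slices spans the same 7 spindles, so a stripe across
the 9 "disks" turns into long seeks on each spindle.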

I'm interested in how this looks from the Hitachi end.  If you can, could
you repeat the test with the Hitachi presenting all 7 disks directly to
ZFS as individual LUNs?
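
Something along these lines, assuming the pool can be destroyed for the
test (the pool and device names below are placeholders; substitute
whatever the 7 pass-through LUNs show up as on your host):

  # zpool destroy tank
  # zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0
  # zpool status tank

That way each raidz member sits on its own spindle, and ZFS's notion of
"different disks" matches the physical reality.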

> One thing I noticed which puzzles me is that in both configurations, though
> more so in the divided-up raidz pool, there were long periods of time where
> the LUNs showed up in "iostat -xn" output as 100% busy but with no I/Os
> happening at all.  No paging, CPU 100% idle, no less than 2GB of free RAM,
> for as long as 20-30 seconds.  Sure puts a dent in the throughput.

Interesting... so you are saying that %b sits at 100% while w/s and r/s
are both 0?
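
For reference, what I would want to see a few samples of while one of
those stalls is in progress (plain Solaris iostat; the 5-second interval
is just an example):

  # iostat -xn 5

Since r/s and w/s count completed operations, 100% busy with both at zero
but a non-zero actv would suggest commands sitting outstanding in the
array, rather than the host simply not issuing any.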


--
Just me,
Wire ...
