Bob Friesenhahn wrote:
> On Fri, 24 Jul 2009, Tristan Ball wrote:
>> I've used 8K IO sizes for all the stage one tests - I know I might get
>> it to go faster with a larger size, but I like to know how well systems
>> will do when I treat them badly!
>>
>> The Stage_1_Ops_thru_run is interesting. 2000+ ops/sec on random writes,
>> 5000 on reads.
>
> This seems like rather low random write performance. My 12-drive
> array of rotating rust obtains 3708.89 ops/sec. In order to be
> effective, it seems that a synchronous write log should perform
> considerably better than the backing store.
That really depends on what you're trying to achieve. Even if this
single drive only shows performance equivalent to a twelve-drive
array (and I suspect your 3700 ops/sec would slow down over a bigger
data set, as seeks make more of an impact), it still means that if the
SSD is used as a ZIL, those sync writes don't have to be written to the
spinning disks immediately. That gives the scheduler a better chance to
order the I/Os, providing better overall latency for the requests that
do go to disk.
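
For anyone wanting to try the same thing, attaching a separate log
device is a single zpool operation; the pool and device names below are
just examples, not my actual layout:

    zpool add tank log c2t0d0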
And while I didn't make it clear, I actually intend to use the 128G
drive as an L2ARC. While its effectiveness will obviously depend on the
access patterns, the cost of adding the drive to the array is basically
trivial, and it significantly increases the total ops/sec the array is
capable of during those times when the access patterns allow for it.
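
Adding it as a cache device is equally trivial (again, pool and device
names are only placeholders):

    zpool add tank cache c2t1d0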
For my use, it was a case of "might as well". :-)
Tristan.