> ZIL OPTIONS: Obviously a DDRdrive is the ideal (36k 4k random 
> IOPS***) but for the same budget I can get 2x Vertex 2 EX 50GB 
> drives and put each behind its own P410 512MB BBWC controller.

The Vertex 2 EX goes for approximately $900 each online, while the 
P410/512 BBWC is listed at HP for $449 each.  Cost-wise, you should 
contact us for a quote, as we are price competitive with even a single 
SSD/HBA combination, especially as one obtains 4GB of ZIL accelerator 
capacity instead of 512MB.

> Assuming the SSDs can do 6300 4k random IOPS*** and that the 
> controller cache confirms those writes in the same latency as the 

For 4KB random writes, you need to look closely at slides 47/48 of the 
referenced presentation (http://www.ddrdrive.com/zil_accelerator).

The 6443 IOPS figure is obtained after testing for *only* 2 hours post 
unpackaging or secure erase.  The slope of both curves gives a hint: the 
Vertex 2 EX does not level off and will continue to decrease.  I am 
working on a new presentation focusing on this very fact, random write 
IOPS performance over time (the life of the device).  Suffice it to say, 
6443 IOPS is *not* worst-case performance for random writes on the 
Vertex 2 EX.
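
To illustrate the kind of measurement the new presentation is after, here 
is a minimal sketch (not the benchmark behind the slides) that samples 4KB 
random write IOPS per interval so the downward slope over time is visible, 
rather than one aggregate number.  The device path, region size, and 
durations are placeholders, and unlike the QD=32 runs on the slides it 
issues a single synchronous write at a time.

import os, random, time

PATH = "/dev/rdsk/c0t1d0p0"        # placeholder raw device; writes are destructive
BLOCK = 4096                       # 4KB random writes
SPAN = 8 * 1024 * 1024 * 1024      # 8 GiB region to randomize over
INTERVAL = 60                      # seconds per sample
SAMPLES = 120                      # ~2 hours total

buf = os.urandom(BLOCK)
fd = os.open(PATH, os.O_WRONLY | os.O_DSYNC)   # synchronous writes, QD=1
try:
    for sample in range(SAMPLES):
        ios, start = 0, time.monotonic()
        while time.monotonic() - start < INTERVAL:
            # pick a random 4KB-aligned offset within the region and write it
            os.pwrite(fd, buf, random.randrange(SPAN // BLOCK) * BLOCK)
            ios += 1
        print(f"minute {sample + 1}: {ios / INTERVAL:.0f} IOPS")
finally:
    os.close(fd)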

> DDRdrive (both PCIe attached RAM?****) then we should have 
> DDRdrive type latency up to 6300 sustained IOPS.

All tests used a QD (Queue Depth) of 32, which hides the device latency 
of a single IO.  This matters, as real-life workloads can be bound by 
even a single outstanding IO.  Let's trace the latency path to determine 
which device has the advantage.  For the SSD/HBA combination, an IO has 
to run the gauntlet through two controllers (HBA and SSD) and propagate 
over a SATA cable.  The DDRdrive X1 has a single unified controller and 
no extraneous SATA cable; see slides 15-17.
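
To make the QD point concrete, here is a back-of-the-envelope sketch with 
made-up numbers (not measurements of either device): when a device can keep 
all 32 requests in flight, throughput is roughly queue depth divided by 
per-IO latency, so a high QD=32 IOPS number says little about what a single 
outstanding IO will see.

def iops(queue_depth, per_io_latency_s):
    # idealized model: the device pipelines queue_depth requests perfectly
    return queue_depth / per_io_latency_s

# Hypothetical device A: 5 ms per IO.
print(iops(32, 0.005))      # 6400 IOPS at QD=32
print(iops(1, 0.005))       # only 200 IOPS with a single outstanding IO

# Hypothetical device B: 156 us per IO.
print(iops(32, 0.000156))   # ~205000 IOPS at QD=32
print(iops(1, 0.000156))    # ~6400 IOPS even at QD=1

Both hypothetical devices can post thousands of IOPS at QD=32, but only the 
low-latency one holds up when the workload is bound by a single outstanding IO.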

Best regards,

Christopher George
Founder/CTO
www.ddrdrive.com