On 07/06/2011 22:57, LaoTsao wrote:
You have an unbalanced setup:
FC at 4 Gbps vs a 10 Gbps NIC.
It's actually 2x 4Gbps (using MPxIO) vs 1x 10Gbps.
After 8b/10b encoding it is even worse, but this does not impact your benchmark
yet.
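(Back-of-envelope, assuming 8b/10b on 4GFC at its 4.25 Gbaud line rate, and
64b/66b on 10GbE at 10.3125 Gbaud:

    2 x 4GFC:  2 x 4.25 Gbaud x 8/10 ~= 6.8 Gbit/s ~= 850 MB/s
    1 x 10GbE: 10.3125 Gbaud x 64/66  = 10 Gbit/s ~= 1250 MB/s

so even the two FC paths together top out well below the 10GbE link.)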
On Jun 7, 2011, at 5:46 PM, Phil Harman <phil.har...@gmail.com> wrote:
On 07/06/2011 20:34, Marty Scholes wrote:
I'll throw out some (possibly bad) ideas.
Thanks for taking the time.
Is ARC satisfying the caching needs? 32 GB for ARC should almost cover the
40 GB of total reads, suggesting that the L2ARC doesn't add any value for this
test.
Are the SSD devices saturated from an I/O standpoint? Put another way, can ZFS
push data to them fast enough? If they aren't taking writes fast enough, then
maybe they can't be loaded effectively for caching. Certainly, if they are
saturated with writes, they can't do much for reads.
The SSDs are barely ticking over, and can deliver almost as much throughput as
the current SAN storage.
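(A minimal way to sanity-check that, assuming the cache devices show up as
their own LUNs: watch them under load with

    iostat -xn 5

and look for %b near zero and low kr/s / kw/s on the SSD lines.)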
Are some of the reads sequential? Sequential reads don't go to L2ARC.
That'll be it. I assume the L2ARC is just taking metadata. In situations such
as mine, I would quite like the option of routing sequential read data to the
L2ARC also.
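For what it's worth, there appears to be a tunable for exactly this:
l2arc_noprefetch, which defaults to 1 and keeps prefetched (i.e. sequential)
buffers out of the L2ARC. A minimal sketch, assuming an OpenSolaris-era
kernel; untested here:

    # persistent, in /etc/system (takes effect at next boot)
    set zfs:l2arc_noprefetch = 0

    # or flipped live on a running kernel (use with care)
    echo "l2arc_noprefetch/W 0" | mdb -kw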
I do notice a benefit with a sequential update (i.e. COW for each block), and I
think this is because the L2ARC satisfies most of the metadata reads instead of
having to read them from the SAN.
What does iostat say for the SSD units? What does arc_summary.pl (maybe
spelled differently) say about the ARC / L2ARC usage? How much of the SSD
units are in use as reported in zpool iostat -v?
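Concretely, the sort of thing to run here, assuming the script meant is Ben
Rockwood's arc_summary.pl and with the pool name substituted for <pool>:

    iostat -xn 5               # per-device throughput and %b for the SSDs
    zpool iostat -v <pool> 5   # per-vdev bandwidth, including cache devices
    kstat -n arcstats          # raw ARC/L2ARC counters: hits, misses, l2_size, ...
    arc_summary.pl             # summarised ARC / L2ARC usage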
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss