On Jun 7, 2011, at 9:12 AM, Phil Harman wrote:

> Ok here's the thing ...
> 
> A customer has some big tier 1 storage, and has presented 24 LUNs (from four 
> RAID6 groups) to an OI148 box which is acting as a kind of iSCSI/FC bridge 
> (using some of the cool features of ZFS along the way). The OI box currently 
> has 32GB configured for the ARC, and 4x 223GB SSDs for L2ARC. It has a dual 
> port QLogic HBA, and is currently configured to do round-robin MPXIO over two 
> 4Gbps links. The iSCSI traffic is over a dual 10Gbps card (rather like the 
> one Sun used to sell).

The ARC is not big enough to hold the headers for an L2ARC of that size;
every L2ARC record needs a header in the ARC itself.
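A rough back-of-envelope sketch of that overhead (assumed figures: ~180 bytes of ARC header per L2ARC record, which varies by release, and an average record size of 8 KiB):

```shell
# Estimate ARC memory consumed by L2ARC headers for 4x 223GB SSDs.
# hdr (~180 bytes/record) and recsize (8 KiB) are assumptions, not
# values read from this system.
l2arc_bytes=$((4 * 223 * 1024 * 1024 * 1024))
recsize=$((8 * 1024))
hdr=180
records=$((l2arc_bytes / recsize))
echo "records:    $records"
echo "header RAM: $((records * hdr / 1024 / 1024 / 1024)) GiB"
```

With those assumptions a full 892GB L2ARC would want roughly 19 GiB of ARC just for headers, which squeezes a 32GB ARC hard before any data is cached.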

> 
> I've just built a fresh pool, and have created 20x 100GB zvols which are 
> mapped to iSCSI clients. I have initialised the first 20GB of each zvol with 
> random data. I've had a lot of success with write performance (e.g. in 
> earlier tests I had 20 parallel streams writing 100GB each at over 600MB/sec 
> aggregate), but read performance is very poor.
> 
> Right now I'm just playing with 20 parallel streams of reads from the first 
> 2GB of each zvol (i.e. 40GB in all). During each run, I see lots of writes to 
> the L2ARC, but less than a quarter of that volume in reads. Yet my FC LUNs 
> are hot with 1000s of reads per second. This doesn't change from run to run. 
> Why?

Writes to the L2ARC devices are throttled to 8 or 16 MB/sec. If the L2ARC
fill cannot keep up, the data is unceremoniously evicted.
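On an OpenSolaris/OI box you can inspect (and, cautiously, raise) that throttle with mdb. A sketch only; the tunable names come from arc.c of that era and the defaults may differ on your release:

```shell
# Read the current L2ARC feed limits (bytes written per feed interval):
echo "l2arc_write_max/E"   | mdb -k
echo "l2arc_write_boost/E" | mdb -k   # extra headroom while the device is cold
# Example: raise l2arc_write_max to 64MB per interval (tune with care,
# 0x4000000 is an illustrative value, not a recommendation):
echo "l2arc_write_max/Z 0x4000000" | mdb -kw
```

Requires root, and the setting does not survive a reboot unless added to /etc/system.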

> Surely 20x 2GB of data (and its associated metadata) will sit nicely in 4x 
> 223GB SSDs?

On Jun 7, 2011, at 12:34 PM, Marty Scholes wrote:

> I'll throw out some (possibly bad) ideas.
> 
> Is ARC satisfying the caching needs?  32 GB for ARC should almost cover the 
> 40GB of total reads, suggesting that the L2ARC doesn't add any value for this 
> test.
> 
> Are the SSD devices saturated from an I/O standpoint?  Put another way, can 
> ZFS put data to them fast enough?  If they aren't taking writes fast enough, 
> then maybe they can't effectively load for caching.  Certainly if they are 
> saturated for writes they can't do much for reads.
> 
> Are some of the reads sequential?  Sequential reads don't go to L2ARC.

This is not a true statement. If the primarycache policy is set to the
default, all data will be cached in the ARC.
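The cache policies are per-dataset ZFS properties: primarycache governs the ARC and secondarycache the L2ARC, and both default to "all". A quick sketch (the dataset names are hypothetical):

```shell
# Show the caching policy for a zvol:
zfs get primarycache,secondarycache tank/vol01
# Example: keep only metadata in the ARC for a scratch volume:
zfs set primarycache=metadata tank/scratch
```

So unless someone has changed these properties, neither sequential nor random reads are excluded from caching by policy.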

> 
> What does iostat say for the SSD units?  What does arc_summary.pl (maybe 
> spelled differently) say about the ARC / L2ARC usage?  How much of the SSD 
> units are in use as reported in zpool iostat -v?

The ARC statistics are nicely documented in arc.c and available as kstats.
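For example, the arcstats kstat can be read directly without any helper script:

```shell
# ARC/L2ARC counters live in the zfs module's arcstats kstat:
kstat -n arcstats | egrep 'hits|misses|l2_'   # hit/miss and L2ARC activity
kstat -p zfs:0:arcstats:size                  # current ARC size in bytes
```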
 -- richard


_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
