Therein lies my dilemma:

  - We know the I/O subsystem is capable of much higher I/O rates.
  - Under the test setup my SAS datasets compress well. This should manifest 
itself as lots of read I/O producing much smaller (roughly 4x) write I/O due 
to compression, which means read rates should be driven higher to keep the 
compression fed. I don't see this: as I said in my original post, reads come 
in waves.

I'm beginning to think my write rates are hitting a bottleneck in compression, 
as follows:
  - ZFS issues reads.
  - ZFS starts compressing the data before the write and cannot drain the 
input buffers fast enough; this causes the reads to stop.
  - ZFS completes compression and writes out data at a much lower rate because 
the compressed stream is smaller.
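To make the hypothesis concrete, here's a back-of-envelope model of that read -> compress -> write pipeline. All throughput numbers are made-up assumptions for illustration, not measurements from my setup:

```python
# Toy model: a pipeline runs at the rate of its slowest stage.
# The figures below are illustrative assumptions only.

read_bw_mb_s = 400    # what the I/O subsystem could sustain (assumed)
compress_mb_s = 100   # single-threaded compression throughput (assumed)
ratio = 4             # roughly the 4x compression I'm seeing

# If compression can't keep up, reads get throttled down to its rate,
# and writes shrink further by the compression ratio.
effective_read_mb_s = min(read_bw_mb_s, compress_mb_s)
effective_write_mb_s = effective_read_mb_s / ratio

print(f"reads throttled to {effective_read_mb_s} MB/s "
      f"(vs {read_bw_mb_s} MB/s capable)")
print(f"writes: {effective_write_mb_s} MB/s")
```

With numbers like these, the disks look mostly idle even though every stage is "busy", which would match the wave pattern I'm observing.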

I'm not a filesystem wizard, but shouldn't ZFS take advantage of my available 
CPUs to drain the input buffer faster, in parallel? It's possible you have 
internal throttles in place to make ZFS a good citizen in the Solaris 
landscape, along the lines of the algorithms EMC and Hitachi arrays use to 
prevent one host or device from flooding the cache.
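To illustrate what I mean by draining the buffer in parallel, here's a toy sketch using zlib and a thread pool. This is just the idea, not how ZFS is implemented internally; the block size, block count, and compression level are arbitrary:

```python
# Toy sketch: compress independent blocks concurrently instead of
# serializing them behind one core. zlib releases the GIL while
# compressing, so threads genuinely overlap here.
import os
import zlib
from concurrent.futures import ThreadPoolExecutor

def compress_block(block: bytes) -> bytes:
    return zlib.compress(block, 6)

def compress_parallel(blocks):
    # One worker per CPU; blocks are independent, so each can be
    # handed to a different worker.
    with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
        return list(pool.map(compress_block, blocks))

# Eight compressible 4 KiB blocks (random header + zero padding).
blocks = [os.urandom(1024) + b"\x00" * 3072 for _ in range(8)]
out = compress_parallel(blocks)
print(sum(len(b) for b in out), "bytes after compression")
```

If something like this were happening inside the write path, the compression stage would scale with CPU count instead of capping read throughput at one core's worth of compression.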

I'll run some more tests with different datasets and report back to the forum. 
Now if only I can convince my storage administrator to provision me raw disks 
instead of mirrored disks, so I can let ZFS do the mirroring for me; another 
battle for another day ;-)

Thanks.
 
 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
