On 10/28/11 00:54, Neil Perrin wrote:

On 10/28/11 00:04, Mark Wolek wrote:

Still kicking around this idea and didn’t see it addressed in any of the threads before the forum closed. 

 

If one made an all-SSD pool, would a log/cache drive just slow you down?  Would a ZIL slow you down?  Thinking rotate MLC drives with SandForce controllers every few years to avoid losing a drive to a "sorry, no more writes allowed" scenario. 

 

Thanks

Mark


Interesting question. I don't think there's a straightforward answer. Oracle uses write-optimised log devices and read-optimised cache devices in its appliances. However, assuming all the SSDs are the same, then I suspect neither a log nor a cache device would help:

Log
If there is a dedicated log device then it alone handles ZIL writes, and it can be written to in parallel with the periodic TXG commit writes to the other pool devices.  If that log device were instead part of the pool, the ZIL code would spread the load among all pool devices, but would compete with TXG commit writes.  My gut feeling is that the latter would still be the higher-performing option.  I think, a long time ago, I experimented with designating one disk out of the pool as a log and saw synchronous performance degrade. That seems to be the equivalent of your SSD question.

Cache
Similarly for cache devices, reads would compete with TXG commit writes, but otherwise performance ought to be higher.
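Either way, it's easy to check whether a dedicated log or cache device is actually absorbing traffic. A sketch (assuming a pool named 'whirl', as in the tests below):

```shell
# Per-vdev I/O statistics show whether a separate log or cache device
# is taking writes/reads, or whether the main pool disks carry the load.
zpool iostat -v whirl 5   # per-device read/write ops and bandwidth, every 5s
zpool status whirl        # confirms which devices are log/cache vdevs
```

Run these while a synchronous workload is active; a requires a live pool, so the output here depends entirely on your hardware.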

Neil.
Did some quick tests with disks to check if my memory was correct.
'sb' is a simple program that spawns a number of threads to fill a file of a certain size
with non-zero writes of a specified size. The bandwidth is also reported.
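The 'sb' tool itself isn't public, but a rough stand-in can be sketched with GNU dd (oflag=dsync forces synchronous writes). The file path and the scaled-down sizes here are illustrative, not the ones used in the tests below:

```shell
# Hypothetical stand-in for 'sb': N background writers, each synchronously
# filling its own slice of one file with non-zero data from /dev/urandom.
# Sizes are scaled down from the 1GB tests for illustration.
FILE=/tmp/sb_demo.dat           # the actual tests wrote to /whirl/f
BS=32768                        # write size (32KB, matching recordsize=32k)
TOTAL=$((16 * 1024 * 1024))     # total file size (16MB here, 1GB in the tests)
THREADS=4                       # writer count (20 or 100 in the tests)
PER=$((TOTAL / THREADS / BS))   # blocks per writer
for i in $(seq 0 $((THREADS - 1))); do
  dd if=/dev/urandom of="$FILE" bs="$BS" count="$PER" \
     seek=$((i * PER)) conv=notrunc oflag=dsync 2>/dev/null &
done
wait
```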

1. Simple 2 disk system.
   32KB synchronous writes filling 1GB with 20 threads

zpool create whirl  <2 disks>; zfs set recordsize=32k whirl
st1 -n /whirl/f -f 1073741824 -b 32768 -t 20
        Elapsed time 95s  10.8MB/s

zpool create whirl <disk> log <disk> ; zfs set recordsize=32k whirl
st1 -n /whirl/f -f 1073741824 -b 32768 -t 20
        Elapsed time 151s  6.8MB/s

2. Higher end 6 disk system.
   32KB synchronous writes filling 1GB with 100 threads

zpool create whirl <6 disks>; zfs set recordsize=32k whirl
st1 -n /whirl/f -f 1073741824 -b 32768 -t 100
        Elapsed time 33s  31MB/s

zpool create whirl <5 disks>  log <1disk>; zfs set recordsize=32k whirl
st1 -n /whirl/f -f 1073741824 -b 32768 -t 100
        Elapsed time 147s  7.0MB/s

and for interest:
zpool create whirl <5 disk> log <SSD>; zfs set recordsize=32k whirl
st1 -n /whirl/f -f 1073741824 -b 32768 -t 100
         Elapsed time 8s  129MB/s

3. Higher end smaller writes
   2K synchronous writes filling 128MB with 100 threads

zpool create whirl <6 disks>; zfs set recordsize=1k whirl
st1 -n /whirl/f -f 134217728 -b 2048 -t 100
        Elapsed time 16s  8.2MB/s

zpool create whirl <5 disks>  log <1 disk>; zfs set recordsize=1k whirl
ds8 -n /whirl/f -f 134217728 -b 2048 -t 100
        Elapsed time 24s  5.5MB/s
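As a sanity check on the 1GB figures above (not part of the original mail): bandwidth is just bytes written over elapsed time, and the reported "MB/s" numbers line up with MiB/s. Since the elapsed times are rounded to whole seconds, the recomputed values are only approximate (e.g. the SSD-log run reports 129 at a rounded 8s, versus 128.0 recomputed):

```shell
# Recompute MiB/s from 1GB (1073741824 bytes) over the reported elapsed times.
awk 'BEGIN {
  printf "2 disks, no log  : %6.1f MiB/s\n", 1073741824 /  95 / 1048576
  printf "1 disk + disk log: %6.1f MiB/s\n", 1073741824 / 151 / 1048576
  printf "6 disks, no log  : %6.1f MiB/s\n", 1073741824 /  33 / 1048576
  printf "5 disks + SSD log: %6.1f MiB/s\n", 1073741824 /   8 / 1048576
}'
```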


_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
