Thanks Jens,
I have a vdbench profile and script that will run the new SNIA Solid State
Storage (SSS) Performance Test Specification (PTS). I'd be happy to share if
anyone is interested.
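For a taste, here is a stripped-down sketch of what such a vdbench parameter
file can look like (not the actual PTS profile -- the device path and run
length are placeholders, and the real spec adds preconditioning and
steady-state rounds):

    * storage definition: raw device, 32 outstanding I/Os
    sd=ssd1,lun=/dev/rdsk/c0t1d0s0,threads=32
    * workload: 4k random writes (rdpct=0, seekpct=100)
    wd=rndwr4k,sd=ssd1,xfersize=4k,rdpct=0,seekpct=100
    * run: maximum rate for 600 seconds, report every second
    rd=run1,wd=rndwr4k,iorate=max,elapsed=600,interval=1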
 -- richard

On Jul 28, 2011, at 7:10 AM, Jens Elkner wrote:

> Hi,
> 
> Roy Sigurd Karlsbakk wrote:
>> The Crucial RealSSD C300 has been released and is showing good numbers for
>> use as ZIL and L2ARC. Does anyone know if this unit flushes its cache on
>> request, as opposed to Intel units etc.?
>> 
> 
> I had a chance to get my hands on a Crucial RealSSD C300/128GB yesterday and
> did some quick testing. Here are the numbers first; some explanation follows
> below:
> 
> cache enabled, 32 buffers:
> linear read, 64k blocks: 134 MB/s
> random read, 64k blocks: 134 MB/s
> linear read, 4k blocks: 87 MB/s
> random read, 4k blocks: 87 MB/s
> linear write, 64k blocks: 107 MB/s
> random write, 64k blocks: 110 MB/s
> linear write, 4k blocks: 76 MB/s
> random write, 4k blocks: 32 MB/s
> 
> cache enabled, 1 buffer:
> linear write, 4k blocks: 51 MB/s (12800 ops/s)
> random write, 4k blocks: 7 MB/s (1750 ops/s)
> linear write, 64k blocks: 106 MB/s (1610 ops/s)
> random write, 64k blocks: 59 MB/s (920 ops/s)
> 
> cache disabled, 1 buffer:
> linear write, 4k blocks: 4.2 MB/s (1050 ops/s)
> random write, 4k blocks: 3.9 MB/s (980 ops/s)
> linear write, 64k blocks: 40 MB/s (650 ops/s)
> random write, 64k blocks: 40 MB/s (650 ops/s)
> 
> cache disabled, 32 buffers:
> linear write, 4k blocks: 4.5 MB/s (1120 ops/s)
> random write, 4k blocks: 4.2 MB/s (1050 ops/s)
> linear write, 64k blocks: 43 MB/s (680 ops/s)
> random write, 64k blocks: 44 MB/s (690 ops/s)
> 
> cache enabled, 1 buffer, with cache flushes:
> linear write, 4k blocks, flush after every write: 1.5 MB/s (385 writes/s)
> linear write, 4k blocks, flush after every 4th write: 4.2 MB/s (1120 writes/s)
> 
> 
> These are rough numbers read quickly from iostat, so please don't
> multiply block size by ops/s and compare against the bandwidth given ;)
> The test operates directly on top of LDI, just like ZFS.
> - "nk blocks" means the size of each read/write given to the device driver
> - "n buffers" means the number of buffers I keep in flight. This is to keep
>   the command queue of the device busy
> - "cache flush" means a synchronous ioctl DKIOCFLUSHWRITECACHE
> 
> These numbers contain a few surprises (at least for me). The biggest surprise
> is that with cache disabled one cannot get good data rates with small blocks,
> even if one keeps the command queue filled. This is completely different from
> what I've seen from hard drives.
> Also, the IOPS rate with cache flushes is quite low: 385 is not much better
> than a 15k HDD, and the latter scales better. On the other hand, from the
> large drop in performance when using flushes one could infer that the device
> does flush properly, but I haven't built a test setup to verify that yet.
> 
> Conclusion: From the measurements I'd infer the device makes a good L2ARC,
> but for a slog device the latency is too high and it doesn't scale well.
> 
> I'll do similar tests on an Intel X25 and an OCZ Vertex 2 Pro as soon as
> they arrive.
> 
> If there are numbers you're missing, please tell me and I'll measure them if
> possible. Also, please ask if you have questions regarding the test setup.
> 
> --
> Arne

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
