On Dec 13, 2009, at 5:04 PM, Jens Elkner wrote:

On Sat, Dec 12, 2009 at 04:23:21PM +0000, Andrey Kuzmin wrote:
As to whether it makes sense (as opposed to two distinct physical
devices), you would have read cache hits competing with log writes for
bandwidth. I doubt both will be pleased :-)

Hmm - good point. What I'm trying to accomplish:

Actually our current prototype thumper setup is:
        root pool (1x 2-way mirror SATA)
        hotspare  (2x SATA shared)
        pool1 (12x 2-way mirror SATA)   ~25% used       user homes
        pool2 (10x 2-way mirror SATA)   ~25% used       mm files, archives, ISOs

So pool2 is not really a problem - it delivers about 600 MB/s uncached,
about 1.8 GB/s cached (i.e. read a 2nd time, tested with a 3.8 GB ISO),
and is not continuously stressed. However, sync write is only about
200 MB/s, i.e. roughly 20 MB/s per mirror.

The problem is pool1 - the user homes! GNOME/Firefox/Eclipse/Subversion/
soffice access them mostly via NFS and a little via Samba -> a lot of more
or less small files, probably spread widely over the platters. E.g. checking
out a project from an svn (or similar) repository into a home directory takes
"hours". Also, having the Eclipse workspace on NFS isn't fun (compared to a
local Linux XFS-driven software 2-way mirror).

This is probably a latency problem, not a bandwidth problem. Use zilstat
to see how much ZIL traffic you have and, if the number is significant,
consider using the F20 for a separate log device.
 -- richard
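
Roughly what I plan to try first (zilstat is Richard's DTrace script, so its
exact options may differ from what I show here, and the device name is just a
placeholder for one of the F20 flash modules):

    # watch ZIL traffic on the server for a minute, in 10s intervals
    ./zilstat 10 6

    # if the numbers are significant, hang a separate log device off pool1,
    # e.g. one F20 flash module (cXtYdZ name is made up):
    zpool add pool1 log c4t0d0
    zpool status pool1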


Data currently comes in/goes out via aggregated 1 Gbps NICs; for the X4540 we
plan to use one 10 Gbps NIC (and maybe experiment with two some time later).
So max. ~2 GB/s read and write. This still leaves 2 GB/s in and out for the
last PCIe x8 slot - the F20. Since the IO55 is bound to Mezzanine Connector 1
with 4 GB/s bidirectional HT, in theory those 2 GB/s to and from the F20
should be possible.

So IMHO, wrt. bandwidth it basically makes no real difference whether one
puts 4 SSDs into HDD slots or uses the 4 flash modules on the F20 (even when
distributing the SSDs over the IO55(2) and the MCP55).

However, having it on a separate HT link from the HDDs might be an advantage.
Also, one would be much more flexible and able to "scale immediately", i.e.
there is no need to re-organize the pools because of slots that are now
"unavailable", and all HDD slots can still be used for normal HDDs
(we are certainly going to upgrade the X4500 to an X4540 next year ...).
A rough sketch of what I have in mind follows below.
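
Just to make that concrete (sketch only - the four F20 flash modules should
simply show up as ordinary disk devices, and the cXtYdZ names here are made
up): e.g. two modules as a mirrored slog plus two as L2ARC for pool1:

    # mirrored separate log (ZIL) on two flash modules
    zpool add pool1 log mirror c5t0d0 c5t1d0
    # the other two modules as read cache (L2ARC)
    zpool add pool1 cache c5t2d0 c5t3d0

(And with a recent enough build a log device can be removed again via
'zpool remove', so this shouldn't paint us into a corner.)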
(And if Sun makes an F40 - dropping the SAS ports and putting 4 more flash
modules on it, or managing to get flash modules with double the speed - one
could probably really get ~1.2 GB/s write and ~2 GB/s read.)

So, it seems to be a really interesting thing, and at least wrt. user homes
I expect a real improvement, no matter what the final configuration looks
like.

Maybe the experts at the source are able to do some 4x SSD vs. 1x F20
benchmarks? I guess at least if the results turn out to be good enough, it
wouldn't hurt ;-)
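
In the meantime, the quick-and-dirty comparison I'd run myself against each
variant (repository URL and paths are placeholders) is simply to time the
workload that hurts, from an NFS client, and watch the server side:

    # on a client: small-file-heavy checkout into an NFS-mounted home
    time svn checkout https://example.org/repos/project/trunk ~/slog-test-wc

    # on the server, while it runs:
    zpool iostat -v pool1 10
    ./zilstat 10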

Jens Elkner wrote:
...
whether it is possible/supported/would make sense to use a Sun Flash
Accelerator F20 PCIe card in an X4540 instead of 2.5" SSDs?

Regards,
jel.
--
Otto-von-Guericke University     http://www.cs.uni-magdeburg.de/
Department of Computer Science   Geb. 29 R 027, Universitaetsplatz 2
39106 Magdeburg, Germany         Tel: +49 391 67 12768
