Craig> Does anyone know off the top of their head, or have a reference
Craig> for, details or thoughts about NAS (network attached storage)
Craig> performance with SSD over spinning rust?

I suspect it's not going to help you much at all... and will certainly
cost you a pretty penny.  Wouldn't it be cheaper to get some local
SSDs and use them as a cache to the backend Drobo instead?

The reason I suspect it won't help is a combination of ethernet limits
and Drobo CPU limits.  It's just not designed for this kind of IO, in
my book.

Craig> I have a Drobo 5N that is currently using 7200rpm drives, and
Craig> is on gig-ethernet. I'm looking to squeeze more performance out
Craig> of it and am considering replacing drives with either SSD or
Craig> 15krpm drives. I'm not concerned about the cost (1Tb drives are
Craig> big enough for this array), I simply don't feel like performing
Craig> the experiment with one, only to find out the other option
Craig> would do better.

So your data set fits within 5TB of storage?  Or actually less?

Craig> I suspect -- but am not sure how to measure this -- the
Craig> constraint is with seeking. The App I'm running on my local
Craig> system does a huge amount of read/write seeking all over the
Craig> file system. So my hunch is that it's all about the seek
Craig> time. That leads me to think that stepping up to 15krpm drives,
Craig> or to SSD, would be a big win over the 7200rpm I have now.

Are you doing lots of small writes/reads?  And it's RAID5, right?  

Craig> Drobo savvy people: this drobo already has the add-on RAM cache installed.

Can you give more details on the Drobo configuration and setup?

Craig> Sustained i/o: I've done a disk speed test (using
Craig> Blackmagicdesign's Disk Speed Test) and I currently get 72+MB/s
Craig> write and 85+MB/s read from the NAS. If I understand the test
Craig> tool, this is a sustained write/read data stream, because the
Craig> tool is meant to test for video i/o performance usability. And
Craig> if my rusty math is right, gigabit ether converts to 128MB/s
Craig> absolute ceiling.  ...therefore I don't think I have a problem
Craig> (nor any room or desire for much improvement) in terms of
Craig> sustained i/o.
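For what it's worth, the raw ceiling works out to 125MB/s rather than
128, and real-world TCP gets you a bit less.  Quick back-of-envelope
(the 94% efficiency figure is my assumption for framing overhead, not
a measurement):

```shell
# GbE back-of-envelope: 1 Gb/s = 10^9 bits/s, 8 bits per byte.
# The 94% protocol-efficiency figure is an assumed allowance for
# Ethernet + IP + TCP framing overhead, not a measured value.
raw=$(( 1000000000 / 8 / 1000000 ))   # 125 MB/s raw payload ceiling
practical=$(( raw * 94 / 100 ))       # ~117 MB/s after overhead
echo "raw: ${raw} MB/s  practical: ~${practical} MB/s"
```

So your 85MB/s reads are closer to the wire than the 128MB/s figure
makes it look, though you still have a little headroom.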

If you're going to do some performance testing, maybe try using the
'fio' tool on Linux to characterize your performance.
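If you do go the fio route, a job file along these lines would mimic a
seek-heavy mixed workload against the mount.  The path, block size,
and read/write mix below are illustrative guesses at your app's
behavior, not tuned values:

```ini
; seeky.fio -- small random mixed IO, run with: fio seeky.fio
[seeky-app]
directory=/mnt/drobo    ; path to the NAS mount -- adjust for your setup
rw=randrw               ; mixed random read/write, like your app
rwmixread=70            ; guessed read/write split
bs=4k                   ; small blocks to stress seek behavior
size=2g
iodepth=16
ioengine=libaio
runtime=60
time_based
```

Compare the IOPS numbers before and after a drive swap and you'll have
your answer without guessing.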

From the sound of it, you're happy to spend a few thousand bucks on
this.  Maybe the answer is to get another drobo or two and some extra
network cards and then shard/span the data across multiple devices.

If the data isn't critical, or can be easily re-created from a master
source, then a RAID0 stripe across disks might work better.

And actually, you'll get better performance by just getting a couple of
SAS/SATA PCIe controllers and a bunch of disks to put local to the
system.  Then share them out via NFS/CIFS to other hosts if need be.
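The sharing side of that is a one-liner in /etc/exports -- the path,
client range, and options here are placeholders, adjust to taste:

```ini
# /etc/exports -- export the local fast array; run exportfs -ra after editing
/srv/fastarray  192.168.1.0/24(rw,async,no_subtree_check)
```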

Good luck!
John

P.S.  I haven't even talked about how maybe, just maybe, using lvcache
might give you a bunch of performance as well.  It all depends though.
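For the record, the lvmcache setup is roughly the following sketch.
The VG/LV names, device, and cache size are all placeholders -- read
lvmcache(7) before trusting any of it:

```shell
# Assumes an existing VG "vg0" holding the slow LV "data", and an
# unused SSD at /dev/sdX.  All names and sizes are placeholders.
vgextend vg0 /dev/sdX

# Carve a cache LV out of the SSD and attach it to the slow LV.
lvcreate --type cache --cachemode writethrough -L 200G \
    -n datacache vg0/data /dev/sdX

# Later, to detach the cache (flushes dirty blocks first):
# lvconvert --uncache vg0/data
```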

_______________________________________________
Discuss mailing list
[email protected]
https://lists.lopsa.org/cgi-bin/mailman/listinfo/discuss
This list provided by the League of Professional System Administrators
 http://lopsa.org/