> I currently get 72+MB write and 85+MB read from the NAS
You aren't going to get more than ~100 MB/sec out of a 1G link under
real-world conditions.  Granted, going from 72 MB/sec to 100 MB/sec is
roughly a 39% improvement, but don't expect anything more.  Now, if latency is
the problem, instead of throughput, that's a different issue.
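For reference, the line-rate arithmetic behind that ~100 MB/sec figure is simple; the overhead percentage below is an approximation for a standard 1500-byte MTU:

```shell
# 1 GbE line rate in decimal units: 1,000,000,000 bits/s over 8 bits/byte.
# Ethernet + IP + TCP framing overhead eats roughly 5-6% of that, so
# ~115-118 MB/s is the practical TCP ceiling and ~100 MB/s is a safe
# real-world expectation.
echo $(( 1000000000 / 8 / 1000000 ))   # raw line rate in MB/s: 125
```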

A few notes:

* You'll get better performance out of a RAID10 array, if you can
afford the capacity overhead, and you won't suffer the major write
penalty that parity RAID (RAID5/6) imposes.
* Unless you are doing mostly streaming reads/writes (which appears
not to be the case), SSDs should do better than spinning disk.  But
see notes below!
* Run "blktrace" (or btrace) to get a sense of what your *actual* disk
workload looks like.  Output is complicated, but there's a ton of good
data.  I also second the recommendation for "fio" for benchmarking.
* If you can't decide, just get SSDs.  They should be at least as
reliable as HDDs, but you probably won't have any warning before they
die...
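
On the fio point: a minimal job file for measuring 4k random-read
behavior (the seek-bound pattern described in the quoted message below)
might look like this.  The size, runtime, and target directory are
placeholders to tune for your own setup, and it assumes fio with the
libaio engine is available:

```shell
# Hypothetical fio job for 4k random reads; all values are illustrative.
cat > randread.fio <<'EOF'
[randread]
rw=randread
bs=4k
size=256m
iodepth=16
ioengine=libaio
direct=1
runtime=30
time_based
EOF
# Run it against the NAS mount once fio is installed, e.g.:
# fio --directory=/mnt/nas randread.fio
cat randread.fio
```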


Note about SSDs:  they are not all the same.  We've had some SSDs at
$day_job that performed *worse* than spinning disks (older Samsung
840/850 drives).  They were fine until we had to start rewriting
blocks for every write...at which point performance just tanked.
Spinning disks, OTOH, don't have the same "starting" speed, but their
performance doesn't degrade over time.

Another thing:  many/most RAID cards (I'm looking at you, LSI...) do
*not* pass TRIM through to the drives.  I don't know what the Drobos
do, but that's something to be aware of.
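
On Linux, a quick way to see whether discard support survives the
controller (whether any of this applies inside a Drobo, I can't say) is
to check the block layer's view; the device name in the sysfs example is
just a placeholder:

```shell
# Show per-device discard (TRIM) capabilities.  Non-zero DISC-GRAN and
# DISC-MAX mean the kernel can issue discards to that device; all zeros
# behind a RAID card usually means TRIM is being dropped.
lsblk --discard
# Same information straight from sysfs (0 = no discard support):
# cat /sys/block/sda/queue/discard_max_bytes
```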



On Tue, Dec 6, 2016 at 3:35 PM, Craig Constantine
<[email protected]> wrote:
> Does anyone know off the top of their head, or have a reference for, details 
> or thoughts about NAS (network attached storage) performance with SSD over 
> spinning rust?
>
> I have a Drobo 5N that is currently using 7200rpm drives, and is on 
> gig-ethernet. I'm looking to squeeze more performance out of it and am 
> considering replacing drives with either SSD or 15krpm drives. I'm not 
> concerned about the cost (1 TB drives are big enough for this array), I simply 
> don't feel like performing the experiment with one, only to find out the 
> other option would do better.
>
> I suspect -- but am not sure how to measure this -- the constraint is with 
> seeking. The App I'm running on my local system does a huge amount of 
> read/write seeking all over the file system. So my hunch is that it's all 
> about the seek time. That leads me to think that stepping up to 15krpm drives, 
> or to SSD, would be a big win over the 7200rpm I have now.
>
> Other info...
>
> Drobo savvy people: this drobo already has the add-on RAM cache installed.
>
> Sustained i/o: I've done a disk speed test (using Blackmagicdesign's Disk 
> Speed Test) and I currently get 72+MB write and 85+MB read from the NAS. If I 
> understand the test tool, this is a sustained write/read data stream, because 
> the tool is meant to test for video i/o performance usability. And if my 
> rusty math is right, gigabit ether converts to 128MB/s absolute ceiling.  
> ...therefore I don't think I have a problem (nor any room or desire for much 
> improvement) in terms of sustained i/o.
>
> -c
>
> _______________________________________________
> Discuss mailing list
> [email protected]
> https://lists.lopsa.org/cgi-bin/mailman/listinfo/discuss
> This list provided by the League of Professional System Administrators
>  http://lopsa.org/



-- 
Jesse Becker
