deloptes writes:

Andrei POPESCU wrote:

>> Each LSI card has a 6 bay cage attached and I have raided 6x2TB WD RED
>> spinning discs (for data) and 2x1TB WD RED spinning discs (for OS)
>
> 1TB for OS (assuming RAID1) seems... excessive to me. All my current
> installations are in 10 GiB partitions with only a separate /home.
>
> Even if I'd go "wild" with several desktops installed (I'm only using
> LXDE), LibreOffice, etc. I'd probably get away with 50 GiB or so. Check
> the output of:
>
> du -hx --max-depth=1 /
>
This is true. The root partition is not big - the rest of the space I'll use
for data, but I do not want to use smaller disks, because I would lose two
bays and have the power consumption anyway. I think 1TB is a good compromise.
I leave some disk space as spare for LVM and LVM snapshots. I put the OS
there and, for example, the NFS root/boot stuff or some QEMU machines.
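Keeping spare extents for snapshots could look roughly like this (a sketch; the volume group name `vg0`, the LV names and the sizes are hypothetical examples, not taken from the setup described above):

```shell
# Create the root LV but leave free extents in the VG for snapshots
lvcreate -L 50G -n root vg0

# Later: snapshot the OS before an upgrade, using part of the
# spare space as the copy-on-write area
lvcreate -s -L 10G -n root-pre-upgrade /dev/vg0/root

# Drop the snapshot once the upgrade is confirmed good
lvremove /dev/vg0/root-pre-upgrade
```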

Sounds OK to me :) From my point of view, I would work towards reducing the total number of disks, given that spinning disks of 8 TB capacity and SSDs of 4 TB capacity are readily available today. YMMV

>> I somehow can not convince myself that I need to replace any of these
>> with SSDs.
>> I don't want the cheapest but also not unnecessary expensive drives, I
>> just find it hard to evaluate which drives are reliable.
>
> The reliability matters much less with RAID1. By the way, the "I" in
> RAID stands for "inexpensive" ;)

[...]

I am too old for blind experimenting. This is why I'm asking if someone has
experience with SSDs in RAID using consumer-grade disks. The ones I see
installed in servers are not available on the consumer market.

If I understood it correctly, you initially asked about NAS-grade SSDs. I believe that is quite a "special" purpose, because I tend to think of NAS as slow but large storage spaces where SSDs are indeed rare.

I have had good experience with the following two "consumer-grade" SSDs in an mdadm RAID 1 (taking the I for inexpensive literally :) ). Both have about 8000 hours of operation according to SMART and when in use they ran about 12h/day (i.e. normally not 24/7):

* Samsung 850 EVO 2TB
* Crucial MX300 2TB

At the time, these were the cheapest 2TB SSDs I could get. Despite their performance being "mediocre" (for SSDs, that is), there were no problems with RAID operation whatsoever.
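For reference, assembling such a two-SSD mdadm RAID 1 on Debian only takes a few commands (a sketch; the device names `/dev/sda` and `/dev/sdb` are assumptions - adjust to your system before running anything):

```shell
# Create a RAID 1 array from two whole SSDs (hypothetical device names)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

# Watch the initial sync progress
cat /proc/mdstat

# Persist the array configuration so it assembles at boot (Debian paths)
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
```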

For my new system, I got two Intel DC P4510. These were actually available to me as a "consumer" despite being (old) "datacenter" SSDs. They run much faster, but most of the time one does not notice the difference. My typical workload that benefits from the "faster" SSDs is installing packages and updates in multiple VMs and containers (e.g. 4--6 VMs and two containers) at once. Apart from the potential difficulty in purchasing such SSDs, they are also harder to fit into systems due to different connectors: server SSDs use either SAS or (in the case of the DC P4510) U.2.

>> I saw there are 1TB WD RED SSDs targeting NAS for about €120,-
>> WESTERN DIGITAL WD RED SA500 NAS 1TB SATA (WDS100T1R0A)
>
> The speed gain of SSD vs. spinning discs for the OS is hard to describe.
> Think jet aircraft vs. car.

[...]

Yes, but as mentioned the LSI controllers I use in the server are SATA2, so
they will be limited to a bandwidth of 300 MB/s - does it make sense to
replace the good WD RED spinning disks with SSDs?
I already heard one good argument.

How much do you rely on random access to the actual data? As others have already posted, putting an OS onto an SSD is an exceptional performance gain for all OS-startup related tasks, including "running program x for the first time" or OS upgrades (apt-get operations in general). If, however, you are considering using the SSD mostly for "data", it highly depends on what type of data you have:

* If it is os-style data like VMs, containers, compiler toolchains, chroots
  etc. then there will be a significant performance improvement, because
  these all benefit from reduced latency of access.

* If it is media like music, pictures etc. served over a typical network
  protocol, the performance of HDDs may be entirely sufficient. Some
  media-related tasks like "downscale 10000 images from 700x700 to 500x500"
  may also benefit from the SSD if files are small enough that the access
  time becomes relevant.

* Additionally, if you have a small set of data that you are accessing all
  of the time and the OS manages to cache this into RAM, you will only
  benefit from the SSD performance upon first access. On systems that run
  24/7, the benefit of SSDs is greater in database-style, continuously
  random-access-intensive applications rather than typical file access
  patterns.
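If in doubt, you can measure whether your workload is actually latency-bound before buying anything. `fio` (packaged in Debian) can run a random-read test against a file on the HDD array and, later, on a candidate SSD for comparison (a sketch; the test file path is a placeholder, not from the original setup):

```shell
# Random 4k reads against a file on the filesystem under test;
# --direct=1 bypasses the page cache so device latency is visible
fio --name=randread --filename=/mnt/data/fio.test --size=1G \
    --rw=randread --bs=4k --iodepth=16 --ioengine=libaio \
    --direct=1 --runtime=30 --time_based --group_reporting
```

Compare the reported IOPS and completion latencies between the devices; if the HDD numbers already cover your access pattern, the SSD gain for "data" will be small.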

As others have noted, the performance gain of SSDs is largely independent of the connector. You can get an improvement even on old connectors and to some extent also on old systems. Unless you are thinking of using IDE SSDs (special-purpose devices which are mostly _not_ used for performance), everything should be fine in that regard :)

HTH
Linux-Fan
