On 8/1/2020 8:06 PM, David Christensen wrote:
> On 2020-08-01 16:30, Leslie Rhorer wrote:
>> I am a big proponent of having a separate boot drive

> +1

>> no matter what the file system, I would definitely up the memory to
>> the 16GB max,

> 1. I try very hard not to spend money on obsolete technology.
What is "obsolete"? If a system does its job adequately, it isn't
"obsolete", no matter how old it might be or how many newer bells and
whistles it may fail to have. What's more, why spend $2000 or more on
unnecessary hardware when $70 for a minor upgrade to older hardware will
do the job perfectly well? To be sure, there comes a point where money
spent on an older system that can no longer easily meet newer demands is
poorly spent - maybe. Memory is often an example, as newer systems
often cannot use older memory. Other upgrades, such as newer, larger
drives, a new display, etc, can easily be used with a new system when
the old one is finally retired.
I will admit one should be very cognizant of exactly how much one is
actually saving in the long run by spending money on old technology.
> 2. I would wait until the computer is put into use and measurements
> indicate memory is insufficient.
I won't quibble with this. It's not the way I would probably handle
it, but it is not an unreasonable position.
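
If one were to go that route, a couple of quick checks are enough to
show whether the box is actually starved for RAM:

    free -h     # how much memory is truly in use vs. merely caching
    vmstat 5    # sustained non-zero si/so columns mean it is swapping

Sustained swapping under the normal workload is the signal to buy
more memory; a full page cache by itself is not.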
> 3. One or more small, fast SSDs used as ZFS cache devices and/or
> intent log devices can dramatically improve performance of ZFS.
Or lots of other things. On something like a laptop, it is virtually
insane not to employ an SSD. Even a high-powered server is very well
served by booting from SSDs, however, and the cost has come down so
drastically that there is little justification not to employ them. If
I were running ZFS, I definitely would employ SSDs as cache and intent
log devices. For that matter, putting md write-intent bitmaps on SSDs
is a pretty good idea, and booting from SSDs is a really good idea no
matter what the mix of file systems.
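
For anyone who wants to try it, the commands look roughly like this;
"tank", "/dev/md0", and the device paths are placeholders for your own
pool, array, and SSDs:

    # Add an SSD partition as a ZFS read cache (L2ARC)
    zpool add tank cache /dev/disk/by-id/ata-EXAMPLE-SSD1-part1

    # Add a mirrored pair of SSD partitions as a separate intent log
    zpool add tank log mirror /dev/disk/by-id/ata-EXAMPLE-SSD1-part2 \
        /dev/disk/by-id/ata-EXAMPLE-SSD2-part2

    # Move an md array's write-intent bitmap to a file on an
    # SSD-backed filesystem (the file must not live on the array)
    mdadm --grow /dev/md0 --bitmap=none
    mdadm --grow /dev/md0 --bitmap=/ssd/md0.bitmap

The mirrored log is worth the second partition: losing an unmirrored
log device at the wrong moment (e.g. together with a crash) can cost
the most recent synchronous writes.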
> 4. ZFS and non-ECC memory risk the "scrub of death" scenario, which
> has been debated endlessly.
Yes. As I said, ZFS has many great features, especially for an
enterprise server. For a simple NAS, not so much, I think.
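
For anyone following along, a scrub is just a walk of every block to
verify checksums and, where redundancy allows, repair mismatches; that
repair step is where bad RAM could in theory do damage:

    zpool scrub tank        # "tank" is a placeholder pool name
    zpool status -v tank    # progress and any checksum errors found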
> I also ran desktop hardware as servers for many years, but decided
> to buy a "real" server (with ECC memory) when building a ZFS file
> server. The ECC is invisible, but the combination of Xeon processor,
> dual-channel memory, and server chips throughout provides obvious
> performance benefits.
Or not. No matter how mighty, the server cannot out-perform the
infrastructure around it. My fairly humble desktop servers can easily
pump out more than 4 Gbps to clients, and indeed the nightly sync
between the main array and the backup array proceeds at just such a
rate across the two 10G optical links from the switch into my servers.
The links from the switch to the hosts, however, are all either 1G or
100M. Technically, all the hosts banging away full bore together might
just be able to reach the transfer limit of the server, but in
practical terms there is no way that much data could be digested or
regurgitated by the limited number of hosts on the LAN. The fact that
the two motherboards are not server class and are over 16 years old
really doesn't matter.
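
For a rough sense of scale (payload rates assume ordinary TCP over
1500-byte Ethernet frames):

    4 Gbps   ~ 500 MB/s raw at the server
    1 GbE    ~ 118 MB/s of payload per client
    100 Mb   ~  12 MB/s of payload per client

    500 / 118 ~ 4.2

So it takes four-plus gigabit clients, all saturated at once, just to
reach that 4 Gbps figure, and real workloads almost never line up
that way.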