> Erik Trimble sez:
> Honestly, I've said it before, and I'll say it (yet) again:  unless you 
> have very stringent power requirement (or some other unusual 
> requirement, like very, very low noise),  used (or even new-in-box, 
> previous generation excess inventory) OEM stuff is far superior to any 
> build-it-yourself rig you can come up with. 
It's horses for courses, I guess. I've had to live with server fan noise and
power draw, and it's not pleasant. I very much like the reliability
characteristics of older servers, but, as you say, they eat a lot of power and
are noisy.

On the other hand, I did the arithmetic on the cost of electricity at my local
rates (central Texas), and it's easy to save a few hundred dollars over two
years of 24/7 operation with a low-power system (a rough sketch of that
arithmetic is below). I am NOT necessarily saying that my system is something
to emulate, nor that my choices are right for everyone, particularly
hardware-building amateurs. My past includes a lot of hardware design and
build, so putting together a Frankenserver is not daunting to me. I also have a
history of making educated guesses about failure rates and the cost of losing
data. So I made choices based on my experience and skills.
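
For the curious, here's a minimal back-of-envelope sketch of that savings
estimate. The wattages and the rate are made-up, illustrative numbers, not my
actual bill; plug in your own.

```python
# Back-of-envelope power-cost comparison for an always-on box.
# Every number here is an assumption for illustration, not my actual bill.
HOURS_PER_YEAR = 24 * 365          # 8760 hours of 24/7 operation
RATE_PER_KWH = 0.11                # assumed electricity rate, $/kWh

def yearly_cost(watts):
    """Cost of running a constant load of `watts` for one year."""
    return (watts / 1000.0) * HOURS_PER_YEAR * RATE_PER_KWH

old_server = 250        # assumed wall-socket draw of an older server, W
low_power_build = 100   # assumed wall-socket draw of a low-power build, W

savings = 2 * (yearly_cost(old_server) - yearly_cost(low_power_build))
print(f"Two-year savings: ${savings:.0f}")   # roughly $290 with these numbers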

For **me**, putting together a server out of commercial parts is a far better
bet than running a server in a virtual machine on desktop parts of any vintage,
which was the original question - whether a virtual server on top of Windows
running on some hardware was advisable for a beginner. For me, it's not. The
relative merits will vary from user to user according to their skills and
experience level. I was willing to learn Solaris to get zfs. Given what's
happened with Oracle since I started that, that may have been a bad bet, but
my server and data do now live and breathe for better or worse. 

But I have no fears of breathing life into new hardware and copying 
the old data over. Nor is it a trial to me to fire up a last-generation server, 
install a new OS and copy the data over. To me, that's all a cost/benefit 
calculation.

> So much so, in fact, that we should really consider the reference
> recommendation for a ZFS fileserver 
> to be certain configs of brand-name hardware, and NOT
> try to recommend other things to folks.
I personally would have loved to have had that when I started the
zfs/OpenSolaris trek a year ago. It was not available, and I paid my dues
learning the OS and zfs. I'm not sure, given where Oracle is taking Solaris,
that there is any need to recommend any particular hardware to folks in
general. I think the number of people following the path I took - using
OpenSolaris to get zfs, and buying or building a home machine to do it - is
going to nosedive dramatically, by Oracle's design.

To me the data stability issues dictated zfs, and OpenSolaris was where I got
that. I put up with the labyrinthine mess of figuring out what would and would
not run OpenSolaris in order to get zfs, and it worked OK. Data integrity was
what I was after.

I had sub-issues. It's silly (in my estimation) to worry about data integrity
on disks and not in memory, so ECC became an issue - hence my burrowing for the
most cost-efficient way to get ECC. Oh, yeah, cost: I wanted it as cheap as
possible, given the other constraints. Then hardware reliability. I actually
bought an off-duty server locally because of the cost advantage and the
perceived hardware reliability. I can't get OpenSolaris to work on it - yet, at
least. I'm sure that's down to my being an OpenSolaris neophyte. But it sure is
noisy.

What **my** compromise came down to is:
- new hardware, to stay inside the shallow end of the failure-rate bathtub
- burn-in, to get past the infant-mortality issues
- ECC as cheaply as possible, given that I actually wanted it to work
- modern SATA controllers for the storage, which dragged in PCIe and
  OpenSolaris-compatible controllers
- as low a power draw as possible, since that can save about $100 a year *for me*
- as low a noise level as possible, because I've spent too much of my life
  listening to machines desperately trying to stay cool.

What I could trade for this was speed - I didn't care whether the hardware was
particularly fast; it's a layer of data backup, not a mission-critical server.
And it had to run zfs, which is why I started this mess. Also, I don't have
huge data storage problems: I enforce the live backup data to stay under 4TB.
Yep, that's tiny by comparison. I have small problems. 8-)
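
For what it's worth, one way to enforce that kind of cap is a ZFS quota on the
backup dataset. A minimal sketch, assuming a dataset named tank/backup (the
name is mine, adjust to your pool layout); from a shell it's just
`zfs set quota=4T tank/backup`:

```python
# Minimal sketch: cap the live backup data with a ZFS quota.
# "tank/backup" is an assumed dataset name - substitute your own.
import subprocess

DATASET = "tank/backup"

# Equivalent to running: zfs set quota=4T tank/backup
subprocess.run(["zfs", "set", "quota=4T", DATASET], check=True)

# Show the property to confirm it took.
subprocess.run(["zfs", "get", "quota", DATASET], check=True)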

Result: a new-components server that runs zfs, works on my house network, uses
under 100W as measured at the wall socket, and stores 4TB. I got what I set out
to get, so I'm happy with it. 

This is not the system for everybody, but it works for me. Writing down what
you're trying to do is a great tool. People used to get really mad at me for
asking, in Very Serious Business Meetings, "If we were completely successful,
what would that look like?" It almost always happened that the guys who were in
complete agreement that we should do X and not Y disagreed violently when X had
to be written down.