On 03/07/17 13:21, Roderick wrote:
> On Tue, 7 Mar 2017, Christian Weisgerber wrote:
> 
>> On 2017-03-07, Roderick <hru...@gmail.com> wrote:
>>
>>>      Disk are to be readable for many decades. Standard File System
>>>      readable after moving the Disks to another computer, different
>>>      hardware, perhaps with different OS.
>>
>> *uncontrollable laughter*
> 
> Then you see what the problem is. But it is not for laughing; it was
> the essence of my question.
> 
> I can still read very old SCSI disks, and also 3 1/2'' floppies, but
> not always 5 1/4'' ones (because of the low-level formatting). It
> depends on the file system, of course. I would not use FAT; ufs would
> be a good choice.

Actually, I've had far better luck reading 5.25" floppies than old
3.5".  Last time I checked my 8" floppies, they were doing better than
the 5.25".  In fact, I've got the gear to calibrate a 5.25" or 8" floppy
drive -- you "repair" a 3.5" floppy drive with the trashcan.  Which is
great... assuming you can get a new one.

Very old SCSI and IDE disks that made it five years usually made it a
lot longer (and some never seem to die).  However, I'm seeing a
continued decline in long-term product reliability as prices continue to
plummet in the IT world.  I really don't think you will be seeing
ten-year-old 2TB SATA (or SAS) disks very often.  (Please don't quote
MTBFs to me; it makes me laugh for a moment, and then I get sad when I
realize people believe that shit.)

Technology changes.  Hard to tell when.  A number of years ago, I had an
opportunity to buy a bunch of IDE to SCSI enclosures for really cheap.
I ... uh ... loaded up.  These things were great -- 16 IDE disks,
attached to one SCSI port -- you could carve up the array into multiple
virtual drives, and I was thinking, "wow...16 500G disks...that's a lot
of storage!  1TB disks are coming soon, too!"  Well, very shortly after
I acquired the last of these things, the market turned, and SATA killed
IDE.  Only a few token IDE disks were still being produced, and they
were EXPENSIVE compared to SATA drives.  I found a cute little
IDE-to-SATA
adapter that actually fit in the array's trays, but then I quickly
discovered that 1T was the limit of the array's disk handling abilities.
 Meanwhile, the rest of the world said, "What's SCSI?" -- finding
something to plug the array into was becoming a trick.  I.e., I loaded
up on a lot of junk.  And someone was freaking brilliant to know when to
get OUT of that technology.

The point is, you can't design ONE box for ten years of life.  With
modern SSD tech, I suspect you won't see a SATA port on a computer in
ten years.

What you need to do is have simple, reliable, and movable solutions,
where the REPLACEMENT of the solution is part of it.

Your desire to be able to move the disks from one computer to another is
good -- when your base hw dies, you need to be able to transport your
disks to something else.  I can't think of another OS that does that
better than OpenBSD. But you take that opportunity as a clue that maybe
you need to update your tech, too.
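A rough sketch of what that disk move looks like on OpenBSD (the device
name sd1 and the 'a' partition are placeholders -- check dmesg for what
your machine actually attaches the disk as):

```shell
# See what the newly attached disk was recognized as (sd1 here is an example).
dmesg | tail

# Inspect the disklabel the old machine wrote:
disklabel sd1

# Check the FFS filesystem, then mount it read-only first to be safe:
fsck -p /dev/sd1a
mount -o ro /dev/sd1a /mnt
```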


Build a simple solution with simple hw of today.  When that hw starts
getting old and looking rather "different" than newer hw, migrate.  Your
data is just data, that's what's important.  The hw, the platform, the
OS, can all be swapped out...AND SHOULD BE swapped out when appropriate.
 You ain't marrying your solution, quit trying to make it last longer
than modern marriages last.

A word on ZFS:  I've used it.  I've used a few features many people
probably haven't.  It's got a lot of features.  It has a huge number of
knobs.  It's about as anti-OpenBSD as I can imagine, and I'm not talking
about the license.  It is about as far from "Just Works" as you can get
a file system to be anymore.  I had a friend tell me once how he'd never
want to run a database on anything OTHER than ZFS because of all the
file system integrity features.  Then he admitted how many times the
system crashed on him...  Um, crashes for databases are bad; file system
magic doesn't change that.  My experience with ZFS was that it had the
stability of a pig on stilts, and not much more grace.  In many ways,
ZFS seems to me to be a throwback to the 1980s when file systems needed
to be "tuned" and maintained.  Your opinion may vary.  I know some
people whose opinions I respect a lot who think ZFS is the greatest
thing ever.  I just disagree on that point.


Note: OpenBSD's softraid supports three disk RAID1.  A lot of people
don't understand that -- it's THREE copies of your data.  Lose a disk,
you still got TWO copies to rebuild from.
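For the curious, a minimal sketch of setting one up (device names
sd0 through sd4 are examples; see bioctl(8) and softraid(4) for the
real details):

```shell
# On each of the three disks, create a RAID partition (fstype "RAID")
# with disklabel -E; the 'a' partitions below are examples.

# Assemble the three chunks into one RAID1 volume; it attaches as a
# new sd device (say sd3):
bioctl -c 1 -l sd0a,sd1a,sd2a softraid0

# If a disk dies, check the volume's status and rebuild onto a
# replacement chunk:
bioctl sd3
bioctl -R /dev/sd4a sd3
```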


So my recommendation would be a simple solution that will fit you for
maybe two or three years, maybe three disk RAID1, and every two or three
years look at your system and the alternatives out there and ask if it
makes sense to upgrade now or wait a year or two.  Move your data to a
new system when appropriate, asking yourself each time, "what's a good
solution NOW?".  And have an off-site rotated backup of all your data.
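For the backup part, plain dump(8)/restore(8) on ffs is about as simple
and movable as it gets.  A sketch (the paths and device names are
examples):

```shell
# Full (level 0) dump of the data filesystem to a file on the backup disk:
dump -0au -f /backup/data.dump0 /dev/rsd1a

# Later, smaller level-1 incrementals on top of it:
dump -1au -f /backup/data.dump1 /dev/rsd1a

# To restore onto a freshly newfs'd filesystem mounted at /mnt:
cd /mnt && restore -rf /backup/data.dump0
```

Rotate the dump files somewhere off-site and you've covered the
disaster case too.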

Nick.
