Michelle Sullivan
http://www.mhix.org/
Sent from my iPad
> On 05 May 2019, at 05:36, Chris <chrco...@gmail.com> wrote:
> 
> Sorry, to clarify, Michelle: I do believe your tale of events, I just
> meant that it reads like a tale as it's so unusual.

There have been multiple separate problems over the last 8 years, but the 
final killer was without a doubt a catalog of disasters.

> 
> I also agree that at this point in time there should probably be more
> ZFS tools written for the few situations that do happen when things
> get broken.

This is my thought too, though I am in agreement with the devs that a ZFS 
“fsck” is not the way to go.  I think we (anyone using ZFS) need a “salvage 
what data you can to elsewhere” type tool...  I have yet to explore the one 
written under Windows that a dev sent me, to see if it works (only because of 
the logistics of getting a Windows 7 image onto a USB drive that I can put 
into the server for recovery attempts).  If it works, a command-line version 
would be the real answer to my prayers (and others’ too, I imagine).
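
For anyone in a similar spot, the nearest thing today is the usual 
read-only/rewind import dance plus zdb.  Roughly (the pool name “tank” is 
just a placeholder here, and the -F rewind can be destructive, hence the -n 
dry run first):

    # try a plain read-only import so nothing gets written
    zpool import -o readonly=on -f tank

    # dry-run a rewind to an earlier txg before committing to it
    zpool import -F -n tank

    # poke at the pool metadata without importing it at all
    zdb -e -d tank

If a read-only import succeeds, zfs send of whatever snapshots survive to 
another pool is the “salvage to elsewhere” part.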

> 
> Although I still stand by my opinion that ZFS is a huge amount more
> robust than UFS. UFS always felt like I only had to sneeze the wrong
> way and I would get issues.  There was even one occasion where simply
> installing the OS with its defaults gave me corrupted data on UFS (the
> 9.0 release had a nasty UFS journalling bug which corrupted data
> without any power cuts etc.).

Which I find interesting in itself, as I have a machine running 9.3 which 
started life as a 5.x (which tells you how old it is) and it’s still running 
on the same *Compaq* RAID5 with UFS on it... with the original drives, and a 
hot spare that still hasn’t been used...  The only hardware work it has had 
is a motherboard replacement 12 months ago, when it just stopped POSTing and 
I couldn’t work out what had failed...  It has never had drive corruption 
barring the fscks following hard power issues...  It went with me from 
Brisbane to Canberra, back to Brisbane in the back of a car, then to Malta 
and back again, and is still downstairs...  It’s my primary MX server and 
primary resolver for home and handles around 5k emails per day.

> 
> In future I suggest you use mirrors if the data matters.  I know it
> costs more in capacity for redundancy, but in today's era of large
> drives it's the only real sensible option.

Now it is, and it was on my list of things to start just before this 
happened...  In fact I had already got 4*6T drives to copy everything off, 
ready to rebuild the entire pool with 16*6T drives in a raid 10 like config 
(see the sketch below)...  The power/corruption beat me to it.
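
(For reference, a “raid 10 like” config in ZFS terms is just a stripe of 
two-way mirror vdevs; with 16 drives that would be something along these 
lines, device names made up:

    zpool create newpool \
        mirror da0 da1   mirror da2 da3 \
        mirror da4 da5   mirror da6 da7 \
        mirror da8 da9   mirror da10 da11 \
        mirror da12 da13 mirror da14 da15

Eight mirrors striped together: any one drive per mirror can die, at the 
cost of half the raw capacity.)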

> 
> On the drive failures you have clearly been quite unlucky, and the
> other stuff is unusual.
> 

Drive failure wise, I think my “luck” has been normal...  Remember this is an 
8 year old system and drives are only certified for 3 years... getting 5 
years out of them running 24x7 is not bad (especially considering the 
workload).  The problem has always been how ZFS copes, and that has been 
getting better over time, but this metadata corruption is similar to 
something I have seen before, and that is where I have a problem with it... 
(especially when ZFS devs start making statements about how the system is 
always right and everything else is the hardware’s fault, and that if you’re 
not running enterprise hardware you deserve what you get... and then advocate 
installing it on laptops etc..!)
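
(“Normal” going by power-on hours, that is... 3 years of 24x7 works out to 
roughly 26,000 hours.  Assuming smartmontools is installed and ada0 is one 
of the drives, something like

    smartctl -A /dev/ada0 | grep -i power_on

shows where each disk stands.)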

> Best of luck

Thanks, I’ll need it, as my changes to the code did not allow the mount, 
though they did allow zdb to parse the drive...  I guess what I thought was 
there in zdb is not the same code as in the zfs module.
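
(For the curious, the sort of thing zdb can do against an unimported pool 
is e.g.

    zdb -e -dddd tank

where -e operates on an exported/un-importable pool and the extra d’s turn 
up the per-dataset detail; “tank” is again a placeholder.  Mounting goes 
through the kernel module’s own copy of that logic, which is evidently where 
it diverges.)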

Michelle

> 
>> On Sat, 4 May 2019 at 09:54, Pete French <petefre...@ingresso.co.uk> wrote:
>> 
>> 
>> 
>>> On 04/05/2019 01:05, Michelle Sullivan wrote:
>>> New batteries are only $19 on eBay for most battery types...
>> 
>> Indeed, my problem is actual physical access to the machine, which I
>> haven't seen in ten years :-) I even have a replacement server sitting
>> behind my desk which we never quite got around to installing. I think
>> the next move it makes will be to the cloud though, so I'm not too
>> worried.
>> 