This article seems to disagree with you:
http://jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-your-data/

I know, and I won't be using ZFS on hardware RAID anyway. My issue here is
hardware compatibility, as I don't have enough SATA ports on my motherboard
to run all my disks. Either I run pure hardware RAID or an HBA + ZFS; I
just need to be sure of the card.

ZFS's performance cost is not an issue when you have a high-end desktop, IMO.


2016-02-01 18:39 GMT+01:00 Gordon Messmer <gordon.mess...@gmail.com>:

> On 02/01/2016 08:33 AM, thibaut noah wrote:
>
>> Yeah, I saw some build reviews on FreeNAS; they recommend ECC, but it is
>> not mandatory (and actually, after reading some tests, I don't get all
>> the fuss about ECC RAM).
>>
>
> Just as with disks, bits can flip in RAM.  Probably the most important
> feature of ZFS is checksums on all blocks so that bit flips can be detected
> and repaired.  ECC RAM does the same for memory.  If you don't have ECC
> RAM, and bits flip in memory, you're likely to silently corrupt data.
>
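(For what it's worth, the self-healing idea described above is easy to
sketch. The Python below is only a toy illustration of per-block checksums
plus a mirrored copy; it is not ZFS's actual code, and the names and block
layout are made up.)

import hashlib

def checksum(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

class MirroredStore:
    """Keep two copies of every block plus a checksum taken at write time."""

    def __init__(self):
        self.copy_a = {}  # block id -> bytes
        self.copy_b = {}  # block id -> bytes
        self.sums = {}    # block id -> checksum recorded when written

    def write(self, blk_id: int, data: bytes) -> None:
        self.copy_a[blk_id] = data
        self.copy_b[blk_id] = data
        self.sums[blk_id] = checksum(data)

    def read(self, blk_id: int) -> bytes:
        """Return the block, repairing a corrupted copy from the good one."""
        want = self.sums[blk_id]
        for good, bad in ((self.copy_a, self.copy_b),
                          (self.copy_b, self.copy_a)):
            if checksum(good[blk_id]) == want:
                if checksum(bad[blk_id]) != want:
                    bad[blk_id] = good[blk_id]  # heal the corrupted copy
                return good[blk_id]
        raise IOError("both copies fail the checksum; unrecoverable")

store = MirroredStore()
store.write(0, b"important data")
store.copy_b[0] = b"importbnt data"   # simulate a bit flip on one copy
assert store.read(0) == b"important data"

(Plain mirroring without the checksum stores the two copies but has no way
to tell which one is right when they disagree.)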
>> Saw that too and I don't get it, I mean, what the hell? You can replace
>> disks with bigger ones, but you'll have all this trouble if you want to
>> expand the array? That doesn't feel right.
>>
>
> The same is true of any disk array, I'd think.  If you replace a disk, you
> need to rebuild the array.  The array size is determined by the smallest
> member in the array.  Given those two constraints, there's nothing unusual
> about the process.
>
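(A quick back-of-the-envelope example of that sizing rule. The function is
only illustrative; it treats every member as the smallest disk and ignores
ZFS's real space accounting, parity layout and metadata overhead.)

def usable_capacity(disk_sizes_tb, parity_disks=1):
    """Usable space when every member counts as the smallest disk."""
    smallest = min(disk_sizes_tb)
    data_disks = len(disk_sizes_tb) - parity_disks
    return smallest * data_disks

# Mixing one 2 TB disk into a set of 4 TB disks wastes the extra space:
print(usable_capacity([4, 4, 4, 2], parity_disks=1))  # 6 TB, not 10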
>> Thing is, spending $600+ on a NAS doesn't seem worth it compared to
>> buying a high-end RAID card.
>>
>
> ZFS (and btrfs) and hardware RAID are not, in my opinion, comparable.
> RAID arrays don't keep checksum information on each block, so if a bit
> flips they don't have a means of reliably repairing it.  ZFS can repair bit
> flips.  You probably don't want to use ZFS on hardware RAID, since many of
> ZFS' features rely on accessing each disk individually.  A battery-backed
> write cache can be useful, but I don't think it's better than having a UPS
> that's monitored.
>
>> Also, it's either having a second case or buying a dual-system case,
>> which costs more than $500, those guys...
>> Spending a lot of money on a RAID card also seems like spending money
>> for nothing, as it seems I'll get better performance with an HBA card +
>> ZFS than with a RAID card. (I did some research in the meantime.)
>>
>
> It's possible, but I don't think that's necessarily true.  ZFS' features
> come at a performance cost, in general.
>
-- 
users mailing list
users@lists.fedoraproject.org
To unsubscribe or change subscription options:
https://admin.fedoraproject.org/mailman/listinfo/users
Fedora Code of Conduct: http://fedoraproject.org/code-of-conduct
Guidelines: http://fedoraproject.org/wiki/Mailing_list_guidelines
Have a question? Ask away: http://ask.fedoraproject.org
