Borys Saulyak <borys.saulyak <at> eumetsat.int> writes:
> 
> > Your pools have no redundancy...
>
> Box is connected to two fabric switches via different HBAs, storage is
> RAID5, MPxIO is ON, and after all that my pools have no redundancy?!?!

As Darren said: no, there is no redundancy that ZFS can use. It is important 
to understand that your setup _prevents_ ZFS from self-healing. You need a 
ZFS-redundant pool (mirror, raidz or raidz2), or a filesystem with the 
property copies=2, to enable self-healing.
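
For instance, setting copies=2 on an existing filesystem (pool and dataset 
names here are just placeholders) tells ZFS to store two copies of every 
block; note it only applies to data written after the property is set:

  zfs set copies=2 tank/data
  zfs get copies tank/data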

I would recommend making multiple LUNs visible to ZFS and creating redundant 
pools out of them. Browse the past 2 years or so of the zfs-discuss@ 
archives to get an idea of how others with the same kind of hardware are 
doing it. For example, export each disk as a LUN and create multiple raidz 
vdevs, or create 2 hardware RAID5 arrays and mirror them with ZFS; see the 
sketches below.
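
A minimal sketch of both approaches (the device names c2t0d0 etc. are 
placeholders; substitute the LUNs your array actually presents):

  # Two raidz vdevs built from individual disks exported as LUNs:
  zpool create tank \
      raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 \
      raidz c2t4d0 c2t5d0 c2t6d0 c2t7d0

  # Or: mirror two hardware RAID5 LUNs, so ZFS holds the redundancy:
  zpool create tank mirror c3t0d0 c3t1d0

Either way ZFS has a second copy to repair from when a checksum fails.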

> > ...and got corrupted, therefore there is nothing ZFS
> This is exactly what I would like to know. HOW could this have happened? 

Ask your hardware vendor. The hardware corrupted your data, not ZFS.

> I'm just questioning myself. Is it really as reliable a filesystem as
> presented, or is it better to keep away from it in a production environment?

Consider yourself lucky that the corruption was reported by ZFS. Other 
filesystems would have silently returned corrupted data, and it might have 
taken you days or weeks to troubleshoot. As for myself, I use ZFS in 
production to back up 10+ million files, have seen occurrences of hardware 
causing data corruption, and have seen ZFS self-heal. So yes, I trust it.
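
You can verify this yourself on a redundant pool: run a scrub and check the 
per-device checksum counters afterwards (pool name is a placeholder):

  zpool scrub tank
  zpool status -v tank

zpool status reports checksum errors per device and, once the scrub 
completes, how much data was repaired.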

-marc

