Jan Betlach wrote: 

> - FFS seems to be reliable and stable enough for my purpose. ZFS is too 
> complicated and bloated (of course it has its advantages), however major 
> factor for me has been that it is not possible to encrypt ZFS natively 
> on FreeBSD as of now.

Illumos distro OmniOS CE 

https://omniosce.org/

has supported native ZFS encryption since r151032:

https://github.com/omniosorg/omnios-build/blob/r151032/doc/ReleaseNotes.md

Patrick Marchand wrote:

> Hi,
>
> 
> I'll be playing around with DragonflyBSD Hammer2 (and multiple offsite
> backups) for a home NAS over the next few weeks. I'll probably do a
> presentation about the experience at the Montreal BSD user group
> afterwards. It does not require as many resources as ZFS or BTRFS,
> but offers many similar features.
> 

Been there, done that! 


dfly# uname -a
DragonFly dfly.int.bagdala2.net 5.6-RELEASE DragonFly v5.6.2-RELEASE
#26: Sun Aug 11 16:04:07 EDT 2019
r...@dfly.int.bagdala2.net:/usr/obj/usr/src/sys/X86_64_GENERIC  x86_64

# Device                                Mountpoint      FStype  Options         Dump    Pass#
/dev/serno/B620550018.s1a               /boot           ufs     rw              1       1
# /dev/serno/B620550018.s1b             none            swap    sw              0       0
# Next line adds swapcache on the separate HDD instead of the original swap commented out above
/dev/serno/451762B0E46228230099.s1b     none            swap    sw              0       0
/dev/serno/B620550018.s1d               /               hammer  rw              1       1
/pfs/var                                /var            null    rw              0       0
/pfs/tmp                                /tmp            null    rw              0       0
/pfs/home                               /home           null    rw              0       0
/pfs/usr.obj                            /usr/obj        null    rw              0       0
/pfs/var.crash                          /var/crash      null    rw              0       0
/pfs/var.tmp                            /var/tmp        null    rw              0       0
proc                                    /proc           procfs  rw              0       0
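A note on the swapcache line above: pointing swap at the second drive only provides the backing store; swapcache itself is switched on through sysctls. A rough sketch from memory (tunable names should be verified against swapcache(8)):

```shell
# Enable DragonFly swapcache for filesystem metadata and file data
# (tunables quoted from memory; check swapcache(8) before relying on them)
sysctl vfs.swapcache.meta_enable=1
sysctl vfs.swapcache.data_enable=1
```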


# Added by Predrag Punosevac
/dev/serno/ZDS01176.s1a                 /data           hammer  rw              2       2
/dev/serno/5QG00WTH.s1a                 /mirror         hammer  rw              2       2
# /dev/serno/5QG00XF0.s1e               /test-hammer2   hammer2 rw              2       2

# Mount pseudo file systems from the master drive, which is used as a backup for my desktop
/data/pfs/backups                       /data/backups   null    rw              0       0
/data/pfs/nfs                           /data/nfs       null    rw              0       0


H2 lacks a built-in backup mechanism. I was hoping H2 would get some
kind of "hammer mirror-copy" equivalent from H1, or something like "zfs
send/receive". My server is still on H1 and I really enjoy being able
to continuously back it up. That's the only thing I miss in H2. On the
positive note, H2 did get support for a boot environment manager last year:

https://github.com/newnix/dfbeadm
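For anyone who has not seen the H1 workflow I am referring to, the continuous backup is just hammer's built-in mirroring. The paths below are illustrative examples matching my fstab above (the target must be a slave PFS; see hammer(8)):

```shell
# One-shot incremental copy of a HAMMER (H1) PFS to the backup disk
hammer mirror-copy /pfs/home /data/pfs/backups/home

# Or leave it running for continuous, near-real-time replication
hammer mirror-stream /pfs/home /data/pfs/backups/home
```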

Also, DF jails are stuck in 2004 or thereabouts. I do like their
NFSv3. DragonFly gets its software RAID discipline through the old,
unmaintained FreeBSD natacontrol utility. Hardware RAID cards are not
frequently tested, and the community seems keen on treating DF as a
desktop OS rather than a storage workhorse. Having said that, HDDs are
cheap these days and home users probably don't need anything bigger than
a 12TB mirror.


Zhi-Qiang Lei wrote:

> 1. FreeBSD was my first consideration because of ZFS, but as far as I
> know, ZFS doesn't work well with RAID controller, 

Of course not. ZFS is a volume manager and a file system in one. How
would ZFS detect errors and do self-healing if it relied on the HW RAID
controller for its information about the block devices?
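That is why the usual advice is to hand ZFS the raw disks and let it do the redundancy itself. A minimal sketch, with illustrative device and pool names:

```shell
# ZFS sees both whole disks, so it can checksum every block and
# repair a bad copy from the healthy side of the mirror
zpool create tank mirror da0 da1
zpool scrub tank      # walk the pool, verifying and repairing checksums
zpool status -v tank  # report any checksum errors that were found
```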

> and neither FreeBSD
> nor OpenBSD has a driver for the B120i array controller on the
> mainboard (HP is to be blamed). I could use AHCI mode instead RAID
> which also suits ZFS of FreeBSD, yet there is a notorious fan noise
> issue of that approach.
> 

That is not a genuine HW RAID card. That is built-in software
RAID. You should not be using that crap.


> 2. A HP P222 array controller works right out of the box on
> OpenBSD, maybe FreeBSD as well but the combination of ZFS and RAID
> controller seems weird to me. 
> 

FreeBSD has better support for HW RAID cards than OpenBSD. I am talking
about serious HW RAID cards like the former LSI controllers. Only Areca
used to fully support OpenBSD. Also, FreeBSD's UFS journaling is more
advanced than OpenBSD's journaling. However, unless you put H1 or H2 on
top of the hardware RAID, you will not get COW, snapshots, history, and
all the other stuff with any version of UFS.

I know people on this list who prefer HW RAID, and I also know people on
this list who prefer software RAID (including ZFS).


> 3. OpenBSD is actually out of my expectation. CIFS and NFS is just
> easy to setup. The most fabulous thing to me is the full disk
> encryption. I had a disk failure and the array controller was burnt
> once because I had some cooling issue. However, I was confident to get
> a replacement and no data was lost.


OpenBSD's NFS server implementation is slow compared to others, but for
home users YMMV. OpenBSD's softraid RAID 1 discipline, although
functional (I use it on this very desktop),

Code:
# bioctl sd4                                                 
Volume      Status               Size Device
softraid0 0 Online      2000396018176 sd4     RAID1
          0 Online      2000396018176 0:0.0   noencl <sd0a>
          1 Online      2000396018176 0:1.0   noencl <sd1a>

is very crude. It took me 4 days to rebuild a 1TB mirror after
accidentally powering off one HDD. That is just not usable for storage
purposes in real life.
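For completeness, when a chunk does fail, the rebuild is kicked off by hand with bioctl's -R flag; the device names below are taken from the output above (see bioctl(8)):

```shell
# Rebuild the degraded softraid volume sd4 onto the replacement chunk sd1a
bioctl -R /dev/sd1a sd4
```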


At work where I have to store petabytes of data I use only ZFS. At home
that is another story. 

For the record, BTRFS is vaporware and I would never store the pictures
of my kids on that crap.

Cheers,
Predrag
