A fundamental element missing from the first mail is which hardware your
software-defined NAS should run on, and for what use.

I assume you are not talking about several nodes, on which you could run
Ceph or GlusterFS.

Is it a single full-size multi-disk server intended for intensive use?
In that case, don't reinvent the wheel; you have:
- FreeNAS
- napp-it (on Solaris/OmniOS/OpenIndiana)
- Nexenta
Just don't forget to replace whatever RAID SAS controller you have with an
IT-mode one (e.g. an LSI 2308) in order to really benefit from ZFS.
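
As a rough illustration of why the pass-through HBA matters: ZFS wants to
see the raw disks, so pool creation addresses them directly (the device
names below are just placeholders):

  # two striped mirrors built straight on the raw disks
  zpool create tank mirror da0 da1 mirror da2 da3
  # ZFS monitors and repairs each disk itself, no firmware in between
  zpool status tank

Behind a RAID firmware, ZFS would only see the controller's logical
volumes and could not manage the individual disks.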

Is it for home use? Why not consider some low-power hardware? If you want
multi-disk RAID, just buy a QNAP/Synology.

If one disk is enough, buy an Odroid HC2, which takes a 3.5" SATA disk; a
6TB drive fits perfectly. I don't know whether OpenBSD can be installed on
it (armhf/ARMv7 architecture), but either Armbian or openmediavault is
certainly a good choice to run on it, giving full 1Gb/s throughput while
drawing even less power than some famous-brand NAS like the ones named
above.
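
If you want to verify that such a little box really fills the link, a
quick iperf3 test between the NAS and a client is enough (the hostname
below is a placeholder):

  iperf3 -s                   # on the NAS
  iperf3 -c nas.local -t 30   # on a client; ~940 Mbit/s = line rate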

That said, if the aim of the project is simply to have fun building a NAS
from scratch on casual hardware running OpenBSD for the sake of it, I'll
shut my mouth.
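
For that route, here is a minimal softraid(4) mirror sketch, assuming sd0
and sd1 are the two data disks (hypothetical device names; see bioctl(8)
and the FAQ for details):

  # give each disk an MBR and a RAID-type 'a' partition
  fdisk -iy sd0 && disklabel -E sd0    # repeat for sd1
  # assemble the RAID1 volume; it attaches as a new sdN device (see dmesg)
  bioctl -c 1 -l sd0a,sd1a softraid0
  # then disklabel, newfs and mount the new volume as usual

A failed chunk can later be replaced with bioctl -R, which is the rebuild
path discussed below.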

Have phun!

On Sat, 16 Nov 2019 at 07:11, Jordan Geoghegan <jor...@geoghegan.ca> wrote:

>
> On 2019-11-15 20:47, Predrag Punosevac wrote:
> > Jan Betlach wrote:
> >
> > [snip]
> >
> >> 2. A HP P222 array controller works right out of the box on
> >> OpenBSD, maybe FreeBSD as well but the combination of ZFS and RAID
> >> controller seems weird to me.
> >>
> > FreeBSD has better support for HW RAID cards than OpenBSD. I am talking
> > about serious HW RAID cards like the former LSI controllers. Only Areca
> > used to fully support OpenBSD. Also, FreeBSD UFS journaling is more
> > advanced than OpenBSD journaling.
>
> OpenBSD's UFS doesn't do any journalling.
>
> [snip]
>
> >> 3. OpenBSD actually exceeded my expectations. CIFS and NFS are just
> >> easy to set up. The most fabulous thing to me is the full disk
> >> encryption. I had a disk failure, and the array controller was burnt
> >> out once because I had a cooling issue. However, I was confident about
> >> getting a replacement, and no data was lost.
> >
> > The OpenBSD NFS server implementation is slow compared to others, but
> > for home users YMMV.
> I was able to get gigabit line rate from an OpenBSD NAS to CentOS
> clients, no problem. The OpenBSD NFS client is admittedly somewhat slow:
> I was only able to get ~70MB/s out of it when connected to the same NAS
> that gets 100MB/s+ from Linux-based NFS clients.
> >
> > Code:
> > # bioctl sd4
> > Volume      Status               Size Device
> > softraid0 0 Online      2000396018176 sd4     RAID1
> >            0 Online      2000396018176 0:0.0   noencl <sd0a>
> >            1 Online      2000396018176 0:1.0   noencl <sd1a>
> >
> > is very crude. It took me 4 days to rebuild a 1TB mirror after
> > accidentally powering off one HDD. That is just not usable for storage
> > purposes in real life.
>
> I have an OpenBSD NAS at home with 20TB of RAID1 storage comprised of 10
> 4TB drives. Last time I had to rebuild one of the arrays, it took just
> under 24 hours to rebuild. This was some months ago, but I remember
> doing the math and I was getting just under 50MB/s rebuild speed. This
> was on a fairly ancient Xeon rig using WD Red NAS drives. If it took
> your machine 4 days to rebuild a 1TB mirror, something must be wrong,
> possibly hardware related as that's less than 4MB/s rebuild speed.
>
> >
> > At work where I have to store petabytes of data I use only ZFS. At home
> > that is another story.
> >
> > For the record, BTRFS is vaporware and I would never store the pictures
> > of my kids on that crap.
> >
> > Cheers,
> > Predrag
>
> Cheers,
>
> Jordan
>
>
