> Subject: Re: sysutils/iocage in a NAS environment
> To: freebsd-jail@freebsd.org
> From: allanj...@freebsd.org
> Date: Fri, 31 Jul 2015 10:22:39 -0400
>
> On 2015-07-31 06:24, Kai Gallasch wrote:
>>
>> Hi.
>>
>> Just read that FreeNAS 10 is going to use sysutils/iocage for managing
>> local jails on the NAS. That is great news and it will give iocage more
>> publicity and a wider user base!
>>
>> I am currently testing FreeNAS 9 as a NAS for my FreeBSD servers. Each
>> (FreeBSD 10) server is running between 10 and 50 iocage jails.
>>
>> iocage's documentation states that each iocage installation needs a
>> zpool to run on.
>>
>> So the only way I see to use a NAS for iocage deployment would be to
>> make use of iSCSI (block-based) mounts. The NAS would offer an iSCSI
>> target to the jailhost. Once attached, it shows up as a block-based
>> LUN. You could then create a zpool on this LUN and use that zpool for
>> iocage. (Each time the jailhost starts up, the iSCSI attach + zpool
>> import would have to happen automatically.)
>>
>> Does this approach make any sense when both performance and stability
>> are needed?
>>
>> Is it generally advisable to use zpools on iSCSI targets, given that
>> they are basically iSCSI-exported zvols running on top of another zpool?
>>
>> Regards,
>> Kai.
>>
>
> If FreeNAS 9 is your NAS, why are the disks remote?
>
> Normally, you'd run iocage on the NAS (the machine with the physical
> disks in it) and have direct access to the zpool.
>
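
For reference, the boot-time wiring Kai describes (automatic iSCSI login
followed by a zpool import) could look roughly like this on a FreeBSD 10
jailhost. The portal address, target and pool names below are made up, and
the late import is only one way to cope with the pool's devices appearing
only after the initiator logs in:

    # /etc/iscsi.conf -- hypothetical portal and target name
    jailpool0 {
        TargetAddress = 192.168.10.1
        TargetName    = iqn.2015-07.org.example.nas:jailpool0
    }

    # /etc/rc.conf -- log in to all targets from /etc/iscsi.conf at boot
    iscsid_enable="YES"
    iscsictl_enable="YES"
    iscsictl_flags="-Aa"

    # The pool can only be imported once the iSCSI session is up,
    # e.g. from a late rc.d script:
    zpool import jailpool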

I guess Kai looks at FreeNAS as, well... only a NAS. I mean, an appliance
whose main task is to provide networked storage to servers which are running,
in his example, another version of FreeBSD.

Personally, I'm in line with him in preferring to manage ZFS pools at the OS
layer rather than at the SAN/NAS layer. In fact, I'm currently testing a
solution where a number of "SAN nodes" simply give "computational nodes"
access to physical disks via iSCSI (each disk is exposed as its own iSCSI
LUN). In such a design, the SAN nodes become a sort of "networked disk
provider", running with minimal complexity and resources, while ZFS pool
creation and dataset and zvol management are done by the servers' OS.
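
On the storage-node side, exporting a raw disk instead of a zvol can be as
simple as a ctl.conf(5) fragment like the one below for FreeBSD's ctld (plus
ctld_enable="YES" in rc.conf). The disk device, IQN and portal are made up;
the point is that the LUN backs directly onto the physical disk, with no
zpool underneath:

    # /etc/ctl.conf on a storage node -- hypothetical names
    portal-group pg0 {
        discovery-auth-group no-authentication
        listen 0.0.0.0
    }

    target iqn.2015-07.org.example.san1:disk0 {
        auth-group no-authentication
        portal-group pg0
        lun 0 {
            # back the LUN with the raw disk, not a zvol
            path /dev/da1
        }
    }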

My goal is to cut out layers (no more ZFS on iSCSI LUNs on zvols on physical
disks) to simplify management and reduce complexity and hardware requirements,
and to allow any computational node to import and export ZFS pools built from
vdevs composed of mirrored disks (each of them provided by a different
storage node), thus obtaining fault tolerance across servers, storage nodes
and physical disks.
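
As a minimal sketch, from a computational node where two LUNs coming from two
different storage nodes have attached as da2 and da3 (device and pool names
are made up):

    # Each half of the mirror lives on a different storage node, so the
    # pool survives the loss of either node or either disk.
    zpool create tank mirror da2 da3

    # Any other computational node can later take the pool over:
    zpool export tank    # on the old node
    zpool import tank    # on the new node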

If someone is interested, I'll be glad to post the results of performance
tests with near-production hardware. I should have them before the end of
this month.

Regards.

Andrew