On 19 Nov 2010, at 03:53, Edward Ned Harvey wrote:

>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> 
>> SAS Controller
>> and all ZFS Disks/ Pools are passed-through to Nexenta to have full
> ZFS-Disk
>> control like on real hardware. 
> 
> This is precisely the thing I'm interested in.  How do you do that?  On my
> ESXi (test) server, I have a solaris ZFS VM.  When I configure it... and add
> disk ... my options are (a) create a new virtual disk (b) use an existing
> virtual disk, or (c) (grayed out) raw device mapping.  There is a comment
> "Give your virtual machine direct access to a SAN."  So I guess it only is
> available if you have some iscsi target available...
> 
> But you seem to be saying ... don't add the disks individually to the ZFS
> VM.  You seem to be saying...  Ensure the bulk storage is on a separate
> sas/scsi/sata controller from the ESXi OS...  And then add the sas/scsi/sata
> PCI device to the guest, which will implicitly get all of the disks.  Right?
> 
> Or maybe ... the disks have to be scsi (sas)?  And then you can add the scsi
> device directly pass-thru?

As mentioned by Will, you'll need to use VMDirectPath, which lets you map a 
hardware device (the disk controller) directly to the VM without going through 
the VMware-managed storage stack. Note that you are presenting the hardware 
directly, so the host platform needs passthrough support (VT-d) and the 
controller needs to be one that ESXi can hand over cleanly.
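
Roughly, the setup looks like this in the vSphere Client (the exact menus vary 
a bit between versions, and the .vmx values below are only illustrative 
placeholders, not anything from my config):

  1. Host -> Configuration -> Advanced Settings (Hardware) -> Configure
     Passthrough -> tick the SAS controller -> reboot the host.
  2. Edit the ZFS VM's settings -> Add... -> PCI Device -> pick the controller.

  After step 2 the VM's .vmx should end up with entries along these lines:

    pciPassthru0.present  = "TRUE"
    pciPassthru0.id       = "05:00.0"   <- example PCI address, yours will differ
    pciPassthru0.vendorId = "0x1000"    <- example values for an LSI SAS HBA
    pciPassthru0.deviceId = "0x0072"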

You'll need two controllers in the server, since ESXi needs at least one disk 
that it controls, formatted as VMFS, to hold some of its own files as well as 
the .vmx configuration files for the VM that will host the storage (and that 
VM's swap file, so the datastore has to be at least as large as the memory you 
plan to assign to the VM). Caveat: while you can install ESXi onto a USB drive, 
you can't manually format a USB drive as VMFS, so in practice you'll want at 
least one SATA or SAS controller that you leave under ESXi's control, and a 
second controller, with the bulk of the storage attached, passed through to the 
ZFS VM.
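
As a rough back-of-the-envelope example (all numbers made up for illustration): 
if you give the ZFS VM 8 GB of RAM with no memory reservation, ESXi will create 
a ~8 GB .vswp file next to the .vmx, so the local datastore needs roughly:

   8 GB  swap (.vswp, ~= assigned RAM minus any reservation)
  10 GB  virtual disk for the ZFS VM's own OS
  <1 GB  .vmx, logs, etc.
  -----
  ~20 GB of local VMFS is comfortable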

As far as the eggs-in-one-basket issue goes, you have two options: use a 
clustering solution like Nexenta HA between two servers, which gives you a 
highly available storage solution on two servers that can also run your VMs, 
or, for a more manual failover, just use zfs send|recv to replicate the data to 
a second box.
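
For the manual route, the replication itself can be as simple as an hourly cron 
job along these lines (pool, dataset, snapshot and host names here are only 
placeholders):

  # on the primary box
  zfs snapshot tank/vmstore@hourly-03
  zfs send -i tank/vmstore@hourly-02 tank/vmstore@hourly-03 | \
      ssh backupbox zfs recv -F tank/vmstore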

You can also accomplish something similar with only one controller by manually 
creating Raw Device Mappings (RDMs) of the local disks and presenting them 
individually to the ZFS VM. You don't get direct access to the controller that 
way, so I don't think things like blinking a drive LED will work in this 
configuration, since you're not talking directly to the hardware. There's no UI 
for creating RDMs on local drives, but there's a good procedure over at 
<http://www.vm-help.com/esx40i/SATA_RDMs.php> which explains the technique.
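
In rough outline the procedure comes down to something like this from the ESXi 
console (the device and datastore names below are placeholders - see the 
article for the real steps):

  # find the local disk's device name
  ls /dev/disks/
  # create an RDM pointer file for it on the local VMFS datastore
  # (-z = physical compatibility mode, -r = virtual compatibility mode)
  vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_SERIAL \
      /vmfs/volumes/datastore1/rdms/disk1-rdm.vmdk
  # then add disk1-rdm.vmdk to the ZFS VM as an existing virtual disk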

From a performance standpoint it works really well - I have NFS-hosted VMs in 
this configuration getting 396 MB/s throughput on simple dd tests, backed by 10 
mirrored ZFS disks, all protected with hourly send|recv to a second box.
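
For what it's worth, the dd tests were nothing elaborate - something along the 
following lines from inside one of the guests (paths and sizes are arbitrary 
examples; use a file larger than the guest's RAM, and keep in mind /dev/zero 
will give flattering numbers if compression is enabled on the backing dataset):

  # inside a Linux guest whose virtual disk lives on the NFS datastore
  dd if=/dev/zero of=/tmp/ddtest bs=1M count=16384    # 16 GB write test
  dd if=/tmp/ddtest of=/dev/null bs=1M                # read it back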

Cheers,

Erik
