On Wed, Aug 11, 2010 at 6:15 PM, Saxon, Will <will.sa...@sage.com> wrote:

>
> It really depends on your VM system, what you plan on doing with VMs and how 
> you plan to do it.
>
> I have the vSphere Enterprise product and I am using the DRS feature, so VMs 
> are vmotioned around
> my cluster all throughout the day. All of my VM users are able to create and 
> manage their own VMs
> through the vSphere client. None of them care to know anything about VM 
> storage as long as it's
> fast, and most of them don't want to have to make choices about which 
> datastore to put their new
> VM on. Only 30-40% of the total number of VMs registered in the cluster are 
> powered on at any given time.

        We have three production VMware vSphere 4 clusters, each with
four hosts. The number of guests varies, ranging from a low of 40 on
one cluster to 80 on another. We do not generally have many guests
being created or destroyed, just a slow, steady growth in their
numbers.

        The guests are both production and test/development systems,
and the vast majority of them are Windows, mostly Server 2008. The
rule is to roll out Windows servers as VMs, with the notable
exception of the Exchange servers, which remain physical. The VMs
are used for everything: domain controllers, file servers, print
servers, DHCP servers, DNS servers, workstations (my physical
desktop runs Linux, but I need a Windows system for Outlook and a
few other applications, so that runs as a VM), SharePoint servers,
MS-SQL servers, and other assorted application servers.

        We are using DRS and VMs do migrate around a bit
(transparently). We take advantage of "maintenance mode" for exactly
what the name says.
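
        For anyone scripting host evacuations, a rough sketch of
driving maintenance mode from the shell (this is the ESXi 4.x form;
on classic ESX the wrapper is vmware-vim-cmd, and with DRS fully
automated the running guests get vMotioned off for you):

    # enter maintenance mode; DRS evacuates the guests via vMotion
    vim-cmd hostsvc/maintenance_mode_enter
    # ... patch firmware, swap an HBA, etc. ...
    vim-cmd hostsvc/maintenance_mode_exit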

        We have had a fairly constant but low rate of FC issues with
VMware, from when we first rolled it out (version 3.0) through today
(4.1). The multipathing occasionally either loses one or more paths
to a given LUN or loses access to a LUN entirely. These problems do
not happen often, but when they do they have caused downtime on
production VMs. Part of the reason we started looking at NFS/iSCSI
was to get around the VMware (Linux) FC drivers. We also like the
low-overhead snapshot feature of ZFS (and are already leveraging it
extensively for other data).
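
        For anyone curious, the snapshot side really is as cheap as
advertised. A rough sketch, with a made-up pool/filesystem name
(tank/vmstore), of what we do for other data and would do for VM
storage:

    # snapshots are nearly instant and take no space up front
    zfs snapshot tank/vmstore@before-patch
    zfs list -t snapshot -r tank/vmstore
    # rolling back is just as quick if an update goes sideways
    zfs rollback tank/vmstore@before-patch

    # and when we chase the FC problems, listing the paths per LUN
    # from the ESX service console is our first stop; dead paths
    # show up in this listing:
    esxcfg-mpath -l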

        Now we are getting serious about using ZFS + NFS/iSCSI and are
looking to learn from others' experience as well as our own. For
example, is anyone using NFS with Oracle Cluster for HA storage for
VMs, or are sites trusting a single NFS server?
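
        In case it helps frame answers: the plumbing itself is
trivial, it is the failover story we are unsure about. A sketch,
with the server, export, and datastore label all hypothetical:

    # on the Solaris/ZFS box, export the filesystem over NFS
    zfs set sharenfs=on tank/vmstore
    # on each ESX host, mount the export as a datastore
    esxcfg-nas -a -o zfs-filer -s /tank/vmstore vmstore01
    esxcfg-nas -l    # confirm the datastore mounted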

-- 
{--------1---------2---------3---------4---------5---------6---------7---------}
Paul Kraus
-> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
-> Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
-> Technical Advisor, RPI Players