From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of James Hess
Sent: Thursday, 13 August 2009 3:38 PM
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] ZFS, ESX ,and NFS. oh my!
> The real benefit of using a
> separate zvol for each vm is the instantaneous
> cloning of a machine, and the clone will take almost
> no additional space initially.
You don't have to use ZVOL devices to do that.
As mentioned by others...
> zfs create my_pool/group1
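For illustration, here is what that filesystem-level cloning could look like, no zvols required. This is a sketch; the dataset and snapshot names are hypothetical, not taken from the thread.

```shell
# Snapshot a "golden" VM filesystem, then clone it: the clone is
# created instantly and shares blocks with the snapshot, so it
# consumes almost no additional space until it diverges.
zfs snapshot my_pool/group1/vm1@golden
zfs clone my_pool/group1/vm1@golden my_pool/group1/vm2-clone
```

The same snapshot can back any number of clones, which is the "build once, clone many" workflow being described.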
> "rw" == Ross Walker writes:
rw> you can create a LAG which does redundancy and load balancing.
Be careful: these aggregators are all hash-based, so the question is,
of what is the hash taken? The widest scale on which the hash can be
taken is L4 (TCP source/dest port numbers), which means a single TCP
connection always hashes to one physical link and can never get more
than that one link's bandwidth.
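As an illustration, an aggregation with an L4 hash policy on OpenSolaris might be set up along these lines (Crossbow-era dladm syntax; the interface names are hypothetical, and older builds use the key-based `-d` form instead):

```shell
# Aggregate two NICs into aggr0 with an L4 hash policy: TCP/UDP port
# numbers feed the hash, so distinct connections can land on
# different member links, but any one connection sticks to a single
# link for its lifetime.
dladm create-aggr -P L4 -l e1000g0 -l e1000g1 aggr0
dladm show-aggr aggr0   # verify member links and the hash policy
```

So a LAG helps when many clients (or many connections) hit the server, not when one NFS or iSCSI session needs more than one link's worth of throughput.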
-----Original Message-----
From: zfs-discuss-boun...@opensolaris.org on behalf of Steve Madden
Sent: Wed 7/1/2009 8:46 PM
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] ZFS, ESX ,and NFS. oh my!
Why the use of zvols, why not just;
zfs create my_pool/group1
zfs create my_pool/group1/vm1
zfs create my_pool/group1/vm2
and export my_pool/group1
If you don't want the people in group1 to see vm2 anymore, just zfs rename it
to a different group.
I'll admit I am coming into this green.
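Filling out Steve's suggestion with the NFS export and the rename step. The pool and dataset names are from his example; the second group and the sharenfs setting are assumptions for the sketch.

```shell
# One filesystem per VM under an exported parent; children inherit
# the sharenfs property from the parent.
zfs create my_pool/group1
zfs create my_pool/group1/vm1
zfs create my_pool/group1/vm2
zfs set sharenfs=on my_pool/group1
# Hide vm2 from group1 by renaming it into another hierarchy:
zfs create my_pool/group2
zfs rename my_pool/group1/vm2 my_pool/group2/vm2
```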
I would think you would run into the same problem I have, where you can't
view child filesystems through the parent filesystem's NFS share.
> From: Scott Meilicke
> Date: Fri, 19 Jun 2009 08:29:29 PDT
> To:
> Subject: Re: [zfs-discuss] ZFS, ESX ,and NFS. oh my!
So how are folks getting around the NFS speed hit? Using SSD or battery backed
RAM ZILs?
Regarding limited NFS mounts, underneath a single NFS mount, would it work to:
* Create a new VM
* Remove the VM from inventory
* Create a new ZFS file system underneath the original
* Copy the VM to that file system
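On the ZIL question: a dedicated log device can be added to a live pool. A minimal sketch, with a hypothetical pool name and device:

```shell
# Attach an SSD as a separate intent log (slog); NFS's synchronous
# writes then land on the fast log device instead of the main disks.
zpool add tank log c4t0d0
zpool status tank   # the device appears under a "logs" heading
```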
Scott Meilicke wrote:
> Obviously iSCSI and NFS are quite different at the storage level, and I
> actually like NFS for the flexibility over iSCSI (quotas, reservations,
> etc.)
Another key difference between them is that with iSCSI, the VMFS filesystem
(built on the zvol presented as a block device) sits between ZFS and the
individual VM files, so ZFS snapshots and clones operate on the whole VMFS
volume rather than on a single VM.
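A sketch of that zvol-backed stack on the storage side, using COMSTAR-era commands. The pool name and size are hypothetical, and the GUID placeholder must be replaced with the one sbdadm prints:

```shell
# Create a zvol and export it over iSCSI via COMSTAR; ESX then
# formats the LUN with VMFS, putting VMFS between ZFS and the VMs.
zfs create -V 100g tank/esx-lun0
sbdadm create-lu /dev/zvol/rdsk/tank/esx-lun0   # prints the LU GUID
stmfadm add-view 600144f0...                    # GUID from sbdadm output
itadm create-target                             # one target; LUNs hang off it
svcadm enable -r svc:/network/iscsi/target:default
```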
> From: Scott Meilicke
> Date: Tue, 16 Jun 2009 14:47:26 PDT
> To:
> Subject: Re: [zfs-discuss] ZFS, ESX ,and NFS. oh my!
My testing with 2008.11 iSCSI vs NFS was that iSCSI was about 2x faster. I used
a stripe of three 5-disk raidz vdevs (15 1.5 TB SATA disks). I just used the
default ZIL, no SSD or similar to make NFS faster.
I think (don't quote me) that ESX can only mount 64 iSCSI targets, so you
aren't much better off. But COMSTAR (2009.06) exports a single iSCSI target
with multiple LUNs, so that gets around the limitation. I could be all wet on
this one, however.
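The pool layout described above, three 5-disk raidz vdevs striped together, would be created along these lines; the disk names are hypothetical:

```shell
# Each raidz line is one 5-disk vdev; zpool stripes writes across
# the three vdevs, giving the "3 stripe, 5 disk raidz" layout.
zpool create tank \
  raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
  raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 \
  raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0
```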
HUGE | David Stahl wrote:
That is a very interesting idea, Ryan. Not as ideal as I hoped, but it does
open up a way of maximizing the number of VM guests.
Thanks for that suggestion.
Also, if I added another subnet and another vmkernel, would I be allowed
another 32 NFS mounts? So is it 32 NFS mounts per vmkernel, or 32 NFS mounts
period?
--
HUGE
David Stahl
Systems Administrator
718 233 9164 / F 718 625 5157
www.hugeinc.com
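For what it's worth, on ESX 3.5/4.x the NFS mount ceiling is a host-wide advanced setting rather than per vmkernel, so a second subnet would not add another 32; the limit can be raised instead. A sketch, with values that are version-dependent (check VMware's documentation for your release):

```shell
# Raise the NFS datastore limit on the ESX host (host-wide, not
# per-vmkernel); a larger TCP/IP heap is usually needed alongside it.
esxcfg-advcfg -s 64 /NFS/MaxVolumes
esxcfg-advcfg -s 30 /Net/TcpipHeapSize
esxcfg-advcfg -s 120 /Net/TcpipHeapMax
```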
> From: Ryan Arneson
> Date: Tue, 16 Jun 2009 15:14:31 -0600
> To: HUGE | David Stahl
> Cc:
> Subject: Re: [zfs-discuss] ZFS, ESX ,and NFS. oh my!
Try iSCSI?
--
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
HUGE | David Stahl wrote:
I'm curious if anyone else has run into this problem, and if so, what
solutions they use to get around it.
We are using VMware ESXi servers with an OpenSolaris NFS backend. This
allows us to leverage all the awesomeness of ZFS, including the snapshots
and clones.