On Dec 9, 2010, at 13:41, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>>
>> Also, if you have an NFS datastore which is not available at the time of ESX
>> bootup, then the NFS datastore doesn't come online, and there seems to be no
>> way of tell
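The thread doesn't spell out a workaround, but once the filer is finally up, a common approach is to re-trigger the mount from the ESXi shell. A rough sketch using the ESX(i) 4.x-era esxcfg-nas tool; the datastore label, server address and share path below are made-up examples:

  # List NFS datastores and see which ones failed to mount at boot
  esxcfg-nas -l

  # Removing and re-adding the datastore forces ESXi to attempt the mount again
  esxcfg-nas -d zfs-ds1
  esxcfg-nas -a -o 192.168.10.20 -s /tank/vmstore zfs-ds1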
On Dec 8, 2010, at 11:41 PM, Edward Ned Harvey wrote:
> For anyone who cares:
>
> I created an ESXi machine. Installed two guest (CentOS) machines and
> vmware-tools. Connected them to each other via only a virtual switch. Used
> rsh to transfer large quantities of data between the two guests,
> unencrypted, uncompressed. Have found that the ESXi virtual switch
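The post doesn't show how the transfer was driven, but a minimal test of the kind described (rsh, no encryption, no compression) could look like this; "guestb" is a placeholder hostname and rsh must be enabled on both CentOS guests:

  # On guest A: push 4 GiB of zeros to guest B across the virtual switch and time it
  time dd if=/dev/zero bs=1M count=4096 | rsh guestb 'cat > /dev/null'
  # throughput = 4096 MiB / elapsed seconds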
Suppose you wanted to boot from an iSCSI target, just to get VMware and a
ZFS server up. Then you could pass through the entire local storage
bus(es) to the ZFS server, and create other VMs whose storage is
backed by the ZFS server on local disk.
One way you could do this is to buy F
> From: Saxon, Will [mailto:will.sa...@sage.com]
>
> What I am wondering is whether this is really worth it. Are you planning to
> share the storage out to other VM hosts, or are all the VMs running on the
> host using the 'local' storage? I know we like ZFS vs. traditional RAID and
> volume manag
> Also, most of the big name vendors have a USB or SD
> option for booting ESXi. I believe this is the 'ESXi
> Embedded' flavor vs. the typical 'ESXi Installable'
> that we're used to. I don't think it's a bad idea at
> all. I've got a not-quite-production system I'm
> booting off USB right now, an
On Fri, 19 Nov 2010 07:16:20 PST, Günther wrote:
I have the same problem with my 2U Supermicro server (24x 2.5", connected via
6x mini-SAS 8087) and no additional mounting possibilities for 2.5" or 3.5"
drives.
On those machines I use one SAS port (4 drives) of an old Adaptec 3805 (I have
used them in my pre-ZFS times) to build a RAID-1 + hot
On Nov 19, 2010, at 15:04, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Günther
>>
>> Disabling the ZIL (Don't)
>
> This is relative. There are indeed situations where it's acceptable to
> disable the ZIL. To make your choice, you need to understand a few things...
> #1
> From: Gil Vidals [mailto:gvid...@gmail.com]
>
> connected to my ESXi hosts using 1 gigabit switches and network cards: The
> speed is very good as can be seen by IOZONE tests:
>
>       KB  reclen   write  rewrite    read   reread
>   512000      32   71789    76155   94382   101022
>   512000    1024
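The exact iozone invocation isn't quoted, but a run that produces output in that column format (sequential write/rewrite and read/reread on a ~500 MB file at 32 KB and 1024 KB record sizes) might look like the following; the datastore path is a placeholder:

  iozone -i 0 -i 1 -r 32k   -s 512000k -f /vmfs/volumes/nfs-ds1/iozone.tmp
  iozone -i 0 -i 1 -r 1024k -s 512000k -f /vmfs/volumes/nfs-ds1/iozone.tmp
  # -i 0 = write/rewrite, -i 1 = read/reread, -r = record size, -s = file size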
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of VO
>
> This sounds interesting as I have been thinking something similar but never
> implemented it because all the eggs would be in the same basket. If you
> don't mind me asking for more infor
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of VO
>
> How to accomplish ESXi 4 raw device mapping with SATA at least:
> http://www.vm-help.com/forum/viewtopic.php?f=14&t=1025
It says:
You can pass-thru individual disks, if you have SCSI,
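For SCSI (and, per the linked thread, some SATA) disks the mapping file is usually created with vmkfstools from the ESXi shell. A sketch only; the device identifier and datastore path below are invented:

  # Find the device identifier of the local disk to map
  ls /vmfs/devices/disks/

  # Create a physical-mode (pass-through) RDM pointing at that disk;
  # use -r instead of -z for a virtual-mode RDM
  vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_ID \
      /vmfs/volumes/datastore1/zfs-server/disk1-rdm.vmdk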
> From: Saxon, Will [mailto:will.sa...@sage.com]
>
> In order to do this, you need to configure passthrough for the device at the
> host level (host -> configuration -> hardware -> advanced settings). This
Awesome. :-)
The only problem is that once a device is configured to pass-thru to the
gues
hmmm
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide
Disabling the ZIL (Don't)
Caution: Disabling the ZIL on an NFS server can lead to client-side corruption.
The ZFS pool integrity itself is not compromised by this tuning.
So especially with NFS I won't disable it.
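For reference, on builds recent enough to have the per-dataset sync property (around snv_140 and later, including Solaris 11 Express) synchronous semantics can be relaxed per dataset rather than system-wide; the pool/dataset name is an example, and the old /etc/system tunable from the Evil Tuning Guide is shown only for comparison:

  # Per-dataset (revert with sync=standard); affects only this filesystem
  zfs set sync=disabled tank/scratch
  zfs get sync tank/scratch

  # Old system-wide method the guide warns about (add to /etc/system, reboot):
  #   set zfs:zil_disable = 1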
I haven't seen too much talk about the actual file read and write speeds. I
recently converted from using OpenFiler, which seems defunct based on their
lack of releases, to using NexentaStor. The NexentaStor server is connected
to my ESXi hosts using 1 gigabit switches and network cards: The speed
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>
> SAS Controller
> and all ZFS Disks/ Pools are passed-through to Nexenta to have full ZFS-Disk
> control like on real hardware.
This is precisely the thing I'm interested in. How do you do that? On my
ESXi (test) server, I hav
Up to last year we had 4 ESXi 4 servers, each with its own NFS storage
server (NexentaStor Core + napp-it), directly connected via 10GbE CX4. The
second CX4 storage port was connected to our SAN (HP 2910 10GbE switch) for
backups. The second port of each ESXi server was connected (tagged VLAN
On Wed, 17 Nov 2010 16:31:32 -0500, Ross Walker
wrote:
> On Wed, Nov 17, 2010 at 3:00 PM, Pasi Kärkkäinen wrote:
>> On Wed, Nov 17, 2010 at 10:14:10AM +, Bruno Sousa wrote:
>>> Hi all,
>>>
>>> Let me tell you all that the MC/S *does* make a difference...I had
a
>>> windows fileserver
I confirm that from the fileserver and storage point of view, more
network connections were used.
Bruno
Hi all,
Let me tell you all that MC/S *does* make a difference... I
had a Windows fileserver using an iSCSI connection to a host running
snv_134 with an average speed of 20-35 MB/s... After the upgrade to snv_151a
(Solaris 11 Express) this same fileserver got a performance boost and now
has a
On Nov 16, 2010, at 6:37 PM, Ross Walker wrote:
> On Nov 16, 2010, at 4:04 PM, Tim Cook wrote:
>> AFAIK, esx/i doesn't support L4 hash, so that's a non-starter.
>
> For iSCSI one just needs to have a second (third or fourth...) iSCSI session
> on a different IP to the target and run mpio/mpxio/m
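On the ESX(i) side that generally means binding two vmkernel ports (on different subnets) to the software iSCSI adapter and switching the device to round-robin. A hedged sketch from memory of the ESX(i) 4.x esxcli namespaces; vmk1/vmk2, vmhba33 and the naa identifier are placeholders, and the exact syntax differs between builds:

  # Bind two vmkernel ports to the software iSCSI HBA so each NIC gets its own session
  esxcli swiscsi nic add -n vmk1 -d vmhba33
  esxcli swiscsi nic add -n vmk2 -d vmhba33

  # Spread I/O across the resulting paths with the round-robin path policy
  esxcli nmp device setpolicy --device naa.600144f0xxxxxxxx --psp VMW_PSP_RR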
> "tc" == Tim Cook writes:
tc> Channeling Ethernet will not make it any faster. Each
tc> individual connection will be limited to 1gbit. iSCSI with
tc> mpxio may work, nfs will not.
well...probably you will run into this problem, but it's not
necessarily totally unsolved.
I am
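For what it's worth, on the Solaris side an aggregation hashed on L4 at least lets *different* TCP connections (multiple iSCSI sessions, several NFS clients) land on different links, even though any single connection stays at 1 Gbit. A sketch; the interface names and address are examples, and the switch has to be configured to match:

  # OpenSolaris/Solaris storage head: aggregate two GigE links, hash on TCP/UDP ports
  dladm create-aggr -P L4 -l e1000g0 -l e1000g1 aggr0
  dladm show-aggr
  ifconfig aggr0 plumb 192.168.10.20 netmask 255.255.255.0 up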
Edward,
I recently installed a 7410 cluster, which had added Fibre Channel HBAs.
I know the site also has Blade 6000s running VMware, but no idea if they
were planning to run fiber to those blades (or even had the option to do so).
But perhaps FC would be an option for you?
Mark
On Nov 12, 201
Hi,
we have the same issue: ESX(i) and Solaris on the storage side.
Link aggregation does not work with ESX(i) (I tried a lot with that for
NFS); when you want to use more than one 1G connection, you must
configure one network or VLAN and at least one share for each connection.
But this is also limited
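Concretely, the "one network or VLAN and one share per connection" setup might look something like this (addresses, pool and datastore names are examples): give the storage head one IP per 1G NIC on separate subnets, export a share per subnet, and mount each from a different ESXi vmkernel port, so each datastore's single NFS connection rides its own link.

  # Storage side (Solaris/Nexenta): one address per NIC, separate subnets
  ifconfig e1000g0 plumb 192.168.10.20 netmask 255.255.255.0 up
  ifconfig e1000g1 plumb 192.168.11.20 netmask 255.255.255.0 up
  zfs set sharenfs=on tank/vmstore1
  zfs set sharenfs=on tank/vmstore2

  # ESXi side: mount each share via a different subnet / vmkernel port
  esxcfg-nas -a -o 192.168.10.20 -s /tank/vmstore1 zfs-ds1
  esxcfg-nas -a -o 192.168.11.20 -s /tank/vmstore2 zfs-ds2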
Check InfiniBand, the guys at anandtech/zfsbuild.com used that as well.
On Fri, Nov 12, 2010 at 09:34:48AM -0600, Tim Cook wrote:
> Channeling Ethernet will not make it any faster. Each individual connection
> will be limited to 1gbit. iSCSI with mpxio may work, nfs will not.
Would NFSv4 as a cluster system over multiple boxes work?
(This question is not limited to ESX
Channeling Ethernet will not make it any faster. Each individual connection
will be limited to 1gbit. iSCSI with mpxio may work, nfs will not.
On Nov 12, 2010 9:26 AM, "Eugen Leitl" wrote:
> On Fri, Nov 12, 2010 at 10:03:08AM -0500, Edward Ned Harvey wrote:
>> Since combining ZFS storage backend,
Since combining a ZFS storage backend, via NFS or iSCSI, with ESXi heads, I'm
in love. But for one thing: the interconnect between the head & storage.
1G Ether is so cheap, but not as fast as desired. 10G Ether is fast enough,
but it's overkill, and why is it so bloody expensive? Why is there