On Sep 1, 2009, at 1:28 PM, Jason wrote:
> I guess I should come at it from the other side:
> If you have 1 iscsi target box and it goes down, you're dead in the
> water.
Yep.
> If you have 2 iscsi target boxes that replicate and one dies, you
> are OK but you then have to have a 2:1 total storage to usable ratio
> (excluding expensive shared disks).
You are completely off your rocker :)
No, just kidding. Assuming the virtual front-end servers are running on
different hosts, and you are doing some sort of RAID, you should be fine.
Performance may be poor due to the inexpensive targets on the back end, but you
probably know that. A while back ...
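For what it's worth, "some sort of RAID" on the front end can be as simple as
a ZFS mirror across one LUN from each target box, so losing an entire box only
degrades the pool. A minimal sketch (the device names are made up for
illustration; each is an iSCSI LUN from a different target box):

    # on the front-end host: mirror one LUN from each back-end target
    zpool create vmpool mirror c3t0d0 c4t0d0
    # if one target box dies, the pool keeps running in a degraded state
    zpool status vmpool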
On Sep 1, 2009, at 12:17 PM, Jason wrote:
True, though an enclosure for shared disks is expensive. This isn't for
production but for me to explore what I can do with x86/x64 hardware. The idea
being that I can just throw up another x86/x64 box to add more storage. Has
anyone tried anything similar?
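For the throw-up-another-box part: if the front end is itself an OpenSolaris
box acting as the iSCSI initiator, picking up a newly added target box is
roughly this (the address below is a placeholder):

    # enable SendTargets discovery and point it at the new target box
    iscsiadm modify discovery --sendtargets enable
    iscsiadm add discovery-address 192.168.10.11:3260
    # create device nodes for the newly discovered LUNs
    devfsadm -i iscsi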
On Sep 1, 2009, at 11:45 AM, Jason wrote:
So aside from the NFS debate, would this 2-tier approach work? I am a bit
fuzzy on how I would get the RAIDZ2 redundancy but still present the volume to
the VMware host as a raw device. Is that possible, or is my understanding
wrong? Also, could it be defined as a clustered resource?
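It is possible: one way to get that effect (not the only one) is to build the
RAIDZ2 pool on the storage server, carve a zvol out of it, and export the zvol
as an iSCSI LUN, so the VMware host sees an ordinary block device while ZFS
supplies the redundancy underneath. A rough sketch on a 2009-era OpenSolaris
box with COMSTAR (disk names, the size, and the volume name are placeholders):

    # double-parity pool out of six disks
    zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
    # block volume (zvol) to present to VMware as a raw LUN
    zfs create -V 500G tank/vmfs01
    # export the zvol over iSCSI via COMSTAR
    svcadm enable stmf
    sbdadm create-lu /dev/zvol/rdsk/tank/vmfs01
    stmfadm add-view <GUID printed by sbdadm>
    svcadm enable -r svc:/network/iscsi/target:default
    itadm create-target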
On Mon, 2009-08-31 at 18:26 -0400, David Magda wrote:
On Aug 31, 2009, at 17:29, Tim Cook wrote:
> I've got MASSIVE deployments of VMware on NFS over 10g that achieve stellar
> performance (admittedly, it isn't on zfs).
Without a separate ZIL device NFS would probably be slower -- hence why Sun's
own appliances use SSDs.
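Concretely, the "separate ZIL device" is just a log vdev added to the pool,
usually an SSD; a one-line sketch (the pool and device names are placeholders):

    # dedicate an SSD to the ZFS intent log so synchronous NFS writes land on it
    zpool add tank log c2t0d0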
Specifically, I remember Storage VMotion being supported on NFS last, as well
as jumbo frames. That is just the impression I get from past features; perhaps
they are doing better with that now.
I know the performance problem had specifically to do with ZFS and the way it
handled something. I know lots of i...
On Mon, Aug 31, 2009 at 4:26 PM, Jason wrote:
Well, I knew a guy who was involved in a project to do just that for a
production environment. Basically they abandoned using that because there was
a huge performance hit using ZFS over NFS. I didn’t get the specifics but his
group is usually pretty sharp. I’ll have to check back with him.
On Mon, Aug 31, 2009 at 3:42 PM, Jason wrote:
> I've been looking to build my own cheap SAN to explore HA scenarios with
> VMware hosts, though not for a production environment. I'm new to
> OpenSolaris but I am familiar with other clustered HA systems. The features
> of ZFS seem like they would ...