First, I have to disagree with the suggestion to use a SAN/NAS.  At this
small size it does not make sense.  If you have four or fewer VM hosts,
then direct-attached storage or Fibre Channel makes more sense when you
consider cost, performance, and reliability.  I have seen far too many
cases of three or four hypervisors with a single NAS driving them all.
Gigabit Ethernet is not enough bandwidth for that, and the NAS becomes a
single point of failure for everything.


RAID 5 is a bad, bad, bad idea.  With the cost of storage today, there is
no reason not to use RAID 6 for data that you care about, or RAID 10 if
you really do need the write performance (mdadm supports both).  Hot
spare(s) + RAID 5 is a recipe for data loss: while RAID 5 can survive a
single drive failure, the rebuild stresses the remaining drives, which
dramatically increases the chance of a second failure that takes all of
your data with it.  None of these is a substitute for proper backups.
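
If it helps, here is a rough sketch of what creating each layout with
mdadm looks like.  The device names (/dev/sdb through /dev/sde) and the
array name /dev/md0 are just placeholders for whatever disks you have:

  # RAID 6 across four disks (survives any two simultaneous drive failures)
  mdadm --create /dev/md0 --level=6 --raid-devices=4 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde

  # or RAID 10 across four disks (better write performance, survives one
  # failure per mirror pair)
  mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde

  # watch the initial sync or any rebuild in progress
  cat /proc/mdstat

Either way, mdadm --detail /dev/md0 will show you the array state, and you
still want backups on top of it.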

If you are already used to VMware, then sticking with ESXi might make some
sense.  I am not a fan of VMware, so I cannot personally endorse it.  I
run KVM on vanilla Ubuntu and Debian servers with no web service or GUI
installed; I just use virsh from the command line, or virt-manager from my
laptop or workstation.  If you are not comfortable with the virsh command
line (or libvirt in general), Proxmox is a pretty good place to start for
a VMware-like experience without all the crap that comes with VMware (they
have excellent tech, it is just very expensive for any of the worthwhile
parts of their stack).  ESXi also has terrible hardware support, which is
the primary reason I no longer use or recommend it.  Their licensing
shenanigans over the past two years are the other big reason.
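
For a sense of what day-to-day KVM management looks like without any GUI,
here is a rough sketch using virsh; the VM name "webvm" is just an example:

  virsh list --all          # show all defined VMs and their state
  virsh start webvm         # boot a VM
  virsh shutdown webvm      # send an ACPI shutdown request to the guest
  virsh console webvm       # attach to the guest's serial console
  virsh dominfo webvm       # memory, vCPU, and state summary

virt-manager gives you the same operations in a point-and-click interface
and can manage remote hosts over SSH.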

Hth,


On Tue, Sep 17, 2013 at 3:27 PM, Andrew Robinson
<and...@boohahaonline.com> wrote:

>
> This looks like a pretty interesting alternative to ESXi.  One thing I
> couldn't see on their website, however: does it install to a hard drive,
> or can it be installed to a USB key like ESXi?
>
> -----Original message-----
> *From:* Jon Copeland <copela...@gmail.com>
> *Sent:* Tuesday 17th September 2013 15:04
> *To:* CLUG General <clug-talk@clug.ca>
> *Subject:* Re: [clug-talk] Virtualized Server Question
>
> Look into Proxmox (www.proxmox.com).  It uses KVM, so it has a solid
> reputation for stability, and it has a web UI much like ESXi, so it's
> ridiculously simple to use.  I've got it deployed at two sites and have
> experienced absolutely zero problems.  The hosts run a variety of
> Windows, Linux, and Unix VMs.  As far as storage goes, you should
> consider using a SAN (or a high-end NAS) and move away from software
> RAID 5, especially if the VMs are intended for a production environment.
>
> /.j
> forwardbase solutions
>
>
> On Tue, Sep 17, 2013 at 2:45 PM, Andrew Robinson
> <and...@boohahaonline.com> wrote:
>
>>
>> Sorry, I guess I should have been a bit more clear; I just forget the
>> proper terms at the moment :)  Either ESXi would be the host running
>> the various VMs, or I would be running another OS (say, Scientific
>> Linux) with VirtualBox on top of it, running the VMs.  I'm trying to
>> get a sense of the preferred way of doing things.
>>
>> -----Original message-----
>> *From:* Anand Singh <an...@linizen.com>
>> *Sent:* Tuesday 17th September 2013 14:36
>> *To:* CLUG General <clug-talk@clug.ca>
>> *Subject:* Re: [clug-talk] Virtualized Server Question
>>
>> Hi Andrew,
>>
>> If you're looking for an open-source alternative to ESXi, it's not
>> VirtualBox.  Even running headless, it's not in the same league for server
>> performance (nor was it designed to be).  Look at KVM or Xen instead.  I've
>> got about a dozen of those Supermicro servers running at various sites.
>> Despite my ambitions to go open source, they're all running ESXi.
>>
>> Anand.
>>
>
