"J. Roeleveld" <jo...@antarean.org> writes:

> On Friday, April 24, 2015 10:23:01 PM lee wrote:
>> "J. Roeleveld" <jo...@antarean.org> writes:
>> > On Thursday, April 23, 2015 11:03:53 PM lee wrote:
>> > Do you have anything that you find insufficiently documented or is too
>> > difficult?
>> sure, lots
>
> Have you contacted the Xen project with this?

I've been asking questions on mailing lists.  What do you expect?  I
could tell them "your documentation sucks" and they might say "go ahead
and improve it then".  I tried to improve it the little bit I could;
it's on the wiki, if it's still there.

>> > Containers.
>> > Chroots don't have much when it comes to isolation.
>> 
>> What exactly are the issues with containers?  Ppl seem to work on them
>> and to manage to make them more secure over time.
>
> Lack of clear documentation on how to use them. All the examples online refer 
> to systemd-only commands.

True, there isn't much, if any, clear documentation.  I followed the
Gentoo wiki and it's working fine, though.

>> >> > Virtualbox is nice for a quick test. I wouldn't use it for production.
>> >> 
>> >> Why not?
>> > 
>> > Several reasons:
>> > 
>> > 1) I wouldn't trust a desktop application for a server
>> 
>> So that's a gut feeling?
>
> No, a combination of experience and common sense.
> A desktop application dies when the desktop dies.

You cannot run it from the command line?  It only runs in an X session?
If that is so, I'm going to need something else.

>> > 2) The overhead from Virtualbox is quite high (still better than VMWare's
>> > desktop versions though)
>> 
>> Overhead in which way?  I haven't done much with virtualbox yet and
>> merely found it rather easy to use, very useful and to just work fine.
>
> Virtualbox is easy when all you want is to quickly run a VM for a quick test.
> It isn't designed to run multiple VMs with maximum performance.
> In my experience I get on average 80% of the performance inside a Virtualbox 
> VM when compared to running them on the machine directly. With Xen, I can get 
> 95%.
> (This is using normal workloads, let's not talk about 3D inside a VM)

Someone told me that Xen may reduce performance by up to 40%.

>> Compared to containers, the overhead xen requires is enormous,
>
> Hardly comparable. Containers run inside the same kernel. With Xen, or any 
> other virtualisation technology, you run a full OS.

How is that not comparable?

You don't need to run a full OS, and with containers you're not stuck
with a fixed memory assignment and no ability to overcommit.  With
Xen, you're stuck with whatever you initially assigned, whether your
VM currently uses it or not.

If my mail server were a Xen VM, I'd have assigned 2GB to it; as a
container, it uses less than one.  If the machine I'm working on were
a Xen VM, I'd have assigned at least 16GB to it; as the host of the
container, it costs me nothing extra.

So obviously, the overhead required by Xen is enormous.
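
Just to illustrate what I mean (the names are made up, not my actual
setup): with Xen, the memory is pinned right in the domU config, while
an LXC container only gets a cap if you explicitly give it one:

    # /etc/xen/mail.cfg -- the 2GB is reserved whether the guest uses it or not
    memory = 2048
    vcpus  = 2

    # /etc/lxc/mail/config -- no reservation at all; an upper bound is optional
    lxc.cgroup.memory.limit_in_bytes = 1G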

>
>> and it
>> doesn't give you a stable system to run VMs on because dom0 is already
>> virtualized itself.
>
> Why doesn't it provide a stable system?
> The dom0 has 1 task and 1 task only: Manage the VMs and resources provided to 
> the VMs. That part can be made extremely stable.

It's already virtualized itself.  With containers, I have a
non-virtualized system as usual, as stable as ever.  A container is
just another service I can start or stop, and I can get at it easily
because it simply resides under /etc/lxc, while the host stays free
for whatever else I'm doing.
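
Day to day it really is just the usual lxc tools, something like this
(container name made up):

    lxc-ls --fancy       # list containers and their state
    lxc-start  -n mail   # "boot" the container
    lxc-attach -n mail   # get a shell inside it
    lxc-stop   -n mail   # shut it down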

With Xen, I have a virtualized system to begin with, and it's wasted
because its only purpose is to provide a way to maintain the other
VMs.  I couldn't fully use any of those VMs either: for what I'm
doing, I'd have to pass my NVIDIA card through to one of them.  IIUC,
I then wouldn't even be able to log in to the host because it would
have no graphics card left, provided I could actually pass the card
through in the first place, which appears to be pretty much
impossible.

The VMs would reside on LVM volumes and be awkward to get at, though
nowadays I'd use ZFS datasets, which would make that about as easy as
with containers.
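
Roughly what I mean, with made-up pool/VG names: the LVM route gives
you a fixed-size block device you have to map and mount before you can
look inside, while the ZFS route is just a filesystem:

    # Xen guest on LVM: fixed size, and to peek inside from the host you
    # have to map and mount the guest's partitions
    lvcreate -L 20G -n vm_mail vg0
    kpartx -av /dev/vg0/vm_mail
    mount /dev/mapper/vg0-vm_mail1 /mnt

    # container on a ZFS dataset: the files are right there, and a size
    # cap is optional
    zfs create tank/lxc/mail
    zfs set quota=20G tank/lxc/mail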

Power management wouldn't work.  The Xen documentation sucks.
Everything would be difficult and troublesome.  It's extremely
difficult to install a VM; I never figured out how to actually do
that.  And I'd be hugely wasting resources.

I'd never have the feeling that there is a stable platform to work with,
and it's not something I would want to have to maintain.

> My Lab machine (which only runs VMs for testing and development) currently 
> has 
> an uptime of over a year. In that time I've had VMs crashing because of bad 
> code inside the VM. Not noticing any issues there. Neither with stability nor 
> with performance.
> My only interaction with the dom0 there is to create/destroy/start/stop/...
> VMs.

How can you use it for testing when it's so ridiculously difficult to
install a VM?  How do you do updates without rebooting when a new kernel
version comes along?  How do you adjust resource allocations depending
on changing workloads?

>> I don't know how that compares to virtualbox --- I
>> didn't have time to look into it and it just worked, allowing me to run
>> a VM on the fly on the same machine I'm working on without any ado.
>
> For that scenario, VirtualBox is quite well suited. I wouldn't run Xen on my 
> desktop or laptop.
>
>> That VM was simply a copy of a VM taken from a vmware server, and the
>> copy could be used without any conversion or anything.
>
> Good luck doing that when you installed the VMWare client tools and drivers 
> inside a MS Windows VM.

That's what it was.

It gets troublesome when the VM's disk is split across multiple
files; I started trying to convert one of those and never had the
time to finish.
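
What I was attempting, more or less (file names are placeholders):
consolidate the split disk into a single image by pointing the tools
at the descriptor .vmdk rather than at one of the split extents:

    # with VirtualBox's own tool
    VBoxManage clonehd machine.vmdk machine.vdi --format VDI

    # or with qemu-img
    qemu-img convert -f vmdk -O vdi machine.vmdk machine.vdi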

>> You can't do
>> that with xen because you'll be having lots of trouble to convert the
>> VM, to convert the machine you're working on to xen and to get it to
>> work, to work around all the problems xen brings about ...  Some days
>> later you might finally have it working --- which is out of the question
>> because the VM is needed right away. And virtualbox does just that.
>
> Look into the pre-configured versions of Xen, like what Citrix offers.
> I can import VMs from VMWare as well without issue. (Apart from the VMWare 
> client tools as mentioned, but Virtualbox has the same issues)

Citrix is commercial, isn't it?

IIRC, I tried to get Xen to work on CentOS (no chance), then on
something else I don't remember, and finally on Debian.  Debian
worked, but it's a total mess of too much ancient software plus
backports, requiring a backports kernel both for Xen and because of
kernel bugs and problems with dracut/initramfs-tools (which are
supposed to be fixed by now).

Add to that Xen's lousy documentation, things like power management
not working, all the quirks like the clocks being offset despite the
docs saying they will be synced automagically, the impossibility of
installing an OS in a VM, and the enormous waste of resources, and I
really don't want to use Xen, particularly not for production.  It's
simply too troublesome.

>> I was really surprised that virtualbox worked that well.  Maybe xen will
>> get there some time.
>
> Xen already is there.

Not by far: it doesn't even have good documentation yet, and it's too
troublesome and too difficult to use.

> Please understand that Xen and Virtualbox have their own usecases:
>
> Xen is for dedicated hosts running VMs 24/7

If you can get it to work, and if you have the time to get it there,
it's a nice solution for VMs.  Those are very big ifs, not to mention
having to work around all the quirks.

> Virtualbox is for testing stuff quickly on a laptop/desktop

If it works only for that, what's the alternative?  Sooner rather
than later I'll have to set up some Windoze VMs at work, and I really
don't want to use Xen.


-- 
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us.  Finally, this fear has become reasonable.
