The current patch/systemvm/debian is based on Debian squeeze,
whose kernel is 2.6.32-5-686-bigmem. In that system VM,
cloud-early-config fails silently:
/etc/init.d/cloud-early-config: line 109: /dev/vport0p1: No such file or directory
So I've upgraded to wheezy (which includes virtio-console.ko);
I pushed a patch for this.
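For anyone who wants to check a given image, a quick test from inside
the guest (assuming the VM was started with at least one virtio-serial
channel defined in its libvirt XML):

    # load the driver if it's built as a module, then look for the port nodes
    modprobe virtio_console 2>/dev/null
    ls /dev/vport* /dev/virtio-ports/ 2>/dev/null || echo "no virtio-serial ports found"

On squeeze's 2.6.32 kernel the port nodes never show up; on wheezy they do.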
I think we need to ANNOUNCE the incompatibility of this,
and hopfuly give some upgrade paths for cloudstack users.
(2013/03/05 7:24), Marcus Sorensen wrote:
I think this just requires an updated system VM (the virtio-serial
portion). I've played a bit with the old Debian 2.6.32-5-686-bigmem
one and can't get the device nodes to show up, even though
/boot/config shows that it has CONFIG_VIRTIO_CONSOLE=y. However, if I
try this with a CentOS 6.3 VM on a CentOS 6.3 or Ubuntu 12.04 KVM
host, it works. So I'm not sure what's being used for the IPv6 update,
but we can probably make one that works. We'll need to install qemu-ga
and start it within the system VM as well.
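For the qemu-ga piece, a rough sketch of what installing and verifying
it might look like (package and service names are assumptions based on
Debian/CentOS packaging, and the guest's libvirt XML is assumed to
define a virtio channel named org.qemu.guest_agent.0):

    # inside the system VM: install and start the guest agent
    apt-get install qemu-guest-agent    # or: yum install qemu-guest-agent
    service qemu-guest-agent start

    # on the KVM host: check that the agent answers
    virsh qemu-agent-command s-1-VM '{"execute":"guest-ping"}'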
On Mon, Mar 4, 2013 at 12:41 PM, Edison Su <edison...@citrix.com> wrote:
-----Original Message-----
From: Marcus Sorensen [mailto:shadow...@gmail.com]
Sent: Sunday, March 03, 2013 12:13 PM
To: cloudstack-...@incubator.apache.org
Subject: [DISCUSS] getting rid of KVM patchdisk
For those who don't know (this probably doesn't matter, but...): when KVM
brings up a system VM, it creates a 'patchdisk' on primary storage. This
patchdisk is used to pass along 1) the authorized_keys file and 2) a 'cmdline'
file that describes all of the system VM's properties to the systemvm
startup services.
Example cmdline file:
template=domP type=secstorage host=172.17.10.10 port=8250 name=s-1-VM
zone=1 pod=1 guid=s-1-VM
resource=com.cloud.storage.resource.NfsSecondaryStorageResource
instance=SecStorage sslcopy=true role=templateProcessor mtu=1500
eth2ip=192.168.100.170 eth2mask=255.255.255.0 gateway=192.168.100.1
public.network.device=eth2 eth0ip=169.254.1.46 eth0mask=255.255.0.0
eth1ip=172.17.10.150 eth1mask=255.255.255.0 mgmtcidr=172.17.10.0/24
localgw=172.17.10.1 private.network.device=eth1 eth3ip=172.17.10.192
eth3mask=255.255.255.0 storageip=172.17.10.192
storagenetmask=255.255.255.0 storagegateway=172.17.10.1
internaldns1=8.8.4.4 dns1=8.8.8.8
This patch disk has been bugging me for a while, as it creates a volume that
isn't really tracked anywhere or known about in CloudStack's database. Until
recently these would just litter the KVM primary storages; there's been some
triage done to attempt to clean them up when the system VMs go away, but it's
not perfect. It can also be inefficient for certain primary storage types,
for example if you end up creating a bunch of 10MB LUNs on a SAN for these.
So my question goes to those who have been working on the system VM.
My first preference (aside from a full system VM redesign, perhaps
something that is controlled via an API) would be to copy these files up to
the system VM via SCP or something similar, but the cloud services start so
early that this isn't possible. Next would be to inject them into the system
VM's root disk before starting the server, but if we're allowing people to
make their own system VMs, can we count on the partitions being what we
expect? Also, I don't think this will work for RBD, which qemu connects to
directly, leaving the host OS unaware of any disk.
Options?
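To make the injection option concrete, a minimal sketch using libguestfs'
virt-copy-in (the image path and the /var/cache/cloud destination are
assumptions about the system VM layout):

    # on the KVM host, before starting the system VM
    virt-copy-in -a /var/lib/libvirt/images/s-1-VM-root.qcow2 \
        authorized_keys cmdline /var/cache/cloud/

That only works when the host can see the disk as a file or block device,
which is exactly what's missing with RBD.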
Could you take a look at the status of these projects in KVM?
http://wiki.qemu.org/Features/QAPI/GuestAgent
https://fedoraproject.org/wiki/Features/VirtioSerial
Basically, we need a way to talk to the guest VM (sending parameters to a KVM guest)
after the VM has booted. Both VMware and XenServer have their own ways to send
parameters to a guest VM through PV drivers, but until a few years ago there was
no such thing for KVM.
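For reference, here's a minimal sketch of what host-to-guest parameter
passing over virtio-serial could look like (the channel name
'cloudstack.cmdline' and the socket path are made up for illustration,
and the channel must first be declared in the guest's libvirt XML):

    # on the KVM host: write the cmdline into the channel's unix socket
    echo "template=domP type=secstorage host=172.17.10.10 ..." | \
        socat - UNIX-CONNECT:/var/lib/libvirt/qemu/s-1-VM.cmdline.sock

    # inside the guest (needs a virtio-serial capable kernel):
    cat /dev/virtio-ports/cloudstack.cmdline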