Hmm. I think I was deceived by the /etc/init directory existing. I'm not sure why it's there, but I don't think the template is actually using upstart. I'm having a hard time reliably recreating the issue, but I think it's related to the other reports where the default gateway is missing (I've seen this myself on a secondary storage VM, but it went away when I rebooted, and I couldn't get the problem to come back). It happens regardless of any changes I'm making.
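A quick way to tell whether upstart is really the init in charge (rather than /etc/init just being leftover packaging) is to ask it directly; a rough sketch, assuming initctl is even installed on the template:

    initctl version                                  # prints "init (upstart x.y)" only if upstart is PID 1; errors otherwise
    initctl list | grep -E 'networking|network-interface'   # jobs upstart knows about; fails if upstart is not the running init
    dpkg -l | grep -E 'upstart|sysvinit'             # which init packages the image actually ships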
On Wed, Mar 6, 2013 at 3:54 PM, Marcus Sorensen <shadow...@gmail.com> wrote:
> After reading a little more about upstart, I don't think this script
> does anything. I'm not entirely sure at the moment, however, how best
> to ensure that networking starts after cloud-early-config, short of
> converting cloud-early-config to an upstart script. It looks like this
> debian build is using upstart just for networking, and everything else
> uses the standard sysvinit ordering.
>
> On Wed, Mar 6, 2013 at 2:49 PM, Marcus Sorensen <shadow...@gmail.com> wrote:
>> Just to be clear, that script may have no effect whatsoever, and I'm
>> not sure how to verify other than rebooting a bunch of times. I don't
>> have the time to do that at the moment.
>>
>> On Wed, Mar 6, 2013 at 2:48 PM, Marcus Sorensen <shadow...@gmail.com> wrote:
>>> There may be one other minor thing that needs to be addressed. In
>>> getting rid of the patchdisk, my networking on the router is a bit
>>> inconsistent. It looks like maybe networking is starting before
>>> cloud-early-config completes, as /etc/network/interfaces looks right,
>>> but I don't always get an ip on eth0.
>>>
>>> I know next to nothing about upstart, and haven't had a chance to test
>>> much, so if someone else can help that would be great. I've tried this
>>> though and it worked the two times I rebooted, after 70% failures on
>>> reboot. It goes in /etc/init/cloud-early-config-wait.conf.
>>>
>>> ----- script start here ----
>>> # cloud-early-config-wait
>>> start on (starting networking or starting network-interface)
>>> instance $JOB
>>>
>>> script
>>>
>>>     start cloud-early-config || true
>>>
>>>     # Waiting forever is ok.. upstart will kill this job when
>>>     # the service we tried to start above either starts or stops
>>>     while sleep 3600 ; do :; done
>>>
>>> end script
>>> ----- script end here ----
>>>
>>> On Wed, Mar 6, 2013 at 9:10 AM, Marcus Sorensen <shadow...@gmail.com> wrote:
>>>> On Wed, Mar 6, 2013 at 2:09 AM, Rohit Yadav <bhais...@apache.org> wrote:
>>>>> Thanks a lot Marcus, your findings have been useful. I've applied the
>>>>> locale fix and a grub2 boot timeout fix (systemvms should boot 5
>>>>> seconds faster now).
>>>>> Alright, so far we're good; tested, and the systemvm seems to work on
>>>>> KVM (Marcus) and Xen. Anyone to help us with VMware?
>>>>>
>>>>> Marcus, about the qemu-ga: we need to patch all our templates per
>>>>> systemvm type (ssvm, cpvm or rvm). For that we're using the
>>>>> systemvm.iso to patch the template appliance, and we reboot once
>>>>> patching is done successfully in cloud-early-config. So, with using
>>>>> qemu-ga or our own daemon (assuming through the socket we already got
>>>>> the authorized key), do we want to make the mgmt server or host copy
>>>>> the scripts inside the systemvm, or just continue using the current
>>>>> patching mechanism that mounts the iso and patches? Marcus, can you
>>>>> share how we can use the new systemvm on devcloud-kvm
>>>>> (osx/vmware-fusion)?
>>>>>
>>>>> Regards.
>>>>
>>>> I think the systemvm.iso is a completely fine way of getting new code
>>>> onto the system vms. My main goal at this point was to just get rid of
>>>> the patch disk portion. Also, since it sounds like we're wanting to
>>>> move to a link-local API to control the system vms, I think we'll
>>>> forego qemu-guest-agent or putting our own daemon on the virtio serial
>>>> device and simply use it to copy the cmdline/authorized keys.
>>>>
>>>> If this updated system vm checks out, I'll update the devcloud-kvm
>>>> packages with it preinstalled, replacing the older one. Or in the
>>>> meantime, what I've been doing is simply downloading yours and moving
>>>> it into place over the existing one, giving it the same name, before
>>>> deploying anything.
>>>>
>>>>>
>>>>> On Wed, Mar 6, 2013 at 6:43 AM, Marcus Sorensen <shadow...@gmail.com>
>>>>> wrote:
>>>>>> Oh, and I have yet to test all of the vpc functions, but so far so
>>>>>> good. I was able to bring up the VPC, it got its gateways all
>>>>>> configured, and my public ip with a port forwarding rule / acl to
>>>>>> allow 22 in worked.
>>>>>>
>>>>>> On Tue, Mar 5, 2013 at 6:04 PM, Marcus Sorensen <shadow...@gmail.com>
>>>>>> wrote:
>>>>>>> Rohit, I think I tracked down why the router keeps rebooting. When it
>>>>>>> comes up, the first thing we do is run get_template_version.sh, which
>>>>>>> replies:
>>>>>>>
>>>>>>> /bin/bash: warning: setlocale: LC_ALL: cannot change locale
>>>>>>> (en_US.UTF-8)
>>>>>>> Cloudstack Release 4.2.0 Tue Mar 5 13:17:51 UTC
>>>>>>> 2013&a8af8cdd546e575e64f69b6f80ef949c
>>>>>>>
>>>>>>> Looks like we don't like that locale warning:
>>>>>>>
>>>>>>> GetDomRVersionAnswer":{"result":false,"details":"bash: warning:
>>>>>>> setlocale: LC_ALL: cannot change locale (en_US.UTF-8)"
>>>>>>>
>>>>>>> I can fix it by running this in the system vm:
>>>>>>>
>>>>>>> locale-gen en_US.UTF-8
>>>>>>>
>>>>>>> On Tue, Mar 5, 2013 at 10:43 AM, Chiradeep Vittal
>>>>>>> <chiradeep.vit...@citrix.com> wrote:
>>>>>>>> OK, one more niggle about the previous system vm. We tried to enable
>>>>>>>> aesni [1] to boost encryption performance (ipsec vpn, anything ssl),
>>>>>>>> but the system vm would crash on VMware if we did that (hence the
>>>>>>>> module was blacklisted). Could someone try the new systemvm on VMware
>>>>>>>> with aesni enabled? I believe it is as simple as
>>>>>>>> modprobe aesni_intel and
>>>>>>>> openssl 1.0.1
>>>>>>>>
>>>>>>>> [1] http://en.wikipedia.org/wiki/AES_instruction_set
>>>>>>>>
>>>>>>>> On 3/4/13 10:46 PM, "Rohit Yadav" <bhais...@apache.org> wrote:
>>>>>>>>
>>>>>>>>> Hi all,
>>>>>>>>>
>>>>>>>>> Thanks to Mate
>>>>>>>>> (blogs.citrix.com/2012/10/04/convert-a-raw-image-to-xenserver-vhd/)
>>>>>>>>> I'm able to ship appliances that work for Xen. Chiradeep, there is no
>>>>>>>>> need to use the powershell hack now; if people still want vhdx, they
>>>>>>>>> can use that hack. The current appliance for Xen (vbox->raw->vhd)
>>>>>>>>> works.
>>>>>>>>>
>>>>>>>>> At least the appliances for HyperV and Xen work:
>>>>>>>>> http://jenkins.cloudstack.org/job/build-systemvm-master/
>>>>>>>>>
>>>>>>>>> I've tested and found that:
>>>>>>>>> - patching happens
>>>>>>>>> - password server works
>>>>>>>>> - apache was running, user data works
>>>>>>>>> - template creation works
>>>>>>>>> - snapshot to template works
>>>>>>>>>
>>>>>>>>> I won't be able to test VPC/advanced zone of DevCloud, ipv6, etc.;
>>>>>>>>> someone from QA would have to help.
>>>>>>>>> Thanks Marcus for your suggestion; I will compress the qcow2 and
>>>>>>>>> test on KVM today.
>>>>>>>>> I need help on testing/fixing the VMware systemvm template appliance.
>>>>>>>>>
>>>>>>>>> Ahmad :) all natural:
>>>>>>>>> http://highlatencylife.files.wordpress.com/2010/12/awesomesauce.png
>>>>>>>>>
>>>>>>>>> Regards.
>>>>>>>>> PS. Was AFK yesterday, down with the flu, much better now.
>>>>>>>>>
>>>>>>>>> On Fri, Mar 1, 2013 at 11:29 PM, Chiradeep Vittal
>>>>>>>>> <chiradeep.vit...@citrix.com> wrote:
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On 3/1/13 4:03 AM, "Rohit Yadav" <bhais...@apache.org> wrote:
>>>>>>>>>>>
>>>>>>>>>>> - Saw systemvms started from the template, saw patching happening,
>>>>>>>>>>> logged in with creds (root/password) to verify that it was indeed
>>>>>>>>>>> the new one (Linux 3.2 :)
>>>>>>>>>>> - The agents were running fine, there was a latency issue (agents
>>>>>>>>>>> were lagging behind)
>>>>>>>>>>> - (Applied a fix described on CLOUDSTACK-1370 to make the deployVM
>>>>>>>>>>> work) VR came up, did its SDN magic and tinyLinux was deployed
>>>>>>>>>>> - Console proxy worked for me as well
>>>>>>>>>>
>>>>>>>>>> I would also test
>>>>>>>>>> - password server
>>>>>>>>>> - user data management (is the apache web server running?)
>>>>>>>>>> In addition
>>>>>>>>>> - zone-to-zone template copy
>>>>>>>>>> - template creation
>>>>>>>>>> - convert snapshot to template
>>>>>>>>>> - vpc
>>>>>>>>>> - ipv6
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Chiradeep, is there a way to convert VHD (HyperV) to VHD (Xen)? I
>>>>>>>>>>> hear that they both differ in some magic bits?
>>>>>>>>>> Actually, since we intend to support Windows 2012, we should be
>>>>>>>>>> using VHDX.
>>>>>>>>>> There's a way to do it with Powershell (from vhd (hyper-v) -> vhdx):
>>>>>>>>>> http://blogs.msdn.com/b/virtual_pc_guy/archive/2012/10/03/using-powershell-to-convert-a-vhd-to-a-vhdx.aspx
>>>>>>>>>>
>>>>>>>>
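To check the aesni question on the new template, a minimal sketch of what "modprobe aesni_intel and openssl 1.0.1" could look like in practice (assumes the template kernel ships the aesni_intel module and openssl is 1.0.1+ so it can use AES-NI through EVP):

    modprobe aesni_intel             # refuses to load on CPUs/VMs that don't expose AES-NI
    lsmod | grep aesni               # confirm the module actually loaded and stayed loaded
    openssl speed -evp aes-128-cbc   # throughput should jump noticeably when AES-NI is in use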