Erik,

You are absolutely right. The 4.4.3 template that I have is shown as USER and
not SYSTEM. I will try changing this in the evening and give it a go.
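[Editor's note: the database change discussed here can be sketched as a single UPDATE against the management-server database. This is an illustration under assumptions, not a verified 4.4.3 procedure: it assumes the database is named `cloud`, the table/column are `vm_template`/`type`, and the row can be found by the template name used in the UI. Erik names 'SYSTEM', but the UI's 'Routing' checkbox may record the type as 'ROUTING' instead, so compare with a known-good system template row first, and back up the database before changing anything.]

```shell
# Sketch only: mark the manually registered systemvm template as a system
# template, per the suggestion in this thread. Assumptions: the management
# DB is named "cloud", the table is vm_template, and the template row is
# identified by its UI name. The 'SYSTEM' value follows Erik's wording;
# your working templates may use 'ROUTING' instead -- check and match.
TEMPLATE_NAME="systemvm-kvm-4.4.3"

SQL="UPDATE cloud.vm_template SET type='SYSTEM' WHERE name='${TEMPLATE_NAME}' AND removed IS NULL;"

# Print the statement so it can be reviewed before applying it:
echo "$SQL"

# To apply it against the management server's MySQL (uncomment):
# echo "$SQL" | mysql -u root -p
```

If the template type was the only problem, retrying the network restart (or restarting the management service) should then pick the template up.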
Is there an error in the documentation? Should Routing be set to Yes when
adding a new system VM template?

Thanks

Andrei

--
Andrei Mikhailovsky
Director
Arhont Information Security
Web: http://www.arhont.com http://www.wi-foo.com
Tel: +44 (0)870 4431337
Fax: +44 (0)208 429 3111
PGP: Key ID - 0x2B3438DE
PGP: Server - keyserver.pgp.com

DISCLAIMER
The information contained in this email is intended only for the use of the
person(s) to whom it is addressed and may be confidential or contain legally
privileged information. If you are not the intended recipient you are hereby
notified that any perusal, use, distribution, copying or disclosure is
strictly prohibited. If you have received this email in error please
immediately advise us by return email at [email protected] and delete and
purge the email and any attachments without making a copy.

----- Original Message -----
From: "Erik Weber" <[email protected]>
To: [email protected]
Sent: Wednesday, 6 May, 2015 1:43:17 PM
Subject: Re: KVM - Unable to start VRs after upgrading to 4.4.3 (Shapeblue Repo)

1) Is the type of the new systemvm template 'USER' or 'SYSTEM'?
2) If 'USER', try setting the routing flag in the database and retry.

--
Erik

On Wed, May 6, 2015 at 2:42 PM, Andrei Mikhailovsky <[email protected]> wrote:

> Erik,
>
> You are absolutely right. I did have an issue with 4.3.1, I believe;
> however, when 4.3.2 was released I had no issues upgrading to it, so I
> assumed the problem had been corrected.
>
> Andrei
>
>
> ----- Original Message -----
> From: "Erik Weber" <[email protected]>
> To: [email protected]
> Sent: Wednesday, 6 May, 2015 1:27:07 PM
> Subject: Re: KVM - Unable to start VRs after upgrading to 4.4.3 (Shapeblue Repo)
>
> According to Google you had the exact same problem while going from 4.2 to
> 4.3 [1] :-)
>
> There might not have been a new systemvm template for 4.4.3, and thus the
> database won't get upgraded with your template.
> Should be easy enough to check whether the type is 'USER' or 'SYSTEM' in
> the UI.
>
> Log in to your SSVM or CPVM and run 'cat /etc/cloudstack-release'; that
> should reveal which version you are running.
>
> For the manual router template override to work, you AFAIK have to check
> the 'Routing' checkbox.
>
>
> [1]
> http://mail-archives.apache.org/mod_mbox/cloudstack-users/201405.mbox/%3C28532217.943.1399419539197.JavaMail.andrei@tuchka%3E
> --
> Erik
>
>
>
> On Wed, May 6, 2015 at 2:22 PM, Andrei Mikhailovsky <[email protected]> wrote:
>
> > Erik,
> >
> > No, I did not check the Routing option when creating the template, as
> > per the documentation. All options should be set to No according to the
> > upgrade instructions. I followed the same procedure when upgrading from
> > the 4.2.x branch to 4.3.x, and again from 4.3.x to 4.4.2, and had no
> > issues.
> >
> > Andrei
> >
> >
> > ----- Original Message -----
> > From: "Erik Weber" <[email protected]>
> > To: [email protected]
> > Sent: Wednesday, 6 May, 2015 1:12:15 PM
> > Subject: Re: KVM - Unable to start VRs after upgrading to 4.4.3 (Shapeblue Repo)
> >
> > Did you check the 'Routing' checkbox when you added the template?
> >
> >
> > --
> > Erik
> >
> > On Wed, May 6, 2015 at 1:25 PM, Andrei Mikhailovsky <[email protected]> wrote:
> >
> > >
> > > Hello guys,
> > >
> > > I've recently upgraded from 4.4.2 (ShapeBlue repo) to 4.4.3 (ShapeBlue
> > > repo). I am running Ubuntu 14.04 LTS on both the management server and
> > > the hosts.
> > >
> > > The upgrade went okay and I've not noticed any issues with the upgrade
> > > process. I've also upgraded the systemvm template as per the ShapeBlue
> > > instructions. The system VM template was downloaded from:
> > >
> > > http://packages.shapeblue.com/systemvmtemplate/4.4/systemvm64template-4.4-kvm.qcow2.bz2
> > >
> > > As during my previous upgrades, I first downloaded the system VM
> > > template and made sure that its status was shown as READY.
> > > I then created a new template called systemvm-kvm-4.4.3 (as I already
> > > had systemvm-kvm-4.4). Following the successful download and install
> > > of the template, I changed the router.template.kvm global setting to
> > > systemvm-kvm-4.4.3. After this I upgraded the management server,
> > > followed by the agents.
> > >
> > > After the management server restart I manually deleted the SSVM and
> > > CPVM, and they were successfully recreated and shown as UP.
> > >
> > > However, when I tried to manually restart one of the existing networks
> > > I got an error message. Also, I can't start a newly created VM with a
> > > new network. I can successfully create a new VM on an existing network
> > > which has a running VR.
> > >
> > > Investigating the issue further, I discovered that during the VR
> > > start-up the management server shows the following error:
> > >
> > > 2015-05-06 11:32:23,994 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) KVM won't support system vm, skip it
> > >
> > > I have downloaded the correct template for the KVM hypervisor, as
> > > indicated by the successful start of both the CPVM and SSVM.
> > > The full log from the management server:
> > >
> > > ---------------------------------------------
> > >
> > > 2015-05-06 11:32:23,545 DEBUG [c.c.a.t.Request] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) Seq 8-6236922533954715673: Sending { Cmd , MgmtId: 115129173025118, via: 8(arh-cloud2-ib), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.routing.SavePasswordCommand":{,"vmIpAddress":"10.1.1.67","vmName":"Win7-Pentest","executeInSequence":true,"accessDetails":{"zone.network.type":"Advanced","router.name":"r-1200-VM","router.ip":"169.254.3.212","router.guest.ip":"10.1.1.1"},"wait":0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"10.1.1.67","vmName":"Win7-Pentest","executeInSequence":true,"accessDetails":{"zone.network.type":"Advanced","router.name":"r-1200-VM","router.ip":"169.254.3.212","router.guest.ip":"10.1.1.1"},"wait":0}}] }
> > > 2015-05-06 11:32:23,943 DEBUG [c.c.a.t.Request] (AgentManager-Handler-15:null) Seq 8-6236922533954715673: Processing: { Ans: , MgmtId: 115129173025118, via: 8, Ver: v1, Flags: 110, [{"com.cloud.agent.api.Answer":{"result":true,"details":"","wait":0}},{"com.cloud.agent.api.Answer":{"result":true,"details":"","wait":0}}] }
> > > 2015-05-06 11:32:23,944 DEBUG [c.c.a.m.AgentAttache] (AgentManager-Handler-15:null) Seq 8-6236922533954715673: No more commands found
> > > 2015-05-06 11:32:23,944 DEBUG [c.c.a.t.Request] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) Seq 8-6236922533954715673: Received: { Ans: , MgmtId: 115129173025118, via: 8, Ver: v1, Flags: 110, { Answer, Answer } }
> > > 2015-05-06 11:32:23,945 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) Service SecurityGroup is not supported in the network id=280
> > > 2015-05-06 11:32:23,949 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) Lock is acquired for network id 311 as a part of network implement
> > > 2015-05-06 11:32:23,949 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) Asking ExternalGuestNetworkGuru to implement Ntwk[c9d0ba49-5eeb-481a-89fa-436e80fb61c0|Guest|8]
> > > 2015-05-06 11:32:23,969 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) Asking VirtualRouter to implemenet Ntwk[c9d0ba49-5eeb-481a-89fa-436e80fb61c0|Guest|8]
> > > 2015-05-06 11:32:23,972 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) Lock is acquired for network id 311 as a part of router startup in Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(48)-Storage()]
> > > 2015-05-06 11:32:23,981 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) Adding nic for Virtual Router in Guest network Ntwk[c9d0ba49-5eeb-481a-89fa-436e80fb61c0|Guest|8]
> > > 2015-05-06 11:32:23,981 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) Adding nic for Virtual Router in Control network
> > > 2015-05-06 11:32:23,983 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) Found existing network configuration for offering [Network Offering [3-Control-System-Control-Network]: Ntwk[0eaa3a90-c909-424f-8ea2-1665742d89b5|Control|3]
> > > 2015-05-06 11:32:23,983 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) Releasing lock for Acct[06ee8d45-65f2-11e3-9bd1-d8d38559b2d0-system]
> > > 2015-05-06 11:32:23,984 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) Adding nic for Virtual Router in Public network
> > > 2015-05-06 11:32:23,987 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) Found existing network configuration for offering [Network Offering [1-Public-System-Public-Network]: Ntwk[06594454-14b0-4fe5-b0cd-641240896ec6|Public|1]
> > > 2015-05-06 11:32:23,987 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) Releasing lock for Acct[06ee8d45-65f2-11e3-9bd1-d8d38559b2d0-system]
> > > 2015-05-06 11:32:23,992 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) Allocating the VR i=1222 in datacenter com.cloud.dc.DataCenterVO$$EnhancerByCGLIB$$c79d907a@1with the hypervisor type KVM
> > > 2015-05-06 11:32:23,994 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) KVM won't support system vm, skip it
> > > 2015-05-06 11:32:23,995 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) Lock is released for network id 311 as a part of router startup in Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(48)-Storage()]
> > > 2015-05-06 11:32:23,995 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) Cleaning up because we're unable to implement the network Ntwk[c9d0ba49-5eeb-481a-89fa-436e80fb61c0|Guest|8]
> > > 2015-05-06 11:32:24,000 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) Lock is acquired for network Ntwk[c9d0ba49-5eeb-481a-89fa-436e80fb61c0|Guest|8] as a part of network shutdown
> > > 2015-05-06 11:32:24,003 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) Sending network shutdown to VirtualRouter
> > > 2015-05-06 11:32:24,004 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) Network id=311 is shutdown successfully, cleaning up corresponding resources now.
> > > 2015-05-06 11:32:24,006 DEBUG [c.c.n.g.GuestNetworkGuru] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) Releasing vnet for the network id=311
> > > 2015-05-06 11:32:24,013 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) Lock is released for network Ntwk[c9d0ba49-5eeb-481a-89fa-436e80fb61c0|Guest|8] as a part of network shutdown
> > > 2015-05-06 11:32:24,014 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) Lock is released for network id 311 as a part of network implement
> > > 2015-05-06 11:32:24,014 INFO [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) Unable to contact resource.
> > > com.cloud.exception.ResourceUnavailableException: Resource [DataCenter:1] is unreachable: Can't find all necessary running routers!
> > > at com.cloud.network.element.VirtualRouterElement.implement(VirtualRouterElement.java:199)
> > > at org.apache.cloudstack.engine.orchestration.NetworkOrchestrator.implementNetworkElementsAndResources(NetworkOrchestrator.java:1080)
> > > at org.apache.cloudstack.engine.orchestration.NetworkOrchestrator.implementNetwork(NetworkOrchestrator.java:992)
> > > at org.apache.cloudstack.engine.orchestration.NetworkOrchestrator.prepare(NetworkOrchestrator.java:1272)
> > > at com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:986)
> > > at com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:5201)
> > > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > > at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > at java.lang.reflect.Method.invoke(Method.java:606)
> > > at com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
> > > at com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:5346)
> > > at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:102)
> > > at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:502)
> > > at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
> > > at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> > > at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
> > > at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
> > > at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
> > > at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:459)
> > > at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> > > at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> > > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> > > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> > > at java.lang.Thread.run(Thread.java:745)
> > > 2015-05-06 11:32:24,016 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) Cleaning up resources for the vm VM[User|i-2-468-VM] in Starting state
> > > 2015-05-06 11:32:24,018 DEBUG [c.c.a.t.Request] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) Seq 48-3897021052559032342: Sending { Cmd , MgmtId: 115129173025118, via: 48(arh-cloud4-ib), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":false,"checkBeforeCleanup":false,"vmName":"i-2-468-VM","wait":0}}] }
> > > 2015-05-06 11:32:24,231 DEBUG [c.c.a.t.Request] (AgentManager-Handler-13:null) Seq 48-3897021052559032342: Processing: { Ans: , MgmtId: 115129173025118, via: 48, Ver: v1, Flags: 10, [{"com.cloud.agent.api.StopAnswer":{"result":true,"wait":0}}] }
> > > 2015-05-06 11:32:24,231 DEBUG [c.c.a.t.Request] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) Seq 48-3897021052559032342: Received: { Ans: , MgmtId: 115129173025118, via: 48, Ver: v1, Flags: 10, { StopAnswer } }
> > > 2015-05-06 11:32:24,239 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) Service SecurityGroup is not supported in the network id=280
> > > 2015-05-06 11:32:24,241 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) Changing active number of nics for network id=280 on -1
> > > 2015-05-06 11:32:24,245 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) Asking VirtualRouter to release NicProfile[2804-468-cb3675f8-dc3c-47b7-9e44-037c83202ab0-10.1.1.67-null
> > > 2015-05-06 11:32:24,247 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) Successfully released network resources for the vm VM[User|i-2-468-VM]
> > > 2015-05-06 11:32:24,247 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) Successfully cleanued up resources for the vm VM[User|i-2-468-VM] in Starting state
> > > 2015-05-06 11:32:24,249 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) Root volume is ready, need to place VM in volume's cluster
> > > 2015-05-06 11:32:24,249 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) Vol[477|vm=468|ROOT] is READY, changing deployment plan to use this pool's dcId: 1 , podId: 1 , and clusterId: 1
> > > 2015-05-06 11:32:24,255 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) Deploy avoids pods: [], clusters: [], hosts: [48]
> > > 2015-05-06 11:32:24,255 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) DataCenter id = '1' provided is in avoid set, DeploymentPlanner cannot allocate the VM, returning.
> > > 2015-05-06 11:32:24,261 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) Deploy avoids pods: [], clusters: [], hosts: [48]
> > > 2015-05-06 11:32:24,261 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) DataCenter id = '1' provided is in avoid set, DeploymentPlanner cannot allocate the VM, returning.
> > > 2015-05-06 11:32:24,267 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) VM state transitted from :Starting to Stopped with event: OperationFailedvm's original host id: 48 new host id: null host id before state transition: 48
> > > 2015-05-06 11:32:24,272 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) Hosts's actual total CPU: 50400 and CPU after applying overprovisioning: 1008000
> > > 2015-05-06 11:32:24,272 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) Hosts's actual total RAM: 33701453824 and RAM after applying overprovisioning: 33701453824
> > > 2015-05-06 11:32:24,272 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) release cpu from host: 48, old used: 80000,reserved: 0, actual total: 50400, total with overprovisioning: 1008000; new used: 64000,reserved:0; movedfromreserved: false,moveToReserveredfalse
> > > 2015-05-06 11:32:24,272 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) release mem from host: 48, old used: 32212254720,reserved: 0, total: 33701453824; new used: 27917287424,reserved:0; movedfromreserved: false,moveToReserveredfalse
> > > 2015-05-06 11:32:24,279 ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-468-VM]Scope=interface com.cloud.dc.DataCenter; id=1
> > > 2015-05-06 11:32:24,279 INFO [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435 ctx-d02e2a98) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-468-VM]Scope=interface com.cloud.dc.DataCenter; id=1
> > > 2015-05-06 11:32:24,280 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435) Done with run of VM work job: com.cloud.vm.VmWorkStart for VM 468, job origin: 14433
> > > 2015-05-06 11:32:24,280 ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435) Unable to complete AsyncJobVO {id:14435, userId: 3, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAMAAAAAAAAB1HQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAHBwcHBwcHBwc3IAEWphdmEudXRpbC5IYXNoTWFwBQfawcMWYNEDAAJGAApsb2FkRmFjdG9ySQAJdGhyZXNob2xkeHA_QAAAAAAADHcIAAAAEAAAAAF0AApWbVBhc3N3b3JkdAAcck8wQUJYUUFEbk5oZG1Wa1gzQmhjM04zYjNKa3hw, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 115129173025118, completeMsid: null, lastUpdated: null, lastPolled: null, created: Wed May 06 11:32:22 BST 2015}, job origin:14433
> > > com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-468-VM]Scope=interface com.cloud.dc.DataCenter; id=1
> > > at com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:947)
> > > at com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:5201)
> > > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > > at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > at java.lang.reflect.Method.invoke(Method.java:606)
> > > at com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
> > > at com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:5346)
> > > at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:102)
> > > at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:502)
> > > at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
> > > at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> > > at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
> > > at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
> > > at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
> > > at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:459)
> > > at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> > > at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> > > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> > > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> > > at java.lang.Thread.run(Thread.java:745)
> > > 2015-05-06 11:32:24,282 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435) Complete async job-14435, jobStatus: FAILED, resultCode: 0, result: rO0ABXNyABpqYXZhLmxhbmcuUnVudGltZUV4Y2VwdGlvbp5fBkcKNIPlAgAAeHIAE2phdmEubGFuZy5FeGNlcHRpb27Q_R8-GjscxAIAAHhyABNqYXZhLmxhbmcuVGhyb3dhYmxl1cY1Jzl3uMsDAARMAAVjYXVzZXQAFUxqYXZhL2xhbmcvVGhyb3dhYmxlO0wADWRldGFpbE1lc3NhZ2V0ABJMamF2YS9sYW5nL1N0cmluZztbAApzdGFja1RyYWNldAAeW0xqYXZhL2xhbmcvU3RhY2tUcmFjZUVsZW1lbnQ7TAAUc3VwcHJlc3NlZEV4Y2VwdGlvbnN0ABBMamF2YS91dGlsL0xpc3Q7eHBxAH4AB3QAUUpvYiBmYWlsZWQgZHVlIHRvIGV4Y2VwdGlvbiBVbmFibGUgdG8gY3JlYXRlIGEgZGVwbG95bWVudCBmb3IgVk1bVXNlcnxpLTItNDY4LVZNXXVyAB5bTGphdmEubGFuZy5TdGFja1RyYWNlRWxlbWVudDsCRio8PP0iOQIAAHhwAAAADXNyABtqYXZhLmxhbmcuU3RhY2tUcmFjZUVsZW1lbnRhCcWaJjbdhQIABEkACmxpbmVOdW1iZXJMAA5kZWNsYXJpbmdDbGFzc3EAfgAETAAIZmlsZU5hbWVxAH4ABEwACm1ldGhvZE5hbWVxAH4ABHhwAAAAcnQAIGNvbS5jbG91ZC52bS5WbVdvcmtKb2JEaXNwYXRjaGVydAAYVm1Xb3JrSm9iRGlzcGF0Y2hlci5qYXZhdAAGcnVuSm9ic3EAfgALAAAB9nQAP29yZy5hcGFjaGUuY2xvdWRzdGFjay5mcmFtZXdvcmsuam9icy5pbXBsLkFzeW5jSm9iTWFuYWdlckltcGwkNXQAGEFzeW5jSm9iTWFuYWdlckltcGwuamF2YXQADHJ1bkluQ29udGV4dHNxAH4ACwAAADF0AD5vcmcuYXBhY2hlLmNsb3Vkc3RhY2subWFuYWdlZC5jb250ZXh0Lk1hbmFnZWRDb250ZXh0UnVubmFibGUkMXQAG01hbmFnZWRDb250ZXh0UnVubmFibGUuamF2YXQAA3J1bnNxAH4ACwAAADh0AEJvcmcuYXBhY2hlLmNsb3Vkc3RhY2subWFuYWdlZC5jb250ZXh0LmltcGwuRGVmYXVsdE1hbmFnZWRDb250ZXh0JDF0ABpEZWZhdWx0TWFuYWdlZENvbnRleHQuamF2YXQABGNhbGxzcQB-AAsAAABndABAb3JnLmFwYWNoZS5jbG91ZHN0YWNrLm1hbmFnZWQuY29udGV4dC5pbXBsLkRlZmF1bHRNYW5hZ2VkQ29udGV4dHEAfgAadAAPY2FsbFdpdGhDb250ZXh0c3EAfgALAAAANXEAfgAdcQB-ABp0AA5ydW5XaXRoQ29udGV4dHNxAH4ACwAAAC50ADxvcmcuYXBhY2hlLmNsb3Vkc3RhY2subWFuYWdlZC5jb250ZXh0Lk1hbmFnZWRDb250ZXh0UnVubmFibGVxAH4AFnEAfgAXc3EAfgALAAABy3EAfgARcQB-ABJxAH4AF3NxAH4ACwAAAdd0AC5qYXZhLnV0aWwuY29uY3VycmVudC5FeGVjdXRvcnMkUnVubmFibGVBZGFwdGVydAAORXhlY3V0b3JzLmphdmFxAH4AG3NxAH4ACwAAAQZ0AB9qYXZhLnV0aWwuY29uY3VycmVudC5GdXR1cmVUYXNrdAAPRnV0dXJlVGFzay5qYXZhcQB-ABdzcQB-AAsAAAR5dAAnamF2YS51dGlsLmNvbmN1cnJlbnQuVGhyZWFkUG9vbEV4ZWN1dG9ydAAXVGhyZWFkUG9vbEV4ZWN1dG9yLmphdmF0AAlydW5Xb3JrZXJzcQB-AAsAAAJndAAuamF2YS51dGlsLmNvbmN1cnJlbnQuVGhyZWFkUG9vbEV4ZWN1dG9yJFdvcmtlcnEAfgAscQB-ABdzcQB-AAsAAALpdAAQamF2YS5sYW5nLlRocmVhZHQAC1RocmVhZC5qYXZhcQB-ABdzcgAmamF2YS51dGlsLkNvbGxlY3Rpb25zJFVubW9kaWZpYWJsZUxpc3T8DyUxteyOEAIAAUwABGxpc3RxAH4ABnhyACxqYXZhLnV0aWwuQ29sbGVjdGlvbnMkVW5tb2RpZmlhYmxlQ29sbGVjdGlvbhlCAIDLXvceAgABTAABY3QAFkxqYXZhL3V0aWwvQ29sbGVjdGlvbjt4cHNyABNqYXZhLnV0aWwuQXJyYXlMaXN0eIHSHZnHYZ0DAAFJAARzaXpleHAAAAAAdwQAAAAAeHEAfgA4eA
> > > 2015-05-06 11:32:24,288 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435) Done executing com.cloud.vm.VmWorkStart for job-14435
> > > 2015-05-06 11:32:24,300 DEBUG [o.a.c.f.j.i.SyncQueueManagerImpl] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435) Sync queue (3383) is currently empty
> > > 2015-05-06 11:32:24,300 INFO [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-7:ctx-222e5de7 job-14433/job-14435) Remove job-14435 from job monitoring
> > > 2015-05-06 11:32:24,301 ERROR [c.c.a.ApiAsyncJobDispatcher] (API-Job-Executor-3:ctx-01eebe9f job-14433) Unexpected exception while executing org.apache.cloudstack.api.command.admin.vm.StartVMCmdByAdmin
> > > java.lang.RuntimeException: Job failed due to exception Unable to create a deployment for VM[User|i-2-468-VM]
> > > at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:114)
> > > at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:502)
> > > at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
> > > at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> > > at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
> > > at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
> > > at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
> > > at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:459)
> > > at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> > > at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> > > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> > > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> > > at java.lang.Thread.run(Thread.java:745)
> > > 2015-05-06 11:32:24,302 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-3:ctx-01eebe9f job-14433) Complete async job-14433, jobStatus: FAILED, resultCode: 530, result: org.apache.cloudstack.api.response.ExceptionResponse/null/{"uuidList":[],"errorcode":530,"errortext":"Job failed due to exception Unable to create a deployment for VM[User|i-2-468-VM]"}
> > > 2015-05-06 11:32:24,307 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-3:ctx-01eebe9f job-14433) Done executing org.apache.cloudstack.api.command.admin.vm.StartVMCmdByAdmin for job-14433
> > > 2015-05-06 11:32:24,309 INFO [o.a.c.f.j.i.AsyncJobMonitor] (API-Job-Executor-3:ctx-01eebe9f job-14433) Remove job-14433 from job monitoring
> > >
> > > -------------------------------------------
> > >
> > > I can't find any errors on the host servers, so I have no idea why ACS
> > > can create the SSVM/CPVM but can't create new virtual routers.
> > >
> > > Could someone please help me fix this problem? I've got a half-broken
> > > cloud at the moment, as I can't create or restart networks.
> > >
> > > Many thanks
> > >
> > > Andrei
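[Editor's note: Erik's 'cat /etc/cloudstack-release' check can be scripted from the KVM host that carries the system VM. The sketch below is illustrative only: port 3922 and /root/.ssh/id_rsa.cloud are the usual CloudStack defaults for system-VM access, 169.254.x.x stands in for the SSVM's link-local IP (shown on its detail page), and the sample release string is an assumption, since the file's exact wording may differ.]

```shell
# Sketch of the version check suggested in this thread. The ssh invocation
# is illustrative and commented out; run it on the KVM host that hosts the
# SSVM, substituting its real link-local address:
#
#   RELEASE=$(ssh -p 3922 -i /root/.ssh/id_rsa.cloud \
#       root@169.254.x.x cat /etc/cloudstack-release)

EXPECTED="4.4.3"

# Prints OK only when the release string contains the expected version.
check_release() {
    case "$1" in
        *"$2"*) echo "OK: running $2" ;;
        *)      echo "WARNING: expected $2, got: $1" ;;
    esac
}

# Sample value for illustration only; the real file's wording may differ.
RELEASE="Cloudstack Release 4.4.3 (64-bit)"
check_release "$RELEASE" "$EXPECTED"   # prints: OK: running 4.4.3
```

A mismatch here would mean the system VMs were rebuilt from the old template, which points back at the template type / routing flag rather than at the hosts.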
