Hi Martin,

  It looks like you have hit a bug. You can patch it with this PR:
https://github.com/apache/cloudstack/pull/1829




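For background (this is an illustration, not the actual change in that PR): XenServer raises Types$InvalidDevice ("The device name is invalid") from VBD.create when the requested userdevice slot is invalid or already occupied, so the attach code has to pick a free slot. A minimal sketch of that slot-selection idea, with the class name and slot limit purely hypothetical:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: choose the lowest free VBD userdevice slot
// so VBD.create is not handed a device name XenServer rejects.
public class VbdDeviceSlot {
    // Illustrative upper bound on userdevice slots, not XenServer's real limit.
    static final int MAX_DEVICES = 16;

    static int lowestFreeSlot(int[] usedDeviceIds) {
        Set<Integer> used = new HashSet<>();
        for (int id : usedDeviceIds) {
            used.add(id);
        }
        // Scan upward for the first slot no existing VBD occupies.
        for (int slot = 0; slot < MAX_DEVICES; slot++) {
            if (!used.contains(slot)) {
                return slot;
            }
        }
        throw new IllegalStateException("no free VBD device slot");
    }

    public static void main(String[] args) {
        // Slots 0, 1 and 3 taken -> the lowest free slot is 2.
        System.out.println(lowestFreeSlot(new int[] {0, 1, 3}));
    }
}
```

If the volume records in the database carry a device id that collides like this, the create call fails exactly as in your trace.
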
On 22/02/17, 4:56 PM, "Martin Emrich" <[email protected]> wrote:

>Hi!
>
>After shutting down a VM for resizing, it no longer starts. The GUI reports 
>insufficient Capacity (but there's plenty), and in the Log I see this:
>
>2017-02-22 12:18:40,626 DEBUG [o.a.c.e.o.VolumeOrchestrator] 
>(Work-Job-Executor-11:ctx-c5cca7da job-70304/job-70306 ctx-a412f4b8) Checking 
>if we need to prepare 4 volumes for VM[User|i-18-2998-VM]
>2017-02-22 12:18:40,626 DEBUG [o.a.c.e.o.VolumeOrchestrator] 
>(Work-Job-Executor-11:ctx-c5cca7da job-70304/job-70306 ctx-a412f4b8) No need 
>to recreate the volume: Vol[5050|vm=2998|ROOT], since it already has a pool 
>assigned: 29, adding disk to VM
>2017-02-22 12:18:40,627 DEBUG [o.a.c.e.o.VolumeOrchestrator] 
>(Work-Job-Executor-11:ctx-c5cca7da job-70304/job-70306 ctx-a412f4b8) No need 
>to recreate the volume: Vol[5051|vm=2998|DATADISK], since it already has a 
>pool assigned: 29, adding disk to VM
>2017-02-22 12:18:40,627 DEBUG [o.a.c.e.o.VolumeOrchestrator] 
>(Work-Job-Executor-11:ctx-c5cca7da job-70304/job-70306 ctx-a412f4b8) No need 
>to recreate the volume: Vol[5052|vm=2998|DATADISK], since it already has a 
>pool assigned: 29, adding disk to VM
>2017-02-22 12:18:40,627 DEBUG [o.a.c.e.o.VolumeOrchestrator] 
>(Work-Job-Executor-11:ctx-c5cca7da job-70304/job-70306 ctx-a412f4b8) No need 
>to recreate the volume: Vol[5053|vm=2998|DATADISK], since it already has a 
>pool assigned: 29, adding disk to VM
>2017-02-22 12:18:40,669 DEBUG [c.c.h.x.r.w.x.CitrixStartCommandWrapper] 
>(DirectAgent-469:ctx-d6e5768e) 1. The VM i-18-2998-VM is in Starting state.
>2017-02-22 12:18:40,688 DEBUG [c.c.h.x.r.CitrixResourceBase] 
>(DirectAgent-469:ctx-d6e5768e) Created VM e37afda2-9661-4655-e750-1855b0318787 
>for i-18-2998-VM
>2017-02-22 12:18:40,710 DEBUG [c.c.h.x.r.CitrixResourceBase] 
>(DirectAgent-469:ctx-d6e5768e) VBD d560c831-29f8-c82b-7e81-778ce33318ae 
>created for com.cloud.agent.api.to.DiskTO@1d82661a
>2017-02-22 12:18:40,720 DEBUG [c.c.h.x.r.CitrixResourceBase] 
>(DirectAgent-469:ctx-d6e5768e) VBD b083c0c8-31bc-1248-859a-234e276d9b4c 
>created for com.cloud.agent.api.to.DiskTO@5bfd4418
>2017-02-22 12:18:40,729 DEBUG [c.c.h.x.r.CitrixResourceBase] 
>(DirectAgent-469:ctx-d6e5768e) VBD 48701244-a29a-e9ce-f6c3-ed5225271aa7 
>created for com.cloud.agent.api.to.DiskTO@5081b2d6
>2017-02-22 12:18:40,737 DEBUG [c.c.a.m.DirectAgentAttache] 
>(DirectAgentCronJob-352:ctx-569e5f7b) Ping from 337(esc-fra1-xn011)
>2017-02-22 12:18:40,739 DEBUG [c.c.h.x.r.CitrixResourceBase] 
>(DirectAgent-469:ctx-d6e5768e) VBD 755de6cb-3994-8251-c0d5-e45cda52ca98 
>created for com.cloud.agent.api.to.DiskTO@64992bda
>2017-02-22 12:18:40,744 WARN  [c.c.h.x.r.w.x.CitrixStartCommandWrapper] 
>(DirectAgent-469:ctx-d6e5768e) Catch Exception: class 
>com.xensource.xenapi.Types$InvalidDevice due to The device name is invalid
>The device name is invalid
>        at com.xensource.xenapi.Types.checkResponse(Types.java:1169)
>        at com.xensource.xenapi.Connection.dispatch(Connection.java:395)
>        at 
> com.cloud.hypervisor.xenserver.resource.XenServerConnectionPool$XenServerConnection.dispatch(XenServerConnectionPool.java:457)
>        at com.xensource.xenapi.VBD.create(VBD.java:322)
>        at 
> com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.createVbd(CitrixResourceBase.java:1156)
>        at 
> com.cloud.hypervisor.xenserver.resource.wrapper.xenbase.CitrixStartCommandWrapper.execute(CitrixStartCommandWrapper.java:121)
>        at 
> com.cloud.hypervisor.xenserver.resource.wrapper.xenbase.CitrixStartCommandWrapper.execute(CitrixStartCommandWrapper.java:53)
>        at 
> com.cloud.hypervisor.xenserver.resource.wrapper.xenbase.CitrixRequestWrapper.execute(CitrixRequestWrapper.java:122)
>        at 
> com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.executeRequest(CitrixResourceBase.java:1687)
>        at 
> com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(DirectAgentAttache.java:315)
>        at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
>        at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
>        at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
>        at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
>        at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
>        at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>        at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
>        at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
>        at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>        at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>        at java.lang.Thread.run(Thread.java:745)
>
>
>Seems to be a problem with the VM's volumes. I don't see any difference in the 
>ACS database compared with other VMs' volumes. What could be wrong here?
>
>Thanks,
>
>Martin

[email protected] 
www.shapeblue.com
53 Chandos Place, Covent Garden, London WC2N 4HS, UK
@shapeblue