Dear Luis,
I see, indeed, there is a problem with volume removal on your CS
instance. The log you provided does not show anything critical. Can you
look further along in the log and search for any Java exceptions? This
process is asynchronous, and the error may only appear some minutes
after you actually fire the event, so there should be something in the
management log. Another place to look is the KVM host logs: check
syslog and look for NFS errors.
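If it helps, here is a minimal sketch of my own (not something from your setup) for scanning the management log for exceptions and NFS-related errors; the log path is the usual default for a 4.x install, so adjust it if yours differs:

    # Rough sketch: grep-like scan of the CloudStack management log for
    # Java exceptions and NFS errors. Path below is the usual default.
    import re

    LOG = "/var/log/cloudstack/management/management-server.log"  # adjust if needed
    PATTERN = re.compile(r"(Exception|ERROR|Unable to|NFS)", re.IGNORECASE)

    with open(LOG, errors="replace") as fh:
        for lineno, line in enumerate(fh, 1):
            if PATTERN.search(line):
                print(f"{lineno}: {line.rstrip()}")

Run it shortly after you destroy/expunge the VM, since the interesting lines may show up a few minutes later.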
Regarding HA VM capacity -- please post that as a separate topic to this
list; I don't know the answer to the second question.
Regards,
Vadim
On 2015-09-25 15:42, Luis wrote:
Hi
I did this: I created a VM called DELETEME (Picture 1). In Picture 2
you can see the storage, with two records per VM; I don't know why, and
it confuses me when I try to create a snapshot. In Picture 3 I
destroyed the VM, and Picture 4 shows the two records still in the
storage. After deleting a lot of VMs I was running out of space until I
found this; now I delete them manually in the storage section.
I added my log at the end of this email.
Another thing I noticed, not related to this: I have all my VMs on HA
in case the server shuts down and restarts. When CloudStack starts, all
my VMs get started at once, but the server overheats and shuts down. Is
there a way to limit the number of VMs that HA starts?
Thank you for your help, I really appreciate it.
Picture 1
Picture 2
Picture 3
Picture 4
log
2015-09-25 08:35:40,329 DEBUG [c.c.a.ApiServlet]
(catalina-exec-11:ctx-61765fe6 ctx-f5f4d09f) ===END=== 10.0.8.6 -- GET
command=queryAsyncJobResult&jobId=605e1db3-f3a6-4b7e-b62c-00ccc65944df&response=json&_=1443184540233
2015-09-25 08:35:41,628 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
(AsyncJobMgr-Heartbeat-1:ctx-bfc1897e) Execute sync-queue item:
SyncQueueItemVO {id:64, queueId: 504, contentType: AsyncJob, contentId:
1016, lastProcessMsid: 191612618435011, lastprocessNumber: 3,
lastProcessTime: Fri Sep 25 08:35:41 EDT 2015, created: Fri Sep 25
08:35:39 EDT 2015}
2015-09-25 08:35:41,629 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
(AsyncJobMgr-Heartbeat-1:ctx-bfc1897e) Schedule queued job-1016
2015-09-25 08:35:41,808 INFO [o.a.c.f.j.i.AsyncJobMonitor]
(Work-Job-Executor-64:ctx-5e28e999 job-1014/job-1016) Add job-1016 into
job monitoring
2015-09-25 08:35:41,812 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
(Work-Job-Executor-64:ctx-5e28e999 job-1014/job-1016) Executing
AsyncJobVO {id:1016, userId: 2, accountId: 2, instanceType: null,
instanceId: null, cmd:
com.cloud.vm.snapshot.VmWorkDeleteAllVMSnapshots, cmdInfo:
rO0ABXNyADBjb20uY2xvdWQudm0uc25hcHNob3QuVm1Xb3JrRGVsZXRlQWxsVk1TbmFwc2hvdHOsl-VRajf8cAIAAUwABHR5cGV0ACdMY29tL2Nsb3VkL3ZtL3NuYXBzaG90L1ZNU25hcHNob3QkVHlwZTt4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1ldAASTGphdmEvbGFuZy9TdHJpbmc7eHAAAAAAAAAAAgAAAAAAAAACAAAAAAAAAFV0ABVWTVNuYXBzaG90TWFuYWdlckltcGxw,
cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0,
result: null, initMsid: 191612618435011, completeMsid: null,
lastUpdated: null, lastPolled: null, created: Fri Sep 25 08:35:39 EDT
2015}
2015-09-25 08:35:41,812 DEBUG [c.c.v.VmWorkJobDispatcher]
(Work-Job-Executor-64:ctx-5e28e999 job-1014/job-1016) Run VM work job:
com.cloud.vm.snapshot.VmWorkDeleteAllVMSnapshots for VM 85, job origin:
1014
2015-09-25 08:35:41,813 DEBUG [c.c.v.VmWorkJobHandlerProxy]
(Work-Job-Executor-64:ctx-5e28e999 job-1014/job-1016 ctx-8deba3f4)
Execute VM work job:
com.cloud.vm.snapshot.VmWorkDeleteAllVMSnapshots{"userId":2,"accountId":2,"vmId":85,"handlerName":"VMSnapshotManagerImpl"}
2015-09-25 08:35:41,813 DEBUG [c.c.v.VmWorkJobHandlerProxy]
(Work-Job-Executor-64:ctx-5e28e999 job-1014/job-1016 ctx-8deba3f4) Done
executing VM work job:
com.cloud.vm.snapshot.VmWorkDeleteAllVMSnapshots{"userId":2,"accountId":2,"vmId":85,"handlerName":"VMSnapshotManagerImpl"}
2015-09-25 08:35:41,813 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
(Work-Job-Executor-64:ctx-5e28e999 job-1014/job-1016 ctx-8deba3f4)
Complete async job-1016, jobStatus: SUCCEEDED, resultCode: 0, result:
rO0ABXNyABFqYXZhLmxhbmcuQm9vbGVhbs0gcoDVnPruAgABWgAFdmFsdWV4cAE
2015-09-25 08:35:41,814 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
(Work-Job-Executor-64:ctx-5e28e999 job-1014/job-1016 ctx-8deba3f4)
Publish async job-1016 complete on message bus
2015-09-25 08:35:41,814 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
(Work-Job-Executor-64:ctx-5e28e999 job-1014/job-1016 ctx-8deba3f4) Wake
up jobs related to job-1016
2015-09-25 08:35:41,814 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
(Work-Job-Executor-64:ctx-5e28e999 job-1014/job-1016 ctx-8deba3f4)
Update db status for job-1016
2015-09-25 08:35:41,814 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
(Work-Job-Executor-64:ctx-5e28e999 job-1014/job-1016 ctx-8deba3f4) Wake
up jobs joined with job-1016 and disjoin all subjobs created from job-
1016
2015-09-25 08:35:41,907 DEBUG [c.c.v.VmWorkJobDispatcher]
(Work-Job-Executor-64:ctx-5e28e999 job-1014/job-1016) Done with run of
VM work job: com.cloud.vm.snapshot.VmWorkDeleteAllVMSnapshots for VM
85, job origin: 1014
2015-09-25 08:35:41,907 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
(Work-Job-Executor-64:ctx-5e28e999 job-1014/job-1016) Done executing
com.cloud.vm.snapshot.VmWorkDeleteAllVMSnapshots for job-1016
2015-09-25 08:35:41,908 INFO [o.a.c.f.j.i.AsyncJobMonitor]
(Work-Job-Executor-64:ctx-5e28e999 job-1014/job-1016) Remove job-1016
from job monitoring
2015-09-25 08:35:42,155 DEBUG [c.c.c.CapacityManagerImpl]
(API-Job-Executor-55:ctx-2e9eb037 job-1014 ctx-85a6409c) VM state
transitted from :Stopped to Destroyed with event: DestroyRequestedvm's
original host id: 1 new host id: null host id before state transition:
null
2015-09-25 08:35:42,157 DEBUG [c.c.c.CapacityManagerImpl]
(API-Job-Executor-55:ctx-2e9eb037 job-1014 ctx-85a6409c) Hosts's actual
total CPU: 28888 and CPU after applying overprovisioning: 28888
2015-09-25 08:35:42,157 DEBUG [c.c.c.CapacityManagerImpl]
(API-Job-Executor-55:ctx-2e9eb037 job-1014 ctx-85a6409c) Hosts's actual
total RAM: 25140768768 and RAM after applying overprovisioning:
25140768768
2015-09-25 08:35:42,157 DEBUG [c.c.c.CapacityManagerImpl]
(API-Job-Executor-55:ctx-2e9eb037 job-1014 ctx-85a6409c) release cpu
from host: 1, old used: 6000,reserved: 500, actual total: 28888, total
with overprovisioning: 28888; new used: 6000,reserved:0;
movedfromreserved: true,moveToReserveredfalse
2015-09-25 08:35:42,157 DEBUG [c.c.c.CapacityManagerImpl]
(API-Job-Executor-55:ctx-2e9eb037 job-1014 ctx-85a6409c) release mem
from host: 1, old used: 6710886400,reserved: 536870912, total:
25140768768; new used: 6710886400,reserved:0; movedfromreserved:
true,moveToReserveredfalse
2015-09-25 08:35:42,275 DEBUG [c.c.a.m.AgentManagerImpl]
(AgentManager-Handler-14:null) SeqA 2-245135: Processing Seq 2-245135:
{ Cmd , MgmtId: -1, via: 2, Ver: v1, Flags: 11,
[{"com.cloud.agent.api.ConsoleProxyLoadReportCommand":{"_proxyVmId":1,"_loadInfo":"{\n
\"connections\": []\n}","wait":0}}] }
2015-09-25 08:35:42,399 DEBUG [c.c.a.m.AgentManagerImpl]
(AgentManager-Handler-14:null) SeqA 2-245135: Sending Seq 2-245135: {
Ans: , MgmtId: 191612618435011, via: 2, Ver: v1, Flags: 100010,
[{"com.cloud.agent.api.AgentControlAnswer":{"result":true,"wait":0}}] }
2015-09-25 08:35:42,663 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
(API-Job-Executor-55:ctx-2e9eb037 job-1014 ctx-85a6409c) Sync job-1017
execution on object VmWorkJobQueue.85
2015-09-25 08:35:43,243 DEBUG [c.c.a.ApiServlet]
(catalina-exec-2:ctx-c01c9e4a) ===START=== 10.0.8.6 -- GET
command=queryAsyncJobResult&jobId=605e1db3-f3a6-4b7e-b62c-00ccc65944df&response=json&_=1443184543247
2015-09-25 08:35:43,326 DEBUG [c.c.a.ApiServlet]
(catalina-exec-2:ctx-c01c9e4a ctx-7a19cef4) ===END=== 10.0.8.6 -- GET
command=queryAsyncJobResult&jobId=605e1db3-f3a6-4b7e-b62c-00ccc65944df&response=json&_=1443184543247
2015-09-25 08:35:43,480 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
(AsyncJobMgr-Heartbeat-1:ctx-f64d0027) Execute sync-queue item:
SyncQueueItemVO {id:65, queueId: 504, contentType: AsyncJob, contentId:
1017, lastProcessMsid: 191612618435011, lastprocessNumber: 4,
lastProcessTime: Fri Sep 25 08:35:43 EDT 2015, created: Fri Sep 25
08:35:42 EDT 2015}
2015-09-25 08:35:43,481 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
(AsyncJobMgr-Heartbeat-1:ctx-f64d0027) Schedule queued job-1017
2015-09-25 08:35:43,571 INFO [o.a.c.f.j.i.AsyncJobMonitor]
(Work-Job-Executor-65:ctx-0ce67abc job-1014/job-1017) Add job-1017 into
job monitoring
2015-09-25 08:35:43,574 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
(Work-Job-Executor-65:ctx-0ce67abc job-1014/job-1017) Executing
AsyncJobVO {id:1017, userId: 2, accountId: 2, instanceType: null,
instanceId: null, cmd: com.cloud.vm.VmWorkStop, cmdInfo:
rO0ABXNyABdjb20uY2xvdWQudm0uVm1Xb3JrU3RvcALQ4GymiWjjAgABWgAHY2xlYW51cHhyABNjb20uY2xvdWQudm0uVm1Xb3Jrn5m2VvAlZ2sCAARKAAlhY2NvdW50SWRKAAZ1c2VySWRKAAR2bUlkTAALaGFuZGxlck5hbWV0ABJMamF2YS9sYW5nL1N0cmluZzt4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAAVXQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwA,
cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0,
result: null, initMsid: 191612618435011, completeMsid: null,
lastUpdated: null, lastPolled: null, created: Fri Sep 25 08:35:42 EDT
2015}
2015-09-25 08:35:43,574 DEBUG [c.c.v.VmWorkJobDispatcher]
(Work-Job-Executor-65:ctx-0ce67abc job-1014/job-1017) Run VM work job:
com.cloud.vm.VmWorkStop for VM 85, job origin: 1014
2015-09-25 08:35:43,575 DEBUG [c.c.v.VmWorkJobHandlerProxy]
(Work-Job-Executor-65:ctx-0ce67abc job-1014/job-1017 ctx-37ced5d2)
Execute VM work job:
com.cloud.vm.VmWorkStop{"cleanup":false,"userId":2,"accountId":2,"vmId":85,"handlerName":"VirtualMachineManagerImpl"}
2015-09-25 08:35:43,576 DEBUG [c.c.v.VirtualMachineManagerImpl]
(Work-Job-Executor-65:ctx-0ce67abc job-1014/job-1017 ctx-37ced5d2)
Stopped called on VM[User|i-2-85-VM] but the state is Destroyed
2015-09-25 08:35:43,576 DEBUG [c.c.v.VmWorkJobHandlerProxy]
(Work-Job-Executor-65:ctx-0ce67abc job-1014/job-1017 ctx-37ced5d2) Done
executing VM work job:
com.cloud.vm.VmWorkStop{"cleanup":false,"userId":2,"accountId":2,"vmId":85,"handlerName":"VirtualMachineManagerImpl"}
2015-09-25 08:35:43,576 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
(Work-Job-Executor-65:ctx-0ce67abc job-1014/job-1017 ctx-37ced5d2)
Complete async job-1017, jobStatus: SUCCEEDED, resultCode: 0, result:
null
2015-09-25 08:35:43,577 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
(Work-Job-Executor-65:ctx-0ce67abc job-1014/job-1017 ctx-37ced5d2)
Publish async job-1017 complete on message bus
2015-09-25 08:35:43,577 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
(Work-Job-Executor-65:ctx-0ce67abc job-1014/job-1017 ctx-37ced5d2) Wake
up jobs related to job-1017
2015-09-25 08:35:43,577 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
(Work-Job-Executor-65:ctx-0ce67abc job-1014/job-1017 ctx-37ced5d2)
Update db status for job-1017
2015-09-25 08:35:43,577 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
(Work-Job-Executor-65:ctx-0ce67abc job-1014/job-1017 ctx-37ced5d2) Wake
up jobs joined with job-1017 and disjoin all subjobs created from job-
1017
2015-09-25 08:35:43,659 DEBUG [c.c.v.VmWorkJobDispatcher]
(Work-Job-Executor-65:ctx-0ce67abc job-1014/job-1017) Done with run of
VM work job: com.cloud.vm.VmWorkStop for VM 85, job origin: 1014
2015-09-25 08:35:43,659 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
(Work-Job-Executor-65:ctx-0ce67abc job-1014/job-1017) Done executing
com.cloud.vm.VmWorkStop for job-1017
2015-09-25 08:35:43,660 INFO [o.a.c.f.j.i.AsyncJobMonitor]
(Work-Job-Executor-65:ctx-0ce67abc job-1014/job-1017) Remove job-1017
from job monitoring
2015-09-25 08:35:43,926 DEBUG [c.c.c.CapacityManagerImpl]
(API-Job-Executor-55:ctx-2e9eb037 job-1014 ctx-85a6409c) VM state
transitted from :Destroyed to Expunging with event:
ExpungeOperationvm's original host id: 1 new host id: null host id
before state transition: null
2015-09-25 08:35:43,927 DEBUG [c.c.v.VirtualMachineManagerImpl]
(API-Job-Executor-55:ctx-2e9eb037 job-1014 ctx-85a6409c) Destroying vm
VM[User|i-2-85-VM]
2015-09-25 08:35:43,927 DEBUG [c.c.v.VirtualMachineManagerImpl]
(API-Job-Executor-55:ctx-2e9eb037 job-1014 ctx-85a6409c) Cleaning up
NICS
2015-09-25 08:35:43,927 DEBUG [o.a.c.e.o.NetworkOrchestrator]
(API-Job-Executor-55:ctx-2e9eb037 job-1014 ctx-85a6409c) Cleaning
network for vm: 85
2015-09-25 08:35:44,019 DEBUG [c.c.n.g.DirectNetworkGuru]
(API-Job-Executor-55:ctx-2e9eb037 job-1014 ctx-85a6409c) Deallocate
network: networkId: 204, ip: 10.0.10.103
2015-09-25 08:35:44,022 DEBUG [c.c.n.g.DirectNetworkGuru]
(API-Job-Executor-55:ctx-2e9eb037 job-1014 ctx-85a6409c) remove nic 96
secondary ip
2015-09-25 08:35:44,194 DEBUG [o.a.c.e.o.NetworkOrchestrator]
(API-Job-Executor-55:ctx-2e9eb037 job-1014 ctx-85a6409c) Removed nic
id=96
2015-09-25 08:35:44,195 DEBUG [o.a.c.e.o.NetworkOrchestrator]
(API-Job-Executor-55:ctx-2e9eb037 job-1014 ctx-85a6409c) Revoving nic
secondary ip entry ...
2015-09-25 08:35:44,195 DEBUG [c.c.v.VirtualMachineManagerImpl]
(API-Job-Executor-55:ctx-2e9eb037 job-1014 ctx-85a6409c) Cleaning up
hypervisor data structures (ex. SRs in XenServer) for managed storage
2015-09-25 08:35:44,198 DEBUG [o.a.c.e.o.VolumeOrchestrator]
(API-Job-Executor-55:ctx-2e9eb037 job-1014 ctx-85a6409c) Cleaning
storage for vm: 85
2015-09-25 08:35:44,202 DEBUG [o.a.c.e.o.VolumeOrchestrator]
(API-Job-Executor-55:ctx-2e9eb037 job-1014 ctx-85a6409c) Detaching
Vol[139|vm=85|DATADISK]
2015-09-25 08:35:44,388 DEBUG [c.c.a.t.Request]
(API-Job-Executor-55:ctx-2e9eb037 job-1014 ctx-85a6409c) Seq
1-2274317811822224501: Sending { Cmd , MgmtId: 191612618435011, via:
1(cloud.cosiab.com), Ver: v1, Flags: 100011,
[{"org.apache.cloudstack.storage.command.DeleteCommand":{"data":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"b226ac21-69d6-4e08-a81b-1e5a78559e48","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"0e92d828-2be1-35f0-83da-f21c1137c7f6","id":1,"poolType":"NetworkFilesystem","host":"10.0.10.2","path":"/primary","port":2049,"url":"NetworkFilesystem://10.0.10.2/primary/?ROLE=Primary&STOREUUID=0e92d828-2be1-35f0-83da-f21c1137c7f6"}},"name":"ROOT-85","size":21474836480,"path":"b226ac21-69d6-4e08-a81b-1e5a78559e48","volumeId":138,"vmName":"i-2-85-VM","accountId":2,"format":"QCOW2","provisioningType":"THIN","id":138,"deviceId":0,"hypervisorType":"KVM"}},"wait":0}}]
}
2015-09-25 08:35:44,443 DEBUG [c.c.a.t.Request]
(AgentManager-Handler-10:null) Seq 1-2274317811822224501: Processing: {
Ans: , MgmtId: 191612618435011, via: 1, Ver: v1, Flags: 10,
[{"com.cloud.agent.api.Answer":{"result":true,"wait":0}}] }
2015-09-25 08:35:44,443 DEBUG [c.c.a.t.Request]
(API-Job-Executor-55:ctx-2e9eb037 job-1014 ctx-85a6409c) Seq
1-2274317811822224501: Received: { Ans: , MgmtId: 191612618435011, via:
1, Ver: v1, Flags: 10, { Answer } }
2015-09-25 08:35:44,564 INFO [o.a.c.s.v.VolumeServiceImpl]
(API-Job-Executor-55:ctx-2e9eb037 job-1014 ctx-85a6409c) Volume 138 is
not referred anywhere, remove it from volumes table
2015-09-25 08:35:44,852 DEBUG [c.c.v.VirtualMachineManagerImpl]
(API-Job-Executor-55:ctx-2e9eb037 job-1014 ctx-85a6409c) Expunged
VM[User|i-2-85-VM]
2015-09-25 08:35:44,942 DEBUG [c.c.u.AccountManagerImpl]
(API-Job-Executor-55:ctx-2e9eb037 job-1014 ctx-85a6409c) Access granted
to Acct[89b1af9a-23d4-11e5-83b6-048d3840e222-admin] to Domain:1/ by
AffinityGroupAccessChecker
2015-09-25 08:35:45,030 DEBUG [c.c.v.UserVmManagerImpl]
(API-Job-Executor-55:ctx-2e9eb037 job-1014 ctx-85a6409c) Starting
cleaning up vm VM[User|i-2-85-VM] resources...
2015-09-25 08:35:45,032 INFO [c.c.n.s.SecurityGroupManagerImpl]
(API-Job-Executor-55:ctx-2e9eb037 job-1014 ctx-85a6409c) Disassociated
1 network groups from uservm 85
2015-09-25 08:35:45,131 DEBUG [c.c.n.s.SecurityGroupManagerImpl]
(API-Job-Executor-55:ctx-2e9eb037 job-1014 ctx-85a6409c) Security group
mappings are removed successfully for vm id=85
2015-09-25 08:35:45,200 DEBUG [c.c.n.f.FirewallManagerImpl]
(API-Job-Executor-55:ctx-2e9eb037 job-1014 ctx-85a6409c) No firewall
rules are found for vm id=85
2015-09-25 08:35:45,276 DEBUG [c.c.v.UserVmManagerImpl]
(API-Job-Executor-55:ctx-2e9eb037 job-1014 ctx-85a6409c) Firewall rules
are removed successfully as a part of vm id=85 expunge
2015-09-25 08:35:45,277 DEBUG [c.c.n.r.RulesManagerImpl]
(API-Job-Executor-55:ctx-2e9eb037 job-1014 ctx-85a6409c) No port
forwarding rules are found for vm id=85
2015-09-25 08:35:45,277 DEBUG [c.c.v.UserVmManagerImpl]
(API-Job-Executor-55:ctx-2e9eb037 job-1014 ctx-85a6409c) Port
forwarding rules are removed successfully as a part of vm id=85 expunge
2015-09-25 08:35:45,278 DEBUG [c.c.v.UserVmManagerImpl]
(API-Job-Executor-55:ctx-2e9eb037 job-1014 ctx-85a6409c) Removed vm
id=85 from all load balancers as a part of expunge process
2015-09-25 08:35:45,278 DEBUG [c.c.v.UserVmManagerImpl]
(API-Job-Executor-55:ctx-2e9eb037 job-1014 ctx-85a6409c) Successfully
cleaned up vm VM[User|i-2-85-VM] resources as a part of expunge process
2015-09-25 08:35:45,409 INFO [o.a.c.f.j.i.AsyncJobManagerImpl]
(AsyncJobMgr-Heartbeat-1:ctx-8b683ae7) Begin cleanup expired async-jobs
2015-09-25 08:35:45,411 INFO [o.a.c.f.j.i.AsyncJobManagerImpl]
(AsyncJobMgr-Heartbeat-1:ctx-8b683ae7) End cleanup expired async-jobs
2015-09-25 08:35:45,444 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
(API-Job-Executor-55:ctx-2e9eb037 job-1014 ctx-85a6409c) Complete async
job-1014, jobStatus: SUCCEEDED, resultCode: 0, result:
org.apache.cloudstack.api.response.UserVmResponse/null/{"securitygroup":[],"nic":[],"tags":[],"affinitygroup":[]}
2015-09-25 08:35:45,445 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
(API-Job-Executor-55:ctx-2e9eb037 job-1014 ctx-85a6409c) Publish async
job-1014 complete on message bus
2015-09-25 08:35:45,445 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
(API-Job-Executor-55:ctx-2e9eb037 job-1014 ctx-85a6409c) Wake up jobs
related to job-1014
2015-09-25 08:35:45,445 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
(API-Job-Executor-55:ctx-2e9eb037 job-1014 ctx-85a6409c) Update db
status for job-1014
2015-09-25 08:35:45,445 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
(API-Job-Executor-55:ctx-2e9eb037 job-1014 ctx-85a6409c) Wake up jobs
joined with job-1014 and disjoin all subjobs created from job- 1014
2015-09-25 08:35:45,523 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
(API-Job-Executor-55:ctx-2e9eb037 job-1014) Done executing
org.apache.cloudstack.api.command.admin.vm.DestroyVMCmdByAdmin for
job-1014
2015-09-25 08:35:45,523 INFO [o.a.c.f.j.i.AsyncJobMonitor]
(API-Job-Executor-55:ctx-2e9eb037 job-1014) Remove job-1014 from job
monitoring
2015-09-25 08:35:46,231 DEBUG [c.c.a.ApiServlet]
(catalina-exec-3:ctx-51adc6ae) ===START=== 10.0.8.6 -- GET
command=queryAsyncJobResult&jobId=605e1db3-f3a6-4b7e-b62c-00ccc65944df&response=json&_=1443184546234
2015-09-25 08:35:46,326 DEBUG [c.c.a.ApiServlet]
(catalina-exec-3:ctx-51adc6ae ctx-eafb69bc) ===END=== 10.0.8.6 -- GET
command=queryAsyncJobResult&jobId=605e1db3-f3a6-4b7e-b62c-00ccc65944df&response=json&_=1443184546234
2015-09-25 08:35:46,570 DEBUG [c.c.a.m.AgentManagerImpl]
(AgentManager-Handler-3:null) Ping from 1
2015-09-25 08:35:46,571 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl]
(AgentManager-Handler-3:null) Process host VM state report from ping
process. host: 1
2015-09-25 08:35:46,579 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl]
(AgentManager-Handler-3:null) Process VM state report. host: 1, number
of records in report: 10
2015-09-25 08:35:46,579 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl]
(AgentManager-Handler-3:null) VM state report. host: 1, vm id: 1, power
state: PowerOn
2015-09-25 08:35:46,580 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl]
(AgentManager-Handler-3:null) VM power state does not change, skip DB
writing. vm id: 1
2015-09-25 08:35:46,580 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl]
(AgentManager-Handler-3:null) VM state report. host: 1, vm id: 71,
power state: PowerOn
2015-09-25 08:35:46,581 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl]
(AgentManager-Handler-3:null) VM power state does not change, skip DB
writing. vm id: 71
2015-09-25 08:35:46,581 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl]
(AgentManager-Handler-3:null) VM state report. host: 1, vm id: 81,
power state: PowerOn
2015-09-25 08:35:46,582 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl]
(AgentManager-Handler-3:null) VM power state does not change, skip DB
writing. vm id: 81
2015-09-25 08:35:46,582 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl]
(AgentManager-Handler-3:null) VM state report. host: 1, vm id: 4, power
state: PowerOn
2015-09-25 08:35:46,583 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl]
(AgentManager-Handler-3:null) VM power state does not change, skip DB
writing. vm id: 4
2015-09-25 08:35:46,583 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl]
(AgentManager-Handler-3:null) VM state report. host: 1, vm id: 6, power
state: PowerOn
2015-09-25 08:35:46,584 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl]
(AgentManager-Handler-3:null) VM power state does not change, skip DB
writing. vm id: 6
2015-09-25 08:35:46,584 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl]
(AgentManager-Handler-3:null) VM state report. host: 1, vm id: 7, power
state: PowerOn
2015-09-25 08:35:46,584 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl]
(AgentManager-Handler-3:null) VM power state does not change, skip DB
writing. vm id: 7
2015-09-25 08:35:46,584 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl]
(AgentManager-Handler-3:null) VM state report. host: 1, vm id: 78,
power state: PowerOn
2015-09-25 08:35:46,585 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl]
(AgentManager-Handler-3:null) VM power state does not change, skip DB
writing. vm id: 78
2015-09-25 08:35:46,585 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl]
(AgentManager-Handler-3:null) VM state report. host: 1, vm id: 12,
power state: PowerOn
2015-09-25 08:35:46,586 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl]
(AgentManager-Handler-3:null) VM power state does not change, skip DB
writing. vm id: 12
2015-09-25 08:35:46,586 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl]
(AgentManager-Handler-3:null) VM state report. host: 1, vm id: 62,
power state: PowerOn
2015-09-25 08:35:46,587 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl]
(AgentManager-Handler-3:null) VM power state does not change, skip DB
writing. vm id: 62
2015-09-25 08:35:46,587 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl]
(AgentManager-Handler-3:null) VM state report. host: 1, vm id: 13,
power state: PowerOn
2015-09-25 08:35:46,588 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl]
(AgentManager-Handler-3:null) VM power state does not change, skip DB
writing. vm id: 13
2015-09-25 08:35:46,591 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl]
(AgentManager-Handler-3:null) Done with process of VM state report.
host: 1
2015-09-25 08:35:47,275 DEBUG [c.c.a.m.AgentManagerImpl]
(AgentManager-Handler-11:null) SeqA 2-245136: Processing Seq 2-245136:
{ Cmd , MgmtId: -1, via: 2, Ver: v1, Flags: 11,
[{"com.cloud.agent.api.ConsoleProxyLoadReportCommand":{"_proxyVmId":1,"_loadInfo":"{\n
\"connections\": []\n}","wait":0}}] }
2015-09-25 08:35:47,369 DEBUG [c.c.a.m.AgentManagerImpl]
(AgentManager-Handler-11:null) SeqA 2-245136: Sending Seq 2-245136: {
Ans: , MgmtId: 191612618435011, via: 2, Ver: v1, Flags: 100010,
[{"com.cloud.agent.api.AgentControlAnswer":{"result":true,"wait":0}}] }
2015-09-25 08:35:53,871 DEBUG [c.c.c.ConsoleProxyManagerImpl]
(consoleproxy-1:ctx-30000194) Zone 1 is ready to launch console proxy
2015-09-25 08:35:54,006 DEBUG [o.a.c.s.SecondaryStorageManagerImpl]
(secstorage-1:ctx-84615df7) Zone 1 is ready to launch secondary storage
VM
2015-09-25 08:35:55,409 INFO [o.a.c.f.j.i.AsyncJobManagerImpl]
(AsyncJobMgr-Heartbeat-1:ctx-a4c24e78) Begin cleanup expired async-jobs
2015-09-25 08:35:55,411 INFO [o.a.c.f.j.i.AsyncJobManagerImpl]
(AsyncJobMgr-Heartbeat-1:ctx-a4c24e78) End cleanup expired async-jobs
2015-09-25 08:35:57,276 DEBUG [c.c.a.m.AgentManagerImpl]
(AgentManager-Handler-6:null) SeqA 2-245137: Processing Seq 2-245137: {
Cmd , MgmtId: -1, via: 2, Ver: v1, Flags: 11,
[{"com.cloud.agent.api.ConsoleProxyLoadReportCommand":{"_proxyVmId":1,"_loadInfo":"{\n
\"connections\": []\n}","wait":0}}] }
2015-09-25 08:35:57,385 DEBUG [c.c.a.m.AgentManagerImpl]
(AgentManager-Handler-6:null) SeqA 2-245137: Sending Seq 2-245137: {
Ans: , MgmtId: 191612618435011, via: 2, Ver: v1, Flags: 100010,
[{"com.cloud.agent.api.AgentControlAnswer":{"result":true,"wait":0}}] }
2015-09-25 08:35:58,316 DEBUG [c.c.s.StatsCollector]
(StatsCollector-2:ctx-c7f1171e) VmStatsCollector is running...
2015-09-25 08:35:58,493 DEBUG [c.c.a.t.Request]
(StatsCollector-2:ctx-c7f1171e) Seq 1-2274317811822224502: Received: {
Ans: , MgmtId: 191612618435011, via: 1, Ver: v1, Flags: 10, {
GetVmStatsAnswer } }
2015-09-25 08:35:58,505 DEBUG [c.c.s.StatsCollector]
(StatsCollector-4:ctx-499745dc) StorageCollector is running...
2015-09-25 08:35:58,553 DEBUG [c.c.a.t.Request]
(StatsCollector-4:ctx-499745dc) Seq 5-87538717757044704: Received: {
Ans: , MgmtId: 191612618435011, via: 5, Ver: v1, Flags: 10, {
GetStorageStatsAnswer } }
2015-09-25 08:35:58,581 DEBUG [c.c.a.t.Request]
(StatsCollector-4:ctx-499745dc) Seq 1-2274317811822224503: Received: {
Ans: , MgmtId: 191612618435011, via: 1, Ver: v1, Flags: 10, {
GetStorageStatsAnswer } }
2015-09-25 08:35:58,667 DEBUG [c.c.a.t.Request]
(StatsCollector-4:ctx-499745dc) Seq 1-2274317811822224504: Received: {
Ans: , MgmtId: 191612618435011, via: 1, Ver: v1, Flags: 10, {
GetStorageStatsAnswer } }
-------------------------
FROM: Vadim Kimlaychuk <[email protected]>
TO: Cloudstack Users <[email protected]>
CC: Luis <[email protected]>
SENT: Friday, September 25, 2015 3:08 AM
SUBJECT: Re: Expunge
Luis,
Do you use a shared mount point with your KVM hypervisor or a local
NFS server? I have tested "Expunge" on my CS 4.5.3, and when I delete a VM
with the "expunge" option, CloudStack deletes it from the storage
immediately. Can you do the same test and look into the management log file?
It would be nice to see a trace of what is going on. Or do you have a
problem only with delayed "expunge"?
Vadim.
On 2015-09-24 21:04, Luis wrote:
Hi
Thank you for your answer.
I am using CS 4.5.2 with KVM on RAID 5. After a month the space was
still not freed up; previously I changed the expunge settings to 100.
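(The settings usually meant here are the expunge.delay and expunge.interval globals. As a hedged sketch of my own, not taken from this thread, you could read their current values over the API; the endpoint URL and keys below are placeholders for your own:)

    # Sketch: read expunge.delay / expunge.interval via the CloudStack API.
    # Assumes a management server at localhost and your own API/secret keys.
    import base64
    import hashlib
    import hmac
    import urllib.parse

    import requests

    API_URL = "http://localhost:8080/client/api"  # placeholder endpoint
    API_KEY = "YOUR-API-KEY"                      # placeholder
    SECRET_KEY = "YOUR-SECRET-KEY"                # placeholder

    def signed_request(params):
        """Sign a request: HMAC-SHA1 over the sorted, lower-cased query string."""
        params = dict(params, apikey=API_KEY, response="json")
        query = "&".join(
            f"{k}={urllib.parse.quote(str(v), safe='')}"
            for k, v in sorted(params.items())
        )
        digest = hmac.new(SECRET_KEY.encode(), query.lower().encode(),
                          hashlib.sha1).digest()
        params["signature"] = base64.b64encode(digest).decode()
        return requests.get(API_URL, params=params, timeout=30).json()

    for name in ("expunge.delay", "expunge.interval"):
        resp = signed_request({"command": "listConfigurations", "name": name})
        for cfg in resp.get("listconfigurationsresponse", {}).get("configuration", []):
            print(cfg["name"], "=", cfg.get("value"))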
-------------------------
FROM: Vadim Kimlaychuk <[email protected]>
TO: Luis <[email protected]>
CC: [email protected]
SENT: Thursday, September 24, 2015 1:33 PM
SUBJECT: Re: Expunge
Hello Luis,
If I remember correctly, you don't get the space freed immediately;
there is a background process that cleans up. You will probably get a
better response if you write down which version of CS, which hypervisor,
and which storage type you are using.
Regards,
Vadim
On 2015-09-24 14:25, Luis wrote:
Hi
I noticed that when I expunge a VM the hard disk space is not returned;
usage stays the same, and I had to delete it manually from the RAID. Is
this a bug, or am I missing a configuration?