Ok so a couple things I've noted.
[root@Flex-Xen5 5]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 4.0G 3.2G 648M 84% /
none 1.9G 132K 1.9G 1% /dev/shm
/opt/xensource/packages/iso/XenCenter.iso
56M 56M 0 100% /var/xen/xc-install
10.90.2.51:/ha/c8dc1be9-a9f8-6592-bd03-ee3a7a59dea7
14G 2.6G 11G 20%
/var/run/sr-mount/c8dc1be9-a9f8-6592-bd03-ee3a7a59dea7
secstor.flexhost.local:/mnt/Volume_0/NFS/
11T 3.9T 6.8T 37%
/var/cloud_mount/87156045-e430-3fe3-aa4b-3d41c1af8df2
I see my secstor mount but sr-mount just shows the HA mount not my primary
storage luns?
Results from the DB query show storage pools uuid and the cloustack names to
be correct.
SELECT id,name,uuid FROM cloud.storage_pool;
'5', 'RSFD-P01-C01-PRI3', 'FlexSAN2-LUN0'
'6', 'RSFD-P01-C01-PRI4', 'FlexSAN2-LUN1'
'7', 'RSFD-P01-C01-PRI2', 'FlexSAN1-LUN1'
'8', 'RSFD-P01-C01-PRI1', 'FlexSAN1-LUN0'
Where as the
xe sr-list params=uuid,name-label
uuid ( RO) : befd4536-fdf1-6ab6-0adb-19ae532e0ee8
name-label ( RW): FlexSAN1-LUN1
...
uuid ( RO) : 469b6dcd-8466-3d03-de0e-cc3983e1b6e2
name-label ( RW): FlexSAN2-LUN1
...
uuid ( RO) : 94d4494c-1317-4ffc-f0e6-a9210b0a0daf
name-label ( RW): FlexSAN2-LUN0
...
uuid ( RO) : 2a00a50b-764b-ce7f-589c-c67b353957da
name-label ( RW): FlexSAN1-LUN0
Show the uuid's are names same as above. So that tells me that cloudstack and
xenserver know of the storage.
I can launch VM's from secondary storage to primary find as noted below.
I can deploy VM's from templates and ISO's so that tells me I can access
secondary storage.
This all goes back to my job call and that the "path" is saying uuid
d4085d91-22fa-4965-bfa7-d1a1800f6aa7 and apparently that is invalid. I still
have no idea what that uuid is. I was thinking primary storage uuid? I was
thinking template uuid? I've rebooted all the hosts. I've rebooted the
management server. I've validated secondary storage is mounted. I've applied
updates to xenserver. I've restarted cloudstack-management. I've deployed from
template. I've deployed from iso. Everything looks clean except those stupid
system vm's keep recycling. And all they ever do us complain uuid invalid.
And then InsufficentServerCapacityException
2017-06-26 08:14:19,231 DEBUG [c.c.a.t.Request]
(Work-Job-Executor-18:ctx-302e3fd5 job-1042/job-183149 ctx-835c48e1) Seq
1-6981705322331213134: Sending { Cmd , MgmtId: 345050411715, via:
1(Flex-Xen2.flexhost.local), Ver: v1, Flags: 100111,
[{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":
{"path":"d4085d91-22fa-4965-bfa7-d1a1800f6aa7"
,"origUrl":"http://download.cloud.com/templates/4.5/systemvm64template-4.5-xen.vhd.bz2","uuid":"8a4039f2-bb71-11e4-8c76-0050569b1662","id":1,"format":"VHD","accountId":1,"checksum":"2b15ab4401c2d655264732d3fc600241","hvm":false,"displayText":"SystemVM
Template
(XenServer)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"FlexSAN1-LUN0","id":8,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN1-LUN0","port":0,"url":"PreSetup://localhost/FlexSAN1-LUN0/?ROLE=Primary&STOREUUID=FlexSAN1-LUN0"}},"name":"routing-1","hypervisorType":"XenServer"}},"destTO":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"6547906c-c0c6-408f-9d9a-44d8250305b4","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"FlexSAN1-LUN0","id":8,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN1-LUN0","port":0,"url":"PreSetup://localhost/FlexSAN1-LUN0/?ROLE=Primary&STOREUUID=FlexSAN1-LUN0"}},"name":"ROOT-29510","size":2689602048,"volumeId":34429,"vmName":"s-29510-VM","accountId":1,"format":"VHD","provisioningType":"THIN","id":34429,"deviceId":0,"hypervisorType":"XenServer"}},"executeInSequence":true,"options":{},"wait":0}}]
}
2017-06-26 08:14:19,231 DEBUG [c.c.a.t.Request]
(Work-Job-Executor-18:ctx-302e3fd5 job-1042/job-183149 ctx-835c48e1) Seq
1-6981705322331213134: Executing: { Cmd , MgmtId: 345050411715, via:
1(Flex-Xen2.flexhost.local), Ver: v1, Flags: 100111,
[{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"d4085d91-22fa-4965-bfa7-d1a1800f6aa7","origUrl":"http://download.cloud.com/templates/4.5/systemvm64template-4.5-xen.vhd.bz2","uuid":"8a4039f2-bb71-11e4-8c76-0050569b1662","id":1,"format":"VHD","accountId":1,"checksum":"2b15ab4401c2d655264732d3fc600241","hvm":false,"displayText":"SystemVM
Template
(XenServer)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"FlexSAN1-LUN0","id":8,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN1-LUN0","port":0,"url":"PreSetup://localhost/FlexSAN1-LUN0/?ROLE=Primary&STOREUUID=FlexSAN1-LUN0"}},"name":"routing-1","hypervisorType":"XenServer"}},"destTO":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"6547906c-c0c6-408f-9d9a-44d8250305b4","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"FlexSAN1-LUN0","id":8,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN1-LUN0","port":0,"url":"PreSetup://localhost/FlexSAN1-LUN0/?ROLE=Primary&STOREUUID=FlexSAN1-LUN0"}},"name":"ROOT-29510","size":2689602048,"volumeId":34429,"vmName":"s-29510-VM","accountId":1,"format":"VHD","provisioningType":"THIN","id":34429,"deviceId":0,"hypervisorType":"XenServer"}},"executeInSequence":true,"options":{},"wait":0}}]
}
2017-06-26 08:14:19,231 DEBUG [c.c.a.m.DirectAgentAttache]
(DirectAgent-310:ctx-b7042ce7) Seq 1-6981705322331213134: Executing request
2017-06-26 08:14:19,236 DEBUG [c.c.h.x.r.XenServerStorageProcessor]
(DirectAgent-310:ctx-b7042ce7) Catch Exception
com.xensource.xenapi.Types$UuidInvalid :VDI getByUuid for uuid:
d4085d91-22fa-4965-bfa7-d1a1800f6aa7 failed due to The uuid you supplied was
invalid.
2017-06-26 08:14:19,236 WARN [c.c.h.x.r.XenServerStorageProcessor]
(DirectAgent-310:ctx-b7042ce7) Unable to create volume;
Pool=volumeTO[uuid=6547906c-c0c6-408f-9d9a-44d8250305b4|path=null|datastore=PrimaryDataStoreTO[uuid=FlexSAN1-LUN0|name=null|id=8|pooltype=PreSetup]];
Disk:
com.cloud.utils.exception.CloudRuntimeException: Catch Exception
com.xensource.xenapi.Types$UuidInvalid :VDI getByUuid for uuid:
d4085d91-22fa-4965-bfa7-d1a1800f6aa7 failed due to The uuid you supplied was
invalid.
2017-06-26 08:14:19,354 ERROR [c.c.v.VmWorkJobDispatcher]
(Work-Job-Executor-30:ctx-00f1af56 job-342/job-183150) Unable to complete
AsyncJobVO {id:183150, userId: 1, accountId: 1, instanceType: null, instanceId:
null, cmd: com.cloud.vm.VmWorkStart, cmdInfo:
rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAABAAAAAAAAAAEAAAAAAABXi3QAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAHBwcHBwcHBwcHA,
cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result:
null, initMsid: 345050411715, completeMsid: null, lastUpdated: null,
lastPolled: null, created: Mon Jun 26 08:14:15 CDT 2017}, job origin:342
com.cloud.exception.InsufficientServerCapacityException: Unable to create a
deployment for VM[ConsoleProxy|v-22411-VM]Scope=interface
com.cloud.dc.DataCenter; id=1
So that makes me look at the management-server.log for a job lets find one
quickly.
Cat /var/log/cloudstack/management/management-server.log | grep ctx-302e3fd5
And I see the following.
2017-06-26 08:14:19,370 DEBUG [o.a.c.s.a.ClusterScopeStoragePoolAllocator]
(Work-Job-Executor-18:ctx-302e3fd5 job-1042/job-183149 ctx-835c48e1) Found
pools matching tags: [Pool[5|PreSetup], Pool[6|PreSetup], Pool[7|PreSetup],
Pool[8|PreSetup]]
2017-06-26 08:14:19,371 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator]
(Work-Job-Executor-18:ctx-302e3fd5 job-1042/job-183149 ctx-835c48e1) Checking
if storage pool is suitable, name: null ,poolId: 5
2017-06-26 08:14:19,371 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator]
(Work-Job-Executor-18:ctx-302e3fd5 job-1042/job-183149 ctx-835c48e1)
StoragePool is in avoid set, skipping this pool
2017-06-26 08:14:19,372 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator]
(Work-Job-Executor-18:ctx-302e3fd5 job-1042/job-183149 ctx-835c48e1) Checking
if storage pool is suitable, name: null ,poolId: 6
2017-06-26 08:14:19,372 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator]
(Work-Job-Executor-18:ctx-302e3fd5 job-1042/job-183149 ctx-835c48e1)
StoragePool is in avoid set, skipping this pool
2017-06-26 08:14:19,373 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator]
(Work-Job-Executor-18:ctx-302e3fd5 job-1042/job-183149 ctx-835c48e1) Checking
if storage pool is suitable, name: null ,poolId: 7
2017-06-26 08:14:19,373 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator]
(Work-Job-Executor-18:ctx-302e3fd5 job-1042/job-183149 ctx-835c48e1)
StoragePool is in avoid set, skipping this pool
2017-06-26 08:14:19,373 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator]
(Work-Job-Executor-18:ctx-302e3fd5 job-1042/job-183149 ctx-835c48e1) Checking
if storage pool is suitable, name: null ,poolId: 8
2017-06-26 08:14:19,373 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator]
(Work-Job-Executor-18:ctx-302e3fd5 job-1042/job-183149 ctx-835c48e1)
StoragePool is in avoid set, skipping this pool
2017-06-26 08:14:19,373 DEBUG [o.a.c.s.a.ClusterScopeStoragePoolAllocator]
(Work-Job-Executor-18:ctx-302e3fd5 job-1042/job-183149 ctx-835c48e1)
ClusterScopeStoragePoolAllocator returning 0 suitable storage pools
It again sees my PreSetup for 5 6 7 8 same as my LUN information from the DB
But why does it say avoid set?
Is that my problem? SystemVM's are checking avoid set and then saying nope
nope nope and throwing insufficient resources?
I think I'm right here.
Any idea where the avoid set option is and how to clear that? Or reset it to
no for 5 6 7 8?
Jeremy
-----Original Message-----
From: Jeremy Peterson [mailto:[email protected]]
Sent: Sunday, June 25, 2017 10:56 PM
To: [email protected]
Subject: Re: Recreating SystemVM's
if i am ssh'ed into the xenservers and i do a df -h i do not see those folders
mounted but again i'll check when i get in the morning.
Jeremy
________________________________________
From: Jeremy Peterson <[email protected]>
Sent: Sunday, June 25, 2017 10:15 PM
To: [email protected]
Subject: Re: Recreating SystemVM's
Thank you for the exact information provided I'll check this out in the morning.
Jeremy
Sent from my Verizon, Samsung Galaxy smartphone
-------- Original message --------
From: Dag Sonstebo <[email protected]>
Date: 6/25/17 5:47 PM (GMT-06:00)
To: [email protected]
Subject: Re: Recreating SystemVM's
Mount points are in /var/run/sr-mount/.
Find your primary pool name-labels with "SELECT id,name,uuid FROM
cloud.storage_pool;"
Match the pool name label to XenServer mount point with "xe sr-list
params=uuid,name-label" on the xenservers.
>From that you should find /var/run/sr-mount/<xe provided UUID of storage pool
>here>/
Under this path you should find the system VM template entries - which are the
same as the "local_path" from template_spool_ref.
Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue
On 25/06/2017, 22:17, "Jeremy Peterson" <[email protected]> wrote:
I'm having issues looking for primary storage mount in xenserver. I see the
lv but no mount points to navigate to discover files.
Also I didn't see actual pathes in that db query. Did you?
Sent from my Verizon, Samsung Galaxy smartphone
-------- Original message --------
From: Dag Sonstebo <[email protected]>
Date: 6/25/17 4:30 AM (GMT-06:00)
To: [email protected]
Subject: Re: Recreating SystemVM's
As per previous email - you need to check that the path you have for your
system templates in template_spool_ref exists on your primary storage. You have
admitted primary storage was tidied up ungracefully so you are trying to work
out if the templates are actually still on primary storage like your DB thinks
they are.
Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue
On 23/06/2017, 20:09, "Jeremy Peterson" <[email protected]> wrote:
Ok so Primary storage.
Since I am able to deploy new VM's from ISO and Template that means
access to the secondary storage is good.
Ok so my XenServers show my Storage LUN's have no failures.
Select * from cloud.template_store_ref;
https://drive.google.com/open?id=0B5IXhrpPAT9qRVRGVmY3TkR3ZGM
Now what gets me is the last_updated is the same day that my problems
started. When my storage PIF was calling errors. I lost network connectivity
to my iscsi primary storage and all of my VM's dropped and came back online.
What should I validate in this query as being correct because all of
the instance VM's came back (except the 1 that we talked about yesterday) but
my system vm's are still down.
Jeremy
-----Original Message-----
From: Dag Sonstebo [mailto:[email protected]]
Sent: Thursday, June 22, 2017 11:50 AM
To: [email protected]
Subject: Re: Recreating SystemVM's
In short there's no reason for CloudStack to download the system VM
template from secondary to primary again if it's there and working - hence your
2015 dates. Template_spool_ref shows the download state to primary,
template_store_ref shows status to secondary.
You can access your primary storage directly from command line on your
XenServers - so can you check all those paths on your primary storage pools?
Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue
On 22/06/2017, 17:22, "Jeremy Peterson" <[email protected]> wrote:
1. I downloaded the system template because my system
template vm's were not launching so I downloaded it thinking something might be
off.
2. '1', 'routing-1', 'SystemVM Template (XenServer)',
'8a4039f2-bb71-11e4-8c76-0050569b1662', '0', '0', 'SYSTEM', '0', '64',
'http://download.cloud.com/templates/4.5/systemvm64template-4.5-xen.vhd.bz2',
'VHD', '2015-02-23 09:35:05', NULL, '1', '2b15ab4401c2d655264732d3fc600241',
'SystemVM Template (XenServer)', '0', '0', '184', '1', '0', '1', '0',
'XenServer', NULL, NULL, '0', '2689602048', 'Active', '0', NULL, '0'
a. Ok so that tells me that my template id is 1
3.
a. '53', '5', '1', '2015-04-13 12:50:37', NULL, NULL, '100',
'DOWNLOADED', NULL, 'ab6f3bcd-4c3e-4a7a-9f8b-45a822dbaaaf',
'ab6f3bcd-4c3e-4a7a-9f8b-45a822dbaaaf', '0', '0', 'Ready', '2', '2015-04-13
13:16:11'
b. '52', '6', '1', '2015-04-13 12:50:31', NULL, NULL, '100',
'DOWNLOADED', NULL, 'bed64043-2208-415c-ad32-02ffeb4802d7',
'bed64043-2208-415c-ad32-02ffeb4802d7', '0', '0', 'Ready', '2', '2015-04-13
13:14:51'
c. '57', '7', '1', '2015-04-21 22:21:44', NULL, NULL, '100',
'DOWNLOADED', NULL, 'f2bbd9ea-3237-4119-8c03-8c0c570d153b',
'f2bbd9ea-3237-4119-8c03-8c0c570d153b', '0', '0', 'Ready', '2', '2015-04-21
22:22:40'
d. '86', '8', '1', '2015-06-25 18:52:57', NULL, NULL, '100',
'DOWNLOADED', NULL, 'd4085d91-22fa-4965-bfa7-d1a1800f6aa7',
'd4085d91-22fa-4965-bfa7-d1a1800f6aa7', '0', '0', 'Ready', '2', '2015-06-25
18:54:05'
i.
This is from select * from cloud.template_spool_ref where template_id=1;
ii.
In my logs I can see the third template.
2017-06-22 09:23:49,103 DEBUG [c.c.a.t.Request]
(Work-Job-Executor-117:ctx-03e815cd job-1042/job-154364 ctx-0c08652e) Seq
18-3646226848309841098: Sending { Cmd , MgmtId: 345050411715, via:
18(Flex-Xen5.flexhost.local), Ver: v1, Flags: 100111,
[{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"f2bbd9ea-3237-4119-8c03-8c0c570d153b","origUrl":"http://download.cloud.com/templates/4.5/systemvm64template-4.5-xen.vhd.bz2","uuid":"8a4039f2-bb71-11e4-8c76-0050569b1662","id":1,"format":"VHD","accountId":1,"checksum":"2b15ab4401c2d655264732d3fc600241","hvm":false,"displayText":"SystemVM
Template
(XenServer)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"FlexSAN1-LUN1","id":7,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN1-LUN1","port":0,"url":"PreSetup://localhost/FlexSAN1-LUN1/?ROLE=Primary&STOREUUID=FlexSAN1-LUN1"}},"name":"routing-1","hypervisorType":"XenServer"}},"destTO":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"7726194f-821c-4df2-90cf-f61ac06a362d","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"FlexSAN1-LUN1","id":7,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN1-LUN1","port":0,"url":"PreSetup://localhost/FlexSAN1-LUN1/?ROLE=Primary&STOREUUID=FlexSAN1-LUN1"}},"name":"ROOT-23819","size":2689602048,"volumeId":28738,"vmName":"s-23819-VM","accountId":1,"format":"VHD","provisioningType":"THIN","id":28738,"deviceId":0,"hypervisorType":"XenServer"}},"executeInSequence":true,"options":{},"wait":0}}]
}
2017-06-22 09:23:49,103 DEBUG [c.c.a.t.Request]
(Work-Job-Executor-117:ctx-03e815cd job-1042/job-154364 ctx-0c08652e) Seq
18-3646226848309841098: Executing: { Cmd , MgmtId: 345050411715, via:
18(Flex-Xen5.flexhost.local), Ver: v1, Flags: 100111,
[{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"f2bbd9ea-3237-4119-8c03-8c0c570d153b","origUrl":"http://download.cloud.com/templates/4.5/systemvm64template-4.5-xen.vhd.bz2","uuid":"8a4039f2-bb71-11e4-8c76-0050569b1662","id":1,"format":"VHD","accountId":1,"checksum":"2b15ab4401c2d655264732d3fc600241","hvm":false,"displayText":"SystemVM
Template
(XenServer)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"FlexSAN1-LUN1","id":7,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN1-LUN1","port":0,"url":"PreSetup://localhost/FlexSAN1-LUN1/?ROLE=Primary&STOREUUID=FlexSAN1-LUN1"}},"name":"routing-1","hypervisorType":"XenServer"}},"destTO":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"7726194f-821c-4df2-90cf-f61ac06a362d","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"FlexSAN1-LUN1","id":7,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN1-LUN1","port":0,"url":"PreSetup://localhost/FlexSAN1-LUN1/?ROLE=Primary&STOREUUID=FlexSAN1-LUN1"}},"name":"ROOT-23819","size":2689602048,"volumeId":28738,"vmName":"s-23819-VM","accountId":1,"format":"VHD","provisioningType":"THIN","id":28738,"deviceId":0,"hypervisorType":"XenServer"}},"executeInSequence":true,"options":{},"wait":0}}]
}
2017-06-22 09:23:49,104 DEBUG [c.c.a.m.DirectAgentAttache]
(DirectAgent-124:ctx-ab245ccb) Seq 18-3646226848309841098: Executing request
2017-06-22 09:23:49,109 DEBUG [c.c.h.x.r.XenServerStorageProcessor]
(DirectAgent-124:ctx-ab245ccb) Catch Exception
com.xensource.xenapi.Types$UuidInvalid :VDI getByUuid for uuid:
f2bbd9ea-3237-4119-8c03-8c0c570d153b failed due to The uuid you supplied was
invalid.
2017-06-22 09:23:49,110 WARN [c.c.h.x.r.XenServerStorageProcessor]
(DirectAgent-124:ctx-ab245ccb) Unable to create volume;
Pool=volumeTO[uuid=7726194f-821c-4df2-90cf-f61ac06a362d|path=null|datastore=PrimaryDataStoreTO[uuid=FlexSAN1-LUN1|name=null|id=7|pooltype=PreSetup]];
Disk:
com.cloud.utils.exception.CloudRuntimeException: Catch Exception
com.xensource.xenapi.Types$UuidInvalid :VDI getByUuid for uuid:
f2bbd9ea-3237-4119-8c03-8c0c570d153b failed due to The uuid you supplied was
invalid.
And again I see another deployment of a vm
2017-06-22 09:23:49,918 DEBUG [c.c.a.t.Request]
(Work-Job-Executor-125:ctx-7dda0875 job-342/job-154365 ctx-79440317) Seq
19-2522578741280910778: Sending { Cmd , MgmtId: 345050411715, via:
19(Flex-Xen1.flexhost.local), Ver: v1, Flags: 100111,
[{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"d4085d91-22fa-4965-bfa7-d1a1800f6aa7","origUrl":"http://download.cloud.com/templates/4.5/systemvm64template-4.5-xen.vhd.bz2","uuid":"8a4039f2-bb71-11e4-8c76-0050569b1662","id":1,"format":"VHD","accountId":1,"checksum":"2b15ab4401c2d655264732d3fc600241","hvm":false,"displayText":"SystemVM
Template
(XenServer)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"FlexSAN1-LUN0","id":8,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN1-LUN0","port":0,"url":"PreSetup://localhost/FlexSAN1-LUN0/?ROLE=Primary&STOREUUID=FlexSAN1-LUN0"}},"name":"routing-1","hypervisorType":"XenServer"}},"destTO":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"a2456229-2942-4d9b-9bff-c6d9ea004fbd","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"FlexSAN1-LUN0","id":8,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN1-LUN0","port":0,"url":"PreSetup://localhost/FlexSAN1-LUN0/?ROLE=Primary&STOREUUID=FlexSAN1-LUN0"}},"name":"ROOT-22411","size":2689602048,"volumeId":27330,"vmName":"v-22411-VM","accountId":1,"format":"VHD","provisioningType":"THIN","id":27330,"deviceId":0,"hypervisorType":"XenServer"}},"executeInSequence":true,"options":{},"wait":0}}]
}
2017-06-22 09:23:49,918 DEBUG [c.c.a.t.Request]
(Work-Job-Executor-125:ctx-7dda0875 job-342/job-154365 ctx-79440317) Seq
19-2522578741280910778: Executing: { Cmd , MgmtId: 345050411715, via:
19(Flex-Xen1.flexhost.local), Ver: v1, Flags: 100111,
[{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"d4085d91-22fa-4965-bfa7-d1a1800f6aa7","origUrl":"http://download.cloud.com/templates/4.5/systemvm64template-4.5-xen.vhd.bz2","uuid":"8a4039f2-bb71-11e4-8c76-0050569b1662","id":1,"format":"VHD","accountId":1,"checksum":"2b15ab4401c2d655264732d3fc600241","hvm":false,"displayText":"SystemVM
Template
(XenServer)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"FlexSAN1-LUN0","id":8,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN1-LUN0","port":0,"url":"PreSetup://localhost/FlexSAN1-LUN0/?ROLE=Primary&STOREUUID=FlexSAN1-LUN0"}},"name":"routing-1","hypervisorType":"XenServer"}},"destTO":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"a2456229-2942-4d9b-9bff-c6d9ea004fbd","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"FlexSAN1-LUN0","id":8,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN1-LUN0","port":0,"url":"PreSetup://localhost/FlexSAN1-LUN0/?ROLE=Primary&STOREUUID=FlexSAN1-LUN0"}},"name":"ROOT-22411","size":2689602048,"volumeId":27330,"vmName":"v-22411-VM","accountId":1,"format":"VHD","provisioningType":"THIN","id":27330,"deviceId":0,"hypervisorType":"XenServer"}},"executeInSequence":true,"options":{},"wait":0}}]
}
2017-06-22 09:23:49,918 DEBUG [c.c.a.m.DirectAgentAttache]
(DirectAgent-4:ctx-8db8a7ec) Seq 19-2522578741280910778: Executing request
2017-06-22 09:23:49,925 DEBUG [c.c.h.x.r.XenServerStorageProcessor]
(DirectAgent-4:ctx-8db8a7ec) Catch Exception
com.xensource.xenapi.Types$UuidInvalid :VDI getByUuid for uuid:
d4085d91-22fa-4965-bfa7-d1a1800f6aa7 failed due to The uuid you supplied was
invalid.
2017-06-22 09:23:49,925 WARN [c.c.h.x.r.XenServerStorageProcessor]
(DirectAgent-4:ctx-8db8a7ec) Unable to create volume;
Pool=volumeTO[uuid=a2456229-2942-4d9b-9bff-c6d9ea004fbd|path=null|datastore=PrimaryDataStoreTO[uuid=FlexSAN1-LUN0|name=null|id=8|pooltype=PreSetup]];
Disk:
com.cloud.utils.exception.CloudRuntimeException: Catch Exception
com.xensource.xenapi.Types$UuidInvalid :VDI getByUuid for uuid:
d4085d91-22fa-4965-bfa7-d1a1800f6aa7 failed due to The uuid you supplied was
invalid.
4. Download_state shows DOWNLOADED and download_pct is 100 on
all of my id's
5. My storage is firmware based dell 3200 so there is no OS
to log into to view the install_path. How do I validate that?
Now is it weird that my systemvm install date is 2015 but yet the
command I used above completed successfully?
Jeremy
-----Original Message-----
From: Dag Sonstebo [mailto:[email protected]]
Sent: Thursday, June 22, 2017 9:58 AM
To: [email protected]
Subject: Re: Recreating SystemVM's
1) You've not told us why you chose to redownload the system VM
template - can you elaborate?
2) Can you run: "SELECT * FROM cloud.vm_template where name like
'%system%' and hypervisor_type='XenServer';"
3) "So after cleaning up the 5TB worth of data (last Friday).." -
did you check all your disk chains to ensure you didn't wipe a base disk? If
not then chances are you wiped a template disk CloudStack now thinks is there.
Check this in template_spool_ref - work out from point 2) above
what your template ID is, as well as what your primary storage pool ID is,
something like this: "SELECT * FROM cloud.template_spool_ref where
template_id=XYZ and pool_id=12345;"
What is the downloaded state?
Check the install_path on your primary storage - does it exist?
Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue
On 22/06/2017, 15:14, "Jeremy Peterson"
<[email protected]<mailto:[email protected]>> wrote:
Sorry I am using 4.5.0 I mistyped my versions.
http://prntscr.com/fmuluj
My router.template.xenserver shows
SystemVM Template (XenServer)
If I go Templates -> SystemVM Template (XenServer) the Status
is Download Complete and Read shows Yes
This is the command I ran on my mananagement server to
redownload systemvm template.
/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt
-m /secondary -u
http://cloudstack.apt-get.eu/systemvm/4.5/systemvm64template-4.5-xen.vhd.bz2 -h
xenserver -F
i-153-446-VM was a working VM that powered off during this
whole set of problems and had not been able to power back online since. The
oddity is I get the error visually "insufficient capacity" yes during the call
to start that VM it finds a valid host for Memory and CPU but then errors with
UUID is invalid. I am more focused on SSVM and ConsoleProxyVM not starting at
this time. As I have replaced i-153-446 with a new VM. Now this is puzzling.
I can launch new VM's for ISO's and templates.
Display name test-launch-from-template
Name test-launch-from-template
State Running
Template CentOS 7 40GB
Dynamically Scalable Yes
OS Type CentOS 7
Hypervisor XenServer
Attached ISO
Compute offering 2vCPU,4GB RAM,HA
# of CPU Cores 2
CPU (in MHz) 2000
Memory (in MB) 4096
VGPU
HA Enabled Yes
Group
Zone name Rushford
Host Flex-Xen4.flexhost.local
Domain ROOT
Account admin
Created 22 Jun 2017 08:33:30
I suspected storage a while back and noticed that the SSVM was
recreating its 2.5GB disk over and over and over on all of my storage luns. So
after cleaning up the 5TB worth of data (last Friday) I don't see a storage
issue with my SAN iscsi connections.
http://prntscr.com/fmus2i
Again thanks Dag for your response here's to hoping some of
that helps track down what's broke.
Whats killing me is the amount of logs. It seems like its
creating multiple system vm's at the same time
Jeremy
-----Original Message-----
From: Dag Sonstebo [mailto:[email protected]]
Sent: Thursday, June 22, 2017 6:49 AM
To:
[email protected]<mailto:[email protected]>
Subject: Re: Recreating SystemVM's
OK, you seem to have a handful of issues here.
1) You have stated at the start of this thread you are using
CloudStack 4.9.0 and XS6.5.
In this log dump - https://pastebin.com/2DhzFVDZ - all your
downloads are for 4.5 system VM templates, e.g.
2017-06-21 10:46:16,440 DEBUG [c.c.a.t.Request]
(Work-Job-Executor-39:ctx-ca13c13b job-1042/job-147411 ctx-ebfa1fb6) Seq
15-7914231920173516250: Executing: { Cmd , MgmtId: 345050411715, via:
15(Flex-Xen3.flexhost.local), Ver: v1, Flags: 100111,
[{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"ab6f3bcd-4c3e-4a7a-9f8b-45a822dbaaaf","origUrl":"http://download.cloud.com/templates/4.5/systemvm64template-4.5-xen.vhd.bz2","uuid"
.
Your MySQL query confirms this:
- - -
SELECT * FROM cloud.vm_template where type='SYSTEM';
1 routing-1 SystemVM
Template (XenServer) 8a4039f2-bb71-11e4-8c76-0050569b1662
0 0 SYSTEM 0 64
http://download.cloud.com/templates/4.5/systemvm64template-4.5-xen.vhd.bz2
VHD 2015-02-23 09:35:05 1
2b15ab4401c2d655264732d3fc600241 SystemVM Template (XenServer) 0
0 184 1 0 1
0 XenServer 0
2689602048 Active 0 0
3 routing-3 SystemVM
Template (KVM) 8a46062a-bb71-11e4-8c76-0050569b1662 0
0 SYSTEM 0 64
http://download.cloud.com/templates/4.5/systemvm64template-4.5-kvm.qcow2.bz2
QCOW2 2015-02-23 09:35:05 1
aa9f501fecd3de1daeb9e2f357f6f002 SystemVM Template (KVM)
0 0 15 1 0 1
0 KVM 0
Active 0 0
8 routing-8 SystemVM
Template (vSphere) 8a4e70c6-bb71-11e4-8c76-0050569b1662
0 0 SYSTEM 0 64
http://download.cloud.com/templates/4.5/systemvm64template-4.5-vmware.ova
OVA 2015-02-23 09:35:05 1
3106a79a4ce66cd7f6a7c50e93f2db57 SystemVM Template (vSphere)
0 0 15 1 0 1
0 VMware 0
Active 0 1
9 routing-9 SystemVM
Template (HyperV) 8a5184e6-bb71-11e4-8c76-0050569b1662 0
0 SYSTEM 0 64
http://download.cloud.com/templates/4.5/systemvm64template-4.5-hyperv.vhd.zip
VHD 2015-02-23 09:35:05 1
70bd30ea02ee9ed67d2c6b85c179cee9 SystemVM Template (HyperV) 0
0 15 1 0 1
0 Hyperv 0
Active 0 0
10 routing-10 SystemVM Template
(LXC) 5bb9e71c-bb72-11e4-8c76-0050569b1662 0
0 SYSTEM 0 64
http://download.cloud.com/templates/4.5/systemvm64template-4.5-kvm.qcow2.bz2
QCOW2 2015-02-23 09:40:56 1
aa9f501fecd3de1daeb9e2f357f6f002 SystemVM Template (LXC)
0 0 15 1 0 1
0 LXC 0
Active 0 0
- - -
In addition you have also stated "I redeployed
systemcl64template-5.6-xen.vhd.bz2 last week does that not recreated the uuid ?"
So the questions here are:
- why are you using 4.5 templates with 4.9? Did you recently
upgrade or was this put in wrong to start off with?
- what are you trying to do with
"systemcl64template-5.6-xen.vhd.bz2"? My guess is this is a typo? If you were
trying to install the 4.6 template what process did you follow?
- following on from this can you do a MySQL query listing the
uploaded template? Can you also check what the status is of this in your GUI -
is it uploaded to the zone in question and in a READY state? You can also check
this in the template_store_ref table.
- what is your global setting for "router.template.xenserver"
currently set to?
I get the impression your environment is possibly managing to
limp along using 4.5 system VM templates - if so I'm surprised if anything is
working. For 4.9 you should be using 4.6 templates (e.g.
http://packages.shapeblue.com.s3-eu-west-1.amazonaws.com/systemvmtemplate/4.6/new/systemvm64template-4.6-xen.vhd.bz2
) - although I think maybe this is what you are trying to achieve?
2) VM i-153-446 - as you can see from the logs there's not a
log to go by: "Unable to start i-153-446-VM due to ". However - you haven't
told us if this is a new VM or existing? If it's new it won't necessarily be
able to start until you have the SSVM sorted in your Rushford zone. For further
troubleshooting you should also check the logs on the XS host where this VM was
trying to start.
3) Your issues could be storage related - do all SRs (like
FlexSAN2-LUN0) show as connected to your XS hosts in XenCenter? If not can you
repair them from XenCenter?
Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue
On 21/06/2017, 19:39, "Jeremy Peterson"
<[email protected]<mailto:[email protected]>> wrote:
And combing through the logs I see that one of my VM's is
trying to launch i-153-446 its passed all the cpu and memory checks and found
primary storage but then when it goes to deploy. I am getting a catch
exception insufficient storage.
2017-06-21 13:10:19,219 DEBUG
[c.c.h.x.r.CitrixResourceBase] (DirectAgent-252:ctx-717133c7) The VM is in
stopped state, detected problem during startup : i-153-446-VM
2017-06-21 13:10:19,219 DEBUG [c.c.a.m.DirectAgentAttache]
(DirectAgent-252:ctx-717133c7) Seq 19-2522578741280901601: Response Received:
2017-06-21 13:10:19,219 DEBUG [c.c.a.t.Request]
(DirectAgent-252:ctx-717133c7) Seq 19-2522578741280901601: Processing: { Ans:
, MgmtId: 345050411715, via: 19, Ver: v1, Flags: 10,
[{"com.cloud.agent.api.StartAnswer":{"vm":{"id":446,"name":"i-153-446-VM","bootloader":"PyGrub","type":"User","cpus":4,"minSpeed":500,"maxSpeed":2000,"minRam":3221225472,"maxRam":12884901888,"arch":"x86_64","os":"Windows
Server 2012 R2 (64-bit)","platformEmulator":"Windows Server 2012 R2
(64-bit)","bootArgs":"","enableHA":true,"limitCpuUse":false,"enableDynamicallyScaleVm":false,"vncPassword":"a9QSgiSW/+6iG3aaGfgaJw==","params":{"memoryOvercommitRatio":"4","platform":"viridian:true;acpi:1;apic:true;viridian_reference_tsc:true;viridian_time_ref_count:true;pae:true;videoram:8;device_id:0002;nx:true;vga:std","Message.ReservedCapacityFreed.Flag":"true","cpuOvercommitRatio":"4","hypervisortoolsversion":"xenserver61"},"uuid":"896c1d67-3f1d-4f0b-bd2c-1548cd637faf","disks":[{"data":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"69eba19d-6f5d-4f5b-94eb-69cf467ab25e","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"FlexSAN2-LUN0","id":5,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN2-LUN0","port":0,"url":"PreSetup://localhost/FlexSAN2-LUN0/?ROLE=Primary&STOREUUID=FlexSAN2-LUN0"}},"name":"ROOT-446","size":85899345920,"path":"8a64ab1c-3528-4b8a-bb9a-ef164c7f6385","volumeId":537,"vmName":"i-153-446-VM","accountId":153,"format":"VHD","provisioningType":"THIN","id":537,"deviceId":0,"hypervisorType":"XenServer"}},"diskSeq":0,"path":"8a64ab1c-3528-4b8a-bb9a-ef164c7f6385","type":"ROOT","_details":{"managed":"false","storagePort":"0","storageHost":"localhost","volumeSize":"85899345920"}},{"data":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"id":0,"format":"ISO","accountId":0,"hvm":false}},"diskSeq":3,"type":"ISO"}],"nics":[{"deviceId":0,"networkRateMbps":1000,"defaultNic":true,"pxeDisable":false,"nicUuid":"8ecdff5f-2e67-406c-8145-71a437b35ccb","uuid":"d6010ef5-ae7e-4b5a-a8fe-b33d701742dd","ip":"192.168.211.211","netmask":"255.255.255.0","gateway":"192.168.211.1","mac":"02:00:20:1c:00:05","dns1":"208.74.240.5","dns2":"208.74.247.245","broadcastType":"Vlan","type":"Guest","broadcastUri":"vlan://1611","isolationUri":"vlan://1611","isSecurityGroupEnabled":false,"name":"GUEST-PUB"}],"vcpuMaxLimit":16},"_iqnToPath":{},"result":false,"details":"Unable
to start i-153-446-VM due to ","wait":0}}] }
2017-06-21 13:10:19,219 DEBUG [c.c.a.t.Request]
(Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145 ctx-f018393f) Seq
19-2522578741280901601: Received: { Ans: , MgmtId: 345050411715, via: 19, Ver:
v1, Flags: 10, { StartAnswer } }
2017-06-21 13:10:19,230 INFO
[c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-100:ctx-e1276898
job-148144/job-148145 ctx-f018393f) Unable to start VM on Host[-19-Routing] due
to Unable to start i-153-446-VM due to
2017-06-21 13:10:19,237 DEBUG
[o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-95:ctx-0d080cef
job-1042/job-148142) Done executing com.cloud.vm.VmWorkStart for job-148142
2017-06-21 13:10:19,240 DEBUG
[o.a.c.f.j.i.SyncQueueManagerImpl] (Work-Job-Executor-95:ctx-0d080cef
job-1042/job-148142) Sync queue (128149) is currently empty
2017-06-21 13:10:19,240 INFO [o.a.c.f.j.i.AsyncJobMonitor]
(Work-Job-Executor-95:ctx-0d080cef job-1042/job-148142) Remove job-148142 from
job monitoring
2017-06-21 13:10:19,245 DEBUG
[c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-100:ctx-e1276898
job-148144/job-148145 ctx-f018393f) Cleaning up resources for the vm
VM[User|i-153-446-VM] in Starting state
2017-06-21 13:10:19,246 DEBUG [c.c.a.t.Request]
(Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145 ctx-f018393f) Seq
19-2522578741280901602: Sending { Cmd , MgmtId: 345050411715, via:
19(Flex-Xen1.flexhost.local), Ver: v1, Flags: 100011,
[{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":false,"checkBeforeCleanup":false,"vmName":"i-153-446-VM","wait":0}}]
}
2017-06-21 13:10:19,247 DEBUG [c.c.a.t.Request]
(Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145 ctx-f018393f) Seq
19-2522578741280901602: Executing: { Cmd , MgmtId: 345050411715, via:
19(Flex-Xen1.flexhost.local), Ver: v1, Flags: 100011,
[{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":false,"checkBeforeCleanup":false,"vmName":"i-153-446-VM","wait":0}}]
}
2017-06-21 13:10:19,247 DEBUG [c.c.a.m.DirectAgentAttache]
(DirectAgent-82:ctx-9ee1039e) Seq 19-2522578741280901602: Executing request
2017-06-21 13:10:19,250 DEBUG [c.c.a.m.DirectAgentAttache]
(DirectAgent-82:ctx-9ee1039e) Seq 19-2522578741280901602: Response Received:
2017-06-21 13:10:19,250 DEBUG [c.c.a.t.Request]
(DirectAgent-82:ctx-9ee1039e) Seq 19-2522578741280901602: Processing: { Ans: ,
MgmtId: 345050411715, via: 19, Ver: v1, Flags: 10,
[{"com.cloud.agent.api.StopAnswer":{"result":true,"details":"VM does not
exist","wait":0}}] }
2017-06-21 13:10:19,250 DEBUG [c.c.a.t.Request]
(Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145 ctx-f018393f) Seq
19-2522578741280901602: Received: { Ans: , MgmtId: 345050411715, via: 19, Ver:
v1, Flags: 10, { StopAnswer } }
2017-06-21 13:10:19,254 DEBUG [c.c.n.NetworkModelImpl]
(Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145 ctx-f018393f) Service
SecurityGroup is not supported in the network id=298
2017-06-21 13:10:19,256 DEBUG
[o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-100:ctx-e1276898
job-148144/job-148145 ctx-f018393f) Changing active number of nics for network
id=298 on -1
2017-06-21 13:10:19,257 WARN
[o.a.c.s.SecondaryStorageManagerImpl] (secstorage-1:ctx-b13acfc7) Exception
while trying to start secondary storage vm
com.cloud.exception.InsufficientServerCapacityException:
Unable to create a deployment for
VM[SecondaryStorageVm|s-22605-VM]Scope=interface com.cloud.dc.DataCenter; id=1
at
com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:941)
at
com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:4471)
at
sun.reflect.GeneratedMethodAccessor246.invoke(Unknown Source)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at
com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
at
com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:4627)
at
com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:103)
at
org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:536)
at
org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
at
org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
at
org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
at
org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
at
org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
at
org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:493)
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at
java.util.concurrent.FutureTask.run(FutureTask.java:262)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
2017-06-21 13:10:19,259 INFO
[o.a.c.s.SecondaryStorageManagerImpl] (secstorage-1:ctx-b13acfc7) Unable to
start secondary storage vm for standby capacity, secStorageVm vm Id : 22605,
will recycle it and start a new one
2017-06-21 13:10:19,259 DEBUG
[c.c.a.SecondaryStorageVmAlertAdapter] (secstorage-1:ctx-b13acfc7) received
secondary storage vm alert
2017-06-21 13:10:19,259 DEBUG
[c.c.a.SecondaryStorageVmAlertAdapter] (secstorage-1:ctx-b13acfc7) Secondary
Storage Vm creation failure, zone: Rushford
2017-06-21 13:10:19,260 WARN [o.a.c.alerts]
(secstorage-1:ctx-b13acfc7) alertType:: 19 // dataCenterId:: 1 // podId:: null
// clusterId:: null // message:: Secondary Storage Vm creation failure. zone:
Rushford, error details: null
2017-06-21 13:10:19,267 DEBUG
[o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-100:ctx-e1276898
job-148144/job-148145 ctx-f018393f) Asking VpcVirtualRouter to release
NicProfile[1052-446-987e8cca-ec26-46cd-aec2-a1f4b2283dff-192.168.211.211-null
2017-06-21 13:10:19,267 DEBUG
[c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-100:ctx-e1276898
job-148144/job-148145 ctx-f018393f) Successfully released network resources for
the vm VM[User|i-153-446-VM]
2017-06-21 13:10:19,267 DEBUG
[c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-100:ctx-e1276898
job-148144/job-148145 ctx-f018393f) Successfully cleanued up resources for the
vm VM[User|i-153-446-VM] in Starting state
2017-06-21 13:10:19,269 DEBUG
[c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-100:ctx-e1276898
job-148144/job-148145 ctx-f018393f) Root volume is ready, need to place VM in
volume's cluster
2017-06-21 13:10:19,274 DEBUG
[c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-100:ctx-e1276898
job-148144/job-148145 ctx-f018393f) Deploy avoids pods: [], clusters: [],
hosts: [19]
2017-06-21 13:10:19,274 DEBUG
[c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-100:ctx-e1276898
job-148144/job-148145 ctx-f018393f) DeploymentPlanner allocation algorithm:
com.cloud.deploy.UserDispersingPlanner@4cafa203<mailto:com.cloud.deploy.UserDispersingPlanner@4cafa203>
2017-06-21 13:10:19,276 DEBUG
[c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-100:ctx-e1276898
job-148144/job-148145 ctx-f018393f) Trying to allocate a host and storage pools
from dc:1, pod:1,cluster:1, requested cpu: 8000, requested ram: 12884901888
2017-06-21 13:10:19,276 DEBUG
[c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-100:ctx-e1276898
job-148144/job-148145 ctx-f018393f) Is ROOT volume READY (pool already
allocated)?: Yes
2017-06-21 13:10:19,276 DEBUG
[c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-100:ctx-e1276898
job-148144/job-148145 ctx-f018393f) DeploymentPlan has host_id specified,
choosing this host and making no checks on this host: 19
2017-06-21 13:10:19,276 INFO
[o.a.c.s.PremiumSecondaryStorageManagerImpl] (secstorage-1:ctx-b13acfc7)
Primary secondary storage is not even started, wait until next turn
2017-06-21 13:10:19,276 ERROR [c.c.a.AlertManagerImpl]
(Email-Alerts-Sender-25:null) Failed to send email alert
javax.mail.MessagingException: Could not connect to SMTP host:
spam.acentek.net, port: 465 (java.net.ConnectException: Connection refused)
2017-06-21 13:10:19,276 DEBUG
[c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-100:ctx-e1276898
job-148144/job-148145 ctx-f018393f) The specified host is in avoid set
2017-06-21 13:10:19,276 DEBUG
[c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-100:ctx-e1276898
job-148144/job-148145 ctx-f018393f) Cannnot deploy to specified host, returning.
2017-06-21 13:10:19,297 DEBUG [c.c.c.CapacityManagerImpl]
(Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145 ctx-f018393f) VM
state transitted from :Starting to Stopped with event: OperationFailedvm's
original host id: 1 new host id: null host id before state transition: 19
2017-06-21 13:10:19,300 DEBUG [c.c.c.CapacityManagerImpl]
(Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145 ctx-f018393f) Hosts's
actual total CPU: 48000 and CPU after applying overprovisioning: 192000
2017-06-21 13:10:19,300 DEBUG [c.c.c.CapacityManagerImpl]
(Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145 ctx-f018393f) Hosts's
actual total RAM: 128790209280 and RAM after applying overprovisioning:
515160834048
2017-06-21 13:10:19,300 DEBUG [c.c.c.CapacityManagerImpl]
(Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145 ctx-f018393f) release
cpu from host: 19, old used: 24000,reserved: 0, actual total: 48000, total with
overprovisioning: 192000; new used: 16000,reserved:0; movedfromreserved:
false,moveToReserveredfalse
2017-06-21 13:10:19,300 DEBUG [c.c.c.CapacityManagerImpl]
(Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145 ctx-f018393f) release
mem from host: 19, old used: 38654705664,reserved: 0, total: 515160834048; new
used: 25769803776,reserved:0; movedfromreserved: false,moveToReserveredfalse
2017-06-21 13:10:19,332 ERROR [c.c.v.VmWorkJobHandlerProxy]
(Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145 ctx-f018393f)
Invocation exception, caused by:
com.cloud.exception.InsufficientServerCapacityException: Unable to create a
deployment for VM[User|i-153-446-VM]Scope=interface com.cloud.dc.DataCenter;
id=1
2017-06-21 13:10:19,332 INFO [c.c.v.VmWorkJobHandlerProxy]
(Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145 ctx-f018393f) Rethrow
exception com.cloud.exception.InsufficientServerCapacityException: Unable to
create a deployment for VM[User|i-153-446-VM]Scope=interface
com.cloud.dc.DataCenter; id=1
2017-06-21 13:10:19,332 DEBUG [c.c.v.VmWorkJobDispatcher]
(Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145) Done with run of VM
work job: com.cloud.vm.VmWorkStart for VM 446, job origin: 148144
2017-06-21 13:10:19,332 ERROR [c.c.v.VmWorkJobDispatcher]
(Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145) Unable to complete
AsyncJobVO {id:148145, userId: 2, accountId: 2, instanceType: null, instanceId:
null, cmd: com.cloud.vm.VmWorkStart, cmdInfo:
rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAABvnQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFzcQB-AAgAAAAAAAAAE3BwcHEAfgAKcHNyABFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA,
cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result:
null, initMsid: 345050411715, completeMsid: null, lastUpdated: null,
lastPolled: null, created: Wed Jun 21 13:10:16 CDT 2017}, job origin:148144
com.cloud.exception.InsufficientServerCapacityException:
Unable to create a deployment for VM[User|i-153-446-VM]Scope=interface
com.cloud.dc.DataCenter; id=1
My logs are just rolling with these errors.
Jeremy
-----Original Message-----
From: Jeremy Peterson [mailto:[email protected]]
Sent: Wednesday, June 21, 2017 1:10 PM
To:
[email protected]<mailto:[email protected]>; S. Brüseke -
proIO GmbH <[email protected]<mailto:[email protected]>>
Subject: RE: Recreating SystemVM's
Why is my DEBUG show uuid of
8a4039f2-bb71-11e4-8c76-0050569b1662 but below in my catch exception it shows
uuid ab6f3bcd-4c3e-4a7a-9f8b-45a822dbaaaf.
See below.
2017-06-21 10:46:16,431 DEBUG [c.c.a.t.Request]
(Work-Job-Executor-45:ctx-7cdfe536 job-342/job-147412 ctx-39b2bc63) Seq
1-6981705322331112844:
Sending { Cmd , MgmtId: 345050411715, via:
1(Flex-Xen2.flexhost.local), Ver: v1,
Flags: 100111,
[{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":
{"path":"ab6f3bcd-4c3e-4a7a-9f8b-45a822dbaaaf","origUrl":"http://download.cloud.com/templates/4.5/systemvm64template-4.5-xen.vhd.bz2",
"uuid":"8a4039f2-bb71-11e4-8c76-0050569b1662","id":1,"format":"VHD","accountId":1,"checksum":"2b15ab4401c2d655264732d3fc600241","hvm":false,
"displayText":"SystemVM Template
(XenServer)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"FlexSAN2-LUN0",
"id":5,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN2-LUN0","port":0,"url":"PreSetup://localhost/FlexSAN2-LUN0/?ROLE=Primary&STOREUUID=FlexSAN2-LUN0"}},
"name":"routing-1","hypervisorType":"XenServer"}},"destTO":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{
"uuid":"a2456229-2942-4d9b-9bff-c6d9ea004fbd","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{
"uuid":"FlexSAN2-LUN0","id":5,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN2-LUN0","port":0,"url":
"PreSetup://localhost/FlexSAN2-LUN0/?ROLE=Primary&STOREUUID=FlexSAN2-LUN0"}},"name":"ROOT-22411","size":2689602048,"volumeId":27330,"vmName":
"v-22411-VM","accountId":1,"format":"VHD","provisioningType":"THIN","id":27330,"deviceId":0,"hypervisorType":"XenServer"}},"executeInSequence":true,"options":{},"wait":0}}]
}
2017-06-21 10:46:16,431 DEBUG [c.c.a.t.Request]
(Work-Job-Executor-45:ctx-7cdfe536 job-342/job-147412 ctx-39b2bc63) Seq
1-6981705322331112844:
2017-06-21 10:46:16,444 WARN
[c.c.h.x.r.XenServerStorageProcessor] (DirectAgent-152:ctx-385c99e9) Unable to
create volume;
Pool=volumeTO[uuid=a2456229-2942-4d9b-9bff-c6d9ea004fbd|path=null|datastore=PrimaryDataStoreTO[uuid=FlexSAN2-LUN0|name=null|id=5|pooltype=PreSetup]];
Disk:com.cloud.utils.exception.CloudRuntimeException: Catch
Exception com.xensource.xenapi.Types$UuidInvalid :VDI getByUuid for
uuid: ab6f3bcd-4c3e-4a7a-9f8b-45a822dbaaaf failed due to
The uuid you supplied was invalid.
Now if I check the DB to find out what my templates uuid
should be.
SELECT * FROM cloud.vm_template where type='SYSTEM';
1 routing-1 SystemVM
Template (XenServer) 8a4039f2-bb71-11e4-8c76-0050569b1662
0 0 SYSTEM 0 64
http://download.cloud.com/templates/4.5/systemvm64template-4.5-xen.vhd.bz2
VHD 2015-02-23 09:35:05 1
2b15ab4401c2d655264732d3fc600241 SystemVM Template (XenServer) 0
0 184 1 0 1
0 XenServer 0
2689602048 Active 0 0
3 routing-3 SystemVM
Template (KVM) 8a46062a-bb71-11e4-8c76-0050569b1662 0
0 SYSTEM 0 64
http://download.cloud.com/templates/4.5/systemvm64template-4.5-kvm.qcow2.bz2
QCOW2 2015-02-23 09:35:05 1
aa9f501fecd3de1daeb9e2f357f6f002 SystemVM Template (KVM)
0 0 15 1 0 1
0 KVM 0
Active 0 0
8 routing-8 SystemVM
Template (vSphere) 8a4e70c6-bb71-11e4-8c76-0050569b1662
0 0 SYSTEM 0 64
http://download.cloud.com/templates/4.5/systemvm64template-4.5-vmware.ova
OVA 2015-02-23 09:35:05 1
3106a79a4ce66cd7f6a7c50e93f2db57 SystemVM Template (vSphere)
0 0 15 1 0 1
0 VMware 0
Active 0 1
9 routing-9 SystemVM
Template (HyperV) 8a5184e6-bb71-11e4-8c76-0050569b1662 0
0 SYSTEM 0 64
http://download.cloud.com/templates/4.5/systemvm64template-4.5-hyperv.vhd.zip
VHD 2015-02-23 09:35:05 1
70bd30ea02ee9ed67d2c6b85c179cee9 SystemVM Template (HyperV) 0
0 15 1 0 1
0 Hyperv 0
Active 0 0
10 routing-10 SystemVM Template
(LXC) 5bb9e71c-bb72-11e4-8c76-0050569b1662 0
0 SYSTEM 0 64
http://download.cloud.com/templates/4.5/systemvm64template-4.5-kvm.qcow2.bz2
QCOW2 2015-02-23 09:40:56 1
aa9f501fecd3de1daeb9e2f357f6f002 SystemVM Template (LXC)
0 0 15 1 0 1
0 LXC 0
Active 0 0
Ok so that shows me my system templates UUID is
8a4039f2-bb71-11e4-8c76-0050569b1662 and that lines up correctly with my debug
command uuid 8a4039f2-bb71-11e4-8c76-0050569b1662.
Suggestions ? Ideas? Thoughts?
Thank you.
Jeremy
-----Original Message-----
From: Jeremy Peterson [mailto:[email protected]]
Sent: Wednesday, June 21, 2017 11:58 AM
To:
[email protected]<mailto:[email protected]>; S. Brüseke -
proIO GmbH <[email protected]<mailto:[email protected]>>
Subject: RE: Recreating SystemVM's
You are correct I had 2 hosts disabled when I tried to
launch that VM. But my hosts all show state Up.
Heres Flex-Xen1.flexhost.local
http://prntscr.com/fmi0tw
Heres the info page of the host
http://prntscr.com/fmi16g
Resource state: Enabled
State up: Up
I did a force reconnect on all hosts and that cleared the
avoid set error.
But now I am getting UUID invalid when trying to launch a
VM. This is whats happening to the system VM's.
https://pastebin.com/2DhzFVDZ
You see it errors : The uuid you supplied was invalid.
Now I see the above command declared the host and storage
but the UUID is "uuid":"8a4039f2-bb71-11e4-8c76-0050569b1662"
How can I see what that ties to ?
I redeployed systemcl64template-5.6-xen.vhd.bz2 last week
does that not recreated the uuid ?
Jeremy
-----Original Message-----
From: Dag Sonstebo [mailto:[email protected]]
Sent: Wednesday, June 21, 2017 11:12 AM
To:
[email protected]<mailto:[email protected]>; S. Brüseke -
proIO GmbH <[email protected]<mailto:[email protected]>>
Subject: Re: Recreating SystemVM's
Hi Jeremy,
You have 6 hosts: "List of hosts in ascending order of
number of VMs: [15, 17, 19, 1, 16, 18]" - my guess is you have disabled hosts
16+18 for their reboot.
You immediately have the rest of the hosts in an avoid set:
"Deploy avoids pods: [], clusters: [], hosts: [17, 1, 19, 15]".
So you need to work out why those hosts are considered
non-valid. Do they show up as live in your CloudStack GUI? Are they all enabled
as well as out of maintenance mode?
Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue
On 21/06/2017, 15:13, "Jeremy Peterson"
<[email protected]<mailto:[email protected]>> wrote:
So this morning I reconnected all hosts.
I also disabled my two hosts that need to reboot and
powered on a VM and now I am getting a Insufficient Resources.
Whats odd is that Host Allocator returning 0 suitable
hosts?
2017-06-21 08:43:53,695 DEBUG
[c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-71:ctx-f01a90b9
job-146764/job-146768 ctx-66c78348) Root volume is ready, need to place VM in
volume's cluster
2017-06-21 08:43:53,695 DEBUG
[c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-71:ctx-f01a90b9
job-146764/job-146768 ctx-66c78348) Vol[537|vm=446|ROOT] is READY, changing
deployment plan to use this pool's dcId: 1 , podId: 1 , and clusterId: 1
2017-06-21 08:43:53,702 DEBUG
[c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-71:ctx-f01a90b9
job-146764/job-146768 ctx-66c78348) Deploy avoids pods: [], clusters: [],
hosts: [17, 1, 19, 15]
2017-06-21 08:43:53,703 DEBUG
[c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-71:ctx-f01a90b9
job-146764/job-146768 ctx-66c78348) DeploymentPlanner allocation algorithm:
com.cloud.deploy.UserDispersingPlanner@4cafa203<mailto:com.cloud.deploy.UserDispersingPlanner@4cafa203>
2017-06-21 08:43:53,703 DEBUG
[c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-71:ctx-f01a90b9
job-146764/job-146768 ctx-66c78348) Trying to allocate a host and storage pools
from dc:1, pod:1,cluster:1, requested cpu: 8000, requested ram: 12884901888
2017-06-21 08:43:53,703 DEBUG
[c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-71:ctx-f01a90b9
job-146764/job-146768 ctx-66c78348) Is ROOT volume READY (pool already
allocated)?: Yes
2017-06-21 08:43:53,703 DEBUG
[c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-71:ctx-f01a90b9
job-146764/job-146768 ctx-66c78348) This VM has last host_id specified, trying
to choose the same host: 1
2017-06-21 08:43:53,704 DEBUG
[c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-71:ctx-f01a90b9
job-146764/job-146768 ctx-66c78348) The last host of this VM is in avoid set
2017-06-21 08:43:53,704 DEBUG
[c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-71:ctx-f01a90b9
job-146764/job-146768 ctx-66c78348) Cannot choose the last host to deploy this
VM
2017-06-21 08:43:53,704 DEBUG [c.c.d.FirstFitPlanner]
(Work-Job-Executor-71:ctx-f01a90b9 job-146764/job-146768 ctx-66c78348)
Searching resources only under specified Cluster: 1
2017-06-21 08:43:53,714 DEBUG
[c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-71:ctx-f01a90b9
job-146764/job-146768 ctx-66c78348) Checking resources in Cluster: 1 under Pod:
1
2017-06-21 08:43:53,714 DEBUG
[c.c.a.m.a.i.FirstFitAllocator] (Work-Job-Executor-71:ctx-f01a90b9
job-146764/job-146768 ctx-66c78348 FirstFitRoutingAllocator) Looking for hosts
in dc: 1 pod:1 cluster:1
2017-06-21 08:43:53,718 DEBUG
[c.c.a.m.a.i.FirstFitAllocator] (Work-Job-Executor-71:ctx-f01a90b9
job-146764/job-146768 ctx-66c78348 FirstFitRoutingAllocator) List of hosts in
ascending order of number of VMs: [15, 17, 19, 1, 16, 18]
2017-06-21 08:43:53,718 DEBUG
[c.c.a.m.a.i.FirstFitAllocator] (Work-Job-Executor-71:ctx-f01a90b9
job-146764/job-146768 ctx-66c78348 FirstFitRoutingAllocator) FirstFitAllocator
has 4 hosts to check for allocation: [Host[-15-Routing], Host[-17-Routing],
Host[-19-Routing], Host[-1-Routing]]
2017-06-21 08:43:53,727 DEBUG
[c.c.a.m.a.i.FirstFitAllocator] (Work-Job-Executor-71:ctx-f01a90b9
job-146764/job-146768 ctx-66c78348 FirstFitRoutingAllocator) Found 4 hosts for
allocation after prioritization: [Host[-15-Routing], Host[-17-Routing],
Host[-19-Routing], Host[-1-Routing]]
2017-06-21 08:43:53,727 DEBUG
[c.c.a.m.a.i.FirstFitAllocator] (Work-Job-Executor-71:ctx-f01a90b9
job-146764/job-146768 ctx-66c78348 FirstFitRoutingAllocator) Looking for
speed=8000Mhz, Ram=12288
2017-06-21 08:43:53,727 DEBUG
[c.c.a.m.a.i.FirstFitAllocator] (Work-Job-Executor-71:ctx-f01a90b9
job-146764/job-146768 ctx-66c78348 FirstFitRoutingAllocator) Host name:
Flex-Xen3.flexhost.local, hostId: 15 is in avoid set, skipping this and trying
other available hosts
2017-06-21 08:43:53,727 DEBUG
[c.c.a.m.a.i.FirstFitAllocator] (Work-Job-Executor-71:ctx-f01a90b9
job-146764/job-146768 ctx-66c78348 FirstFitRoutingAllocator) Host name:
Flex-Xen4.flexhost.local, hostId: 17 is in avoid set, skipping this and trying
other available hosts
2017-06-21 08:43:53,727 DEBUG
[c.c.a.m.a.i.FirstFitAllocator] (Work-Job-Executor-71:ctx-f01a90b9
job-146764/job-146768 ctx-66c78348 FirstFitRoutingAllocator) Host name:
Flex-Xen1.flexhost.local, hostId: 19 is in avoid set, skipping this and trying
other available hosts
2017-06-21 08:43:53,727 DEBUG
[c.c.a.m.a.i.FirstFitAllocator] (Work-Job-Executor-71:ctx-f01a90b9
job-146764/job-146768 ctx-66c78348 FirstFitRoutingAllocator) Host name:
Flex-Xen2.flexhost.local, hostId: 1 is in avoid set, skipping this and trying
other available hosts
2017-06-21 08:43:53,727 DEBUG
[c.c.a.m.a.i.FirstFitAllocator] (Work-Job-Executor-71:ctx-f01a90b9
job-146764/job-146768 ctx-66c78348 FirstFitRoutingAllocator) Host Allocator
returning 0 suitable hosts
2017-06-21 08:43:53,727 DEBUG
[c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-71:ctx-f01a90b9
job-146764/job-146768 ctx-66c78348) No suitable hosts found
2017-06-21 08:43:53,727 DEBUG
[c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-71:ctx-f01a90b9
job-146764/job-146768 ctx-66c78348) No suitable hosts found under this Cluster:
1
2017-06-21 08:43:53,728 DEBUG
[c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-71:ctx-f01a90b9
job-146764/job-146768 ctx-66c78348) Could not find suitable Deployment
Destination for this VM under any clusters, returning.
2017-06-21 08:43:53,728 DEBUG [c.c.d.FirstFitPlanner]
(Work-Job-Executor-71:ctx-f01a90b9 job-146764/job-146768 ctx-66c78348)
Searching resources only under specified Cluster: 1
2017-06-21 08:43:53,729 DEBUG [c.c.d.FirstFitPlanner]
(Work-Job-Executor-71:ctx-f01a90b9 job-146764/job-146768 ctx-66c78348) The
specified cluster is in avoid set, returning.
2017-06-21 08:43:53,736 DEBUG
[c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-71:ctx-f01a90b9
job-146764/job-146768 ctx-66c78348) Deploy avoids pods: [], clusters: [1],
hosts: [17, 1, 19, 15]
2017-06-21 08:43:53,737 DEBUG
[c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-71:ctx-f01a90b9
job-146764/job-146768 ctx-66c78348) DeploymentPlanner allocation algorithm:
com.cloud.deploy.UserDispersingPlanner@4cafa203<mailto:com.cloud.deploy.UserDispersingPlanner@4cafa203>
2017-06-21 08:43:53,737 DEBUG
[c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-71:ctx-f01a90b9
job-146764/job-146768 ctx-66c78348) Trying to allocate a host and storage pools
from dc:1, pod:1,cluster:null, requested cpu: 8000, requested ram: 12884901888
2017-06-21 08:43:53,737 DEBUG
[c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-71:ctx-f01a90b9
job-146764/job-146768 ctx-66c78348) Is ROOT volume READY (pool already
allocated)?: No
2017-06-21 08:43:53,737 DEBUG
[c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-71:ctx-f01a90b9
job-146764/job-146768 ctx-66c78348) This VM has last host_id specified, trying
to choose the same host: 1
2017-06-21 08:43:53,739 DEBUG
[c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-71:ctx-f01a90b9
job-146764/job-146768 ctx-66c78348) The last host of this VM is in avoid set
All oddities.
So I did a force reconnect on all 6 hosts and enabled
the two hosts that were pending updates.
Jeremy
-----Original Message-----
From: Jeremy Peterson [mailto:[email protected]]
Sent: Tuesday, June 20, 2017 12:33 PM
To:
[email protected]<mailto:[email protected]>; S. Brüseke -
proIO GmbH <[email protected]<mailto:[email protected]>>
Subject: RE: Recreating SystemVM's
Ok so my issues have not gone away.
I have two hosts that have not rebooted yet tonight I
will be maintenancing those hosts out and migrating vm's away from those hosts
and then performing a reboot of the host and installing a couple xenserver
updates.
One thing is I am not getting the CANNOT ATTACH NETWORK
error anymore which is cool but.
https://drive.google.com/open?id=0B5IXhrpPAT9qQ0FFUmRyRjN4NlE
Take a look at creation of VM 20685
2017-06-20 12:15:48,083 DEBUG
[c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-82:ctx-c39fa1f8
job-1042/job-138603 ctx-c17ce6fc) Found a potential host id: 1 name:
Flex-Xen2.flexhost.local and associated storage pools for this VM
2017-06-20 12:15:48,084 DEBUG
[c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-82:ctx-c39fa1f8
job-1042/job-138603 ctx-c17ce6fc) Returning Deployment Destination:
Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))]
: Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(25604|ROOT-->Pool(5))]
2017-06-20 12:15:48,084 DEBUG
[c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-82:ctx-c39fa1f8
job-1042/job-138603 ctx-c17ce6fc) Deployment found -
P0=VM[SecondaryStorageVm|s-20685-VM],
P0=Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))]
: Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(25604|ROOT-->Pool(5))]
So it found a host and storage pool
Networks were already created on line 482-484
But then look it fails on create volume UUID is
invalid???
2017-06-20 12:15:48,262 DEBUG
[c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-88:ctx-c51dafa0
job-342/job-138604 ctx-75edebb0) VM is being created in podId: 1
2017-06-20 12:15:48,264 DEBUG
[o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-88:ctx-c51dafa0
job-342/job-138604 ctx-75edebb0) Network id=200 is already implemented
2017-06-20 12:15:48,269 DEBUG
[c.c.n.g.PodBasedNetworkGuru] (Work-Job-Executor-82:ctx-c39fa1f8
job-1042/job-138603 ctx-c17ce6fc) Allocated a nic
NicProfile[81905-20685-0493941d-d193-4325-84bc-d325a8900332-10.90.2.207-null
for VM[SecondaryStorageVm|s-20685-VM]
2017-06-20 12:15:48,280 DEBUG
[o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-82:ctx-c39fa1f8
job-1042/job-138603 ctx-c17ce6fc) Network id=203 is already implemented
2017-06-20 12:15:48,290 DEBUG
[o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-88:ctx-c51dafa0
job-342/job-138604 ctx-75edebb0) Network id=202 is already implemented
2017-06-20 12:15:48,316 DEBUG
[c.c.n.g.StorageNetworkGuru] (Work-Job-Executor-82:ctx-c39fa1f8
job-1042/job-138603 ctx-c17ce6fc) Allocated a storage nic
NicProfile[81906-20685-0493941d-d193-4325-84bc-d325a8900332-10.83.2.205-null
for VM[SecondaryStorageVm|s-20685-VM]
2017-06-20 12:15:48,336 DEBUG
[o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-82:ctx-c39fa1f8
job-1042/job-138603 ctx-c17ce6fc) Checking if we need to prepare 1 volumes for
VM[SecondaryStorageVm|s-20685-VM]
2017-06-20 12:15:48,342 DEBUG
[o.a.c.s.i.TemplateDataFactoryImpl] (Work-Job-Executor-82:ctx-c39fa1f8
job-1042/job-138603 ctx-c17ce6fc) template 1 is already in store:5, type:Image
2017-06-20 12:15:48,344 DEBUG
[o.a.c.s.i.TemplateDataFactoryImpl] (Work-Job-Executor-82:ctx-c39fa1f8
job-1042/job-138603 ctx-c17ce6fc) template 1 is already in store:5, type:Primary
2017-06-20 12:15:48,346 DEBUG
[o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-88:ctx-c51dafa0
job-342/job-138604 ctx-75edebb0) Network id=201 is already implemented
2017-06-20 12:15:48,372 DEBUG
[c.c.d.d.DataCenterIpAddressDaoImpl] (Work-Job-Executor-88:ctx-c51dafa0
job-342/job-138604 ctx-75edebb0) Releasing ip address for instance=49817
2017-06-20 12:15:48,381 DEBUG
[o.a.c.s.m.AncientDataMotionStrategy] (Work-Job-Executor-82:ctx-c39fa1f8
job-1042/job-138603 ctx-c17ce6fc) copyAsync inspecting src type TEMPLATE
copyAsync inspecting dest type VOLUME
2017-06-20 12:15:48,386 DEBUG [c.c.a.t.Request]
(Work-Job-Executor-82:ctx-c39fa1f8 job-1042/job-138603 ctx-c17ce6fc) Seq
16-3622864425242874354: Sending { Cmd , MgmtId: 345050411715, via:
16(Flex-Xen6.flexhost.local), Ver: v1, Flags: 100111,
[{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"ab6f3bcd-4c3e-4a7a-9f8b-45a822dbaaaf","origUrl":"http://download.cloud.com/templates/4.5/systemvm64template-4.5-xen.vhd.bz2","uuid":"8a4039f2-bb71-11e4-8c76-0050569b1662","id":1,"format":"VHD","accountId":1,"checksum":"2b15ab4401c2d655264732d3fc600241","hvm":false,"displayText":"SystemVM
Template
(XenServer)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"FlexSAN2-LUN0","id":5,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN2-LUN0","port":0,"url":"PreSetup://localhost/FlexSAN2-LUN0/?ROLE=Primary&STOREUUID=FlexSAN2-LUN0"}},"name":"routing-1","hypervisorType":"XenServer"}},"destTO":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"4dba9def-2657-430e-8cd8-9369aebcaa25","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"FlexSAN2-LUN0","id":5,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN2-LUN0","port":0,"url":"PreSetup://localhost/FlexSAN2-LUN0/?ROLE=Primary&STOREUUID=FlexSAN2-LUN0"}},"name":"ROOT-20685","size":2689602048,"volumeId":25604,"vmName":"s-20685-VM","accountId":1,"format":"VHD","provisioningType":"THIN","id":25604,"deviceId":0,"hypervisorType":"XenServer"}},"executeInSequence":true,"options":{},"wait":0}}]
}
2017-06-20 12:15:48,386 DEBUG [c.c.a.t.Request]
(Work-Job-Executor-82:ctx-c39fa1f8 job-1042/job-138603 ctx-c17ce6fc) Seq
16-3622864425242874354: Executing: { Cmd , MgmtId: 345050411715, via:
16(Flex-Xen6.flexhost.local), Ver: v1, Flags: 100111,
[{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"ab6f3bcd-4c3e-4a7a-9f8b-45a822dbaaaf","origUrl":"http://download.cloud.com/templates/4.5/systemvm64template-4.5-xen.vhd.bz2","uuid":"8a4039f2-bb71-11e4-8c76-0050569b1662","id":1,"format":"VHD","accountId":1,"checksum":"2b15ab4401c2d655264732d3fc600241","hvm":false,"displayText":"SystemVM
Template
(XenServer)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"FlexSAN2-LUN0","id":5,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN2-LUN0","port":0,"url":"PreSetup://localhost/FlexSAN2-LUN0/?ROLE=Primary&STOREUUID=FlexSAN2-LUN0"}},"name":"routing-1","hypervisorType":"XenServer"}},"destTO":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"4dba9def-2657-430e-8cd8-9369aebcaa25","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"FlexSAN2-LUN0","id":5,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN2-LUN0","port":0,"url":"PreSetup://localhost/FlexSAN2-LUN0/?ROLE=Primary&STOREUUID=FlexSAN2-LUN0"}},"name":"ROOT-20685","size":2689602048,"volumeId":25604,"vmName":"s-20685-VM","accountId":1,"format":"VHD","provisioningType":"THIN","id":25604,"deviceId":0,"hypervisorType":"XenServer"}},"executeInSequence":true,"options":{},"wait":0}}]
}
2017-06-20 12:15:48,386 DEBUG
[c.c.a.m.DirectAgentAttache] (DirectAgent-74:ctx-0acdd419) Seq
16-3622864425242874354: Executing request
2017-06-20 12:15:48,387 DEBUG
[c.c.n.g.PodBasedNetworkGuru] (Work-Job-Executor-88:ctx-c51dafa0
job-342/job-138604 ctx-75edebb0) Allocated a nic
NicProfile[49817-12662-629b85e7-ce19-4568-9df7-143c76d24300-10.90.2.204-null
for VM[ConsoleProxy|v-12662-VM]
So how do I check UUID's to validate that they are
correct ?
2017-06-20 12:15:48,391 DEBUG
[c.c.h.x.r.XenServerStorageProcessor] (DirectAgent-74:ctx-0acdd419) Catch
Exception com.xensource.xenapi.Types$UuidInvalid :VDI getByUuid for uuid:
ab6f3bcd-4c3e-4a7a-9f8b-45a822dbaaaf failed due to The uuid you supplied was
invalid.
2017-06-20 12:15:48,391 WARN
[c.c.h.x.r.XenServerStorageProcessor] (DirectAgent-74:ctx-0acdd419) Unable to
create volume;
Pool=volumeTO[uuid=4dba9def-2657-430e-8cd8-9369aebcaa25|path=null|datastore=PrimaryDataStoreTO[uuid=FlexSAN2-LUN0|name=null|id=5|pooltype=PreSetup]];
Disk:
com.cloud.utils.exception.CloudRuntimeException: Catch
Exception com.xensource.xenapi.Types$UuidInvalid :VDI getByUuid for uuid:
ab6f3bcd-4c3e-4a7a-9f8b-45a822dbaaaf failed due to The uuid you supplied was
invalid.
Jeremy
-----Original Message-----
From: Jeremy Peterson [mailto:[email protected]]
Sent: Thursday, June 15, 2017 4:20 PM
To:
[email protected]<mailto:[email protected]>; S. Brüseke -
proIO GmbH <[email protected]<mailto:[email protected]>>
Subject: RE: Recreating SystemVM's
What type of networking are you using on the XenServers?
XenServers are connected with 6 nic's per host
connected to separate nexus 5k switches
NIC 0 and NIC 1 are Bond 0+1 10Gb nics
NIC 2 and NIC 3 are Bond 2+3 10Gb nics
NIC 4 and NIC 5 are Bond 4+5 2Gb nics
Cloudstack is running Advanced networking
Bond 0+1 is primary storage
Bond 2+3 is secondary storage
Bond 4+5 is Management
What version of os does the ms run on?
CentOS release 6.9 (Final)
What are the systemvm templates defined in your env?
http://cloudstack.apt-get.eu/systemvm/4.5/systemvm64template-4.5-xen.vhd.bz2
What is the version of the systemvm.iso?
Successfully installed system VM template to
/secondary/template/tmpl/1/1/
I just reinstalled systemvm's from the above
4.5-xen.vhd What is the capacity you have in your (test) environment?
This is a production enviroment and currently
cloudstack shows the following.
Public IP Addresses 61%
VLAN 35%
Management IP Addresses 20%
Primary Storage 44%
CPU 21%
Memory 5%
Of cource Secondary Storage shows 0%
What is the host os version for the hypervisors?
XenServer 6.5 SP1
What is the management network range?
management.network.cidr 10.90.1.0/24
What are the other physical networks?
?? Not sure what more you need
What storage do you use?
Primary - ISCSI
Secondary - NFS
Is it reachable from the systemvm?
All of my CS management servers have internet
access Is the big bad internet reachable for your SSVM's public interface?
My SSVM does not go online but yes the public
network is the same as the VR public vlan and all instances behind VR's are
connected to the internet at this time
Jeremy
-----Original Message-----
From: Daan Hoogland [mailto:[email protected]]
Sent: Thursday, June 15, 2017 9:34 AM
To: [email protected]; S. Brüseke - proIO
GmbH <[email protected]>
Subject: Re: Recreating SystemVM's
Your problem might be like what Swen says, Jeremy but
also a wrong systemvm offering or a fault in your management network definition.
I am going to sum up some trivialities so bear with me;
What type of networking are you using on the XenServers?
What version of os does the ms run on?
What are the systemvm templates defined in your env?
What is the version of the systemvm.iso?
What is the capacity you have in your (test)
environment?
What is the host os version for the hypervisors?
What is the management network range?
What are the other physical networks?
What storage do you use?
Is it reachable from the systemvm?
Is the big bad internet reachable for your SSVM's
public interface?
And of course,
How is the weather, where you are at?
I am not sure any of these question is going to lead
you in the right direction but one of them should.
On 15/06/17 13:56, "S. Brüseke - proIO GmbH"
<[email protected]> wrote:
I once did have some similar problem with my
systemvms and my root cause was that in the global settings it referred to the
wrong systemvm template. I am not sure if this helps you, but wanted to tell
you.
Mit freundlichen Grüßen / With kind regards,
Swen
-----Ursprüngliche Nachricht-----
Von: Jeremy Peterson [mailto:[email protected]]
Gesendet: Donnerstag, 15. Juni 2017 01:55
An: [email protected]
Betreff: RE: Recreating SystemVM's
Hahaha. The best response ever.
I dug through these emails and someone had soft of
the same log messages cannot attach network and blamed xenserver. Ok I'm cool
with that but why oh why is it only system vms?
Jeremy
________________________________________
From: Imran Ahmed [[email protected]]
Sent: Wednesday, June 14, 2017 6:22 PM
To: [email protected]
Subject: RE: Recreating SystemVM's
Yes,
-----Original Message-----
From: Jeremy Peterson [mailto:[email protected]]
Sent: Wednesday, June 14, 2017 9:59 PM
To: [email protected]
Subject: RE: Recreating SystemVM's
Is there anyone out there reading these messages?
Am I just not seeing responses?
Jeremy
-----Original Message-----
From: Jeremy Peterson [mailto:[email protected]]
Sent: Wednesday, June 14, 2017 8:12 AM
To: [email protected]
Subject: RE: Recreating SystemVM's
I opened an issue since this is still an issue.
CLOUDSTACK-9960
Jeremy
-----Original Message-----
From: Jeremy Peterson [mailto:[email protected]]
Sent: Sunday, June 11, 2017 9:10 AM
To: [email protected]
Subject: Re: Recreating SystemVM's
Any other suggestions?
I am going to be scheduling to run XenServer
updates. But this all points back to CANNOT_ATTACH_NETWORk.
I've verified nothing is active on the Public IP
space that those two VM's were living on.
Jeremy
________________________________________
From: Jeremy Peterson <[email protected]>
Sent: Friday, June 9, 2017 9:58 AM
To: [email protected]
Subject: RE: Recreating SystemVM's
I see the vm's try to create on a host that I just
removed from maintenance mode to install updates and here are the logs
I don't see anything that sticks out to me as a
failure message.
Jun 9 09:53:54 Xen3 SM: [13068] ['ip', 'route',
'del', '169.254.0.0/16']
Jun 9 09:53:54 Xen3 SM: [13068] pread SUCCESS
Jun 9 09:53:54 Xen3 SM: [13068] ['ifconfig',
'xapi12', '169.254.0.1', 'netmask', '255.255.0.0']
Jun 9 09:53:54 Xen3 SM: [13068] pread SUCCESS
Jun 9 09:53:54 Xen3 SM: [13068] ['ip', 'route',
'add', '169.254.0.0/16', 'dev', 'xapi12', 'src', '169.254.0.1']
Jun 9 09:53:54 Xen3 SM: [13068] pread SUCCESS
Jun 9 09:53:54 Xen3 SM: [13071] ['ip', 'route',
'del', '169.254.0.0/16']
Jun 9 09:53:54 Xen3 SM: [13071] pread SUCCESS
Jun 9 09:53:54 Xen3 SM: [13071] ['ifconfig',
'xapi12', '169.254.0.1', 'netmask', '255.255.0.0']
Jun 9 09:53:54 Xen3 SM: [13071] pread SUCCESS
Jun 9 09:53:54 Xen3 SM: [13071] ['ip', 'route',
'add', '169.254.0.0/16', 'dev', 'xapi12', 'src', '169.254.0.1']
Jun 9 09:53:54 Xen3 SM: [13071] pread SUCCESS
Jun 9 09:54:00 Xen3 SM: [13115] on-slave.multi:
{'vgName':
'VG_XenStorage-469b6dcd-8466-3d03-de0e-cc3983e1b6e2', 'lvName1':
'VHD-633338a7-6c40-4aa6-b88e-c798b6fdc04d',
'action1':
'deactivateNoRefcount', 'action2': 'cleanupLock',
'uuid2':
'633338a7-6c40-4aa6-b88e-c798b6fdc04d', 'ns2':
'lvm-469b6dcd-8466-3d03-de0e-cc3983e1b6e2'}
Jun 9 09:54:00 Xen3 SM: [13115] LVMCache created
for
VG_XenStorage-469b6dcd-8466-3d03-de0e-cc3983e1b6e2
Jun 9 09:54:00 Xen3 SM: [13115] on-slave.action 1:
deactivateNoRefcount Jun
9 09:54:00 Xen3 SM: [13115] LVMCache: will
initialize now Jun 9 09:54:00
Xen3 SM: [13115] LVMCache: refreshing Jun 9
09:54:00 Xen3 SM: [13115] ['/usr/sbin/lvs', '--noheadings', '--units', 'b',
'-o', '+lv_tags', '/dev/VG_XenStorage-469b6dcd-8466-3d03-de0e-cc3983e1b6e2']
Jun 9 09:54:00 Xen3 SM: [13115] pread SUCCESS
Jun 9 09:54:00 Xen3 SM: [13115]
['/usr/sbin/lvchange', '-an',
'/dev/VG_XenStorage-469b6dcd-8466-3d03-de0e-cc3983e1b6e2/VHD-633338a7-6c40-4
aa6-b88e-c798b6fdc04d']
Jun 9 09:54:00 Xen3 SM: [13115] pread SUCCESS
Jun 9 09:54:00 Xen3 SM: [13115] ['/sbin/dmsetup',
'status',
'VG_XenStorage--469b6dcd--8466--3d03--de0e--cc3983e1b6e2-VHD--633338a7--6c40
--4aa6--b88e--c798b6fdc04d']
Jun 9 09:54:00 Xen3 SM: [13115] pread SUCCESS
Jun 9 09:54:00 Xen3 SM: [13115] on-slave.action 2:
cleanupLock
Jun 9 09:54:16 Xen3 SM: [13230] ['ip', 'route',
'del', '169.254.0.0/16']
Jun 9 09:54:16 Xen3 SM: [13230] pread SUCCESS
Jun 9 09:54:16 Xen3 SM: [13230] ['ifconfig',
'xapi12', '169.254.0.1', 'netmask', '255.255.0.0']
Jun 9 09:54:16 Xen3 SM: [13230] pread SUCCESS
Jun 9 09:54:16 Xen3 SM: [13230] ['ip', 'route',
'add', '169.254.0.0/16', 'dev', 'xapi12', 'src', '169.254.0.1']
Jun 9 09:54:16 Xen3 SM: [13230] pread SUCCESS
Jun 9 09:54:19 Xen3 updatempppathd: [15446] The
garbage collection routine
returned: 0 Jun 9 09:54:23 Xen3 SM: [13277] ['ip',
'route', 'del', '169.254.0.0/16']
Jun 9 09:54:23 Xen3 SM: [13277] pread SUCCESS
Jun 9 09:54:23 Xen3 SM: [13277] ['ifconfig',
'xapi12', '169.254.0.1', 'netmask', '255.255.0.0']
Jun 9 09:54:23 Xen3 SM: [13277] pread SUCCESS
Jun 9 09:54:23 Xen3 SM: [13277] ['ip', 'route',
'add', '169.254.0.0/16', 'dev', 'xapi12', 'src', '169.254.0.1']
Jun 9 09:54:23 Xen3 SM: [13277] pread SUCCESS
Jeremy
-----Original Message-----
From: Jeremy Peterson [mailto:[email protected]]
Sent: Friday, June 9, 2017 9:53 AM
To: [email protected]
Subject: RE: Recreating SystemVM's
I am checking SMlog now on all hosts.
Jeremy
-----Original Message-----
From: Rajani Karuturi [mailto:[email protected]]
Sent: Friday, June 9, 2017 9:00 AM
To: Users <[email protected]>
Subject: Re: Recreating SystemVM's
on xenserver log, did you check what is causing "
HOST_CANNOT_ATTACH_NETWORK"?
~Rajani
http://cloudplatform.accelerite.com/
On Fri, Jun 9, 2017 at 7:00 PM, Jeremy Peterson
<[email protected]>
wrote:
> 08:28:43 select * from vm_instance where
name like 's-%' limit
> 10000 7481 row(s) returned 0.000 sec /
0.032 sec
>
> All vm's 'state' returned Destoryed outside of
the current vm 7873
> which is in a Stopped state but that goes
Destroyed and a new get created.
>
> Any other suggestions?
>
> Jeremy
>
>
> -----Original Message-----
> From: Jeremy Peterson
[mailto:[email protected]]
> Sent: Thursday, June 8, 2017 12:47 AM
> To: [email protected]
> Subject: Re: Recreating SystemVM's
>
> I'll make that change in the am.
>
> Today I put a host in maintence and rebooted
because proxy and
> secstore vm were constantly being created on that
host and still no
change.
>
> Let you know tomorrow.
>
> Jeremy
>
>
> Sent from my Verizon, Samsung Galaxy smartphone
>
>
> -------- Original message --------
> From: Rajani Karuturi <[email protected]>
> Date: 6/8/17 12:07 AM (GMT-06:00)
> To: Users <[email protected]>
> Subject: Re: Recreating SystemVM's
>
> Did you check SMLog on xenserver?
> unable to destroy
task(com.xensource.xenapi.Task@256829a8) on
> host(b34f086e-fabf-471e-9feb-8f54362d7d0f) due to
You gave an invalid
> object reference. The object may have recently
been deleted. The
> class parameter gives the type of reference
given, and the handle
> parameter echoes the bad value given.
>
> Looks like Destroy of SSVM failed. What state is
SSVM in? mark it as
> Destroyed in cloud DB and wait for cloudstack to
create a new SSVM.
>
> ~Rajani
> http://cloudplatform.accelerite.com/
>
> On Thu, Jun 8, 2017 at 1:11 AM, Jeremy Peterson
> <[email protected]>
> wrote:
>
> > Probably agreed.
> >
> > But I ran toolstack restart on all hypervisors
and v-3193 just tried
> > to create and fail along with s-5398.
> >
> > The PIF error went away. But VM's are still
recreating
> >
> > https://pastebin.com/4n4xBgMT
> >
> > New log from this afternoon.
> >
> > My catalina.out is over 4GB
> >
> > Jeremy
> >
> >
> > -----Original Message-----
> > From: Makrand [mailto:[email protected]]
> > Sent: Wednesday, June 7, 2017 12:52 AM
> > To: [email protected]
> > Subject: Re: Recreating SystemVM's
> >
> > Hi there,
> >
> > Looks more like hypervisor issue.
> >
> > Just run *xe-toolstack-restart* on hosts where
these VMs are trying
> > to start or if you don't have too many hosts,
better run on all
> > members including master. most of i/o related
issues squared off by
> > toolstack bounce.
> >
> > --
> > Makrand
> >
> >
> > On Wed, Jun 7, 2017 at 3:01 AM, Jeremy Peterson
> > <[email protected]>
> > wrote:
> >
> > > Ok so I pulled this from Sunday morning.
> > >
> > > https://pastebin.com/nCETw1sC
> > >
> > >
> > > errorInfo: [HOST_CANNOT_ATTACH_NETWORK,
> > >
OpaqueRef:65d0c844-bd70-81e9-4518-8809e1dc0ee7,
> > >
OpaqueRef:0093ac3f-9f3a-37e1-9cdb-581398d27ba2]
> > >
> > > XenServer error.
> > >
> > > Now this still gets me because all of the
other VM's launched just
> fine.
> > >
> > > Going into XenCenter I see an error at the
bottom This PIF is a
> > > bond slave and cannot be plugged.
> > >
> > > ???
> > >
> > > If I go to networking on the hosts I see the
storage vlans and
> > > bonds are all there.
> > >
> > > I see my GUEST-PUB bond is there and LACP is
setup correct.
> > >
> > > Any suggestions ?
> > >
> > >
> > > Jeremy
> > >
> > >
> > > -----Original Message-----
> > > From: Jeremy Peterson
[mailto:[email protected]]
> > > Sent: Tuesday, June 6, 2017 9:23 AM
> > > To: [email protected]
> > > Subject: RE: Recreating SystemVM's
> > >
> > > Thank you all for those responses.
> > >
> > > I'll comb through my management-server.log
and post a pastebin if
> > > I'm scratching my head.
> > >
> > > Jeremy
> > >
> > > -----Original Message-----
> > > From: Rajani Karuturi
[mailto:[email protected]]
> > > Sent: Tuesday, June 6, 2017 6:53 AM
> > > To: [email protected]
> > > Subject: Re: Recreating SystemVM's
> > >
> > > If the zone is enabled, cloudstack should
recreate them automatically.
> > >
> > > ~ Rajani
> > >
> > > http://cloudplatform.accelerite.com/
> > >
> > > On June 6, 2017 at 11:37 AM, Erik Weber
([email protected])
> > > wrote:
> > >
> > > CloudStack should recreate automatically,
check the mgmt server
> > > logs for hints of why it doesn't happen.
> > >
> > > --
> > > Erik
> > >
> > > tir. 6. jun. 2017 kl. 04.29 skrev Jeremy
Peterson
> > > <[email protected]>:
> > >
> > > I had an issue Sunday morning with cloudstack
4.9.0 and xenserver
> 6.5.0.
> > > My hosts stop sending LACP PDU's and caused a
network drop to
> > > iSCSI primary storage.
> > >
> > > So all my instances recovered via HA enabled.
> > >
> > > But my console proxy and secondary storage
system VM's got stuck
> > > in a boot state that would not power on.
> > >
> > > At this time they are expunged and gone.
> > >
> > > How do I tell cloudstack-management to
recreate system VM's?
> > >
> > > I'm drawing a blank since deploying CS two
years ago and just
> > > keeping things running and adding hosts and
more storage
> > > everything has been so stable.
> > >
> > > Jeremy
> > >
> >
>
- proIO GmbH -
Geschäftsführer: Swen Brüseke
Sitz der Gesellschaft: Frankfurt am Main
USt-IdNr. DE 267 075 918
Registergericht: Frankfurt am Main - HRB 86239
Diese E-Mail enthält vertrauliche und/oder
rechtlich geschützte Informationen.
Wenn Sie nicht der richtige Adressat sind oder
diese E-Mail irrtümlich erhalten haben,
informieren Sie bitte sofort den Absender und
vernichten Sie diese Mail.
Das unerlaubte Kopieren sowie die unbefugte
Weitergabe dieser Mail sind nicht gestattet.
This e-mail may contain confidential and/or
privileged information.
If you are not the intended recipient (or have
received this e-mail in error) please notify
the sender immediately and destroy this e-mail.
Any unauthorized copying, disclosure or
distribution of the material in this e-mail is strictly forbidden.
[email protected]
www.shapeblue.com<http://www.shapeblue.com>
53 Chandos Place, Covent Garden, London WC2N 4HSUK
@shapeblue
[email protected]
www.shapeblue.com<http://www.shapeblue.com>
53 Chandos Place, Covent Garden, London WC2N 4HSUK
@shapeblue
[email protected]
www.shapeblue.com<http://www.shapeblue.com>
53 Chandos Place, Covent Garden, London WC2N 4HSUK
@shapeblue
[email protected]
www.shapeblue.com<http://www.shapeblue.com>
53 Chandos Place, Covent Garden, London WC2N 4HSUK
@shapeblue
[email protected]
www.shapeblue.com<http://www.shapeblue.com>
53 Chandos Place, Covent Garden, London WC2N 4HSUK
@shapeblue
[email protected]
www.shapeblue.com<http://www.shapeblue.com>
53 Chandos Place, Covent Garden, London WC2N 4HSUK
@shapeblue
[email protected]
www.shapeblue.com<http://www.shapeblue.com>
53 Chandos Place, Covent Garden, London WC2N 4HSUK
@shapeblue