I succeeded in re-installing the system template.

Despite the fact that I declared a new template with the GUI, set type = SYSTEM,
and installed the template on NFS secondary storage with cloud-install-sys-tmplt,
I still had errors when the system tried to provision the VM template on the
storage pool.
The management log said that no storage pool was available, although I have
sufficient space and no blocking tags on the storage pools.
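As a side note, the allocators skip a pool once it crosses the capacity disable thresholds (pool.storage.capacity.disablethreshold / pool.storage.allocated.capacity.disablethreshold, both 0.85 by default). A quick arithmetic sanity check of that 0.85 rule, with hypothetical sizes:

```shell
# Sanity-check the 0.85 disable threshold the storage allocators apply.
# The sizes below are hypothetical example numbers, in GB.
total=1000
allocated=420
usage=$(awk -v a="$allocated" -v t="$total" 'BEGIN { printf "%.2f", a / t }')
echo "usage=$usage"
if awk -v u="$usage" 'BEGIN { exit !(u < 0.85) }'; then
  echo "below threshold: pool allocatable"
else
  echo "over threshold: pool skipped by allocator"
fi
```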

The new template was still in state "allocated".
I edited the following columns:

state = Ready (was allocated)
install_path = template/tmpl/1/225/5951639d-e36b-494a-9e49-6a9ce2f3542c.vhd
(was empty)
download_state = DOWNLOADED (was empty)
physical_size = size of template id 1 (was empty)
size = size of template id 1 (was empty)
downloaded_pct = 100 (was empty)
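For the record, here is a sketch of those manual edits as a single SQL statement. Assumptions on my side: the columns live in the `template_store_ref` table (as in recent CloudStack schemas), the new template has id 225, and size/physical_size are copied from the record of template id 1; adjust table, ids, and path to your own setup before applying.

```shell
# Sketch only: the manual column edits above as one SQL statement.
# ASSUMPTIONS: columns are in template_store_ref, new template id is 225,
# sizes are copied from template id 1. Verify against your own schema first.
SQL=$(cat <<'EOF'
UPDATE template_store_ref t,
       (SELECT size, physical_size FROM template_store_ref
        WHERE template_id = 1 LIMIT 1) src
SET t.state          = 'Ready',
    t.install_path   = 'template/tmpl/1/225/5951639d-e36b-494a-9e49-6a9ce2f3542c.vhd',
    t.download_state = 'DOWNLOADED',
    t.download_pct   = 100,
    t.size           = src.size,
    t.physical_size  = src.physical_size
WHERE t.template_id = 225;
EOF
)
echo "$SQL"
# Apply with e.g.:  mysql -u cloud -p cloud -e "$SQL"
```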


Then the system successfully installed the new system template on my 2
storage pools, and the CPVM and SSVM have been successfully recreated.

Do you have any idea of something I might have skipped?
Is there another, more conventional way than what I did to recover my system
template?

Best Regards, Benoit

On Thu, Oct 28, 2021 at 10:49, benoit lair <kurushi4...@gmail.com> wrote:

> I also tried to recreate another system VM template with the GUI.
> I followed this link:
> https://docs.cloudstack.apache.org/en/latest/adminguide/systemvm.html#changing-the-default-system-vm-template
> In the database, I changed the type to SYSTEM for the new entry in
> templates.
> I changed the router.template.xenserver value to the name of the new
> template.
> On the ACS mgmt server I launched cloud-install-sys-tmplt; it created the
> directory with id 225 in tmpl/1/225 and downloaded the VHD template file
> into it.
> But the template is still not available in the GUI or the database.
>
> How can I restore the system VM template?
>
> Best, Benoit
>
> On Thu, Oct 28, 2021 at 00:51, benoit lair <kurushi4...@gmail.com>
> wrote:
>
>> I tried removing the tags from my SRs.
>> I restarted ACS.
>>
>> Here is the log generated about the system VMs after the reboot:
>>
>> https://pastebin.com/xJNfA23u
>>
>> The parts of the log that look curious to me:
>>
>> 2021-10-28 00:31:04,462 DEBUG [c.c.h.x.r.XenServerStorageProcessor]
>> (DirectAgent-14:ctx-3eaf758f) (logid:cc3c4e1e) Catch Exception
>> com.xensource.xenapi.Types$UuidInvalid :VDI getByUuid for uuid:
>> 159e620a-575d-43a8-9a57-f3c7f57a1c8a failed due to The uuid you supplied
>> was invalid.
>> 2021-10-28 00:31:04,462 WARN  [c.c.h.x.r.XenServerStorageProcessor]
>> (DirectAgent-14:ctx-3eaf758f) (logid:cc3c4e1e) Unable to create volume;
>> Pool=volumeTO[uuid=e4347562-9454-453d-be04-29dc746aaf33|path=null|datastore=PrimaryDataStoreTO[uuid=fbbf2bf0-ccc8-4df3-9794-c914f418a9d9|name=null|id=2|pooltype=PreSetup]];
>> Disk:
>> com.cloud.utils.exception.CloudRuntimeException: Catch Exception
>> com.xensource.xenapi.Types$UuidInvalid :VDI getByUuid for uuid:
>> 159e620a-575d-43a8-9a57-f3c7f57a1c8a failed due to The uuid you supplied
>> was invalid.
>>         at
>> com.cloud.hypervisor.xenserver.resource.XenServerStorageProcessor.getVDIbyUuid(XenServerStorageProcessor.java:655)
>>         at
>> com.cloud.hypervisor.xenserver.resource.XenServerStorageProcessor.cloneVolumeFromBaseTemplate(XenServerStorageProcessor.java:843)
>>         at
>> com.cloud.storage.resource.StorageSubsystemCommandHandlerBase.execute(StorageSubsystemCommandHandlerBase.java:99)
>>         at
>> com.cloud.storage.resource.StorageSubsystemCommandHandlerBase.handleStorageCommands(StorageSubsystemCommandHandlerBase.java:59)
>>         at
>> com.cloud.hypervisor.xenserver.resource.wrapper.xenbase.CitrixStorageSubSystemCommandWrapper.execute(CitrixStorageSubSystemCommandWrapper.java:36)
>>         at
>> com.cloud.hypervisor.xenserver.resource.wrapper.xenbase.CitrixStorageSubSystemCommandWrapper.execute(CitrixStorageSubSystemCommandWrapper.java:30)
>>         at
>> com.cloud.hypervisor.xenserver.resource.wrapper.xenbase.CitrixRequestWrapper.execute(CitrixRequestWrapper.java:122)
>>         at
>> com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.executeRequest(CitrixResourceBase.java:1763)
>>         at
>> com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(DirectAgentAttache.java:315)
>>         at
>> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:48)
>>         at
>> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:55)
>>         at
>> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:102)
>>         at
>> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:52)
>>         at
>> org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:45)
>>         at
>> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>>         at
>> java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>>         at
>> java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
>>         at
>> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>>         at
>> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>>         at java.base/java.lang.Thread.run(Thread.java:829)
>> Caused by: The uuid you supplied was invalid.
>>         at com.xensource.xenapi.Types.checkResponse(Types.java:1491)
>>         at com.xensource.xenapi.Connection.dispatch(Connection.java:395)
>>         at
>> com.cloud.hypervisor.xenserver.resource.XenServerConnectionPool$XenServerConnection.dispatch(XenServerConnectionPool.java:457)
>>         ... 21 more
>>
>> Is it normal to have this: Unable to create volume;
>> Pool=volumeTO[uuid=e4347562-9454-453d-be04-29dc746aaf33|path=null|datastore=PrimaryDataStoreTO[uuid=fbbf2bf0-ccc8-4df3-9794-c914f418a9d9|name=null|id=2|pooltype=PreSetup]]
>> with null values?
>>
>> Best, Benoit
>>
>> On Thu, Oct 28, 2021 at 00:46, benoit lair <kurushi4...@gmail.com>
>> wrote:
>>
>>> Hello Andrija,
>>>
>>> Well spotted :)
>>>
>>> 2021-10-27 17:59:22,100 DEBUG [c.c.a.m.a.i.FirstFitAllocator]
>>> (Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8
>>> FirstFitRoutingAllocator) (logid:ce3ac740) Host name: xcp-cluster1-01,
>>> hostId: 1 is in avoid set, skipping this and trying other available hosts
>>> 2021-10-27 17:59:22,109 DEBUG [c.c.c.CapacityManagerImpl]
>>> (Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8
>>> FirstFitRoutingAllocator) (logid:ce3ac740) Host: 3 has cpu capability
>>> (cpu:48, speed:2593) to support requested CPU: 1 and requested speed: 500
>>> 2021-10-27 17:59:22,109 DEBUG [c.c.c.CapacityManagerImpl]
>>> (Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8
>>> FirstFitRoutingAllocator) (logid:ce3ac740) Checking if host: 3 has enough
>>> capacity for requested CPU: 500 and requested RAM: (512.00 MB) 536870912 ,
>>> cpuOverprovisioningFactor: 1.0
>>> 2021-10-27 17:59:22,112 DEBUG [c.c.c.CapacityManagerImpl]
>>> (Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8
>>> FirstFitRoutingAllocator) (logid:ce3ac740) Hosts's actual total CPU: 124464
>>> and CPU after applying overprovisioning: 124464
>>> 2021-10-27 17:59:22,112 DEBUG [c.c.c.CapacityManagerImpl]
>>> (Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8
>>> FirstFitRoutingAllocator) (logid:ce3ac740) Free CPU: 74700 , Requested CPU:
>>> 500
>>> 2021-10-27 17:59:22,112 DEBUG [c.c.c.CapacityManagerImpl]
>>> (Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8
>>> FirstFitRoutingAllocator) (logid:ce3ac740) Free RAM: (391.87 GB)
>>> 420762157056 , Requested RAM: (512.00 MB) 536870912
>>> 2021-10-27 17:59:22,112 DEBUG [c.c.c.CapacityManagerImpl]
>>> (Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8
>>> FirstFitRoutingAllocator) (logid:ce3ac740) Host has enough CPU and RAM
>>> available
>>> 2021-10-27 17:59:22,112 DEBUG [c.c.c.CapacityManagerImpl]
>>> (Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8
>>> FirstFitRoutingAllocator) (logid:ce3ac740) STATS: Can alloc CPU from host:
>>> 3, used: 49764, reserved: 0, actual total: 124464, total with
>>> overprovisioning: 124464; requested cpu:500,alloc_from_last_host?:false
>>> ,considerReservedCapacity?: true
>>> 2021-10-27 17:59:22,112 DEBUG [c.c.c.CapacityManagerImpl]
>>> (Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8
>>> FirstFitRoutingAllocator) (logid:ce3ac740) STATS: Can alloc MEM from host:
>>> 3, used: (33.50 GB) 35970351104, reserved: (0 bytes) 0, total: (425.37 GB)
>>> 456732508160; requested mem: (512.00 MB) 536870912, alloc_from_last_host?:
>>> false , considerReservedCapacity?: true
>>> 2021-10-27 17:59:22,112 DEBUG [c.c.a.m.a.i.FirstFitAllocator]
>>> (Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8
>>> FirstFitRoutingAllocator) (logid:ce3ac740) Found a suitable host, adding to
>>> list: 3
>>> 2021-10-27 17:59:22,112 DEBUG [c.c.a.m.a.i.FirstFitAllocator]
>>> (Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8
>>> FirstFitRoutingAllocator) (logid:ce3ac740) Host Allocator returning 1
>>> suitable hosts
>>> 2021-10-27 17:59:22,115 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
>>> (Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8)
>>> (logid:ce3ac740) Checking suitable pools for volume (Id, Type): (211,ROOT)
>>> 2021-10-27 17:59:22,115 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
>>> (Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8)
>>> (logid:ce3ac740) We need to allocate new storagepool for this volume
>>> 2021-10-27 17:59:22,116 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
>>> (Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8)
>>> (logid:ce3ac740) Calling StoragePoolAllocators to find suitable pools
>>> 2021-10-27 17:59:22,121 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
>>> (Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8)
>>> (logid:ce3ac740) System VMs will use shared storage for zone id=1
>>> 2021-10-27 17:59:22,121 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator]
>>> (Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8)
>>> (logid:ce3ac740) LocalStoragePoolAllocator trying to find storage pool to
>>> fit the vm
>>> 2021-10-27 17:59:22,121 DEBUG
>>> [o.a.c.s.a.ClusterScopeStoragePoolAllocator]
>>> (Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8)
>>> (logid:ce3ac740) ClusterScopeStoragePoolAllocator looking for storage pool
>>> 2021-10-27 17:59:22,121 DEBUG
>>> [o.a.c.s.a.ClusterScopeStoragePoolAllocator]
>>> (Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8)
>>> (logid:ce3ac740) Looking for pools in dc: 1  pod:1  cluster:1. Disabled
>>> pools will be ignored.
>>> 2021-10-27 17:59:22,122 DEBUG
>>> [o.a.c.s.a.ClusterScopeStoragePoolAllocator]
>>> (Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8)
>>> (logid:ce3ac740) Found pools matching tags: [Pool[1|PreSetup],
>>> Pool[2|PreSetup]]
>>> 2021-10-27 17:59:22,124 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator]
>>> (Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8)
>>> (logid:ce3ac740) Checking if storage pool is suitable, name: null ,poolId: 1
>>> 2021-10-27 17:59:22,124 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator]
>>> (Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8)
>>> (logid:ce3ac740) StoragePool is in avoid set, skipping this pool
>>> 2021-10-27 17:59:22,125 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator]
>>> (Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8)
>>> (logid:ce3ac740) Checking if storage pool is suitable, name: null ,poolId: 2
>>> 2021-10-27 17:59:22,125 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator]
>>> (Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8)
>>> (logid:ce3ac740) StoragePool is in avoid set, skipping this pool
>>> 2021-10-27 17:59:22,125 DEBUG
>>> [o.a.c.s.a.ClusterScopeStoragePoolAllocator]
>>> (Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8)
>>> (logid:ce3ac740) ClusterScopeStoragePoolAllocator returning 0 suitable
>>> storage pools
>>> 2021-10-27 17:59:22,125 DEBUG [o.a.c.s.a.ZoneWideStoragePoolAllocator]
>>> (Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8)
>>> (logid:ce3ac740) ZoneWideStoragePoolAllocator to find storage pool
>>> 2021-10-27 17:59:22,128 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
>>> (Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8)
>>> (logid:ce3ac740) No suitable pools found for volume: Vol[211|vm=206|ROOT]
>>> under cluster: 1
>>> 2021-10-27 17:59:22,128 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
>>> (Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8)
>>> (logid:ce3ac740) No suitable pools found
>>> 2021-10-27 17:59:22,128 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
>>> (Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8)
>>> (logid:ce3ac740) No suitable storagePools found under this Cluster: 1
>>>
>>> It says that no StoragePool is available.
>>> However, I have enough space (under the 0.85 threshold with an
>>> overprovisioning factor of 1.0), and I have enough CPU and RAM.
>>> I do not understand what is blocking the provisioning of the system VMs.
>>>
>>> Best regards
>>> Benoit
>>>
>>> On Wed, Oct 27, 2021 at 18:19, Andrija Panic <andrija.pa...@gmail.com>
>>> wrote:
>>>
>>>>  No suitable storagePools found under this Cluster: 1
>>>>
>>>> Can you check the mgmt log lines BEFORE this line above - there should
>>>> be a clear indication WHY no suitable storage pools are found (this is
>>>> the Primary Storage pool)
>>>>
>>>> Best,
>>>>
>>>> On Wed, 27 Oct 2021 at 18:04, benoit lair <kurushi4...@gmail.com>
>>>> wrote:
>>>>
>>>> > Hello guys,
>>>> >
>>>> > I have an important issue with secondary storage.
>>>> >
>>>> > I have 2 NFS secondary storages and an ACS mgmt server.
>>>> > I lost the system VM template (id 1) on both NFS secondary storage
>>>> > servers.
>>>> > The SSVM and CPVM are destroyed.
>>>> > The routing-1 template has been deleted from all SRs of the
>>>> > hypervisors (xcp-ng).
>>>> >
>>>> > I am trying to recover the ACS system template workflow.
>>>> >
>>>> > I have tried to reinstall the system VM template from the ACS mgmt
>>>> > server with:
>>>> >
>>>> >
>>>> > /usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
>>>> >   -m /mnt/secondary \
>>>> >   -u https://download.cloudstack.org/systemvm/4.15/systemvmtemplate-4.15.1-xen.vhd.bz2 \
>>>> >   -h xenserver -s <optional-management-server-secret-key> -F
>>>> >
>>>> > It recreated the directory tmpl/1/1 on NFS1, uploaded the VHD file,
>>>> > and created the template.properties file.
>>>> >
>>>> > I did the same on NFS2.
>>>> > In the ACS GUI, it says the template SystemVM Template (XenServer) is
>>>> > ready.
>>>> > On NFS, the VHD is present.
>>>> > But even after restarting the ACS mgmt server, it fails to restart
>>>> > the system VMs, with the following error in the mgmt log file:
>>>> >
>>>> > 2021-10-27 17:59:22,128 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
>>>> > (Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8)
>>>> > (logid:ce3ac740) No suitable storagePools found under this Cluster: 1
>>>> > 2021-10-27 17:59:22,129 DEBUG [c.c.a.t.Request]
>>>> > (Work-Job-Executor-94:ctx-58cb275b job-2553/job-2649 ctx-fa7b1ea6)
>>>> > (logid:02bb9549) Seq 1-8737827702028894444: Executing:  { Cmd ,
>>>> MgmtId:
>>>> > 161064792470736, via: 1(xcp-cluster1-01), Ver: v1, Flags: 100111,
>>>> >
>>>> >
>>>> [{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.
>>>> > cloudstack.storage.to
>>>> >
>>>> .TemplateObjectTO":{"path":"159e620a-575d-43a8-9a57-f3c7f57a1c8a","origUrl":"
>>>> >
>>>> >
>>>> https://download.cloudstack.org/systemvm/4.15/systemvmtemplate-4.15.1-xen.vhd.bz2
>>>> >
>>>> ","uuid":"a9151f22-f4bb-4f7a-983e-c8abd01f745b","id":"1","format":"VHD","accountId":"1","checksum":"{MD5}86373992740b1eca8aff8b08ebf3aea5","hvm":"false","displayText":"SystemVM
>>>> > Template
>>>> >
>>>> >
>>>> (XenServer)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"fbbf2bf0-ccc8-4df3-9794-c914f418a9d9","id":"2","poolType":"PreSetup","host":"localhost","path":"/fbbf2bf0-ccc8-4df3-9794-c914f418a9d9","port":"0","url":"PreSetup://localhost/fbbf2bf0-ccc8-4df3-9794-c914f418a9d9/?ROLE=Primary&STOREUUID=fbbf2bf0-ccc8-4df3-9794-c914f418a9d9","isManaged":"false"}},"name":"routing-1","size":"(2.44
>>>> > GB)
>>>> >
>>>> >
>>>> 2621440000","hypervisorType":"XenServer","bootable":"false","uniqueName":"routing-1","directDownload":"false","deployAsIs":"false"}},"destTO":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"edb85ea0-d786-44f3-901b-e530bb2e6030","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"fbbf2bf0-ccc8-4df3-9794-c914f418a9d9","id":"2","poolType":"PreSetup","host":"localhost","path":"/fbbf2bf0-ccc8-4df3-9794-c914f418a9d9","port":"0","url":"PreSetup://localhost/fbbf2bf0-ccc8-4df3-9794-c914f418a9d9/?ROLE=Primary&STOREUUID=fbbf2bf0-ccc8-4df3-9794-c914f418a9d9","isManaged":"false"}},"name":"ROOT-207","size":"(2.45
>>>> > GB)
>>>> >
>>>> >
>>>> 2626564608","volumeId":"212","vmName":"v-207-VM","accountId":"1","format":"VHD","provisioningType":"THIN","id":"212","deviceId":"0","hypervisorType":"XenServer","directDownload":"false","deployAsIs":"false"}},"executeInSequence":"true","options":{},"options2":{},"wait":"0","bypassHostMaintenance":"false"}}]
>>>> > }
>>>> > 2021-10-27 17:59:22,129 DEBUG [c.c.a.m.DirectAgentAttache]
>>>> > (DirectAgent-221:ctx-737e97d0) (logid:7a1a71eb) Seq
>>>> 1-8737827702028894444:
>>>> > Executing request
>>>> > 2021-10-27 17:59:22,132 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
>>>> > (Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8)
>>>> > (logid:ce3ac740) Could not find suitable Deployment Destination for
>>>> this VM
>>>> > under any clusters, returning.
>>>> > 2021-10-27 17:59:22,133 DEBUG [c.c.d.FirstFitPlanner]
>>>> > (Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8)
>>>> > (logid:ce3ac740) Searching all possible resources under this Zone: 1
>>>> > 2021-10-27 17:59:22,134 DEBUG [c.c.d.FirstFitPlanner]
>>>> > (Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8)
>>>> > (logid:ce3ac740) Listing clusters in order of aggregate capacity,
>>>> that have
>>>> > (at least one host with) enough CPU and RAM capacity under this Zone:
>>>> 1
>>>> > 2021-10-27 17:59:22,137 DEBUG [c.c.d.FirstFitPlanner]
>>>> > (Work-Job-Executor-93:ctx-30ef4f6b job-2552/job-2648 ctx-d1d9ade8)
>>>> > (logid:ce3ac740) Removing from the clusterId list these clusters from
>>>> avoid
>>>> > set: [1]
>>>> > 2021-10-27 17:59:22,138 DEBUG [c.c.h.x.r.XenServerStorageProcessor]
>>>> > (DirectAgent-221:ctx-737e97d0) (logid:02bb9549) Catch Exception
>>>> > com.xensource.xenapi.Types$UuidInvalid :VDI getByUuid for uuid:
>>>> > 159e620a-575d-43a8-9a57-f3c7f57a1c8a failed due to The uuid you
>>>> supplied
>>>> > was invalid.
>>>> > 2021-10-27 17:59:22,138 WARN  [c.c.h.x.r.XenServerStorageProcessor]
>>>> > (DirectAgent-221:ctx-737e97d0) (logid:02bb9549) Unable to create
>>>> volume;
>>>> >
>>>> >
>>>> Pool=volumeTO[uuid=edb85ea0-d786-44f3-901b-e530bb2e6030|path=null|datastore=PrimaryDataStoreTO[uuid=fbbf2bf0-ccc8-4df3-9794-c914f418a9d9|name=null|id=2|pooltype=PreSetup]];
>>>> > Disk:
>>>> > com.cloud.utils.exception.CloudRuntimeException: Catch Exception
>>>> > com.xensource.xenapi.Types$UuidInvalid :VDI getByUuid for uuid:
>>>> > 159e620a-575d-43a8-9a57-f3c7f57a1c8a failed due to The uuid you
>>>> supplied
>>>> > was invalid.
>>>> >         at
>>>> >
>>>> >
>>>> com.cloud.hypervisor.xenserver.resource.XenServerStorageProcessor.getVDIbyUuid(XenServerStorageProcessor.java:655)
>>>> >         at
>>>> >
>>>> >
>>>> com.cloud.hypervisor.xenserver.resource.XenServerStorageProcessor.cloneVolumeFromBaseTemplate(XenServerStorageProcessor.java:843)
>>>> >         at
>>>> >
>>>> >
>>>> com.cloud.storage.resource.StorageSubsystemCommandHandlerBase.execute(StorageSubsystemCommandHandlerBase.java:99)
>>>> >         at
>>>> >
>>>> >
>>>> com.cloud.storage.resource.StorageSubsystemCommandHandlerBase.handleStorageCommands(StorageSubsystemCommandHandlerBase.java:59)
>>>> >         at
>>>> >
>>>> >
>>>> com.cloud.hypervisor.xenserver.resource.wrapper.xenbase.CitrixStorageSubSystemCommandWrapper.execute(CitrixStorageSubSystemCommandWrapper.java:36)
>>>> >         at
>>>> >
>>>> >
>>>> com.cloud.hypervisor.xenserver.resource.wrapper.xenbase.CitrixStorageSubSystemCommandWrapper.execute(CitrixStorageSubSystemCommandWrapper.java:30)
>>>> >         at
>>>> >
>>>> >
>>>> com.cloud.hypervisor.xenserver.resource.wrapper.xenbase.CitrixRequestWrapper.execute(CitrixRequestWrapper.java:122)
>>>> >         at
>>>> >
>>>> >
>>>> com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.executeRequest(CitrixResourceBase.java:1763)
>>>> >         at
>>>> >
>>>> >
>>>> com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(DirectAgentAttache.java:315)
>>>> >         at
>>>> >
>>>> >
>>>> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:48)
>>>> >         at
>>>> >
>>>> >
>>>> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:55)
>>>> >         at
>>>> >
>>>> >
>>>> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:102)
>>>> >         at
>>>> >
>>>> >
>>>> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:52)
>>>> >         at
>>>> >
>>>> >
>>>> org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:45)
>>>> >         at
>>>> >
>>>> >
>>>> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>>>> >         at
>>>> > java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>>>> >         at
>>>> >
>>>> >
>>>> java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
>>>> >         at
>>>> >
>>>> >
>>>> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>>>> >         at
>>>> >
>>>> >
>>>> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>>>> >         at java.base/java.lang.Thread.run(Thread.java:829)
>>>> > Caused by: The uuid you supplied was invalid.
>>>> >         at com.xensource.xenapi.Types.checkResponse(Types.java:1491)
>>>> >
>>>> >
>>>> > Have you got a tip to get my system VMs working again?
>>>> >
>>>> > Thanks a lot
>>>> >
>>>> > Benoit
>>>> >
>>>>
>>>>
>>>> --
>>>>
>>>> Andrija Panić
>>>>
>>>
