Sometimes, when I have bad luck spinning up VMs from the web UI, I
force cloudmonkey to create the VMs for me. It always works.
To create VMs using cloudmonkey, run the command below with all your
options filled in:
deploy virtualmachine domainid= zoneid= displayname= name= templateid=
serviceofferingid= hostid= projectid= ipaddress=
cloudmonkey > !for i in {317..326} ; do cloudmonkey deploy virtualmachine
domainid=ca46xxxxxxxx7-11e3-9556-9a21ee1e2575
zoneid=52xxxxxx0dc-b9ce-c907750e0d61 displayname=hadoopvm-p$i
name=hadoop-p$i templateid=0a73a869-3aa6-4d69-94c7-4e9cd9231b73
serviceofferingid=94798ce4-45cc-429b-85f3-b168fadac6bc
networkids=85f3780a-1c26-4f76-9680-26786da626b9
hostid=17b841d3-55c9-4f28-a94e-da20486881a8
projectid=5dc4f1b8-9c14-47d4-8d0a-6dba9e6d6825 ; done
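The same loop can be written as a standalone dry-run script (a sketch: it only prints the ten deploy commands rather than running them, so you can review them first; the UUIDs are copied from the example above, some partly redacted, and must be replaced with values from your own zone/template/offering listings):

```shell
#!/usr/bin/env bash
# Build the ten "deploy virtualmachine" commands (hadoop-p317..hadoop-p326)
# and print them instead of executing them. Replace every ID below with
# your own; the redacted ones are kept as in the original example.
DOMAIN_ID="ca46xxxxxxxx7-11e3-9556-9a21ee1e2575"
ZONE_ID="52xxxxxx0dc-b9ce-c907750e0d61"
TEMPLATE_ID="0a73a869-3aa6-4d69-94c7-4e9cd9231b73"
OFFERING_ID="94798ce4-45cc-429b-85f3-b168fadac6bc"
NETWORK_ID="85f3780a-1c26-4f76-9680-26786da626b9"
HOST_ID="17b841d3-55c9-4f28-a94e-da20486881a8"
PROJECT_ID="5dc4f1b8-9c14-47d4-8d0a-6dba9e6d6825"

cmds=()
for i in {317..326}; do
  cmds+=("cloudmonkey deploy virtualmachine domainid=$DOMAIN_ID zoneid=$ZONE_ID displayname=hadoopvm-p$i name=hadoop-p$i templateid=$TEMPLATE_ID serviceofferingid=$OFFERING_ID networkids=$NETWORK_ID hostid=$HOST_ID projectid=$PROJECT_ID")
done

# Dry run: show the commands. Pipe to "sh" (or drop the printf and call
# them directly) once the IDs are correct.
printf '%s\n' "${cmds[@]}"
```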
Thanks,
Prashant
On Thu, Aug 27, 2015 at 12:00 PM, Prashant s <[email protected]> wrote:
> i always use cloudmonkey to spin vm on a particular host
>
> cloudmonkey > !for i in {317..326} ; do cloudmonkey deploy virtualmachine
> domainid=ca46xxxxxxxx7-11e3-9556-9a21ee1e2575
> zoneid=52xxxxxx0dc-b9ce-c907750e0d61 displayname=hadoopvm-p$i
> name=hadoop-p$i templateid=0a73a869-3aa6-4d69-94c7-4e9cd9231b73
> serviceofferingid=94798ce4-45cc-429b-85f3-b168fadac6bc
> networkids=85f3780a-1c26-4f76-9680-26786da626b9
> hostid=17b841d3-55c9-4f28-a94e-da20486881a8
> projectid=5dc4f1b8-9c14-47d4-8d0a-6dba9e6d6825 ; done
>
>
> cloudmonkey deploy virtualmachine domainid= zoneid= displayname= name=
> templateid= serviceofferingid= hostid= projectid= ipaddress=
>
>
> Thanks,
> Prashant
>
> On Thu, Aug 27, 2015 at 4:01 AM, Martin Emrich <[email protected]>
> wrote:
>
>> Hi!
>>
>> I still have this problem. We added an additional server to this cluster
>> last week, but it is not being used. ACS tries to start the VM on the first
>> (full) server and aborts, while the new empty server is being ignored.
>> Now I'll migrate some VMs manually to the new server, but I assume that's
>> not the way it is meant to be...
>>
>> Ciao
>>
>> Martin
>>
>> -----Original Message-----
>> From: Somesh Naidu [mailto:[email protected]]
>> Sent: Monday, July 27, 2015 18:40
>> To: [email protected]
>> Subject: RE: Deployment failed on XenServer due to capacity miscalculation
>>
>> This is mostly due to CloudStack miscalculating the XenServer memory
>> overhead. However, the VM launch is expected to be retried on the other
>> available hosts in the cluster.
>>
>> Regards,
>> Somesh
>>
>>
>> -----Original Message-----
>> From: Martin Emrich [mailto:[email protected]]
>> Sent: Monday, July 27, 2015 9:55 AM
>> To: [email protected]
>> Subject: RE: Deployment failed on XenServer due to capacity miscalculation
>>
>> Hi!
>>
>> (sorry for the delay, was on vacation).
>>
>> I have never heard of XenServer having this limitation, and have never
>> experienced it either (we have used XenServer without ACS heavily for
>> several years, and I never ran into this issue). I also found nothing
>> conclusive... can you provide some documentation on this limitation?
>>
>> I'll try to evacuate and reboot the host, that might "reset" the stats
>> and help.
>>
>> Thanks,
>>
>> Martin
>>
>>
>> -----Original Message-----
>> From: Stephan Seitz [mailto:[email protected]]
>> Sent: Sunday, July 12, 2015 22:52
>> To: [email protected]
>> Subject: Re: Deployment failed on XenServer due to capacity miscalculation
>>
>> Hi there,
>>
>> despite not having read the whole thread, I'd assume there's simply no
>> single memory segment of the requested size available on your particular
>> XenServer.
>> Just keep in mind that Xen partitions memory and, after a long run, may
>> not be able to assign a contiguous block, even if the sum of all
>> segmented blocks is greater.
>> How reorganization is managed depends on the version (and, if you're
>> brave, on different tmem settings).
>> Long story short: put the particular host into maintenance, reboot it,
>> and bring it back into your ACS.
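Stephan's point can be sketched with a toy shell model (illustrative only, not Xen's actual allocator, and the segment sizes are made up): what matters for placing a VM is the largest contiguous block, not the total free memory.

```shell
#!/usr/bin/env bash
# Toy fragmentation model: hypothetical free memory segments (in MB) left
# on a host after a long run of VM creates and destroys.
free_segments_mb=(512 768 1024 256)

total=0
largest=0
for seg in "${free_segments_mb[@]}"; do
  total=$(( total + seg ))
  if (( seg > largest )); then
    largest=$seg
  fi
done

# A 2048 MB VM would fail here even though 2560 MB is "free" in total,
# because no single contiguous block is big enough.
echo "total free: ${total} MB, largest contiguous block: ${largest} MB"
```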
>>
>> cheers,
>>
>> Stephan
>>
>> On Friday, 10.07.2015 at 18:02 +0200, Martin Emrich wrote:
>> > Hi!
>> >
>> > On 10.07.2015 at 16:42, Timothy Lothering wrote:
>> > > Hi Martin,
>> > >
>> > > From the logs it seems that ACS has found that the host has
>> sufficient memory capacity, but when it actually deploys the VM, there is
>> not enough. It could be a bug whereby the system technically has enough
>> capacity, but during the provisioning stage it suddenly does not.
>> > >
>> > > errorInfo: [HOST_NOT_ENOUGH_FREE_MEMORY, 4447010816, 1744826368]
>> > >
>> >
>> > I read this message as [..., Requested Memory, Available Memory ] on
>> > the XenServer.
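Converting the two errorInfo values from bytes supports that reading (a quick shell check using awk; 1 GiB = 1073741824 bytes). The "available" figure lands close to the 1.7 GB Martin mentions below:

```shell
#!/usr/bin/env bash
# The two numbers from HOST_NOT_ENOUGH_FREE_MEMORY, read as
# [requested, available], converted from bytes to GiB.
requested=4447010816
available=1744826368
req_gib=$(awk "BEGIN { printf \"%.2f\", $requested / 1073741824 }")
avail_gib=$(awk "BEGIN { printf \"%.2f\", $available / 1073741824 }")
echo "requested: ${req_gib} GiB, available: ${avail_gib} GiB"
```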
>> >
>> > > From the logs it seems you are also using Local Storage (vs Shared),
>> so initially it finds that host 335 has enough memory (albeit ~7MB left)
>> and tries to deploy. The deploy fails and it tries to redeploy the VM using
>> Host 335's storage, which is inaccessible.
>> > >
>> > > 1. Have you tried to deploy a 2GB memory Machine on this host?
>> >
>> > Yes, that won't work either, as the XenServer had just 1.7 GB free.
>> > But I could create a 512MB VM as expected.
>> >
>> > Now ACS thinks the host has 3.507 GB free, while XenServer reports
>> > 1.2 GB free. So the gap between what is really free and what ACS
>> > thinks is free remains the same.
>> >
>> > > 2. Do both hosts have the same CPU and memory configuration?
>> >
>> > yes, absolutely identical.
>> >
>> > > 3. Try the following:
>> > >
>> > > a. Increase cluster.memory.allocated.capacity.disablethreshold
>> > > from 0.85 to 0.90, restart the MS, and test a redeploy.
>> > > b. Decrease cluster.memory.allocated.capacity.disablethreshold
>> > > from 0.85 to 0.80, restart the MS, and test a redeploy.
>> > >
>> > > The above two tests should give your host a bit more manoeuvrability;
>> > > see what happens in the MS logs.
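For reference, that setting can also be changed from cloudmonkey via CloudStack's updateConfiguration API (a sketch, shown as a dry-run that only prints the command; a management-server restart is still required afterwards):

```shell
#!/usr/bin/env bash
# Dry-run sketch: print the cloudmonkey command that raises the cluster
# memory threshold from 0.85 to 0.90. Run the printed command for real,
# then restart the management server.
setting="cluster.memory.allocated.capacity.disablethreshold"
cmd="cloudmonkey update configuration name=${setting} value=0.90"
echo "$cmd"
```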
>> >
>> > No effect, as these options refer to a complete cluster, not a single
>> > host. After changing them, ACS still tries to deploy a new 2GB VM on
>> > the full host.
>> >
>> > I think the key is to somehow force ACS to _ask_ XenServer how much
>> > memory is really free, instead of doing its own calculations.
>> >
>> > Ciao
>> >
>> > Martin
>>
>>
>