#scopecreep Paul ;)

But I think the problem you identify is related to the selection of the
deployment planner strategy, defined globally or at the compute
offering level. You can select how CloudStack chooses the host on which
to deploy a new VM.
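
For reference, a minimal sketch of both options, assuming the
third-party 'cs' Python client (pip install cs); the endpoint, keys,
and offering values below are placeholders:

from cs import CloudStack

api = CloudStack(endpoint="https://cloud.example.com/client/api",
                 key="API_KEY", secret="SECRET_KEY")

# Globally: the default planner for every new VM deployment.
api.updateConfiguration(name="vm.deployment.planner",
                        value="UserDispersingPlanner")

# Per compute offering: this offering overrides the global choice.
api.createServiceOffering(name="dispersed-2c4g",
                          displaytext="2 vCPU / 4 GB, user-dispersing",
                          cpunumber=2, cpuspeed=2000, memory=4096,
                          deploymentplanner="UserDispersingPlanner")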

But even then, as Marcus stated, if you add a node to a full cluster,
all new VMs will be created on that node.

So if nobody has work in progress around post-deployment orchestration,
I'll work on the feature spec with the university, with the objective
of easing hypervisor maintenance and improving workload distribution.

I would not expect a PR before ~6 months, but I hope to have some
action around it very soon.



On Fri, Sep 7, 2018, 09:05 Marc-Andre Jutras <mar...@marcuspocus.com>
wrote:

> I agree, it affects all hypervisors... I basically had to migrate a
> bunch of VMs manually to re-balance a cluster after an upgrade, or
> even after re-adding a new host to a cluster.
>
> Personally, I think CloudStack should be able to handle this
> <auto-rebalancing> of resources, for example: a piece of code
> somewhere that can run every hour or on demand to re-calculate and
> re-balance resources across hosts within a cluster...
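>
> Roughly what that could look like -- a sketch only, not existing
> CloudStack code. It assumes the 'cs' Python client, and that
> listHosts reports cpuallocated as a percentage string (which may vary
> by version):
>
> from cs import CloudStack
>
> api = CloudStack(endpoint="https://cloud.example.com/client/api",
>                  key="API_KEY", secret="SECRET_KEY")
>
> def rebalance_cluster(cluster_id, max_spread=0.15):
>     # rank routing hosts by allocated CPU (reported as e.g. "73.5%")
>     hosts = api.listHosts(clusterid=cluster_id,
>                           type="Routing").get("host", [])
>     load = {h["id"]: float(h.get("cpuallocated", "0%").rstrip("%")) / 100
>             for h in hosts}
>     if not load:
>         return
>     busiest = max(load, key=load.get)
>     idlest = min(load, key=load.get)
>     if load[busiest] - load[idlest] < max_spread:
>         return  # balanced enough, nothing to do
>     vms = api.listVirtualMachines(hostid=busiest,
>                                   state="Running").get("virtualmachine", [])
>     if vms:
>         # one live migration per pass; run hourly or on demand
>         api.migrateVirtualMachine(virtualmachineid=vms[0]["id"],
>                                   hostid=idlest)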
>
> Even the deployment planner is not really relevant here: that process
> basically balances new VM creation across the different clusters of a
> pod, not between hosts within a cluster, and it also becomes a
> nightmare when you start doing cross-cluster migrations...
>
> Sum of it all: the deployment planner strategies should be re-worked a bit...
>
> +1 on #scopecreep ;)
>
> Marcus ( mjut...@cloudops.com )
>
> On 2018-09-07 6:01 AM, Paul Angus wrote:
> > I think that this affects all hypervisors, as CloudStack's
> > deployment strategies are generally sub-optimal, to say the least.
> > From what our devs have told me, a large part of the problem is that
> > capacity/usage and tag-based suitability are calculated
> > independently by multiple parts of the code; there is no central
> > method that would give a consistent answer.
> >
> > In Trillian we take a micro-management approach and have a custom
> > module which returns the least used cluster, the least used host, or
> > the least used host in a given cluster.  With that info we place VMs
> > on specific hosts - keeping virtualised hypervisors in the same
> > (least used) cluster so that processor types match, and placing all
> > other VMs on the least used hosts.
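> >
> > (For illustration, a sketch of that "least used" lookup -- not the
> > actual Trillian module. It assumes the 'cs' Python client and that
> > listHosts reports memoryallocated/memorytotal in bytes:)
> >
> > from cs import CloudStack
> >
> > api = CloudStack(endpoint="https://cloud.example.com/client/api",
> >                  key="API_KEY", secret="SECRET_KEY")
> >
> > def least_used_host(cluster_id=None):
> >     # optionally restrict the search to one cluster
> >     params = {"type": "Routing"}
> >     if cluster_id:
> >         params["clusterid"] = cluster_id
> >     hosts = api.listHosts(**params).get("host", [])
> >     if not hosts:
> >         return None
> >     # lowest fraction of memory already allocated = least used
> >     return min(hosts,
> >                key=lambda h: h.get("memoryallocated", 0) /
> >                              max(h.get("memorytotal", 1), 1))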
> >
> > For cross-cluster migrations (VMs and/or storage) I think that most
> > of the time people want to move from cluster A to the least used
> > (host/storage pool) in cluster B - making them choose which
> > host/pool is actually unhelpful.
> >
> > #scopecreep - sorry Pierre-Luc
> >
> > Kind regards,
> >
> > Paul Angus
> >
> > paul.an...@shapeblue.com
> > www.shapeblue.com
> > Amadeus House, Floral Street, London WC2E 9DP, UK
> > @shapeblue
> >
> >
> >
> >
> > -----Original Message-----
> > From: Will Stevens <wstev...@cloudops.com>
> > Sent: 06 September 2018 19:45
> > To: dev@cloudstack.apache.org; Marc-Andre Jutras <mjut...@cloudops.com>
> > Subject: Re: [DISCUSS] deployment planner improvement
> >
> > If I remember correctly, we see similar issues on VMware.  Marcus,
> > have you seen similar behavior on VMware?  I think I remember us
> > having to manually vMotion a lot of VMs very often...
> >
> > *Will Stevens*
> > Chief Technology Officer
> > c 514.826.0190
> >
> > <https://goo.gl/NYZ8KK>
> >
> >
> > On Thu, Sep 6, 2018 at 2:34 PM Pierre-Luc Dion <pd...@cloudops.com>
> > wrote:
> >
> >> Hi,
> >>
> >> I'm working with a university in Montreal and we are looking at
> >> working together to improve the deployment planner, mainly for post
> >> VM.CREATE tasks.
> >> What we have observed with CloudStack, in our case with XenServer,
> >> is that over time a cluster becomes unbalanced in terms of workload:
> >> VM HA moves VMs all over the cluster, which causes hotspots inside
> >> the cluster.
> >> Also, when performing maintenance, XenMotion spreads VMs across the
> >> cluster but does not consider host usage, so at the end of a
> >> maintenance window manual operations are required to repopulate VMs
> >> on the last host updated.  OS preference is not taken into account
> >> except at VM.CREATE.
> >>
> >> So,
> >> I'd like to work on improving VM dispersion during and after
> >> outages and maintenance, and when cluster resources are added or
> >> removed.
> >>
> >> Should you have any more requirements, let us know; we will
> >> document a feature spec in the wiki, which I believe is still a
> >> requirement?
> >>
> >> Do KVM deployments have similar issues over time?
> >>
> >> I don't think it would make sense for CloudStack to automatically
> >> make decisions about moving VMs; for now it would create a report
> >> of recommended actions and provide the steps to perform them. TBD.
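> >>
> >> For illustration, a sketch of that "recommend, don't act" report --
> >> same assumptions as the other sketches in this thread ('cs' client,
> >> cpuallocated as a percentage string):
> >>
> >> from cs import CloudStack
> >>
> >> api = CloudStack(endpoint="https://cloud.example.com/client/api",
> >>                  key="API_KEY", secret="SECRET_KEY")
> >>
> >> def recommend_moves(cluster_id):
> >>     hosts = api.listHosts(clusterid=cluster_id,
> >>                           type="Routing").get("host", [])
> >>     ranked = sorted(hosts, key=lambda h:
> >>                     float(h.get("cpuallocated", "0%").rstrip("%")))
> >>     if len(ranked) < 2:
> >>         return
> >>     idlest, busiest = ranked[0], ranked[-1]
> >>     vms = api.listVirtualMachines(hostid=busiest["id"],
> >>                                   state="Running").get("virtualmachine", [])
> >>     for vm in vms:
> >>         # report only: the operator decides what actually moves
> >>         print("suggest: migrate %s from %s to %s"
> >>               % (vm["name"], busiest["name"], idlest["name"]))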
> >>
> >> Cheers,
> >>
> >> PL
> >>
>
>
