Re: Kubernetes cloud provider

2018-09-07 Thread Sebastien Goasguen
Right,

So you need to engage in this PR:
https://github.com/kubernetes/kubernetes/pull/68199

or send me an email: seb...@cloudtank.ch


On Thu, Sep 6, 2018 at 9:20 AM Wei ZHOU  wrote:

> Hi Sebastien,
>
> We (as Leaseweb) are using the CloudStack plugin in our Kubernetes clusters
> [1][2].
> The main usages are:
> (1) CloudStack will acquire a public IP and create LB rules when we create
> a LoadBalancer service in Kubernetes.
> (2) CloudStack will create/remove load balancer rules when we scale out/in
> a cluster.
>
> If the plugin is removed from Kubernetes because nobody works on making it
> external, we will have to maintain IPs and LB rules manually.
>
>
> [1]
> https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/
> [2]
> https://github.com/kubernetes/kubernetes/tree/master/pkg/cloudprovider/providers/cloudstack
>
> Kind regards,
> Wei Zhou
>
> Sebastien Goasguen wrote on Tue, Sep 4, 2018 at 8:18 AM:
>
>> Hi all, it has been a while.
>>
>> Please see this:
>>
>> https://github.com/kubernetes/kubernetes/pull/68199
>>
>> I don't know if anyone is using it, but if we don't reply, the CloudStack
>> controller in Kubernetes might get removed from upstream.
>>
>> -seb
>>
>
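For context, the behavior Wei describes corresponds to the load-balancer
hooks of the Kubernetes cloud-provider contract, and "making it external"
means re-implementing those same hooks in an out-of-tree controller. A
minimal sketch of that interface, roughly as it stood in the
k8s.io/kubernetes pkg/cloudprovider package of that era (the PR above is
the authoritative reference):

    package cloudstackprovider

    import (
        "context"

        v1 "k8s.io/api/core/v1"
    )

    // LoadBalancer is the contract a cloud provider implements so that a
    // Kubernetes Service of type LoadBalancer is backed by real LB rules.
    type LoadBalancer interface {
        // GetLoadBalancer returns the current status of the load
        // balancer for the service, and whether it exists at all.
        GetLoadBalancer(ctx context.Context, clusterName string, service *v1.Service) (status *v1.LoadBalancerStatus, exists bool, err error)
        // EnsureLoadBalancer acquires the public IP and creates the LB
        // rules fronting the given nodes (Wei's usage (1)).
        EnsureLoadBalancer(ctx context.Context, clusterName string, service *v1.Service, nodes []*v1.Node) (*v1.LoadBalancerStatus, error)
        // UpdateLoadBalancer adjusts the LB rules when the node set
        // changes, e.g. on scale out/in (Wei's usage (2)).
        UpdateLoadBalancer(ctx context.Context, clusterName string, service *v1.Service, nodes []*v1.Node) error
        // EnsureLoadBalancerDeleted releases the rules and the public
        // IP; it must be idempotent.
        EnsureLoadBalancerDeleted(ctx context.Context, clusterName string, service *v1.Service) error
    }

Creating a Service of type LoadBalancer drives EnsureLoadBalancer, and
scaling a cluster drives UpdateLoadBalancer - exactly the IP and LB rule
maintenance that would otherwise fall to operators.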


Re: Kubernetes cloud provider

2018-09-07 Thread Rohit Yadav
Let me engage on the PR; apart from what Wei has shared, we may also need to
add some additional support/features in the future.

- Rohit

On Thu, Sep 6, 2018 at 12:50 PM, Wei ZHOU  wrote:

> Hi Sebastien,
>
> We (as Leaseweb) are using the CloudStack plugin in our Kubernetes clusters
> [1][2].
> The main usages are:
> (1) CloudStack will acquire a public IP and create LB rules when we create
> a LoadBalancer service in Kubernetes.
> (2) CloudStack will create/remove load balancer rules when we scale out/in
> a cluster.
>
> If the plugin is removed from Kubernetes because nobody works on making it
> external, we will have to maintain IPs and LB rules manually.
>
>
> [1]
> https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/
> [2]
> https://github.com/kubernetes/kubernetes/tree/master/pkg/cloudprovider/providers/cloudstack
>
> Kind regards,
> Wei Zhou
>
> Sebastien Goasguen wrote on Tue, Sep 4, 2018 at 8:18 AM:
>
>> Hi all, it has been a while.
>>
>> Please see this:
>>
>> https://github.com/kubernetes/kubernetes/pull/68199
>>
>> I don't know if anyone is using it, but if we don't reply, the CloudStack
>> controller in Kubernetes might get removed from upstream.
>>
>> -seb
>>
>


RE: [DISCUSS] deployment planner improvement

2018-09-07 Thread Paul Angus
I think that this affects all hypervisors, as CloudStack's deployment strategies
are generally sub-optimal, to say the least.
From what our devs have told me, a large part of the problem is that
capacity/usage and suitability due to tags are calculated by multiple parts of
the code independently; there is no central method which will give a
consistent answer.

In Trillian we take a micro-management approach and have a custom module which
will return the least used cluster, the least used host, or the least used host
in a given cluster.  With that info we place VMs on specific hosts - keeping
virtualised hypervisors in the same cluster (least used) so that processor
types match, and all other VMs on the least used hosts.
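A rough sketch of that kind of central "least used" lookup, assuming each
host exposes a used-capacity fraction (all names here are illustrative,
not Trillian's actual module):

    package planner

    import "sort"

    // Host is an illustrative record; Usage is used/total capacity in [0,1].
    type Host struct {
        Name    string
        Cluster string
        Usage   float64
    }

    // leastUsedHost returns the least used host, optionally restricted to
    // one cluster (empty string means any cluster). A single method like
    // this, consulted by every code path, is the "central method" that
    // would give a consistent answer.
    func leastUsedHost(hosts []Host, cluster string) (Host, bool) {
        var candidates []Host
        for _, h := range hosts {
            if cluster == "" || h.Cluster == cluster {
                candidates = append(candidates, h)
            }
        }
        if len(candidates) == 0 {
            return Host{}, false
        }
        sort.Slice(candidates, func(i, j int) bool {
            return candidates[i].Usage < candidates[j].Usage
        })
        return candidates[0], true
    }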

For cross-cluster migrations (VMs and/or storage) I think that most times
people want to move from cluster A to the least used host/storage pool in
cluster B - making them choose a specific host/pool is actually unhelpful.

#scopecreep - sorry Pierre-Luc

Kind regards,

Paul Angus

paul.an...@shapeblue.com 
www.shapeblue.com
Amadeus House, Floral Street, London  WC2E 9DPUK
@shapeblue
  
 


-Original Message-
From: Will Stevens  
Sent: 06 September 2018 19:45
To: dev@cloudstack.apache.org; Marc-Andre Jutras 
Subject: Re: [DISCUSS] deployment planner improvement

If I remember correctly, we see similar issues on VMware.  Marcus, have you 
seen similar behavior on VMware?  I think I remember us having to manually 
vMotion a lot of VMs very often...

*Will Stevens*
Chief Technology Officer
c 514.826.0190




On Thu, Sep 6, 2018 at 2:34 PM Pierre-Luc Dion  wrote:

> Hi,
>
> I'm working with a university in Montreal and we are looking at
> working together to improve the deployment planner, mainly for post
> VM.CREATE tasks.
> What we observed with CloudStack, in our case with XenServer, is that
> over time a cluster will become unbalanced in terms of workload; VM HA
> will move VMs all over the cluster, which causes hotspots inside a cluster.
> Also, when performing maintenance, XenMotion of VMs spreads them in the
> cluster but does not consider host usage, and at the end of a
> maintenance it requires manual operations to repopulate VMs on the last
> host updated.  OS preference is not taken into account except for VM.CREATE.
>
> So,
> I'd like to work on improving VM dispersion during and after outages
> and maintenance, and when cluster resources are added or removed.
>
> If you have any more requirements, we will document a feature spec
> in the wiki, which I believe is still a requirement.
>
> Does KVM have similar issues over time?
>
> I don't think it would make sense for CloudStack to automatically take
> decisions on moving VMs, but for now to create a report of recommended
> actions and provide steps to do them. TBD.
>
> Cheers,
>
> PL
>


Re: [DISCUSS] Removing IAM services

2018-09-07 Thread Rohit Yadav
The (dynamic) roles feature is separate from the incomplete IAM feature that is 
being removed.


- Rohit






From: Pierre-Luc Dion 
Sent: Friday, September 7, 2018 12:05:43 AM
To: us...@cloudstack.apache.org
Cc: dev
Subject: Re: [DISCUSS] Removing IAM services

No chance it's in use with the Roles stuff?




On Wed, Aug 22, 2018 at 8:19 AM Gabriel Beims Bräscher 
wrote:

> I am +1 on removing it as well.
>
> On Wed, Aug 22, 2018 at 07:49, Rohit Yadav wrote:
>
> > +1, it's about time
> >
> >
> >
> > - Rohit
> >
> > 
> >
> >
> >
> > 
> > From: Daan Hoogland 
> > Sent: Tuesday, August 21, 2018 9:02:58 PM
> > To: dev
> > Cc: users
> > Subject: [DISCUSS] Removing IAM services
> >
> > I'm +1 on this; it has not been developed for the longest while. (Changed
> > the title to indicate _all_ people are supposed to have an opinion.)
> >
> > On Tue, Aug 21, 2018 at 4:32 PM, Khosrow Moossavi <
> kmooss...@cloudops.com>
> > wrote:
> >
> > > Hello Community
> > >
> > > Following up after PR #2613 [1] to clean up POMs across the whole
> > > repository, now in a new
> > > PR #2817 [2] we are going to remove the services/iam projects, which seem
> > > to be disabled and
> > > not maintained since May 2014.
> > >
> > > Moving forward with the PR, the corresponding tables will be deleted from
> > > the database too.
> > >
> > > Please let us know if you have any questions, comments, or concerns about
> > > this removal, or if
> > > you are willing to revive the project in some shape or form.
> > >
> > > [1]: https://github.com/apache/cloudstack/pull/2613
> > > [2]: https://github.com/apache/cloudstack/pull/2817
> > >
> > > Khosrow Moossavi
> > >
> > > Cloud Infrastructure Developer
> > >
> > > 
> > >
> >
> >
> >
> > --
> > Daan
> >
> > rohit.ya...@shapeblue.com
> > www.shapeblue.com
> > Amadeus House, Floral Street, London  WC2E 9DPUK
> > @shapeblue
> >
> >
> >
> >
>

rohit.ya...@shapeblue.com 
www.shapeblue.com
Amadeus House, Floral Street, London  WC2E 9DPUK
@shapeblue
  
 



[GitHub] rhtyd commented on issue #3: Libvirt hook script

2018-09-07 Thread GitBox
rhtyd commented on issue #3: Libvirt hook script
URL: 
https://github.com/apache/cloudstack-documentation/pull/3#issuecomment-419412543
 
 
   @PaulAngus can you review and merge this?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


Re: [DISCUSS] deployment planner improvement

2018-09-07 Thread Marc-Andre Jutras
I agree, it is affecting all hypervisors... I basically had to migrate a
bunch of VMs manually to re-balance a cluster after an upgrade, or even
after re-adding a new host to a cluster.


Personally, I think CloudStack should be able to handle this balancing
of resources, for example by having a piece of code somewhere that
can run every hour, or on demand, to re-calculate and re-balance
resources across hosts within a cluster...
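A sketch of what such a piece of code might look like: an hourly (or
on-demand) pass that compares the hottest and coldest host in a cluster
and recommends a migration rather than performing one. The usage metric
and threshold are assumptions for illustration:

    package rebalance

    import (
        "fmt"
        "time"
    )

    // Host is an illustrative record; Usage is used/total capacity in [0,1].
    type Host struct {
        Name  string
        Usage float64
    }

    // recommend suggests a migration when the spread between the hottest
    // and coldest host exceeds threshold. It only reports; it does not
    // move anything, in line with recommending actions rather than having
    // CloudStack migrate VMs on its own.
    func recommend(hosts []Host, threshold float64) (string, bool) {
        if len(hosts) < 2 {
            return "", false
        }
        hot, cold := hosts[0], hosts[0]
        for _, h := range hosts[1:] {
            if h.Usage > hot.Usage {
                hot = h
            }
            if h.Usage < cold.Usage {
                cold = h
            }
        }
        if hot.Usage-cold.Usage < threshold {
            return "", false
        }
        return fmt.Sprintf("migrate one VM from %s (%.0f%% used) to %s (%.0f%% used)",
            hot.Name, hot.Usage*100, cold.Name, cold.Usage*100), true
    }

    // RunHourly re-evaluates the cluster every hour; fetch supplies the
    // current per-host usage.
    func RunHourly(fetch func() []Host, threshold float64) {
        for range time.Tick(time.Hour) {
            if action, ok := recommend(fetch(), threshold); ok {
                fmt.Println("recommended:", action)
            }
        }
    }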


Even the deployment planner is not really relevant here: this process
will basically balance new VM creation through different clusters of a
POD, not between hosts within a cluster, and it also becomes a
nightmare when you start to do cross-cluster migrations...


Sum of it all: the deployment planner strategies should be re-worked a bit...

+1 on #scopecreep ;)

Marcus ( mjut...@cloudops.com )

On 2018-09-07 6:01 AM, Paul Angus wrote:

I think that this affects all hypervisors, as CloudStack's deployment strategies
are generally sub-optimal, to say the least.
From what our devs have told me, a large part of the problem is that
capacity/usage and suitability due to tags are calculated by multiple parts of
the code independently; there is no central method which will give a
consistent answer.

In Trillian we take a micro-management approach and have a custom module which
will return the least used cluster, the least used host, or the least used host
in a given cluster.  With that info we place VMs on specific hosts - keeping
virtualised hypervisors in the same cluster (least used) so that processor
types match, and all other VMs on the least used hosts.

For cross-cluster migrations (VMs and/or storage) I think that most times
people want to move from cluster A to the least used host/storage pool in
cluster B - making them choose a specific host/pool is actually unhelpful.

#scopecreep - sorry Pierre-Luc

Kind regards,

Paul Angus

paul.an...@shapeblue.com
www.shapeblue.com
Amadeus House, Floral Street, London  WC2E 9DPUK
@shapeblue
   
  



-Original Message-
From: Will Stevens 
Sent: 06 September 2018 19:45
To: dev@cloudstack.apache.org; Marc-Andre Jutras 
Subject: Re: [DISCUSS] deployment planner improvement

If I remember correctly, we see similar issues on VMware.  Marcus, have you 
seen similar behavior on VMware?  I think I remember us having to manually 
vMotion a lot of VMs very often...

*Will Stevens*
Chief Technology Officer
c 514.826.0190




On Thu, Sep 6, 2018 at 2:34 PM Pierre-Luc Dion  wrote:


Hi,

I'm working with a university in Montreal and we are looking at
working together to improve the deployment planner, mainly for post
VM.CREATE tasks.
What we observed with CloudStack, in our case with XenServer, is that
over time a cluster will become unbalanced in terms of workload; VM HA
will move VMs all over the cluster, which causes hotspots inside a cluster.
Also, when performing maintenance, XenMotion of VMs spreads them in the
cluster but does not consider host usage, and at the end of a
maintenance it requires manual operations to repopulate VMs on the last
host updated.  OS preference is not taken into account except for VM.CREATE.

So,
I'd like to work on improving VM dispersion during and after outages
and maintenance, and when cluster resources are added or removed.

If you have any more requirements, we will document a feature spec
in the wiki, which I believe is still a requirement.

Does KVM have similar issues over time?

I don't think it would make sense for CloudStack to automatically take
decisions on moving VMs, but for now to create a report of recommended
actions and provide steps to do them. TBD.

Cheers,

PL





Re: [DISCUSS] deployment planner improvement

2018-09-07 Thread Pierre-Luc Dion
#scopecreep Paul ;)

But I think the problem you identified is related to the selection of the
deployment planner strategy, defined globally or at the compute offering.
You can select how CloudStack chooses the host on which to deploy a new VM.
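For reference, that selection lives in two places: the
vm.allocation.algorithm global setting and the deploymentplanner
parameter of a compute offering. From memory (verify the names and
accepted values against your version's API docs), via CloudMonkey it
looks roughly like:

    update configuration name=vm.allocation.algorithm value=userdispersing

    create serviceoffering name=dispersed displaytext="Dispersed placement" \
        cpunumber=2 cpuspeed=1000 memory=2048 \
        deploymentplanner=UserDispersingPlanner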

But even then, as Marcus stated, if you add a node to a full cluster, all
new VMs will be created on that node.

So if nobody has WIP around post-deployment orchestration, I'll work on
the feature spec with the university, with the objectives of easing
hypervisor maintenance and better distributing workload.

I would not expect a PR before ~6 months, but we will have some actions
around it very soon, I hope.



On Fri, Sep 7, 2018 at 09:05, Marc-Andre Jutras wrote:

> I agree, it is affecting all hypervisors... I basically had to migrate a
> bunch of VMs manually to re-balance a cluster after an upgrade, or even
> after re-adding a new host to a cluster.
>
> Personally, I think CloudStack should be able to handle this balancing
> of resources, for example by having a piece of code somewhere that
> can run every hour, or on demand, to re-calculate and re-balance
> resources across hosts within a cluster...
>
> Even the deployment planner is not really relevant here: this process
> will basically balance new VM creation through different clusters of a
> POD, not between hosts within a cluster, and it also becomes a
> nightmare when you start to do cross-cluster migrations...
>
> Sum of it all: the deployment planner strategies should be re-worked a bit...
>
> +1 on #scopecreep ;)
>
> Marcus ( mjut...@cloudops.com )
>
> On 2018-09-07 6:01 AM, Paul Angus wrote:
> > I think that this affects all hypervisors, as CloudStack's deployment
> > strategies are generally sub-optimal, to say the least.
> > From what our devs have told me, a large part of the problem is that
> > capacity/usage and suitability due to tags are calculated by multiple parts
> > of the code independently; there is no central method which will give a
> > consistent answer.
> >
> > In Trillian we take a micro-management approach and have a custom module
> > which will return the least used cluster, the least used host, or the least
> > used host in a given cluster.  With that info we place VMs on specific
> > hosts - keeping virtualised hypervisors in the same cluster (least used) so
> > that processor types match, and all other VMs on the least used hosts.
> >
> > For cross-cluster migrations (VMs and/or storage) I think that most
> > times people want to move from cluster A to the least used
> > host/storage pool in cluster B - making them choose a specific
> > host/pool is actually unhelpful.
> >
> > #scopecreep - sorry Pierre-Luc
> >
> > Kind regards,
> >
> > Paul Angus
> >
> > paul.an...@shapeblue.com
> > www.shapeblue.com
> > Amadeus House, Floral Street, London  WC2E 9DPUK
> > @shapeblue
> >
> >
> >
> >
> > -Original Message-
> > From: Will Stevens 
> > Sent: 06 September 2018 19:45
> > To: dev@cloudstack.apache.org; Marc-Andre Jutras 
> > Subject: Re: [DISCUSS] deployment planner improvement
> >
> > If I remember correctly, we see similar issues on VMware.  Marcus, have
> you seen similar behavior on VMware?  I think I remember us having to
> manually vMotion a lot of VMs very often...
> >
> > *Will Stevens*
> > Chief Technology Officer
> > c 514.826.0190
> >
> > 
> >
> >
> > On Thu, Sep 6, 2018 at 2:34 PM Pierre-Luc Dion 
> wrote:
> >
> >> Hi,
> >>
> >> I'm working with a university in Montreal and we are looking at
> >> working together to improve the deployment planner, mainly for post
> >> VM.CREATE tasks.
> >> What we observed with CloudStack, in our case with XenServer, is that
> >> over time a cluster will become unbalanced in terms of workload; VM HA
> >> will move VMs all over the cluster, which causes hotspots inside a
> >> cluster.
> >> Also, when performing maintenance, XenMotion of VMs spreads them in the
> >> cluster but does not consider host usage, and at the end of a
> >> maintenance it requires manual operations to repopulate VMs on the last
> >> host updated.  OS preference is not taken into account except for
> >> VM.CREATE.
> >>
> >> So,
> >> I'd like to work on improving VM dispersion during and after outages
> >> and maintenance, and when cluster resources are added or removed.
> >>
> >> If you have any more requirements, we will document a feature spec
> >> in the wiki, which I believe is still a requirement.
> >>
> >> Does KVM have similar issues over time?
> >>
> >> I don't think it would make sense for CloudStack to automatically take
> >> decisions on moving VMs, but for now to create a report of recommended
> >> actions and provide steps to do them. TBD.
> >>
> >> Cheers,
> >>
> >> PL
> >>
>
>