Re: Advice on converting zone-wide to cluster-wide storage

2017-10-02 Thread Andrija Panic
Hi guys,

Thanks a lot for the good info!
Will take a look at this soon!

Cheers,
Andrija

On Sep 30, 2017 14:26, "Tutkowski, Mike"  wrote:

> Good points, Sateesh! Thanks for chiming in. :)
>
> On Sep 30, 2017, at 4:03 AM, Sateesh Chodapuneedi  accelerite.com> wrote:
>
> Hi Andrija,
> I’ve converted cluster-wide NFS based storage pools to zone-wide in the
> past.
>
> Basically there are 2 steps for NFS and Ceph:
> 1. DB update
> 2. If there is more than one cluster in that zone, then un-manage &
> re-manage all the clusters except the original cluster
>
> In addition to Mike’s suggestion, you need to do the following:
> • Set ‘scope’ of the storage pool to ‘ZONE’ in `cloud`.`storage_pool` table
>
> Example SQL looks like below, given that the hypervisor in my setup is
> VMware.
> mysql> update storage_pool set scope='ZONE', cluster_id=NULL, pod_id=NULL,
> hypervisor='VMware' where id=<storage pool id>;
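
A quick verification sketch (untested; it assumes the standard cloud database and a cloud MySQL user, so adjust credentials to your setup):

  # the converted pool should now report scope ZONE with NULL pod_id/cluster_id
  mysql -u cloud -p cloud -e "SELECT id, name, scope, pod_id, cluster_id, hypervisor FROM storage_pool WHERE removed IS NULL;"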
>
> With DB update, the changes would be reflected in UI as well.
>
> Post the DB update, it is important to un-manage and then re-manage the
> clusters (except the original cluster to which this storage pool belongs)
> so that the hosts in the other clusters also connect to this storage
> pool, making it a full-fledged zone-wide storage pool.
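
A sketch of that un-manage / re-manage step via cloudmonkey (untested; it assumes cloudmonkey is configured with admin API keys, and the cluster UUID is a placeholder - repeat for every cluster except the pool's original one):

  cloudmonkey update cluster id=<cluster-uuid> managedstate=Unmanaged
  cloudmonkey update cluster id=<cluster-uuid> managedstate=Managed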
>
> Hope this helps you!
>
> Regards,
> Sateesh Ch,
> CloudStack Development, Accelerite,
> www.accelerite.com
> @accelerite
>
>
> -Original Message-
> From: "Tutkowski, Mike"  mike.tutkow...@netapp.com>>
> Reply-To: "dev@cloudstack.apache.org" <
> dev@cloudstack.apache.org>
> Date: Friday, 29 September 2017 at 6:57 PM
> To: "dev@cloudstack.apache.org" <
> dev@cloudstack.apache.org>, "
> us...@cloudstack.apache.org" <
> us...@cloudstack.apache.org>
> Subject: Re: Advice on converting zone-wide to cluster-wide storage
>
>Hi Andrija,
>
>I just took a look at the SolidFire logic around adding primary storage
> at the zone level versus the cluster scope.
>
>I recommend you try this in development prior to production, but it
> looks like you can make the following changes for SolidFire:
>
>• In cloud.storage_pool, enter the applicable value for pod_id (this
> should be null when being used as zone-wide storage and an integer when
> being used as cluster-scoped storage).
>• In cloud.storage_pool, enter the applicable value for cluster_id
> (this should be null when being used as zone-wide storage and an integer
> when being used as cluster-scoped storage).
>• In cloud.storage_pool, change the hypervisor_type from Any to (in
> your case) KVM.
>
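
Combining the changes above with the scope column from Sateesh's example, a rough sketch for the zone-wide to cluster-wide direction (untested; <POOL_ID>, <POD_ID> and <CLUSTER_ID> are placeholders, and the column names should be verified against your schema version):

  mysql -u cloud -p cloud -e "UPDATE storage_pool SET scope='CLUSTER', pod_id=<POD_ID>, cluster_id=<CLUSTER_ID>, hypervisor='KVM' WHERE id=<POOL_ID>;"
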
>Talk to you later!
>Mike
>
>On 9/29/17, 5:18 AM, "Andrija Panic"  andrija.pa...@gmail.com>> wrote:
>
>Hi all,
>
>I was wondering if anyone have experience hacking DB and converting
>zone-wide primary storage to cluster-wide.
>
>We have:
>1 x NFS primary storage, zone-wide
>1 x CEPH primary storage, zone-wide
>    1 x SOLIDFIRE primary storage, zone-wide
>    1 zone, 1 pod, 1 cluster; Advanced zone, and 1 NFS regular secondary
>    storage (SS not relevant here).
>
>    I'm assuming a few DB changes would do it (storage_pool table: scope,
>    cluster_id, pod_id fields), but I have not yet had time to really play
>    with it.
>
>    Any advice on whether this is OK to do in a production environment
>    would be very much appreciated.
>
>We plan to expand to many more racks, so we might move from
>single-everything (pod/cluster) to multiple PODs/clusters etc, and
> thus
>design Primary Storage accordingly.
>
>Thanks !
>
>--
>
>Andrija Panić
>
>
>
>


Advise on multiple PODs network design

2017-10-02 Thread Andrija Panic
Hi guys,

Sorry for long post below...

I was wondering if someone could shed some light for me on multiple-POD
networking design (L2 vs L3) - the idea is to make smaller L2 broadcast
domains (any other reason?)

We might decide to transition from the current single pod, single cluster
(single zone) to a multiple-POD design (or not...) - we will eventually grow
to over 50 racks worth of KVM hosts (1000+ hosts), so I'm trying to
understand the best options to avoid having insanely huge L2 broadcast
domains...

Mgmt network is routed between pods, that is clear.

We have a dedicated Primary Storage network and a Secondary Storage network
(vlan interfaces configured locally on all KVM hosts, providing a direct L2
connection obviously, not shared with the mgmt network), and the same for the
Public and Guest networks... (Advanced networking in the zone, VXLAN used as
isolation)

Now with multiple PODs, since the Public and Guest networks are defined at
the Zone level (not the POD level), and currently the same zone-wide setup is
used for Primary Storage... what would be the best way to make this traffic
stay inside PODs as much as possible, and is this possible at all? Perhaps I
would need to look into multiple zones, not PODs.

My humble conclusion, based on having all dedicated networks, is that I
need to stretch (L2-attach as a vlan interface) the primary and secondary
storage networks across all racks/PODs, and also need to stretch the Guest
vlan (which carries all Guest VXLAN tunnels), and again the same for the
Public network... and this again makes huge broadcast domains and doesn't
solve my issue... I don't see another option in my head to make networking
work across PODs.

Any suggestion is most welcome (and if of any use as info - we don't plan
on any Xen, VMware etc., we will stay purely with KVM).

Thanks
Andrija


Re: Does browser-based template or volume upload work?

2017-10-02 Thread Andrija Panic
It also doesn't fully work on ACS 4.8; there are definitely at least some
issues (we are using a domain name for the SSVM, as you are supposed to,
with proper SSL), but there is some timeout that kicks in very soon, or
something similar.
I can dig up/test again if anyone needs the info.

Cheers

On 27 September 2017 at 15:34, Rafael Weingärtner <
rafaelweingart...@gmail.com> wrote:

> Never mind folks,
>
> I found the configuration. It was in a file called /etc/apache2/cors.conf.
> I did not check that file before because I do not believe this kind of
> configuration/operation is a CORS (Cross-origin resource sharing) one.
> Anyways, I also did not find the “include” for the “cors.conf” file while
> using grep, because it is written as follows: Include
> /etc/apache2/[cC][oO][rR][sS].conf
>
> :(
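
For what it's worth, a case-insensitive, recursive search would have caught that oddly-cased Include line on the SSVM:

  grep -Ri "cors" /etc/apache2/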
>
>
>
> Also, if anybody runs into the same problem, the local upload was not
> working here because we were not using domains for System VMs, so the
> certificate used to secure the HTTPS has a common name different from the
> "name" of the SSVM that is being accessed using IPs (https://<SSVM
> IP>/upload/...).
>
> On Tue, Sep 26, 2017 at 4:00 PM, Rafael Weingärtner <
> rafaelweingart...@gmail.com> wrote:
>
> > Hey folks,
> >
> > Has anybody else here used the “Upload from local” feature in ACS?
> >
> > It seems that it does not work (at least in ACS 4.9.2.0). I receive the
> > following metadata to execute the POST request and send the template
> > binaries:
> > {"postuploadtemplateresponse":{"getuploadparams":{"id":" > template>","postURL"
> > :"https://URLSSVM/upload/","metadata
> > ….
> > …
> > ..
> >
> > The problem is that the request is aborted. I accessed the SSVM and
> > checked what application is configured to receive the request, listening
> > on port 443. The Apache HTTPD is configured to listen on this port.
> > However, I did not see anything specific to handle the “/upload” context.
> > Am I missing something!?
> >
> > BTW: While looking at the HTTPD config of the SSVM, I found out that SSVMs
> > allow anybody to access “/cgi-bin/ipcalc”. Why is this application there?!
> > I know organizations that need to expose these systems to the Internet,
> > and that is accessible to everybody.
> >
> > --
> > Rafael Weingärtner
> >
>
>
>
> --
> Rafael Weingärtner
>



-- 

Andrija Panić


Re: Cluster anti-affinity

2017-10-02 Thread Andrija Panic
We are using "User-dispersing" deployment algorithm in Compute Offerings,
which should place VM (but doesn't guaranties... = I guess same as with
anti-afinity rules) on different hosts. Not sure if this takes cluster
into consideration though., For cluster anti-afinity - for  i.e. 10 VMs,
that means trying to deploy VM in 10 Clusters, not sure if this is
realistic for most customers...but nice to have definitively!
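
For reference, the zone-wide counterpart of the per-offering planner is the vm.allocation.algorithm global setting; a read-only sketch to check it (assuming cloudmonkey is configured with admin API keys):

  cloudmonkey list configurations name=vm.allocation.algorithm
  # valid values include firstfit, random, userdispersing,
  # userconcentratedpod_random and userconcentratedpod_firstfit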

Best
Andrija

On 26 September 2017 at 23:57, Pierre-Luc Dion  wrote:

> Hi Ivan,
> I don't think CloudStack offers cluster anti-affinity, and I'm not sure I
> would be in favor of introducing another anti-affinity level, because it
> would expose the cluster notion to your cloud users.
>
> Although, look at the vm provisioning strategy config; there should be a
> deployment strategy that would spread account vms across pods or clusters,
> or prefer pod/cluster proximity. I think this could help you.
>
> Regards,
>
> On 19 Sept 2017 at 01:57, "Ivan Kudryavtsev"  wrote:
>
> > Hello, community. Right now cloudstack has an affinity implementation for
> > host anti-affinity and it's great and useful, but since the storage is
> > often defined for a cluster (unless it's local or clustered like Ceph), it
> > defines a failure domain. Has anybody experienced a "cluster
> > anti-affinity" implementation? Is it useful, or was it declined in the
> > past by the dev team? Any thoughts?
> >
> > How do you handle VMs which should be completely independent and fault
> > tolerant with shared storage (not Ceph)? I see that the zone-level
> > approach works for sure, but if the requirement is intra-zone, I don't see
> > a way to implement it. Any thoughts?
> >
> > --
> > With best regards, Ivan Kudryavtsev
> > Bitworks Software, Ltd.
> > Cell: +7-923-414-1515
> > WWW: http://bitworks.software/ 
> >
>



-- 

Andrija Panić


Re: Does browser-based template or volume upload work?

2017-10-02 Thread Wei ZHOU
We are using 4.7.1.
It works fine after some changes,

for example:
1. set max.account.secondary.storage in Global Settings to a value other
than -1.
2. increase upload.operation.timeout from the default 10 min to a larger
value, e.g. 120.
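
A sketch of applying those two changes via cloudmonkey (assuming it is configured with admin API keys; the values are only examples, and the Global Settings UI works just as well):

  cloudmonkey update configuration name=upload.operation.timeout value=120
  cloudmonkey update configuration name=max.account.secondary.storage value=400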


-Wei


2017-10-02 12:25 GMT+02:00 Andrija Panic :

> Doesn't also fully work on ACS 4.8, at least some issues definitively (we
> are using domain name as supposed to do, for SSVM, with proper SSL), but
> there is some timeout, that kicks in very soon, or similar.
> I can dig up/test again, if anyone needs info.
>
> CHeers
>
> On 27 September 2017 at 15:34, Rafael Weingärtner <
> rafaelweingart...@gmail.com> wrote:
>
> > Never mind folks,
> >
> > I found the configuration. It was in a file called
> /etc/apache2/cors.conf.
> > I did not check that file before because I do not believe this kind of
> > configuration/operation is a CORS (Cross-origin resource sharing) one.
> > Anyways, I also did not find the “include” for the “cors.conf” file while
> > using grep because it is written as follows:Include
> > /etc/apache2/[cC][oO][rR][sS].conf
> >
> > :(
> >
> >
> >
> > Also, if anybody runs into the same problem, the local upload was not
> > working here because we were not using domains for System VMs, then the
> > certificate used to secure the HTTPS has a common name different from the
> > "name" of the SSVM that is being accessed using IPs (HTTPS://
> > /upload.).
> >
> > On Tue, Sep 26, 2017 at 4:00 PM, Rafael Weingärtner <
> > rafaelweingart...@gmail.com> wrote:
> >
> > > Hey folks,
> > >
> > > Has anybody else here used the “Upload from local” feature in ACS?
> > >
> > > It seems that it does not work (at least in ACS 4.9.2.0). I receive the
> > > following metadata to execute the POST request and send the template
> > > binaries:
> > > {"postuploadtemplateresponse":{"getuploadparams":{"id":" > > template>","postURL"
> > > :"https://URLSSVM/upload/","metadata
> > > ….
> > > …
> > > ..
> > >
> > > The problem is that the request is aborted. I accessed the SSVM and
> > > checked what application is configured to receive the request listening
> > to
> > > the port 443.  The Apache HTTD is configured to listen this port.
> > However,
> > > I did not see anything specific to handle the “/upload” context. Am I
> > > missing something!?
> > >
> > > BTW: While looking the HTTPD of SSVM I found out that SSVMs are
> enabling
> > > anybody to access “/cgi-bin/ipcalc”. Why is this application there?! I
> > know
> > > organizations that need to expose these systems to the Internet, and
> that
> > > is accessible to everybody.
> > >
> > > --
> > > Rafael Weingärtner
> > >
> >
> >
> >
> > --
> > Rafael Weingärtner
> >
>
>
>
> --
>
> Andrija Panić
>


Re: CLOUDSTACK-8663 and CLOUDSTACK-4858

2017-10-02 Thread Andrija Panic
Hi Andrei,

though I cannot comment on the particular tickets you mentioned, we also
had identical problems with using CEPH (imagine hourly snaps which were not
deleted properly for some months... :) )

We have internally updated the ACS code to actually "really" remove snapshots
on CEPH when a snap is deleted via ACS, plus some more improvements for CEPH.
Unfortunately, we haven't contributed this back to the community yet...
If you are interested in those, please ping me, and I might connect you
with our developers.
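
In the meantime, a read-only sketch for spotting RBD images that still carry snapshots on the primary storage (the pool name "cloudstack" is an assumption; cross-check any leftovers against the cloud.snapshots table before removing anything):

  for img in $(rbd ls cloudstack); do
    count=$(rbd snap ls "cloudstack/$img" | tail -n +2 | wc -l)
    [ "$count" -gt 0 ] && echo "$img: $count snapshot(s)"
  done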

Cheers,
Andrija

On 19 September 2017 at 12:02, Andrei Mikhailovsky <
and...@arhont.com.invalid> wrote:

> Hello guys,
>
> I have a question on CLOUDSTACK-4858 and CLOUDSTACK-8663 issues that were
> fixed in the recent 4.9.3.0 release.
>
> First of all, big up for addressing issue 4858 after about 3+ years of it
> being 'cooked' in the oven. This issue alone will save so much time and
> network traffic for many of us I am sure. This leads me to the question on
> pruning old snapshots on the ceph storage.
>
> I am currently running 4.9.2.0 and for ages I've been having a problem
> with cloudstack leaving disk snapshots on the primary storage after they
> are being copied to the secondary storage. When I realised this issue, I
> had over 4000 snapshots on ceph. So, now I am running a small script that
> clears the clutter left by cloudstack's snapshotting process. So, if I were
> to use primary storage exclusively for keeping the snapshots, would my old
> snapshots be removed according to the snapshot schedule? Or has this
> function been missed out?
>
> Thanks
>
> Andrei
>



-- 

Andrija Panić


Re: Need to ask for help again (Migration in cloudstack)

2017-10-02 Thread Andrija Panic
A bit late, and not directly related to the original question - if you are
doing any kind of KVM live migration (ACS or not), make sure you are using
qemu 2.5 and libvirt 1.3+, to support
dynamic auto-convergence (regular auto-convergence, available from qemu 1.6+,
is almost useless) - because live migration works well until you hit a busy
production VM with a high RAM change rate, and then nothing helps except the
mentioned qemu 2.5+ dynamic auto-convergence (and even this takes ages to let
some very busy VMs finish migration...).
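
For a manual sanity test outside of ACS, on hosts with qemu 2.5 / libvirt 1.3+, something like the following can be used (the instance name, destination host and qemu+tcp transport are placeholders for your environment):

  virsh migrate --live --persistent --auto-converge --verbose \
      i-2-345-VM qemu+tcp://dest-kvm-host/system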

On 5 September 2017 at 22:52, ilya  wrote:

> Personal experience with KVM (not cloudstack related) and non-shared
> storage migration - works most of the time - but can be very slow - even
> with 10G backplane.
>
> On 9/5/17 6:27 AM, Marc-Aurèle Brothier wrote:
> > Hi Dimitriy,
> >
> > I wrote the PR for the live migration in cloudstack (PR 1709). We're
> using
> > an older version than upstream so it's hard for me to fix the integration
> > tests errors. All I can tell you, is that you should first configure
> > libvirt correctly for migration. You can play with it by manually running
> > virsh commands to initiate the migration. The networking part will not
> > work after the VM is on the other machine if done manually.
> >
> > Marc-Aurèle
> >
> > On Tue, Sep 5, 2017 at 2:07 PM, Dmitriy Kaluzhniy <
> > dmitriy.kaluzh...@gmail.com> wrote:
> >
> >> Hello,
> >> That's what I want, thank you!
> >> I want to have Live migration on KVM with non-shared storages.
> >> As I understood, migration is performed by LibVirt.
> >>
> >> 2017-09-01 17:04 GMT+03:00 Simon Weller :
> >>
> >>> Dmitriy,
> >>>
> >>> Can you give us a bit more information about what you're trying to do?
> >>> If you're looking for live migration on non shared storage with KVM,
> >> there
> >>> is an outstanding PR  in the works to support that:
> >>>
> >>> https://github.com/apache/cloudstack/pull/1709
> >>>
> >>> - Si
> >>>
> >>>
> >>> 
> >>> From: Rajani Karuturi 
> >>> Sent: Friday, September 1, 2017 4:07 AM
> >>> To: dev@cloudstack.apache.org
> >>> Subject: Re: Need to ask for help again (Migration in cloudstack)
> >>>
> >>> You might start with this commit
> >>> https://github.com/apache/cloudstack/commit/
> >> 21ce3befc8ea9e1a6de449a21499a5
> >>> 0ff141a183
> >>>
> >>>
> >>> and storage_motion_supported column in hypervisor_capabilities
> >>> table.
> >>>
> >>> Thanks,
> >>>
> >>> ~ Rajani
> >>>
> >>> http://cloudplatform.accelerite.com/
> >>>
> >>> On August 31, 2017 at 6:29 PM, Dmitriy Kaluzhniy
> >>> (dmitriy.kaluzh...@gmail.com) wrote:
> >>>
> >>> Hello!
> >>> I contacted this mail before, but I wasn't subscribed to mailing
> >>> list.
> >>> The reason I'm contacting you - I need advise.
> >>> During last week I was learning cloudstack code to find where is
> >>> implemented logic of this statements I found in cloudstack
> >>> documentation:
> >>> "(KVM) The VM must not be using local disk storage. (On
> >>> XenServer and
> >>> VMware, VM live migration with local disk is enabled by
> >>> CloudStack support
> >>> for XenMotion and vMotion.)
> >>>
> >>> (KVM) The destination host must be in the same cluster as the
> >>> original
> >>> host. (On XenServer and VMware, VM live migration from one
> >>> cluster to
> >>> another is enabled by CloudStack support for XenMotion and
> >>> vMotion.)"
> >>>
> >>> I made up a long road through source code but still can't see
> >>> it. If you
> >>> can give me any advise - it will be amazing.
> >>> Anyway, thank you.
> >>>
> >>> --
> >>>
> >>> *Best regards,Dmitriy Kaluzhniy+38 (073) 101 14 73*
> >>>
> >>
> >>
> >>
> >> --
> >>
> >>
> >>
> >> *-- Best regards, Dmitriy Kaluzhniy, +38 (073) 101 14 73*
> >>
> >
>



-- 

Andrija Panić


Re: Need to ask for help again (Migration in cloudstack)

2017-10-02 Thread Ivan Kudryavtsev
AFAIK ACS has a VM suspend parameter in the KVM agent which acts when ACS is
unable to migrate successfully. Also, I have almost no problem with
8-core/16GB migrations over 10G, but you are right. Sometimes it doesn't work
as expected without auto-convergence, and a new Qemu/KVM does the work.

2017-10-02 17:44 GMT+07:00 Andrija Panic :

> A bit late, and not directly related with original question - if you are
> doing any kind of KVM live migration (ACS or not), make sure you are using
> qemu 2.5 and libvirt 1.3+, to support
> dynamic auto-convergence (regular auto-convergence, almost useless,
> available from qemu 1.6+) - becase live migration works well, until you hit
> busy production VM, where there is hi RAM change rate, then nothing helps
> except mentioned qemu 2.5+ dynamic autoconvergence (and even this takes
> ages to completely allow some very busy VMs to finish migration...).
>
> On 5 September 2017 at 22:52, ilya  wrote:
>
> > Personal experience with KVM (not cloudstack related) and non-shared
> > storage migration - works most of the time - but can be very slow - even
> > with 10G backplane.
> >
> > On 9/5/17 6:27 AM, Marc-Aurèle Brothier wrote:
> > > Hi Dimitriy,
> > >
> > > I wrote the PR for the live migration in cloudstack (PR 1709). We're
> > using
> > > an older version than upstream so it's hard for me to fix the
> integration
> > > tests errors. All I can tell you, is that you should first configure
> > > libvirt correctly for migration. You can play with it by manually
> running
> > > virsh commands to initiate the migration. The networking part will not
> > work
> > > after the VM being on the other machine if down manually.
> > >
> > > Marc-Aurèle
> > >
> > > On Tue, Sep 5, 2017 at 2:07 PM, Dmitriy Kaluzhniy <
> > > dmitriy.kaluzh...@gmail.com> wrote:
> > >
> > >> Hello,
> > >> That's what I want, thank you!
> > >> I want to have Live migration on KVM with non-shared storages.
> > >> As I understood, migration is performed by LibVirt.
> > >>
> > >> 2017-09-01 17:04 GMT+03:00 Simon Weller :
> > >>
> > >>> Dmitriy,
> > >>>
> > >>> Can you give us a bit more information about what you're trying to
> do?
> > >>> If you're looking for live migration on non shared storage with KVM,
> > >> there
> > >>> is an outstanding PR  in the works to support that:
> > >>>
> > >>> https://github.com/apache/cloudstack/pull/1709
> > >>>
> > >>> - Si
> > >>>
> > >>>
> > >>> 
> > >>> From: Rajani Karuturi 
> > >>> Sent: Friday, September 1, 2017 4:07 AM
> > >>> To: dev@cloudstack.apache.org
> > >>> Subject: Re: Need to ask for help again (Migration in cloudstack)
> > >>>
> > >>> You might start with this commit
> > >>> https://github.com/apache/cloudstack/commit/
> > >> 21ce3befc8ea9e1a6de449a21499a5
> > >>> 0ff141a183
> > >>>
> > >>>
> > >>> and storage_motion_supported column in hypervisor_capabilities
> > >>> table.
> > >>>
> > >>> Thanks,
> > >>>
> > >>> ~ Rajani
> > >>>
> > >>> http://cloudplatform.accelerite.com/
> > >>>
> > >>> On August 31, 2017 at 6:29 PM, Dmitriy Kaluzhniy
> > >>> (dmitriy.kaluzh...@gmail.com) wrote:
> > >>>
> > >>> Hello!
> > >>> I contacted this mail before, but I wasn't subscribed to mailing
> > >>> list.
> > >>> The reason I'm contacting you - I need advise.
> > >>> During last week I was learning cloudstack code to find where is
> > >>> implemented logic of this statements I found in cloudstack
> > >>> documentation:
> > >>> "(KVM) The VM must not be using local disk storage. (On
> > >>> XenServer and
> > >>> VMware, VM live migration with local disk is enabled by
> > >>> CloudStack support
> > >>> for XenMotion and vMotion.)
> > >>>
> > >>> (KVM) The destination host must be in the same cluster as the
> > >>> original
> > >>> host. (On XenServer and VMware, VM live migration from one
> > >>> cluster to
> > >>> another is enabled by CloudStack support for XenMotion and
> > >>> vMotion.)"
> > >>>
> > >>> I made up a long road through source code but still can't see
> > >>> it. If you
> > >>> can give me any advise - it will be amazing.
> > >>> Anyway, thank you.
> > >>>
> > >>> --
> > >>>
> > >>> *Best regards,Dmitriy Kaluzhniy+38 (073) 101 14 73*
> > >>>
> > >>
> > >>
> > >>
> > >> --
> > >>
> > >>
> > >>
> > >> *--С уважением,Дмитрий Калюжный+38 (073) 101 14 73*
> > >>
> > >
> >
>
>
>
> --
>
> Andrija Panić
>



-- 
With best regards, Ivan Kudryavtsev
Bitworks Software, Ltd.
Cell: +7-923-414-1515
WWW: http://bitworks.software/ 


Re: Release packages for 4.9.3.0

2017-10-02 Thread Pierre-Luc Dion
I've updated Jenkins and built packages for CentOS 6 and Ubuntu. The packages
have been copied to cloudstack.apt-get.eu automatically. I haven't tested
them yet.
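
If anyone wants to smoke-test them from an Ubuntu 16.04 (xenial) box, a sketch following the dists/xenial/4.9 layout mentioned below:

  echo "deb http://cloudstack.apt-get.eu/ubuntu xenial 4.9" | sudo tee /etc/apt/sources.list.d/cloudstack.list
  sudo apt-get update
  apt-cache policy cloudstack-management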


On 28 Sept 2017 at 20:19, "Pierre-Luc Dion"  wrote:

I'll work something up this weekend...

we need some work on 4.10 I think...

On Thu, Sep 28, 2017 at 3:14 AM, Wido den Hollander  wrote:

>
> > On 27 September 2017 at 8:30, Özhan Rüzgar Karaman <
> oruzgarkara...@gmail.com> wrote:
> >
> >
> > Hi Wido;
> > I checked the http://cloudstack.apt-get.eu/ web site and the 4.9.3
> > packages are only under the /ubuntu/dists/xenial/4.9/pool/ directory; for
> > trusty there are no packages available for 4.9.3.
> >
> > This directory (/ubuntu/dists/xenial/4.9/pool/) also has 4.10 packages as
> > well, which I think should not be there.
> >
> > When you have suitable time could you check the packages and its script.
> >
>
> Let me check that! Something must have gone wrong with the build and
> upload system. I'll check.
>
> Wido
>
> > Thanks
> > Özhan
> >
> > On Mon, Sep 25, 2017 at 7:24 AM, Rohit Yadav 
> > wrote:
> >
> > > Thanks Wido, can you help building and uploading of rpms as well. Maybe
> > > Pierre-Luc can help?
> > >
> > >
> > > - Rohit
> > >
> > > 
> > > From: Wido den Hollander 
> > > Sent: Thursday, September 21, 2017 1:09:49 PM
> > > To: Rohit Yadav; dev@cloudstack.apache.org; Pierre-Luc Dion
> > > Subject: Re: Release packages for 4.9.3.0
> > >
> > > Ah, sorry! The DEB packages should have been uploaded already :-)
> > >
> > > Wido
> > >
> > > > On 21 September 2017 at 7:33, Rohit Yadav <
> > > rohit.ya...@shapeblue.com> wrote:
> > > >
> > > >
> > > > Ping - Wido/PL?
> > > >
> > > >
> > > > - Rohit
> > > >
> > > > 
> > > > From: Rohit Yadav 
> > > > Sent: Tuesday, September 12, 2017 5:44:36 PM
> > > > To: Wido den Hollander; Pierre-Luc Dion
> > > > Cc: dev@cloudstack.apache.org
> > > > Subject: Release packages for 4.9.3.0
> > > >
> > > > Wido/PL/others,
> > > >
> > > >
> > > > Can you please help with building and publishing of 4.9.3.0 rpms/deb
> > > packages on the download.cloudstack.org repository? I've built and
> > > published the repos on packages.shapeblue.com now (
> shapeblue.com/packages
> > > for details).
> > > >
> > > >
> > > > Regards.
> > > >
> > > >
> > > > rohit.ya...@shapeblue.com
> > > > www.shapeblue.com
> > > > 53 Chandos Place, Covent Garden, London  WC2N
> 
> 4HSUK
> > > > @shapeblue
> > > >
> > > >
> > > >
> > > >
> > > > rohit.ya...@shapeblue.com
> > > > www.shapeblue.com
> > > > 53 Chandos Place, Covent Garden, London  WC2N
> 
> 4HSUK
> > > > @shapeblue
> > > >
> > > >
> > > >
> > >
> > > rohit.ya...@shapeblue.com
> > > www.shapeblue.com
> > > 53 Chandos Place, Covent Garden, London  WC2N
> 
> 4HSUK
> > > @shapeblue
> > >
> > >
> > >
> > >
>


Re: one question network survey

2017-10-02 Thread Andrija Panic
Hi Daan,

we have a dedicated VLAN interface on all KVM hosts (bond0.XXX) which is used
as the VTEP for our VXLANs (we are on ACS advanced networking, 4.8, and used
4.5 previously).
MLAG is configured from NIC1/NIC2 (bond0) to 2 x TOR switches... pure (no OVS)
KVM/Ubuntu 14.04.

On the host side, we had to increase the MTU of bond0.XXX, since the vxlan
interface gets an MTU that is 50 bytes smaller (the VXLAN encapsulation
overhead).
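
For illustration, the kind of MTU change involved, assuming a 1500-byte guest MTU (interface names and values are examples; the parent bond has to be raised before, or together with, the VLAN interface):

  ip link set dev bond0 mtu 1550
  ip link set dev bond0.100 mtu 1550     # the VLAN interface used as the VXLAN VTEP
  ip -d link show bond0.100 | grep -o "mtu [0-9]*"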

If more help needed, please let me know.

Best,
Andrija

On 28 August 2017 at 16:01, Daan Hoogland 
wrote:

> Hi Imran,
> I am not sure I can tell from your reply whether you configured anything
> for those vxlans inside cloudstack. It sounds like you're just trunking
> upstream.
> If I am wrong (not uncommon), you are probably talking about the
> guest network as it ties your hosts together, right?
>
> My question is mainly to what did you configure in cloudstack to use
> vxlans in your cloud.
>
> Thanks,
>
> On 2017/08/28 11:29, "Imran Ahmed"  wrote:
>
> Hi Daan,
>
> I use a separate trunk  (OVS or non OVS bonded with LACP ) connected
> to multiple switches (which are already configured into a switch stack).
> There can be multiple case scenarios but I am mentioning the most generic
> one .
>
> Hope that answers your question if I have correctly understood your
> question.
>
>
> Regards,
>
>
>
> -Original Message-
> From: Daan Hoogland [mailto:daan.hoogl...@shapeblue.com]
> Sent: Monday, August 28, 2017 12:20 PM
> To: dev@cloudstack.apache.org; us...@cloudstack.apache.org
> Subject: one question network survey
>
> Devs and users,
>
> Can you all please tell me how you are using VxLan in your cloudstack
> environments?
>
> The reason behind this is that I am planning some refactoring in the
> networkgurus and I don’t want to break any running installations on
> upgrade. If you are not using vxlan but know of people that might not
> react, using it, please point me to them.
>
> Thanks,
>
> daan.hoogl...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
>
>
>
>
> daan.hoogl...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
>


-- 

Andrija Panić


Re: Need to ask for help again (Migration in cloudstack)

2017-10-02 Thread Dmitriy Kaluzhniy
Hello!
I want to say thanks to all!
Nowadays I had no time to work on this, but I hope I will setup some test
environment to try live migration + migration on non-shared.

2017-10-02 13:50 GMT+03:00 Ivan Kudryavtsev :

> AFAIK ACS has VM suspend parameter in KVM agent which acts when ACS is
> unable to migrate successfully. Also, I almost have no problem with
> 8core/16GB migration over 10G, but you are right. Sometimes it doesn't work
> as expected without autoconvergence and new Qemu/KVM does the work.
>
> 2017-10-02 17:44 GMT+07:00 Andrija Panic :
>
> > A bit late, and not directly related with original question - if you are
> > doing any kind of KVM live migration (ACS or not), make sure you are
> using
> > qemu 2.5 and libvirt 1.3+, to support
> > dynamic auto-convergence (regular auto-convergence, almost useless,
> > available from qemu 1.6+) - becase live migration works well, until you
> hit
> > busy production VM, where there is hi RAM change rate, then nothing helps
> > except mentioned qemu 2.5+ dynamic autoconvergence (and even this takes
> > ages to completely allow some very busy VMs to finish migration...).
> >
> > On 5 September 2017 at 22:52, ilya  wrote:
> >
> > > Personal experience with KVM (not cloudstack related) and non-shared
> > > storage migration - works most of the time - but can be very slow -
> even
> > > with 10G backplane.
> > >
> > > On 9/5/17 6:27 AM, Marc-Aurèle Brothier wrote:
> > > > Hi Dimitriy,
> > > >
> > > > I wrote the PR for the live migration in cloudstack (PR 1709). We're
> > > using
> > > > an older version than upstream so it's hard for me to fix the
> > integration
> > > > tests errors. All I can tell you, is that you should first configure
> > > > libvirt correctly for migration. You can play with it by manually
> > running
> > > > virsh commands to initiate the migration. The networking part will
> not
> > > work
> > > > after the VM being on the other machine if down manually.
> > > >
> > > > Marc-Aurèle
> > > >
> > > > On Tue, Sep 5, 2017 at 2:07 PM, Dmitriy Kaluzhniy <
> > > > dmitriy.kaluzh...@gmail.com> wrote:
> > > >
> > > >> Hello,
> > > >> That's what I want, thank you!
> > > >> I want to have Live migration on KVM with non-shared storages.
> > > >> As I understood, migration is performed by LibVirt.
> > > >>
> > > >> 2017-09-01 17:04 GMT+03:00 Simon Weller :
> > > >>
> > > >>> Dmitriy,
> > > >>>
> > > >>> Can you give us a bit more information about what you're trying to
> > do?
> > > >>> If you're looking for live migration on non shared storage with
> KVM,
> > > >> there
> > > >>> is an outstanding PR  in the works to support that:
> > > >>>
> > > >>> https://github.com/apache/cloudstack/pull/1709
> > > >>>
> > > >>> - Si
> > > >>>
> > > >>>
> > > >>> 
> > > >>> From: Rajani Karuturi 
> > > >>> Sent: Friday, September 1, 2017 4:07 AM
> > > >>> To: dev@cloudstack.apache.org
> > > >>> Subject: Re: Need to ask for help again (Migration in cloudstack)
> > > >>>
> > > >>> You might start with this commit
> > > >>> https://github.com/apache/cloudstack/commit/
> > > >> 21ce3befc8ea9e1a6de449a21499a5
> > > >>> 0ff141a183
> > > >>>
> > > >>>
> > > >>> and storage_motion_supported column in hypervisor_capabilities
> > > >>> table.
> > > >>>
> > > >>> Thanks,
> > > >>>
> > > >>> ~ Rajani
> > > >>>
> > > >>> http://cloudplatform.accelerite.com/
> > > >>>
> > > >>> On August 31, 2017 at 6:29 PM, Dmitriy Kaluzhniy
> > > >>> (dmitriy.kaluzh...@gmail.com) wrote:
> > > >>>
> > > >>> Hello!
> > > >>> I contacted this mail before, but I wasn't subscribed to mailing
> > > >>> list.
> > > >>> The reason I'm contacting you - I need advise.
> > > >>> During last week I was learning cloudstack code to find where is
> > > >>> implemented logic of this statements I found in cloudstack
> > > >>> documentation:
> > > >>> "(KVM) The VM must not be using local disk storage. (On
> > > >>> XenServer and
> > > >>> VMware, VM live migration with local disk is enabled by
> > > >>> CloudStack support
> > > >>> for XenMotion and vMotion.)
> > > >>>
> > > >>> (KVM) The destination host must be in the same cluster as the
> > > >>> original
> > > >>> host. (On XenServer and VMware, VM live migration from one
> > > >>> cluster to
> > > >>> another is enabled by CloudStack support for XenMotion and
> > > >>> vMotion.)"
> > > >>>
> > > >>> I made up a long road through source code but still can't see
> > > >>> it. If you
> > > >>> can give me any advise - it will be amazing.
> > > >>> Anyway, thank you.
> > > >>>
> > > >>> --
> > > >>>
> > > >>> *Best regards,Dmitriy Kaluzhniy+38 (073) 101 14 73*
> > > >>>
> > > >>
> > > >>
> > > >>
> > > >> --
> > > >>
> > > >>
> > > >>
> > > >> *--С уважением,Дмитрий Калюжный+38 (073) 101 14 73*
> > > >>
> > > >
> > >
> >
> >
> >
> > --
> >
> > Andrija Panić
> >
>
>
>
> --
> With best regards, Ivan Kudryavtsev
> Bitworks Software, Ltd.
> Cell: +7-923-414-1515
> WWW: http://bitworks.software/ 
>



-- 


Primary interface on Windows templates

2017-10-02 Thread Dmitriy Kaluzhniy
Hello,
I was working with templates and found out that Windows templates
automatically get an E1000 interface. Is there any way to change it to
Virtio?

-- 

*Best regards,
Dmitriy Kaluzhniy
+38 (073) 101 14 73*


Re: Need to ask for help again (Migration in cloudstack)

2017-10-02 Thread Andrija Panic
Hi Ivan,

yes you are right, but it works like crap (from a downtime perspective):
when we could not live migrate one 64GB client VM "normally", we manually
(instead of ACS doing it...) paused the VM via virsh, and then the VM was in
the paused state for 15 min (yes, it was only a 1 Gbps management network at
that time), so the VM was down for 15 min... and that is unacceptable for the
client.

So dynamic auto-convergence works in the following way (based on my
experience monitoring migration cycles and CPU cycles with my 4 eyes :) ):
it will slowly throttle the CPU, more and more, but very gently... until it
decides enough is enough, and then after e.g. 16-30 migration iterations
(with almost the full RAM being migrated each iteration), it will throttle
the CPU aggressively and let the VM migration finish (without downtime except
for the final pause of a few tens of milliseconds or less).
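
To watch this from the source host while a busy VM migrates, something like the following helps (the instance name is a placeholder); once auto-convergence starts throttling, the "Data remaining" figure finally trends towards zero:

  watch -n 5 "virsh domjobinfo i-2-345-VM"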

Again, just my experience, because we do have many "enterprise workload"
customers, and it was a pain until we got this working fine (imagine host
maintenance mode also not working fully for those VMs...)

Cheers

On 2 October 2017 at 14:55, Dmitriy Kaluzhniy 
wrote:

> Hello!
> I want to say thanks to all!
> Nowadays I had no time to work on this, but I hope I will setup some test
> environment to try live migration + migration on non-shared.
>
> 2017-10-02 13:50 GMT+03:00 Ivan Kudryavtsev :
>
> > AFAIK ACS has VM suspend parameter in KVM agent which acts when ACS is
> > unable to migrate successfully. Also, I almost have no problem with
> > 8core/16GB migration over 10G, but you are right. Sometimes it doesn't
> work
> > as expected without autoconvergence and new Qemu/KVM does the work.
> >
> > 2017-10-02 17:44 GMT+07:00 Andrija Panic :
> >
> > > A bit late, and not directly related with original question - if you
> are
> > > doing any kind of KVM live migration (ACS or not), make sure you are
> > using
> > > qemu 2.5 and libvirt 1.3+, to support
> > > dynamic auto-convergence (regular auto-convergence, almost useless,
> > > available from qemu 1.6+) - becase live migration works well, until you
> > hit
> > > busy production VM, where there is hi RAM change rate, then nothing
> helps
> > > except mentioned qemu 2.5+ dynamic autoconvergence (and even this takes
> > > ages to completely allow some very busy VMs to finish migration...).
> > >
> > > On 5 September 2017 at 22:52, ilya 
> wrote:
> > >
> > > > Personal experience with KVM (not cloudstack related) and non-shared
> > > > storage migration - works most of the time - but can be very slow -
> > even
> > > > with 10G backplane.
> > > >
> > > > On 9/5/17 6:27 AM, Marc-Aurèle Brothier wrote:
> > > > > Hi Dimitriy,
> > > > >
> > > > > I wrote the PR for the live migration in cloudstack (PR 1709).
> We're
> > > > using
> > > > > an older version than upstream so it's hard for me to fix the
> > > integration
> > > > > tests errors. All I can tell you, is that you should first
> configure
> > > > > libvirt correctly for migration. You can play with it by manually
> > > running
> > > > > virsh commands to initiate the migration. The networking part will
> > not
> > > > work
> > > > > after the VM being on the other machine if down manually.
> > > > >
> > > > > Marc-Aurèle
> > > > >
> > > > > On Tue, Sep 5, 2017 at 2:07 PM, Dmitriy Kaluzhniy <
> > > > > dmitriy.kaluzh...@gmail.com> wrote:
> > > > >
> > > > >> Hello,
> > > > >> That's what I want, thank you!
> > > > >> I want to have Live migration on KVM with non-shared storages.
> > > > >> As I understood, migration is performed by LibVirt.
> > > > >>
> > > > >> 2017-09-01 17:04 GMT+03:00 Simon Weller  >:
> > > > >>
> > > > >>> Dmitriy,
> > > > >>>
> > > > >>> Can you give us a bit more information about what you're trying
> to
> > > do?
> > > > >>> If you're looking for live migration on non shared storage with
> > KVM,
> > > > >> there
> > > > >>> is an outstanding PR  in the works to support that:
> > > > >>>
> > > > >>> https://github.com/apache/cloudstack/pull/1709
> > > > >>>
> > > > >>> - Si
> > > > >>>
> > > > >>>
> > > > >>> 
> > > > >>> From: Rajani Karuturi 
> > > > >>> Sent: Friday, September 1, 2017 4:07 AM
> > > > >>> To: dev@cloudstack.apache.org
> > > > >>> Subject: Re: Need to ask for help again (Migration in cloudstack)
> > > > >>>
> > > > >>> You might start with this commit
> > > > >>> https://github.com/apache/cloudstack/commit/
> > > > >> 21ce3befc8ea9e1a6de449a21499a5
> > > > >>> 0ff141a183
> > > > >>>
> > > > >>>
> > > > >>> and storage_motion_supported column in hypervisor_capabilities
> > > > >>> table.
> > > > >>>
> > > > >>> Thanks,
> > > > >>>
> > > > >>> ~ Rajani
> > > > >>>
> > > > >>> http://cloudplatform.accelerite.com/
> > > > >>>
> > > > >>> On August 31, 2017 at 6:29 PM, Dmitriy Kaluzhniy
> > > > >>> (dmitriy.kaluzh...@gmail.com) wrote:
> > > > >>>
> > > > >>> Hello!
> > > > >>> I contacted this mail before, but I wasn't subscribed to mailing
> > > > >>> list.
> > > > >>> The reason I'm

Re: Need to ask for help again (Migration in cloudstack)

2017-10-02 Thread Andrija Panic
BTW, I went extreme and tested migrating a busy 24-CPU/60GB VM with dynamic
auto-convergence (qemu 2.5/libvirt 1.3.1 and a nice patch to activate the
auto-converge flag inside ACS - thx to Mike Tutkowski!), where right after
the first migration cycle of 58GB of RAM is finished (58GB RAM = Prime95
workload on all 24 CPUs) - yet another 58GB of modified RAM needs to be
migrated :D

So it really works like a charm :)

On 2 October 2017 at 15:29, Andrija Panic  wrote:

> Hi Ivan,
>
> yes you are right, but it works like crap (from downtime perspective),
> because when we could not live migrate "normally" one 64GB client VM, we
> manually (instead of ACS doing it...) paused the VM via VIRSH, and then VM
> was in pauses state for 15min (yes it was only 1GBps management network at
> that time), so VM was down for 15min... and that is unacceptable for client.
>
> So dynamic auto convergence will work in following way (based on my
> experience monitoring migration cycles and CPU cycles with my 4 eyes :)  )
> - it will slowly throttle CPU, more and more, but very gently... until it
> decide, enough is enough, and then after i.e. 16-30 migration iterations
> (of almost full RAM being migrated each iteration), it will throttle CPU
> aggressively and let VM migration finish (without downtime except during
> finall pause of few tens of miliseconds or less).
>
> Again, just my experience, because we do have many "enterprise workload"
> customers, and it was pain until we solved this to work fine (imagine host
> maintenance mode also not working fully for those VMs...)
>
> Cheers
>
> On 2 October 2017 at 14:55, Dmitriy Kaluzhniy  > wrote:
>
>> Hello!
>> I want to say thanks to all!
>> Nowadays I had no time to work on this, but I hope I will setup some test
>> environment to try live migration + migration on non-shared.
>>
>> 2017-10-02 13:50 GMT+03:00 Ivan Kudryavtsev :
>>
>> > AFAIK ACS has VM suspend parameter in KVM agent which acts when ACS is
>> > unable to migrate successfully. Also, I almost have no problem with
>> > 8core/16GB migration over 10G, but you are right. Sometimes it doesn't
>> work
>> > as expected without autoconvergence and new Qemu/KVM does the work.
>> >
>> > 2017-10-02 17:44 GMT+07:00 Andrija Panic :
>> >
>> > > A bit late, and not directly related with original question - if you
>> are
>> > > doing any kind of KVM live migration (ACS or not), make sure you are
>> > using
>> > > qemu 2.5 and libvirt 1.3+, to support
>> > > dynamic auto-convergence (regular auto-convergence, almost useless,
>> > > available from qemu 1.6+) - becase live migration works well, until
>> you
>> > hit
>> > > busy production VM, where there is hi RAM change rate, then nothing
>> helps
>> > > except mentioned qemu 2.5+ dynamic autoconvergence (and even this
>> takes
>> > > ages to completely allow some very busy VMs to finish migration...).
>> > >
>> > > On 5 September 2017 at 22:52, ilya 
>> wrote:
>> > >
>> > > > Personal experience with KVM (not cloudstack related) and non-shared
>> > > > storage migration - works most of the time - but can be very slow -
>> > even
>> > > > with 10G backplane.
>> > > >
>> > > > On 9/5/17 6:27 AM, Marc-Aurèle Brothier wrote:
>> > > > > Hi Dimitriy,
>> > > > >
>> > > > > I wrote the PR for the live migration in cloudstack (PR 1709).
>> We're
>> > > > using
>> > > > > an older version than upstream so it's hard for me to fix the
>> > > integration
>> > > > > tests errors. All I can tell you, is that you should first
>> configure
>> > > > > libvirt correctly for migration. You can play with it by manually
>> > > running
>> > > > > virsh commands to initiate the migration. The networking part will
>> > not
>> > > > work
>> > > > > after the VM being on the other machine if down manually.
>> > > > >
>> > > > > Marc-Aurèle
>> > > > >
>> > > > > On Tue, Sep 5, 2017 at 2:07 PM, Dmitriy Kaluzhniy <
>> > > > > dmitriy.kaluzh...@gmail.com> wrote:
>> > > > >
>> > > > >> Hello,
>> > > > >> That's what I want, thank you!
>> > > > >> I want to have Live migration on KVM with non-shared storages.
>> > > > >> As I understood, migration is performed by LibVirt.
>> > > > >>
>> > > > >> 2017-09-01 17:04 GMT+03:00 Simon Weller > >:
>> > > > >>
>> > > > >>> Dmitriy,
>> > > > >>>
>> > > > >>> Can you give us a bit more information about what you're trying
>> to
>> > > do?
>> > > > >>> If you're looking for live migration on non shared storage with
>> > KVM,
>> > > > >> there
>> > > > >>> is an outstanding PR  in the works to support that:
>> > > > >>>
>> > > > >>> https://github.com/apache/cloudstack/pull/1709
>> > > > >>>
>> > > > >>> - Si
>> > > > >>>
>> > > > >>>
>> > > > >>> 
>> > > > >>> From: Rajani Karuturi 
>> > > > >>> Sent: Friday, September 1, 2017 4:07 AM
>> > > > >>> To: dev@cloudstack.apache.org
>> > > > >>> Subject: Re: Need to ask for help again (Migration in
>> cloudstack)
>> > > > >>>
>> > > > >>> You might start with this commit
>> > > > >>> https://github.com/apache/cloudstack

Re: Need to ask for help again (Migration in cloudstack)

2017-10-02 Thread Ivan Kudryavtsev
Hi. Just don't compare a 1G vs a 10G or even a 40G InfiniBand network. It
might look like linear bandwidth growth should lead to a proportional time
decrease, but migration can be stuck forever with 1G and work in seconds with
10G or 40G.

But, Indeed, autoconvergence is a great feature.

On 2 Oct 2017 at 20:32, "Andrija Panic" 
wrote:

> BTW, I went extreme and tested 24CPU/60GB busy VM migrate with dynamic
> auto-convergence (qemu2.5/libvirt1.3.1 and a nice patch to activate
> autoconverge flag inside ACS- thx to Mike Tutkowski !), where right after
> first migration cycle of 58G ram is finished (58GB RAM = Prime95 workload
> with all 24 CPUs) -  yet another 58GB of modified RAM needs to migrated :D
>
> So it really works like a charm :)
>
> On 2 October 2017 at 15:29, Andrija Panic  wrote:
>
> > Hi Ivan,
> >
> > yes you are right, but it works like crap (from downtime perspective),
> > because when we could not live migrate "normally" one 64GB client VM, we
> > manually (instead of ACS doing it...) paused the VM via VIRSH, and then
> VM
> > was in pauses state for 15min (yes it was only 1GBps management network
> at
> > that time), so VM was down for 15min... and that is unacceptable for
> client.
> >
> > So dynamic auto convergence will work in following way (based on my
> > experience monitoring migration cycles and CPU cycles with my 4 eyes :)
> )
> > - it will slowly throttle CPU, more and more, but very gently... until it
> > decide, enough is enough, and then after i.e. 16-30 migration iterations
> > (of almost full RAM being migrated each iteration), it will throttle CPU
> > aggressively and let VM migration finish (without downtime except during
> > finall pause of few tens of miliseconds or less).
> >
> > Again, just my experience, because we do have many "enterprise workload"
> > customers, and it was pain until we solved this to work fine (imagine
> host
> > maintenance mode also not working fully for those VMs...)
> >
> > Cheers
> >
> > On 2 October 2017 at 14:55, Dmitriy Kaluzhniy <
> dmitriy.kaluzh...@gmail.com
> > > wrote:
> >
> >> Hello!
> >> I want to say thanks to all!
> >> Nowadays I had no time to work on this, but I hope I will setup some
> test
> >> environment to try live migration + migration on non-shared.
> >>
> >> 2017-10-02 13:50 GMT+03:00 Ivan Kudryavtsev :
> >>
> >> > AFAIK ACS has VM suspend parameter in KVM agent which acts when ACS is
> >> > unable to migrate successfully. Also, I almost have no problem with
> >> > 8core/16GB migration over 10G, but you are right. Sometimes it doesn't
> >> work
> >> > as expected without autoconvergence and new Qemu/KVM does the work.
> >> >
> >> > 2017-10-02 17:44 GMT+07:00 Andrija Panic :
> >> >
> >> > > A bit late, and not directly related with original question - if you
> >> are
> >> > > doing any kind of KVM live migration (ACS or not), make sure you are
> >> > using
> >> > > qemu 2.5 and libvirt 1.3+, to support
> >> > > dynamic auto-convergence (regular auto-convergence, almost useless,
> >> > > available from qemu 1.6+) - becase live migration works well, until
> >> you
> >> > hit
> >> > > busy production VM, where there is hi RAM change rate, then nothing
> >> helps
> >> > > except mentioned qemu 2.5+ dynamic autoconvergence (and even this
> >> takes
> >> > > ages to completely allow some very busy VMs to finish migration...).
> >> > >
> >> > > On 5 September 2017 at 22:52, ilya 
> >> wrote:
> >> > >
> >> > > > Personal experience with KVM (not cloudstack related) and
> non-shared
> >> > > > storage migration - works most of the time - but can be very slow
> -
> >> > even
> >> > > > with 10G backplane.
> >> > > >
> >> > > > On 9/5/17 6:27 AM, Marc-Aurèle Brothier wrote:
> >> > > > > Hi Dimitriy,
> >> > > > >
> >> > > > > I wrote the PR for the live migration in cloudstack (PR 1709).
> >> We're
> >> > > > using
> >> > > > > an older version than upstream so it's hard for me to fix the
> >> > > integration
> >> > > > > tests errors. All I can tell you, is that you should first
> >> configure
> >> > > > > libvirt correctly for migration. You can play with it by
> manually
> >> > > running
> >> > > > > virsh commands to initiate the migration. The networking part
> will
> >> > not
> >> > > > work
> >> > > > > after the VM being on the other machine if down manually.
> >> > > > >
> >> > > > > Marc-Aurèle
> >> > > > >
> >> > > > > On Tue, Sep 5, 2017 at 2:07 PM, Dmitriy Kaluzhniy <
> >> > > > > dmitriy.kaluzh...@gmail.com> wrote:
> >> > > > >
> >> > > > >> Hello,
> >> > > > >> That's what I want, thank you!
> >> > > > >> I want to have Live migration on KVM with non-shared storages.
> >> > > > >> As I understood, migration is performed by LibVirt.
> >> > > > >>
> >> > > > >> 2017-09-01 17:04 GMT+03:00 Simon Weller
>  >> >:
> >> > > > >>
> >> > > > >>> Dmitriy,
> >> > > > >>>
> >> > > > >>> Can you give us a bit more information about what you're
> trying
> >> to
> >> > > do?
> >> > > > >>> If you're looking for live migration on non sh

Re: Primary interface on Windows templates

2017-10-02 Thread Ivan Kudryavtsev
Hi, I believe that if you change the OS type to Linux, you'll get it. But it
could lead to problems with storage drivers, as ACS will announce the disk as
virtio too.
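
A hedged alternative to flipping the OS type to Linux is to re-point the template at a PV-capable guest OS type; which OS type actually maps to virtio depends on your guest OS mappings, and both UUIDs below are placeholders:

  cloudmonkey list ostypes keyword=PV
  cloudmonkey update template id=<template-uuid> ostypeid=<ostype-uuid>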

On 2 Oct 2017 at 19:58, "Dmitriy Kaluzhniy" <
dmitriy.kaluzh...@gmail.com> wrote:

> Hello,
> I was working with templates and find out that Windows templates
> automatically gets E1000 interface. Is there any way to change it to
> Virtio?
>
> --
>
>
>
>
> *​Best regards,Dmitriy Kaluzhniy+38 (073) 101 14 73*
>


[FEATURE REQUEST] Enforcing local password policies

2017-10-02 Thread Lotic Lists
Guys, what do you think about enforcing password policies for local users?

 

https://issues.apache.org/jira/browse/CLOUDSTACK-10082

 

Regards

Marcelo



Re: [FEATURE REQUEST] Enforcing local password policies

2017-10-02 Thread Rafael Weingärtner

This feature is interesting, I think it can help to improve ACS security

+1


On 10/2/2017 12:00 PM, Lotic Lists wrote:

Guys, what you think about enforce password policies for local users?

  


https://issues.apache.org/jira/browse/CLOUDSTACK-10082

  


Regards

Marcelo




--
Rafael Weingärtner



Re: [FEATURE REQUEST] Enforcing local password policies

2017-10-02 Thread Ivan Kudryavtsev
I suppose the feature should be combined with an old-password confirmation
when the password change request is sent by the user themselves.

On 2 Oct 2017 at 22:11, "Rafael Weingärtner" <
raf...@autonomiccs.com.br> wrote:

> This feature is interesting, I think it can help to improve ACS security
>
> +1
>
>
> On 10/2/2017 12:00 PM, Lotic Lists wrote:
>
>> Guys, what you think about enforce password policies for local users?
>>
>>
>> https://issues.apache.org/jira/browse/CLOUDSTACK-10082
>>
>>
>> Regards
>>
>> Marcelo
>>
>>
>>
> --
> Rafael Weingärtner
>
>