GSoC2018

2018-02-01 Thread Daan Hoogland
People,
I went ahead and added some projects with the label gsoc2018 to our issues list.
I'm sure all of you can do way better than me ;)
https://issues.apache.org/jira/issues/?jql=project%20%3D%20CLOUDSTACK%20AND%20labels%20%3D%20gsoc2018

​regards,​
-- 
Daan


FW: Apache EU Roadshow 2018 in Berlin

2018-02-01 Thread Paul Angus
Hi Everyone,

I’m cross-posting again…

I think it would be great if we could have a couple of CloudStack 
presentations; perhaps we could pair a user story with a development piece to 
showcase CloudStack from both perspectives.

Anyone up for this?


paul.an...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London, WC2N 4HS, UK
@shapeblue
  
 

From: Sharan F [mailto:sha...@apache.org]
Sent: 01 February 2018 14:47
To: committ...@apache.org
Subject: Apache EU Roadshow 2018 in Berlin

Hi Everyone

For those of you who may not have seen the blog post, we will be holding an 
Apache EU Roadshow co-located with FOSS Backstage in Berlin on 13th and 14th 
June 2018. 
https://blogs.apache.org/foundation/entry/the-apache-software-foundation-announces28

As we have limited capacity for tracks, we are focussing on areas and projects 
that can deliver full tracks and attract good audiences (IoT, Cloud, Httpd and 
Tomcat). Our community and Apache Way related talks will be managed as part of 
the FOSS Backstage program.

The CFP for our EU Roadshow event is now open at 
http://apachecon.com/euroadshow18/ and we are looking forward to receiving your 
submissions. Please promote this event within your projects.

More details will be coming out soon and you can keep up to date by regularly 
checking http://apachecon.com/ or following @ApacheCon on Twitter.

Thanks
Sharan




Re: CS 4.8 KVM VMs will not live migrate

2018-02-01 Thread Andrija Panic
The customer with serial number here :)

So, another issue I have noticed: when you have KVM host disconnections
(agent disconnects), then in some cases rows in the cloud.nics table end up
with the broadcast_uri, isolation_uri, state or a similar field set to NULL
instead of the correct values for the affected VM's NIC.

In this case the VM will not live migrate via ACS (but you can of course
migrate it manually). The fix is to update the nics table with proper values
(copy them from other NICs in the same network).
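
For reference, a minimal sketch of that check and fix, assuming the default
cloud database and user names and the broadcast_uri, isolation_uri and state
columns of the nics table; the placeholder IDs and URI values are hypothetical
and should be copied from a healthy NIC in the same network:

    # Read-only check: look for NULL values on the affected VM's NICs.
    mysql -u cloud -p cloud -e "
      SELECT id, instance_id, network_id, broadcast_uri, isolation_uri, state
      FROM nics
      WHERE instance_id = <affected_vm_id> AND removed IS NULL;"

    # Fix: copy the values shown by a working NIC in the same network into
    # the broken row (for example a vxlan://<id> style URI in a VXLAN zone).
    mysql -u cloud -p cloud -e "
      UPDATE nics
         SET broadcast_uri = '<value_from_healthy_nic>',
             isolation_uri = '<value_from_healthy_nic>'
       WHERE id = <broken_nic_id>;"

Take a database backup before running any manual UPDATE.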

Check if this might be the case...

Cheers

On 31 January 2018 at 15:49, Tutkowski, Mike 
wrote:

> Glad to hear you fixed the issue! :)
>
> > On Jan 31, 2018, at 7:16 AM, David Mabry  wrote:
> >
> > Mike and Wei,
> >
> > Good news!  I was able to manually live migrate these VMs following the
> steps outlined below:
> >
> > 1.) virsh dumpxml 38 --migratable > 38.xml
> > 2.) Change the vnc information in 38.xml to match destination host IP
> and available VNC port
> > 3.) virsh migrate --verbose --live 38 --xml 38.xml qemu+tcp://
> destination.host.net/system
> >
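
Put together, the quoted steps amount to the following sketch; the domain ID
(38) and the destination host name are taken from the example above and need
to be replaced with your own values:

    # 1. Dump a migratable copy of the domain XML.
    virsh dumpxml 38 --migratable > 38.xml

    # 2. Edit 38.xml so the <graphics type='vnc' .../> element points at the
    #    destination host's IP and a VNC port that is free there.

    # 3. Live migrate using the edited XML.
    virsh migrate --verbose --live 38 --xml 38.xml \
        qemu+tcp://destination.host.net/system
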
> > To my surprise, Cloudstack was able to discover and properly handle the
> fact that this VM was live migrated to a new host without issue.  Very cool.
> >
> > Wei, I suspect you are correct when you said this was an issue with the
> cloudstack agent code.  After digging a little deeper, the agent is never
> attempting to talk to libvirt at all after prepping the dxml to send to the
> destination host.  I'm going to attempt to reproduce this in my lab and
> attach a remote debugger and see if I can get to the bottom of it.
> >
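
One way to do the remote-debugger part, sketched under the assumption that you
can edit whatever file holds the agent's JVM options (its location varies by
packaging, e.g. /etc/default/cloudstack-agent on some distros); the JDWP flag
itself is the standard JVM one:

    # Append the standard JDWP agent flag to the cloudstack-agent JVM options
    # (the JAVA_OPTS variable name here is an assumption about your packaging).
    JAVA_OPTS="$JAVA_OPTS -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8787"

    # Restart the agent, then attach the IDE's remote-debug session to
    # <agent-host>:8787 and set breakpoints around the migrate command handling.
    systemctl restart cloudstack-agent
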
> > Thanks again for the help guys!  I really appreciate it.
> >
> > Thanks,
> > David Mabry
> >
> > On 1/30/18, 9:55 AM, "David Mabry"  wrote:
> >
> >Ah, understood.  I'll take a closer look at the logs and make sure
> that I didn't accidentally miss those lines when I pulled together the logs
> for this email chain.
> >
> >Thanks,
> >David Mabry
> >On 1/30/18, 8:34 AM, "Wei ZHOU"  wrote:
> >
> >Hi David,
> >
> >I encountered the UnsupportedAnswer once before, when I made some
> changes in
> >the kvm plugin.
> >
> >Normally there should be some network configurations in the
> agent.log but I
> >do not see it.
> >
> >-Wei
> >
> >
> >2018-01-30 15:00 GMT+01:00 David Mabry :
> >
> >> Hi Wei,
> >>
> >> I detached the iso and received the same error.  Just out of curiosity,
> >> what leads you to believe it is something in the vxlan code?  I guess at
> >> this point, attaching a remote debugger to the agent in question might
> be
> >> the best way to get to the bottom of what is going on.
> >>
> >> Thanks in advance for the help.  I really, really appreciate it.
> >>
> >> Thanks,
> >> David Mabry
> >>
> >> On 1/30/18, 3:30 AM, "Wei ZHOU"  wrote:
> >>
> >>The answer should be caused by an exception in the cloudstack agent.
> >>I tried to migrate a vm in our testing env, it is working.
> >>
> >>There are some differences between our env and yours:
> >>(1) vlan VS vxlan
> >>(2) no ISO VS attached ISO
> >>(3) both of us use ceph and centos7.
> >>
> >>I suspect it is caused by the vxlan code.
> >>However, could you detach the ISO and try again?
> >>
> >>-Wei
> >>
> >>
> >>
> >>2018-01-29 19:48 GMT+01:00 David Mabry :
> >>
> >>> Good day Cloudstack Devs,
> >>>
> >>> I've run across a real head scratcher.  I have two VMs, (initially 3
> >> VMs,
> >>> but more on that later) on a single host, that I cannot live migrate
> >> to any
> >>> other host in the same cluster.  We discovered this after attempting
> >> to
> >>> roll out patches going from CentOS 7.2 to CentOS 7.4.  Initially, we
> >>> thought it had something to do with the new version of libvirtd or
> >> qemu-kvm
> >>> on the other hosts in the cluster preventing these VMs from
> >> migrating, but
> >>> we are able to live migrate other VMs to and from this host without
> >> issue.
> >>> We can even create new VMs on this specific host and live migrate
> >> them
> >>> after creation with no issue.  We've put the migration source agent,
> >>> migration destination agent and the management server in debug and
> >> don't
> >>> seem to get anything useful other than "Unsupported command".
> >> Luckily, we
> >>> did have one VM that was shutdown and restarted, this is the 3rd VM
> >>> mentioned above.  Since that VM has been restarted, it has no issues
> >> live
> >>> migrating to any other host in the cluster.
> >>>
> >>> I'm at a loss as to what to try next and I'm hoping that someone out
> >> there
> >>> might have had a similar issue and could shed some light on what to
> >> do.
> >>> Obviously, I can contact the customer and have them shutdown their
> >> VMs, but
> >>> that will potentially just delay this problem to be solved another
> >> day.
> >>> Even if shutting down the VMs is ultimately the solution, I'd still
> >> like to
> >>> unde

Re: CS 4.8 KVM VMs will not live migrate

2018-02-01 Thread David Mabry
Andrija,

Thanks for the tip.  I'll check that out and let you know what I find.

Thanks,
David Mabry
On 2/1/18, 2:04 PM, "Andrija Panic"  wrote:

The customer with serial number here :)

So, another issue I have noticed: when you have KVM host disconnections
(agent disconnects), then in some cases rows in the cloud.nics table end up
with the broadcast_uri, isolation_uri, state or a similar field set to NULL
instead of the correct values for the affected VM's NIC.

In this case the VM will not live migrate via ACS (but you can of course
migrate it manually). The fix is to update the nics table with proper values
(copy them from other NICs in the same network).

Check if this might be the case...

Cheers

On 31 January 2018 at 15:49, Tutkowski, Mike 
wrote:

> Glad to hear you fixed the issue! :)
>
> > On Jan 31, 2018, at 7:16 AM, David Mabry  wrote:
> >
> > Mike and Wei,
> >
> > Good news!  I was able to manually live migrate these VMs following the
> steps outlined below:
> >
> > 1.) virsh dumpxml 38 --migratable > 38.xml
> > 2.) Change the vnc information in 38.xml to match destination host IP
> and available VNC port
> > 3.) virsh migrate --verbose --live 38 --xml 38.xml qemu+tcp://
> destination.host.net/system
> >
> > To my surprise, Cloudstack was able to discover and properly handle the
> fact that this VM was live migrated to a new host without issue.  Very cool.
> >
> > Wei, I suspect you are correct when you said this was an issue with the
> cloudstack agent code.  After digging a little deeper, the agent is never
> attempting to talk to libvirt at all after prepping the dxml to send to the
> destination host.  I'm going to attempt to reproduce this in my lab and
> attach a remote debugger and see if I can get to the bottom of it.
> >
> > Thanks again for the help guys!  I really appreciate it.
> >
> > Thanks,
> > David Mabry
> >
> > On 1/30/18, 9:55 AM, "David Mabry"  wrote:
> >
> >Ah, understood.  I'll take a closer look at the logs and make sure
> that I didn't accidentally miss those lines when I pulled together the logs
> for this email chain.
> >
> >Thanks,
> >David Mabry
> >On 1/30/18, 8:34 AM, "Wei ZHOU"  wrote:
> >
> >Hi David,
> >
> >I encountered the UnsupportedAnswer once before, when I made some
> changes in
> >the kvm plugin.
> >
> >Normally there should be some network configurations in the
> agent.log but I
> >do not see it.
> >
> >-Wei
> >
> >
> >2018-01-30 15:00 GMT+01:00 David Mabry :
> >
> >> Hi Wei,
> >>
> >> I detached the iso and received the same error.  Just out of curiosity,
> >> what leads you to believe it is something in the vxlan code?  I guess at
> >> this point, attaching a remote debugger to the agent in question might
> be
> >> the best way to get to the bottom of what is going on.
> >>
> >> Thanks in advance for the help.  I really, really appreciate it.
> >>
> >> Thanks,
> >> David Mabry
> >>
> >> On 1/30/18, 3:30 AM, "Wei ZHOU"  wrote:
> >>
> >>The answer should be caused by an exception in the cloudstack agent.
> >>I tried to migrate a vm in our testing env, it is working.
> >>
> >>There are some differences between our env and yours:
> >>(1) vlan VS vxlan
> >>(2) no ISO VS attached ISO
> >>(3) both of us use ceph and centos7.
> >>
> >>I suspect it is caused by the vxlan code.
> >>However, could you detach the ISO and try again?
> >>
> >>-Wei
> >>
> >>
> >>
> >>2018-01-29 19:48 GMT+01:00 David Mabry :
> >>
> >>> Good day Cloudstack Devs,
> >>>
> >>> I've run across a real head scratcher.  I have two VMs, (initially 3
> >> VMs,
> >>> but more on that later) on a single host, that I cannot live migrate
> >> to any
> >>> other host in the same cluster.  We discovered this after attempting
> >> to
> >>> roll out patches going from CentOS 7.2 to CentOS 7.4.  Initially, we
> >>> thought it had something to do with the new version of libvirtd or
> >> qemu-kvm
> >>> on the other hosts in the cluster preventing these VMs from
> >> migrating, but
> >>> we are able to live migrate other VMs to and from this host without
> >> issue.
> >>> We can even create new VMs on this specific host and live migrate
> >> them
> >>> after creation with no issue.  We've put the migration source agent,
> >>> migration destination agent and the management server in debug and
> >> don't
> >>> seem to get anything useful other than "Unsupported command".
> >> Luckily,

[fosdem] Anybody going to Fosdem this weekend?

2018-02-01 Thread Rohit Yadav
Hi all,

I will be at Fosdem in Brussels this weekend, and I know Daan is going to be 
there too. If you're going, it would be lovely to meet you and discuss 
CloudStack among other things; tweet me @rhtyd.

Cheers.

rohit.ya...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London, WC2N 4HS, UK
@shapeblue
  
 



Re: CS 4.8 KVM VMs will not live migrate

2018-02-01 Thread David Mabry
Andrija,

You were right!  The isolation_uri and the broadcast_uri were both blank for 
the problem VMs.  Once I corrected the issue, I was able to migrate them inside 
of CS without issue.  Thanks for helping me get to the root cause of this 
issue.  

Thanks,
David Mabry

On 2/1/18, 3:27 PM, "David Mabry"  wrote:

Andrija,

Thanks for the tip.  I'll check that out and let you know what I find.

Thanks,
David Mabry
On 2/1/18, 2:04 PM, "Andrija Panic"  wrote:

The customer with serial number here :)

So, another issue I have noticed: when you have KVM host disconnections
(agent disconnects), then in some cases rows in the cloud.nics table end up
with the broadcast_uri, isolation_uri, state or a similar field set to NULL
instead of the correct values for the affected VM's NIC.

In this case the VM will not live migrate via ACS (but you can of course
migrate it manually). The fix is to update the nics table with proper values
(copy them from other NICs in the same network).

Check if this might be the case...

Cheers

On 31 January 2018 at 15:49, Tutkowski, Mike 
wrote:

> Glad to hear you fixed the issue! :)
>
> > On Jan 31, 2018, at 7:16 AM, David Mabry  wrote:
> >
> > Mike and Wei,
> >
> > Good news!  I was able to manually live migrate these VMs following the
> steps outlined below:
> >
> > 1.) virsh dumpxml 38 --migratable > 38.xml
> > 2.) Change the vnc information in 38.xml to match destination host IP
> and available VNC port
> > 3.) virsh migrate --verbose --live 38 --xml 38.xml qemu+tcp://
> destination.host.net/system
> >
> > To my surprise, Cloudstack was able to discover and properly handle the
> fact that this VM was live migrated to a new host without issue.  Very cool.
> >
> > Wei, I suspect you are correct when you said this was an issue with the
> cloudstack agent code.  After digging a little deeper, the agent is never
> attempting to talk to libvirt at all after prepping the dxml to send to the
> destination host.  I'm going to attempt to reproduce this in my lab and
> attach a remote debugger and see if I can get to the bottom of it.
> >
> > Thanks again for the help guys!  I really appreciate it.
> >
> > Thanks,
> > David Mabry
> >
> > On 1/30/18, 9:55 AM, "David Mabry"  wrote:
> >
> >Ah, understood.  I'll take a closer look at the logs and make sure
> that I didn't accidentally miss those lines when I pulled together the logs
> for this email chain.
> >
> >Thanks,
> >David Mabry
> >On 1/30/18, 8:34 AM, "Wei ZHOU"  wrote:
> >
> >Hi David,
> >
> >I encountered the UnsupportedAnswer once before, when I made some
> changes in
> >the kvm plugin.
> >
> >Normally there should be some network configurations in the
> agent.log but I
> >do not see it.
> >
> >-Wei
> >
> >
> >2018-01-30 15:00 GMT+01:00 David Mabry :
> >
> >> Hi Wei,
> >>
> >> I detached the iso and received the same error.  Just out of curiosity,
> >> what leads you to believe it is something in the vxlan code?  I guess at
> >> this point, attaching a remote debugger to the agent in question might
> be
> >> the best way to get to the bottom of what is going on.
> >>
> >> Thanks in advance for the help.  I really, really appreciate it.
> >>
> >> Thanks,
> >> David Mabry
> >>
> >> On 1/30/18, 3:30 AM, "Wei ZHOU"  wrote:
> >>
> >>The answer should be caused by an exception in the cloudstack agent.
> >>I tried to migrate a vm in our testing env, it is working.
> >>
> >>There are some differences between our env and yours:
> >>(1) vlan VS vxlan
> >>(2) no ISO VS attached ISO
> >>(3) both of us use ceph and centos7.
> >>
> >>I suspect it is caused by the vxlan code.
> >>However, could you detach the ISO and try again?
> >>
> >>-Wei
> >>
> >>
> >>
> >>2018-01-29 19:48 GMT+01:00 David Mabry :
> >>
> >>> Good day Cloudstack Devs,
> >>>
> >>> I've run across a real head scratcher.  I have two VMs, (initially 3
> >> VMs,
> >>> but more on that later) on a single host, that I cannot live migrate
> >>