Re: [poll] cloudstack exam

2018-05-01 Thread Andrija Panic
Hi Giles,

FYI, the discount code is no longer valid:

Discount validation failed.

Exam: ACCEL-100, May 8 at 11:30 AM

   - This discount cannot be used with appointments scheduled after 31 Oct
   2016.

Cheers,
Andrija

On Mon, Nov 30, 2015, 19:44 Stephan Seitz <
s.se...@secretresearchfacility.com> wrote:

>
> > Quick poll: has anybody here taken the ACCEL CloudStack certification
> > exam? What did you think? Too hard, too easy, or about right?
>
> Well, I signed the usual NDA at Pearson VUE, so I shouldn't answer in
> detail :)
> The exam covered a lot of aspects around ACS and was, in my opinion, well
> balanced. I did it spontaneously (but with (A)CS hands-on since 2.2) and
> managed it.
> It obviously shows some parallels to LPIC 304, but I assume this is
> inevitable.
>
> So, about right, I'd say.
>
> >
> > Also, by way of reminder: if you use the code ACCELpromocodeASF when
> > registering for the exam, 1/3 of the fee goes to the ACS project
> >
> > Kind Regards
> > Giles
> >
> > Giles Sirett
> > CEO
>


Re: [DISCUSS] VR upgrade downtime reduction

2018-05-01 Thread Rohit Yadav
All,


A short-term solution for VR upgrades and network restarts (with cleanup=true) has 
been implemented:


- The strategy for redundant VRs builds on top of Wei's original patch, where 
backup routers are removed and replaced on a rolling basis. The downtime I saw 
was usually 0-2 seconds; theoretically, the downtime is at most [0, 
3*advertisement interval + skew seconds], or 0-10 seconds (with CloudStack's 
default of 1s advertisement interval).
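
For reference, the timer arithmetic behind that bound, as defined by RFC 3768 
(a rough Python sketch; the backup priority value is only an assumption for 
illustration, not a CloudStack default):

    # Worst-case VRRP failover window per RFC 3768: a backup declares
    # the master dead once Master_Down_Interval elapses without an
    # advertisement being received.
    def master_down_interval(advert_interval_s, backup_priority):
        # Skew_Time makes higher-priority backups take over sooner.
        skew_time = (256 - backup_priority) / 256.0
        return 3 * advert_interval_s + skew_time

    # With a 1s advertisement interval and an assumed backup priority
    # of 100, the window is about 3.6 seconds:
    print(master_down_interval(1, 100))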


- For non-redundant routers, I've implemented a strategy where first a new VR 
is deployed, then the old VR is powered off/destroyed, and the new VR is again 
re-programmed. With this strategy, two identical VRs may be up for a brief 
moment (a few seconds) during which both can serve traffic; however, the new VR 
performs an arp-ping on its interfaces to update neighbours. After the old VR is 
removed, the new VR is re-programmed, which among many things performs another 
arp-ping. The theoretical downtime is therefore limited by the arp-cache 
refresh, which can be up to 30 seconds. In my experiments against various 
VMware, KVM and XenServer versions, I found that the downtime was indeed less 
than 30s, usually between 5-20 seconds. Compared to older ACS versions, 
especially in cases where VR deployment requires a full volume copy (as in 
VMware), a 10x-12x improvement was seen.
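
For those curious how such a neighbour update works, gratuitous ARP can be sent 
with the standard arping tool (iputils). A minimal Python sketch — this is an 
illustration only, not the actual VR code, and the interface name and address 
are made up:

    import subprocess

    def announce(interface, address, count=3):
        # -U sends unsolicited (gratuitous) ARP so neighbours refresh
        # their caches with this interface's MAC; -I picks the interface.
        subprocess.check_call(
            ["arping", "-c", str(count), "-U", "-I", interface, address])

    announce("eth0", "10.1.1.1")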


Please review and test the following PR, which has test details, benchmarks, and 
some screenshots:

https://github.com/apache/cloudstack/pull/2508


Future work can be driven towards making all VRs redundancy-enabled by default, 
which would allow for firewall and connection state transfer (conntrackd + 
VRRP2/3 based) during rolling reboots.


- Rohit






From: Daan Hoogland 
Sent: Thursday, February 8, 2018 3:11:51 PM
To: dev
Subject: Re: [DISCUSS] VR upgrade downtime reduction

To stop the vote and continue the discussion: I personally want unification
of all router vms: VR, 'shared network', rVR, VPC, rVPC, and eventually the
one we want to create for 'enterprise topology hand-off points'. And I
think we have some level of consensus on that, but the path there is a
concern for Wido and for some of my colleagues as well, and rightly so. One
issue is upgrades from older versions.

I see the common scenario as follows:
+ redundancy is deprecated and only the number of instances remains.
+ an old VR is replicated in memory by a redundancy-enabled version, which
will be in a state of running but inactive.
- the old one will be destroyed while a ping is running
- as soon as the ping fails more than three times in a row (this might have
to have a hypervisor-specific implementation or require a helper vm; a
rough sketch follows below)
+ the new one is activated
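
A rough Python sketch of that cutover (the helper functions and the
three-failure threshold are hypothetical, not existing CloudStack code):

    import subprocess

    def ping_ok(address):
        # One echo request with a 1-second timeout.
        return subprocess.call(
            ["ping", "-c", "1", "-W", "1", address]) == 0

    def cut_over(address, destroy_old_vr, activate_new_vr):
        destroy_old_vr()     # old VR goes away while we keep pinging
        failures = 0
        while failures < 3:  # three misses in a row => it is gone
            failures = failures + 1 if not ping_ok(address) else 0
        activate_new_vr()    # bring the prepared replacement online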

after this upgrade Wei's and/or Remi's code will do the work for any
following upgrade.

flames, please



On Wed, Feb 7, 2018 at 12:17 PM, Nux!  wrote:

> +1 too
>
> --
> Sent from the Delta quadrant using Borg technology!
>
> Nux!
> www.nux.ro
>
> 
> - Original Message -
> > From: "Rene Moser" 
> > To: "dev" 
> > Sent: Wednesday, 7 February, 2018 10:11:45
> > Subject: Re: [DISCUSS] VR upgrade downtime reduction
>
> > On 02/06/2018 02:47 PM, Remi Bergsma wrote:
> >> Hi Daan,
> >>
> >> In my opinion the biggest issue is the fact that there are a lot of
> >> different code paths: VPC versus non-VPC, VPC versus redundant-VPC, etc.
> >> That's why you cannot simply switch from a single VPC to a redundant VPC
> >> for example.
> >>
> >> For SBP, we mitigated that in Cosmic by converting all non-VPCs to a VPC
> >> with a single tier and made sure all features are supported. Next we
> >> merged the single and redundant VPC code paths. The idea here is that
> >> redundancy or not should only be a difference in the number of routers.
> >> Code should be the same. A single router is also "master", but there
> >> just is no "backup".
> >>
> >> That simplifies things A LOT, as keepalived is now the master of the
> >> whole thing. No more assigning ip addresses in Python, but leave that to
> >> keepalived instead. Lots of code deleted. Easier to maintain, way more
> >> stable. We just released Cosmic 6 that has this feature and are now
> >> rolling it out in production. Looking good so far. This change unlocks a
> >> lot of possibilities, like live upgrading from a single VPC to a
> >> redundant one (and back). In the end, if the redundant VPC is rock
> >> solid, you most likely don't even want single VPCs any more. But that
> >> will come.
> >>
> >> As I said, we're rolling this out as we speak. In a few weeks when
> >> everything is upgraded I can share what we learned and how well it
> >> works. CloudStack could use a similar approach.
> >
> > +1 Pretty much this.
> >
> > René
>



--
Daan


Re: [DISCUSS] VR upgrade downtime reduction

2018-05-01 Thread Daan Hoogland
good work Rohit,
I'll review 2508 https://github.com/apache/cloudstack/pull/2508

On Tue, May 1, 2018 at 12:08 PM, Rohit Yadav 
wrote:

> All,
>
>
> A short-term solution for VR upgrades and network restarts (with
> cleanup=true) has been implemented:
>
>
> - The strategy for redundant VRs builds on top of Wei's original patch,
> where backup routers are removed and replaced on a rolling basis. The
> downtime I saw was usually 0-2 seconds; theoretically, the downtime is at
> most [0, 3*advertisement interval + skew seconds], or 0-10 seconds (with
> CloudStack's default of 1s advertisement interval).
>
>
> - For non-redundant routers, I've implemented a strategy where first a new
> VR is deployed, then the old VR is powered off/destroyed, and the new VR is
> again re-programmed. With this strategy, two identical VRs may be up for a
> brief moment (a few seconds) during which both can serve traffic; however,
> the new VR performs an arp-ping on its interfaces to update neighbours.
> After the old VR is removed, the new VR is re-programmed, which among many
> things performs another arp-ping. The theoretical downtime is therefore
> limited by the arp-cache refresh, which can be up to 30 seconds. In my
> experiments against various VMware, KVM and XenServer versions, I found
> that the downtime was indeed less than 30s, usually between 5-20 seconds.
> Compared to older ACS versions, especially in cases where VR deployment
> requires a full volume copy (as in VMware), a 10x-12x improvement was seen.
>
>
> Please review and test the following PR, which has test details,
> benchmarks, and some screenshots:
>
> https://github.com/apache/cloudstack/pull/2508
>
>
> Future work can be driven towards making all VRs redundancy-enabled by
> default, which would allow for firewall and connection state transfer
> (conntrackd + VRRP2/3 based) during rolling reboots.
>
>
> - Rohit
>
> 
>
>
>
> 
> From: Daan Hoogland 
> Sent: Thursday, February 8, 2018 3:11:51 PM
> To: dev
> Subject: Re: [DISCUSS] VR upgrade downtime reduction
>
> To stop the vote and continue the discussion: I personally want unification
> of all router vms: VR, 'shared network', rVR, VPC, rVPC, and eventually the
> one we want to create for 'enterprise topology hand-off points'. And I
> think we have some level of consensus on that, but the path there is a
> concern for Wido and for some of my colleagues as well, and rightly so. One
> issue is upgrades from older versions.
>
> I see the common scenario as follows:
> + redundancy is deprecated and only the number of instances remains.
> + an old VR is replicated in memory by a redundancy-enabled version, which
> will be in a state of running but inactive.
> - the old one will be destroyed while a ping is running
> - as soon as the ping fails more than three times in a row (this might have
> to have a hypervisor-specific implementation or require a helper vm)
> + the new one is activated
>
> after this upgrade Wei's and/or Remi's code will do the work for any
> following upgrade.
>
> flames, please
>
>
>
> On Wed, Feb 7, 2018 at 12:17 PM, Nux!  wrote:
>
> > +1 too
> >
> > --
> > Sent from the Delta quadrant using Borg technology!
> >
> > Nux!
> > www.nux.ro
> >
> >
>
> - Original Message -
> > > From: "Rene Moser" 
> > > To: "dev" 
> > > Sent: Wednesday, 7 February, 2018 10:11:45
> > > Subject: Re: [DISCUSS] VR upgrade downtime reduction
> >
> > > On 02/06/2018 02:47 PM, Remi Bergsma wrote:
> > >> Hi Daan,
> > >>
> > >> In my opinion the biggest issue is the fact that there are a lot of
> > >> different code paths: VPC versus non-VPC, VPC versus redundant-VPC,
> > >> etc. That's why you cannot simply switch from a single VPC to a
> > >> redundant VPC for example.
> > >>
> > >> For SBP, we mitigated that in Cosmic by converting all non-VPCs to a
> > >> VPC with a single tier and made sure all features are supported. Next
> > >> we merged the single and redundant VPC code paths. The idea here is
> > >> that redundancy or not should only be a difference in the number of
> > >> routers. Code should be the same. A single router is also "master",
> > >> but there just is no "backup".
> > >>
> > >> That simplifies things A LOT, as keepalived is now the master of the
> > >> whole thing. No more assigning ip addresses in Python, but leave that
> > >> to keepalived instead. Lots of code deleted. Easier to maintain, way
> > >> more stable. We just released Cosmic 6 that has this feature and are
> > >> now rolling it out in production. Looking good so far. This change
> > >> unlocks a lot of possibilities, like live upgrading from a single VPC
> > >> to a redundant one (and back). In the end, if the redundant VPC is
> > >> rock solid, you most likely don't even want single VPCs any more. But
> > >> that will come.
> > >>
> > >> As I said, we're rolling this out as we speak.

ApacheCon North America 2018 schedule is now live.

2018-05-01 Thread Rich Bowen

Dear Apache Enthusiast,

We are pleased to announce our schedule for ApacheCon North America 
2018. ApacheCon will be held September 23-27 at the Montreal Marriott 
Chateau Champlain in Montreal, Canada.


Registration is open! The early bird rate of $575 lasts until July 21, 
at which time it goes up to $800. And the room block at the Marriott 
($225 CAD per night, including wifi) closes on August 24th.


We will be featuring more than 100 sessions on Apache projects. The 
schedule is now online at https://apachecon.com/acna18/


The schedule includes full tracks of content from CloudStack[1], 
Tomcat[2], and our GeoSpatial community[3].


We will have four keynote speakers, two of whom are Apache members and 
two of whom are from the wider community.


On Tuesday, Apache member and former board member Cliff Schmidt will be 
speaking about how Amplio uses technology to educate and improve the 
quality of life of people living in very difficult parts of the 
world[4]. And Apache Fineract VP Myrle Krantz will speak about how Open 
Source banking is helping the global fight against poverty[5].


Then, on Wednesday, we’ll hear from Bridget Kromhout, Principal Cloud 
Developer Advocate from Microsoft, about the really hard problem in 
software - the people[6]. And Euan McLeod, VP VIPER at Comcast, will 
show us the many ways that Apache software delivers your favorite shows 
to your living room[7].


ApacheCon will also feature old favorites like the Lightning Talks, the 
Hackathon (running the duration of the event), PGP key signing, and lots 
of hallway-track time to get to know your project community better.


Follow us on Twitter, @ApacheCon, and join the disc...@apachecon.com 
mailing list (send email to discuss-subscr...@apachecon.com) to stay up 
to date with developments. And if your company wants to sponsor this 
event, get in touch at h...@apachecon.com for opportunities that are 
still available.


See you in Montreal!

Rich Bowen
VP Conferences, The Apache Software Foundation
h...@apachecon.com
@ApacheCon

[1] http://cloudstackcollab.org/
[2] http://tomcat.apache.org/conference.html
[3] http://apachecon.dukecon.org/acna/2018/#/schedule?search=geospatial
[4] http://apachecon.dukecon.org/acna/2018/#/scheduledEvent/df977fd305a31b903
[5] http://apachecon.dukecon.org/acna/2018/#/scheduledEvent/22c6c30412a3828d6
[6] http://apachecon.dukecon.org/acna/2018/#/scheduledEvent/fbbb2384fa91ebc6b
[7] http://apachecon.dukecon.org/acna/2018/#/scheduledEvent/88d50c3613852c2de


Re: [DISCUSS] VR upgrade downtime reduction

2018-05-01 Thread Simon Weller
Yes, nice work!





From: Daan Hoogland 
Sent: Tuesday, May 1, 2018 5:28 AM
To: us...@cloudstack.apache.org
Cc: dev
Subject: Re: [DISCUSS] VR upgrade downtime reduction

good work Rohit,
I'll review 2508 https://github.com/apache/cloudstack/pull/2508

On Tue, May 1, 2018 at 12:08 PM, Rohit Yadav 
wrote:

> All,
>
>
> A short-term solution for VR upgrades and network restarts (with
> cleanup=true) has been implemented:
>
>
> - The strategy for redundant VRs builds on top of Wei's original patch,
> where backup routers are removed and replaced on a rolling basis. The
> downtime I saw was usually 0-2 seconds; theoretically, the downtime is at
> most [0, 3*advertisement interval + skew seconds], or 0-10 seconds (with
> CloudStack's default of 1s advertisement interval).
>
>
> - For non-redundant routers, I've implemented a strategy where first a new
> VR is deployed, then the old VR is powered off/destroyed, and the new VR is
> again re-programmed. With this strategy, two identical VRs may be up for a
> brief moment (a few seconds) during which both can serve traffic; however,
> the new VR performs an arp-ping on its interfaces to update neighbours.
> After the old VR is removed, the new VR is re-programmed, which among many
> things performs another arp-ping. The theoretical downtime is therefore
> limited by the arp-cache refresh, which can be up to 30 seconds. In my
> experiments against various VMware, KVM and XenServer versions, I found
> that the downtime was indeed less than 30s, usually between 5-20 seconds.
> Compared to older ACS versions, especially in cases where VR deployment
> requires a full volume copy (as in VMware), a 10x-12x improvement was seen.
>
>
> Please review and test the following PR, which has test details,
> benchmarks, and some screenshots:
>
> https://github.com/apache/cloudstack/pull/2508
>
>
> Future work can be driven towards making all VRs redundancy-enabled by
> default, which would allow for firewall and connection state transfer
> (conntrackd + VRRP2/3 based) during rolling reboots.
>
>
> - Rohit
>
> 
>
>
>
> 
> From: Daan Hoogland 
> Sent: Thursday, February 8, 2018 3:11:51 PM
> To: dev
> Subject: Re: [DISCUSS] VR upgrade downtime reduction
>
> To stop the vote and continue the discussion: I personally want unification
> of all router vms: VR, 'shared network', rVR, VPC, rVPC, and eventually the
> one we want to create for 'enterprise topology hand-off points'. And I
> think we have some level of consensus on that, but the path there is a
> concern for Wido and for some of my colleagues as well, and rightly so. One
> issue is upgrades from older versions.
>
> I see the common scenario as follows:
> + redundancy is deprecated and only the number of instances remains.
> + an old VR is replicated in memory by a redundancy-enabled version, which
> will be in a state of running but inactive.
> - the old one will be destroyed while a ping is running
> - as soon as the ping fails more than three times in a row (this might have
> to have a hypervisor-specific implementation or require a helper vm)
> + the new one is activated
>
> after this upgrade Wei's and/or Remi's code will do the work for any
> following upgrade.
>
> flames, please
>
>
>
> On Wed, Feb 7, 2018 at 12:17 PM, Nux!  wrote:
>
> > +1 too
> >
> > --
> > Sent from the Delta quadrant using Borg technology!
> >
> > Nux!
> > www.nux.ro
> >
> >
>
> - Original Message -
> > > From: "Rene Moser" 
> > > To: "dev" 
> > > Sent: Wednesday, 7 February, 2018 10:11:45
> > > Subject: Re: [DISCUSS] VR upgrade downtime reduction
> >
> > > On 02/06/2018 02:47 PM, Remi Bergsma wrote:
> > >> Hi Daan,
> > >>
> > >> In my opinion the biggest issue is the fact that there are a lot of
> > >> different code paths: VPC versus non-VPC, VPC versus redundant-VPC,
> > >> etc. That's why you cannot simply switch from a single VPC to a
> > >> redundant VPC for example.
> > >>
> > >> For SBP, we mitigated that in Cosmic by converting all non-VPCs to a
> > >> VPC with a single tier and made sure all features are supported. Next
> > >> we merged the single and redundant VPC code paths. The idea here is
> > >> that redundancy or not should only be a difference in the number of
> > >> routers. Code should be the same. A single router is also "master",
> > >> but there just is no "backup".
> > >>
> > >> That simplifies things A LOT, as keepalived is now the master of the
> > >> whole thing. No more assigning ip addresses in Python, but leave that
> > >> to keepalived instead. Lots of code deleted. Easier to maintain, way
> > >> more stable. We just released Cosmic 6 that has this feature and are
> > >> now rolling it out in production. Looking good so far. This change
> > >> unlocks a lot of possibilities, like live upgrading from a single VPC
> > >> to a redundant one (and back).

Re: [DISCUSS] new way of github working

2018-05-01 Thread Tutkowski, Mike
Hi everyone,

We had a good conversation going here. Maybe we can continue it, get some level 
of reasonable consensus, and implement it (if, in fact, the consensus is a 
change from what we currently have).

My suggested approach is the following:

Before you create a PR, squash all applicable commits to make it more readable 
for reviewers. Once reviews start coming in and you start making changes, push 
new commits on top of the prior ones (do not squash at this point). This will 
make it easier for reviewers to confirm that you and they are on the same page 
with regard to what was changed. When you need to draw in changes from the 
base branch, rebase your commits on top of it. When the PR has been given an 
LGTM by 2+ reviewers and has passed the necessary regression tests, it should 
be squashed and then merged. I see the evolution of commits during the life of 
the PR as a temporary sandbox of history that is no longer required once the 
PR has been completed.
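
In concrete git terms, the flow I'm describing looks roughly like this (the
branch name is just an example):

    # Before opening the PR: squash work-in-progress commits
    git rebase -i master
    git push --force-with-lease origin my-feature

    # During review: plain commits on top, no squashing
    git commit -m "Address review comments"
    git push origin my-feature

    # To draw in changes from the base branch: rebase, not merge
    git fetch origin
    git rebase origin/master
    git push --force-with-lease origin my-feature

    # After 2+ LGTMs and passing tests, the PR is squashed and merged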

I think that process should cover the vast majority of our PRs.

There are usually some exceptions to the rule, however. When this happens, 
discuss your situation with the reviewers and bring any concerns to the mailing 
list before deviating from the standard process.

Thoughts?
Mike

On 1/15/18, 1:47 PM, "Rene Moser"  wrote:



On 01/15/2018 09:06 PM, Rafael Weingärtner wrote:
> Daan,
> 
> Now that master is open for merges again, can we get some feedback here? It
> might be interesting to find a consensus and a standardized way of working
> for everybody before we start merging things in master again …

+1 to allow merge commits on the master branch to keep the history of a series
of patches when they help to understand the change.
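
For example, a non-fast-forward merge is what produces such a merge commit
(the branch name here is illustrative):

    git checkout master
    git merge --no-ff --log my-feature

--no-ff forces a merge commit even when a fast-forward would be possible, and
--log records the merged patches' subjects in the merge commit message.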

René