Hi all,

Here is my vote for the ACS 4.7.0 RC1.

Details:

Vote: +1

Besides the integration tests (which all ran fine), I've also tested the
following:

 - S3 Integration (Secondary Storage) with NFS Staging store
 - Ceph RBD storage (Primary Storage); see the sketch below
 - Basic networking with security groups 
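
For reference, registering a Ceph RBD pool as KVM primary storage looks
roughly like this with CloudMonkey; the UUIDs, monitor host, pool name and
cephx key below are placeholders, not the values from my setup:

  cloudmonkey create storagepool name=ceph-rbd \
    zoneid=<zone-uuid> podid=<pod-uuid> clusterid=<cluster-uuid> \
    url=rbd://cloudstack:<cephx-secret>@ceph-mon.example.com/cloudstack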

-- 

Met vriendelijke groet / Kind regards,

Boris Schrijver

PCextreme B.V.

http://www.pcextreme.nl/contact
Tel direct: +31 (0) 118 700 215

> On December 16, 2015 at 12:34 AM Remi Bergsma <rberg...@schubergphilis.com>
> wrote:
> 
> 
> +1 (binding)
> 
> This vote is based on testing on a real cloud.
> 
> At Schuberg Philis we built a new cloud based on ACS 4.7.0RC1 (upgraded from
> 4.6). It runs XenServer 6.5 clusters, a CentOS 7 management cluster, Galera DB
> (also on CentOS 7), HA proxies (CentOS 7), NFS storage and Nicira/NSX for
> networking/SDN. Capacity to start with is about 12 TB of RAM and 500+ cores.
> Secondary storage is an S3 compatible solution (Cloudian) with NFS staging
> store. Configured LDAP for authentication.
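> 
> As an illustration only (endpoint, bucket, keys and paths are placeholders,
> not our actual configuration), an S3 image store plus NFS staging store can
> be registered roughly like this with CloudMonkey:
> 
>   cloudmonkey add imagestore name=s3-store provider=S3 \
>     details[0].key=accesskey details[0].value=<ACCESS_KEY> \
>     details[1].key=secretkey details[1].value=<SECRET_KEY> \
>     details[2].key=bucket details[2].value=<BUCKET> \
>     details[3].key=endpoint details[3].value=s3.example.com
>   cloudmonkey create secondarystagingstore zoneid=<zone-uuid> \
>     url=nfs://nfs.example.com/export/staging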
> 
> Before a go-live we always do thorough testing and try to break the setup by
> emulating crashes and problems.
> 
> We successfully executed these CloudStack-related tests:
> 
>   *   crashed a hypervisor which was the pool master and saw recovery in about
> 5 min (tested with/without the hypervisor returning)
>   *   crashed a hypervisor which was NOT the pool master and saw recovery in
> about 5 min (tested with/without the hypervisor returning)
>   *   crashed an overbooked hypervisor in a cluster with too many VMs to run on
> the remaining hypervisors. Saw it recover fully when the crashed hypervisor
> returned. (You don’t want this to happen, but at least the recovery was
> automatic.)
>   *   crashed one of the app servers; the other one continued and took over.
> No user impact.
>   *   crashed the main Galera DB node; the two remaining nodes survived and
> kept working. No CloudStack impact.
>   *   did performance tests and ran into the default 200 Mbps limit on
> tiers. Once we raised it (i.e. configured it properly) we could use the
> full 10 Gbps. (See the sketch after this list.)
>   *   crashed the NFS staging store; could not deploy a VM from a template that
> was not already on primary storage. Recovered automatically when NFS returned
> and the VM was started.
>   *   many functional tests, also covered in the integration tests (spinning up
> many VMs, migrating, creating port forwardings, etc.).
>   *   executed a patch round (live-migrating VMs around), rebooting all
> hypervisors without user impact.
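> 
> For the throttling point above, a rough sketch (assuming the limit comes from
> the network.throttling.rate global setting; the value shown is illustrative):
> 
>   # default is 200 Mbit/s; existing networks may need a restart to pick it up
>   cloudmonkey update configuration name=network.throttling.rate value=10000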
> 
> Conclusion:
> It’s pretty solid: even with one management server and a degraded database we
> could still operate existing VMs and start new ones. When the nodes returned,
> recovery was automatic.
> We feel confident running production with Apache CloudStack 4.7 and will start
> doing so later today!
> 
> Regards,
> Remi
> 
> PS:
> The integration tests we ran in the dev/test environments were also successful
> (the same ones I executed on the PRs that were merged).
> 
> 
> 
> From: Remi Bergsma <rberg...@schubergphilis.com>
> Date: Sunday 13 December 2015 21:27
> To: "dev@cloudstack.apache.org" <dev@cloudstack.apache.org>
> Subject: [VOTE] Apache CloudStack 4.7.0
> 
> Hi all,
> 
> Since our 4.6.0 release (on Nov 13th, exactly 1 month ago), we have merged
> 100+ pull requests [1] with lots of bug fixes, refactoring and of course new
> features. Time for a new release!
> 
> 
> I've created a 4.7.0 release candidate, with the following artifacts up for a
> vote:
> 
> Git Branch and Commit SHA:
> https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=4.7.0-RC20151213T2109
> 
> Commit: 2f26a859a971a9852ed9f6f34fe35e52fe6028a9
> 
> Source release (checksums and signatures are available at the same location):
> https://dist.apache.org/repos/dist/dev/cloudstack/4.7.0/
> 
> PGP release keys (signed using A47DDC4F):
> https://dist.apache.org/repos/dist/release/cloudstack/KEYS
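> 
> For completeness, a minimal sketch of checking the artifacts locally (the
> filenames are illustrative; use the actual names from the URL above):
> 
>   gpg --import KEYS
>   gpg --verify apache-cloudstack-4.7.0-src.tar.bz2.asc apache-cloudstack-4.7.0-src.tar.bz2
>   sha512sum apache-cloudstack-4.7.0-src.tar.bz2   # compare with the published checksum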
> 
> Vote will be open for at least 72 hours.
> 
> For sanity in tallying the vote, can PMC members please be sure to indicate
> "(binding)" with their vote?
> 
> [ ] +1  approve
> [ ] +0  no opinion
> [ ] -1  disapprove (and reason why)
> 
> 
> [1] git log --pretty=oneline --abbrev-commit origin/4.6..4.7.0-RC20151213T2109
> | grep "Merge pull request"
>
