GitHub user rhtyd opened a pull request:
https://github.com/apache/cloudstack/pull/1601
CLOUDSTACK-9348: Reduce Nio selector wait time
This reduces the Nio loop selector wait time, so that the selector checks more
frequently (waiting at most 100ms per iteration) and handles any pending
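For context, here is a minimal sketch of the idea (not CloudStack's actual NioConnection code; the class name, constant, and method below are illustrative): lowering the timeout passed to select() caps how long each loop iteration can block, so work queued by other threads gets picked up within roughly 100ms.

```java
import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.util.Iterator;

public class SelectorLoopSketch {
    // Hypothetical constant: the reduced wait time discussed in the PR.
    private static final long SELECTOR_WAIT_MS = 100;

    public static void run(Selector selector) throws IOException {
        while (!Thread.currentThread().isInterrupted()) {
            // Block for at most 100ms even if no channel becomes ready,
            // so the loop can regularly revisit queued registrations/tasks.
            int ready = selector.select(SELECTOR_WAIT_MS);
            if (ready > 0) {
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    // dispatch connect/read/write handling for 'key' here
                }
            }
            // handle any pending tasks queued by other threads here
        }
    }
}
```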
Hi Will,
I've created a PR that aims to make Nio-related connections slightly more aggressive,
which should solve any timeout/reconnection issues. If you're still seeing the
issue with the *latest* master, please test and merge this PR:
https://github.com/apache/cloudstack/pull/1601
I could not reproduc
There hasn't been much response to this, but I'll start clearing away the
unclaimed items; people can always add them back.
Kind regards,
Paul Angus
paul.an...@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London WC2N 4HS, UK
@shapeblue
Hi,
I can't edit the page, but I'll be glad to put some effort into V5:
- Live migration for KVM
- Improve logging using UUIDs (as I've already done part of that for us at exoscale)
I'm in the process of adding another feature we need: graceful shutdown of a
management server when running a cluster o
That sounds great. I'll get it added.
Kind regards,
Paul Angus
paul.an...@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London WC2N 4HS, UK
@shapeblue
-----Original Message-----
From: ma...@exoscale.ch [mailto:ma...@exoscale.ch]
Sent: 01 July 2016 11:31
To: dev@clouds
Github user prashanthvarma commented on the issue:
https://github.com/apache/cloudstack/pull/1580
Marvin test code PEP8 & PyFlakes compliance:
CloudStack$
CloudStack$ pep8 --max-line-length=150 test/integration/plugins/nuagevsp/*.py
CloudStack$
CloudStack$ pyflakes test
Github user mike-tutkowski commented on the issue:
https://github.com/apache/cloudstack/pull/1600
My fourth test run is successful - TestSnapshots.py:
test_01_create_volume_snapshot_using_sf_snapshot
(TestSnapshots.TestSnapshots) ... === TestName:
test_01_create_volume_snapsh
GitHub user nvazquez opened a pull request:
https://github.com/apache/cloudstack/pull/1602
CLOUDSTACK-9422: Granular VMware vms creation as full clones on HV
### Introduction
For VMware, it is possible to decide whether VMs are created as full clones on
the ESX HV by adjusting `vmware.creat
Github user mike-tutkowski commented on the issue:
https://github.com/apache/cloudstack/pull/1600
My fifth and final test run is successful - TestAddRemoveHosts.py:
test_add_remove_host_with_solidfire_plugin_1
(TestAddRemoveHosts.TestAddRemoveHosts) ... === TestName:
test_add
Github user mike-tutkowski commented on the issue:
https://github.com/apache/cloudstack/pull/1600
The code LGTM.
In addition, my automated regression tests have all come back successful
(documented above).
I plan to write new automated tests for this PR.
---
If your
Github user remibergsma commented on the issue:
https://github.com/apache/cloudstack/pull/1580
PEP8 recommends a max line length of 79, so why bump it to 150? That doesn't
help readability IMO. And yes, I know many other tests are crappily formatted
too. But since you state it's PEP8 comp
Github user prashanthvarma commented on the issue:
https://github.com/apache/cloudstack/pull/1580
@remibergsma As a common team practice, we just happened to agree upon this
line length :). Agreed, we should make our Marvin code truly PEP8 compliant
as you suggested, and update ou
Marco,
RE:
> - Live migration for KVM
How far are you from completing this?
The reason I'm asking: we have this feature completed as a plugin for
cross-cluster migration of LIVE VMs. We would like to take it out of the
plugin and contribute it to the CloudStack core.
We are also considering adding COLD VM cross clu
Marco,
I've written a tiny shell script that does the following:
It makes sure there are no async_jobs still running, and also blocks port 8080
via iptables, to avoid users connecting to an MS that's about to go down.
It needs a bit of enhancement - it should look up the MSID of that
specific server. It looks someth
Github user mike-tutkowski commented on the issue:
https://github.com/apache/cloudstack/pull/1600
In manual testing, I've noticed that the SR that contains the VDI that we
copy from primary to secondary storage is not removed from XenServer.
I see that the SolidFire volume tha