I have updated the bug so it's high priority and tagged with
kilo-rc-potential, and added your note from below as a comment on the bug.
It looks like it might be worth a backport so it gets into RC2? Can anyone
take that bit on please?
Thanks,
John
On Sunday, April 12, 2015, Gary Kotton wrote:
Hi, Kevin,
I assumed that all agents connect to the same IP address of RabbitMQ; in that
case the number of connections will exceed the port range limitation.
For a RabbitMQ cluster, the client can certainly connect to any member of
the cluster, but in that case the client has to be designed in fail
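Client libraries can, for what it's worth, be pointed at several cluster
members and fail over between them. A sketch with kombu (host names and
credentials made up):

    from kombu import Connection

    # Semicolon-separated alternates: kombu tries the next broker URL
    # when the current one becomes unreachable.
    conn = Connection(
        'amqp://guest:guest@rabbit-1:5672//;'
        'amqp://guest:guest@rabbit-2:5672//',
        failover_strategy='round-robin')
    conn.ensure_connection(max_retries=3)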
Gary, John,
Just to speed things up, I filed a backport:
https://review.openstack.org/#/c/172710/
thanks,
dims
On Sun, Apr 12, 2015 at 4:23 AM, John Garbutt wrote:
> I have updated the bug so it's high priority and tagged with
> kilo-rc-potential, and added your note from below as a comment on
Thanks!
On 4/12/15, 3:04 PM, "Davanum Srinivas" wrote:
>Gary, John,
>
>Just to speed things up, I filed a backport:
>https://review.openstack.org/#/c/172710/
>
>thanks,
>dims
>
>On Sun, Apr 12, 2015 at 4:23 AM, John Garbutt
>wrote:
>> I have updated the bug so it's high priority and tagged with
Excerpts from Clint Byrum's message of 2015-04-08 23:11:29:
>
> I discussed a format for something similar here:
>
> https://review.openstack.org/#/c/162267/
>
> Perhaps we could merge the effort.
>
> The design and implementation in that might take some time, but if we
> can document the
Kevin Benton wrote:
So IIUC tooz would be handling the liveness detection for the agents.
It would be nice to get rid of that logic in Neutron and just
register callbacks for rescheduling the dead.
Where does it store that state? Does it persist timestamps to the DB
like Neutron does? If so,
On 04/12/2015 04:16 AM, Bernd Bausch wrote:
> There is nothing like a good rage on a Sunday (yes Sunday) afternoon. Many
> thanks, Monty. You helped me make glance work for my particular case; I will
> limit any further messages to the docs mailing list.
Rage on a Sunday followed up by rage coding
>I assumed that all agents connect to the same IP address of RabbitMQ; in
that case the number of connections will exceed the port range limitation.
Only if the clients are all using the same IP address. If connections
weren't scoped by source IP, busy servers would be completely unreliable
because clients would
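For what it's worth, the arithmetic behind the port concern (a sketch;
32768-61000 is the common Linux default for net.ipv4.ip_local_port_range,
not a number from this thread):

    # TCP connections are keyed by the (src IP, src port, dst IP, dst port)
    # 4-tuple, so one client IP talking to one broker (IP, port) endpoint
    # is bounded by the client's ephemeral port range.
    ephemeral_ports = 61000 - 32768
    print("max connections from one client IP to one broker endpoint: %d"
          % ephemeral_ports)
    # The broker is not bound by this: each distinct client IP brings its
    # own 4-tuple space, which is the point made above.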
>Timestamps are just one way (and likely the most primitive), using redis
(or memcache) key/value and expiry are another (and letting memcache or
redis expire using its own internal algorithms), using zookeeper ephemeral
nodes[1] are another... The point being that it's backend specific and tooz
sup
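A minimal sketch of the ephemeral-node style through tooz (assuming the
zookeeper backend; the group and member ids are made up for illustration):

    from tooz import coordination

    # Hypothetical agent-side liveness via tooz group membership.
    coordinator = coordination.get_coordinator(
        'zookeeper://127.0.0.1:2181', b'agent-1')
    coordinator.start()
    # With the zookeeper backend, membership is an ephemeral node that
    # disappears when the agent's session dies, so no timestamp polling.
    try:
        coordinator.create_group(b'neutron-agents').get()
    except coordination.GroupAlreadyExist:
        pass
    coordinator.join_group(b'neutron-agents').get()
    coordinator.heartbeat()  # call periodically from the agent's main loop

    # The scheduler side can then ask the backend who is alive:
    live_agents = coordinator.get_members(b'neutron-agents').get()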
Right now we do something that upstream pip considers wrong: we make
our requirements.txt be our install_requires.
Upstream there are two separate concepts.
install_requirements, which are meant to document what *must* be
installed to import the package, and should encode any mandatory
version co
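To make the distinction concrete, a generic sketch (not any particular
project's real metadata):

    # setup.py: install_requires carries the loosest constraints under
    # which the package is believed to import and work.
    from setuptools import setup

    setup(
        name='example',
        version='1.0',
        install_requires=['requests>=2.2'],  # a floor, not a pin
    )

requirements.txt, by contrast, would carry the exact set that was actually
tested, e.g. a single pinned line like requests==2.6.0.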
On 04/12/2015 06:43 PM, Robert Collins wrote:
> Right now we do something that upstream pip considers wrong: we make
> our requirements.txt be our install_requires.
>
> Upstream there are two separate concepts.
>
> install_requirements, which are meant to document what *must* be
> installed to im
On Mon, Apr 13, 2015 at 9:12 AM, Monty Taylor wrote:
> On 04/12/2015 06:43 PM, Robert Collins wrote:
> > Right now we do something that upstream pip considers wrong: we make
> > our requirements.txt be our install_requires.
> >
> > Upstream there are two separate concepts.
> >
> > install_require
On 04/12/2015 08:01 PM, James Polley wrote:
> On Mon, Apr 13, 2015 at 9:12 AM, Monty Taylor wrote:
>
>> On 04/12/2015 06:43 PM, Robert Collins wrote:
>>> Right now we do something that upstream pip considers wrong: we make
>>> our requirements.txt be our install_requires.
>>>
>>> Upstream there a
On 13 April 2015 at 12:01, James Polley wrote:
>
>
> That sounds, to me, very similar to a discussion we had a few weeks ago in
> the context of our stable branches.
>
> In that context, we have two competing requirements. One requirement is that
> our CI system wants a very tightly pinned requir
On 13 April 2015 at 12:53, Monty Taylor wrote:
> What we have in the gate is the thing that produces the artifacts that
> someone installing using the pip tool would get. Shipping anything with
> those artifacts other than a direct communication of what we tested is
> just mean to our end users.
There were several problems with the keystoneclient stable/juno branch that
have been or are in the process of being fixed since its creation.
Hopefully this note will be useful to other projects that create stable
branches for their libraries.
1) Unit tests didn't pass with earlier packages
The
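The usual shape of the fix for that class of problem is adjusting the
version bounds on the stable branch (illustrative lines only, not the
actual keystoneclient changes):

    # stable/juno requirements.txt (hypothetical bounds)
    six>=1.9.0                  # raised floor: the unit tests need newer six
    oslo.config>=1.4.0,<1.10.0  # cap: shield the branch from new releases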
Hi, as promised I now have details of a charity for people to donate
to in Chris' memory:
http://participate.freetobreathe.org/site/TR?px=1582460&fr_id=2710&pg=personal#.VSscH5SUd90
In the words of the family:
"We would prefer that people donate to lung cancer research in lieu of
flowers. L
On 13 April 2015 at 13:09, Robert Collins wrote:
> On 13 April 2015 at 12:53, Monty Taylor wrote:
>
>> What we have in the gate is the thing that produces the artifacts that
>> someone installing using the pip tool would get. Shipping anything with
>> those artifacts other than a direct communica
Hi, Kevin and Joshua,
As I understand it, Tooz only addresses the issue of agent status management,
but how do we solve the concurrent dynamic load impact at large scale (for
example, 100k managed nodes with dynamic load like security group rule
updates, routers_updated, etc.)?
And one more que
Kevin Benton wrote:
>Timestamps are just one way (and likely the most primitive), using
redis (or memcache) key/value and expiry are another (and letting
memcache or redis expire using its own internal algorithms), using
zookeeper ephemeral nodes[1] are another... The point being that it's
backen
joehuang wrote:
Hi, Kevin and Joshua,
As I understand it, Tooz only addresses the issue of agent status
management, but how do we solve the concurrent dynamic load impact at large
scale (for example, 100k managed nodes with dynamic load like
security group rule updates, routers_updated, etc.)?
Joshua Harlow wrote:
Kevin Benton wrote:
>Timestamps are just one way (and likely the most primitive), using
redis (or memcache) key/value and expiry are another (and letting
memcache or redis expire using its own internal algorithms), using
zookeeper ephemeral nodes[1] are another... The point
On 04/10/2015 06:58 PM, Colleen Murphy wrote:
> Just to make it official: since we only had one nominee for PTL, we will
> go ahead and name Emilien Macchi as our new PTL without proceeding with
> an election process. Thanks, Emilien, for all your hard work and for
> taking on this responsibility
Hi,
I am very saddened to read this. Not only will Chris be missed on a
professional level but on a personal level. He was a real mensh
(http://www.thefreedictionary.com/mensh). He was always helpful and
supportive. Wishing his family a long life.
Thanks
Gary
On 4/13/15, 4:33 AM, "Michael Still"
- Original Message -
> From: "Kevin Benton"
> To: "OpenStack Development Mailing List (not for usage questions)"
>
> Sent: Sunday, April 12, 2015 4:17:29 AM
> Subject: Re: [openstack-dev] [neutron] Neutron scaling datapoints?
>
>
>
> So IIUC tooz would be handling the liveness det
I would like to see some form of this merged, at least as an error message.
If a server has a bad CMOS battery and suffers a power outage, its clock
could easily be several years behind. In that scenario, the NTP daemon
could refuse to sync due to a sanity check.
On Wed, Apr 8, 2015 at 10:46 AM, S
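A minimal sketch of the kind of check being discussed (the reference date
is made up; a real check would compare against something like the build or
release timestamp):

    import datetime

    # Hypothetical sanity check: a clock reading earlier than the date the
    # software was built can only mean the clock is wrong (e.g. a dead
    # CMOS battery), so fail loudly instead of misbehaving later.
    BUILD_DATE = datetime.datetime(2015, 4, 1)  # made-up reference point

    now = datetime.datetime.utcnow()
    if now < BUILD_DATE:
        raise RuntimeError(
            "system clock (%s) predates this build; refusing to start" % now)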
Magnum now has container create and delete APIs. The container create API
pulls the Docker image from the docker-registry, but the container delete
API does not delete the image: the image remains even when no container
uses it any more. Would it be better to clean up the image in some way?
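One possible shape for that cleanup, as a sketch shelling out to the docker
CLI (magnum might prefer its own client bindings, and a real version would
have to verify no other container still references the image):

    import subprocess

    # Hypothetical cleanup: remove dangling (untagged, unreferenced) images.
    dangling = subprocess.check_output(
        ['docker', 'images', '-q', '--filter', 'dangling=true']
    ).decode().split()
    for image_id in dangling:
        subprocess.call(['docker', 'rmi', image_id])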