While we all recover from havana-2 and some of us party for the 3rd
OpenStack birthday at OSCON, we'll be skipping the release status
meeting tomorrow at 21:00 UTC.
PTLs should take the extra time to review their havana-3 plans and trim
them down to reasonable objectives where appropriate.
Hi Brian,
Thanks for your reply.
> 1. This isn't something a tenant should be able to do, so should be
> admin-only,
> correct?
Correct.
> 2. I think it would be useful for an admin to be able to add metering rules
> for
> all tenants with a single command. This gets back to wanting to pre-s
On Fri, Jul 19 2013, Sean Dague wrote:
> I assume it would gracefully degrade to the existing static allocators if
> something went wrong. If not, well that would be very bad.
>
> Ceilometer is an integrated project in Havana. Utilization based scheduling
> would be a new feature. I'm not sure why
Hi All.
There is a blueprint (
https://blueprints.launchpad.net/nova/+spec/db-reconnect) by Devananda van
der Veen, whose goal is to implement reconnection to the database and
retrying of the last operation if a DB connection fails. I'm working on the
implementation of this BP in oslo-incubator (
ht
Hi Salvatore,
I intend to replace the netifaces module, which the Ryu agent uses, with
the ip_lib module.
Thanks,
Kaneko
2013/7/21 Salvatore Orlando :
> I reckon the netifaces package is only used in Neutron's Ryu plugin.
> At a first glance, it should be possible to replace its current usage with
>
If you were using it from https://github.com/tripleo/incubator, you
should update your remotes to reference
https://github.com/openstack/tripleo-incubator.
I'll get the .gitreview stuff set up shortly.
Cheers,
Rob
--
Robert Collins
Distinguished Technologist
HP Cloud Services
Joe,
>> Speaking of Chris Beherns "Relying on anything but the DB for current
memory free, etc, is just too laggy… so we need to stick with it, IMO."
http://lists.openstack.org/pipermail/openstack-dev/2013-June/010485.html
It doesn't scale, uses tons of resources, works slowly, and is hard to extend.
Sandy,
I see only one race condition (the current solution has the same
situation):
Between the request to the compute node and the data being updated in the
DB, we could use the wrong state of the compute node.
By the way, it is fixed by the retry.
I don't see any new races produced by the new approach without the DB.
Cou
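The claim-then-retry approach described above could be sketched roughly like
this (all names here are hypothetical stand-ins, not the actual Nova code;
`pick_host` represents the scheduler's decision on possibly-stale data and
`claim` represents the compute node re-checking its real state):

```python
import time

def schedule_with_retry(pick_host, claim, max_retries=3):
    """Pick a host from possibly-stale state and retry if the claim fails."""
    for attempt in range(max_retries):
        host = pick_host()  # decision may be based on stale resource data
        if claim(host):     # the compute node re-checks its real state here
            return host
        time.sleep(0.1 * (attempt + 1))  # short backoff before retrying
    raise RuntimeError("no host accepted the request after %d tries"
                       % max_retries)
```

The point is that the race window (stale data between the scheduling
decision and the DB update) is tolerated rather than eliminated: a rejected
claim just triggers another pass.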
On 07/22/2013 08:16 AM, Boris Pavlovic wrote:
>>> * How do you bring a new scheduler up in an existing deployment and make it
>>> get the full state of the system?
>
> You should wait for one periodic task interval. And you will get full
> information about all compute nodes.
This also affects up
Sandy Walsh wrote on 2013-07-19:
>
>
> On 07/19/2013 09:47 AM, Day, Phil wrote:
>>> -Original Message-
>>> From: Sean Dague [mailto:[email protected]]
>>> Sent: 19 July 2013 12:04
>>> To: OpenStack Development Mailing List
>>> Subject: Re: [openstack-dev] [Nova] Ceilometer vs. Nova internal
Daniel raised a good point; I also agree that this is not a good architecture.
Nova can't touch any monitoring stuff - I don't think that is good.
At least, Ceilometer can be a monitoring hub for external utilities.
On the other hand, regarding the options Lianhao raised:
Is a query on a DB and a json colu
Russell,
To get information about "all" compute nodes we should wait for one periodic
task interval (60 seconds by default).
So starting will take a while.
But I don't think that this is a big problem:
1) if we are already able to wait each time for heavy and long (> few
seconds) db querie
2) if we have more
Hi folks,
I would like to start a discussion about the blueprint I raised about
multi region support.
I would like to get feedback from you. If something is not clear or you
have questions do not hesitate to ask.
Please let me know what you think.
Blueprint: https://blueprints.launchpad.net/h
On 07/22/2013 10:43 AM, Boris Pavlovic wrote:
> Russell,
>
> To get information about "all" compute nodes we should wait one periodic
> task (60 seconds by default).
> So starting will take a while.
>
> But I don't think that this is a big problem:
> 1) if we are already able to wait each time fo
Hello,
I've written an article about my ongoing work on improving OpenStack's
parallel performance:
http://blog.gridcentric.com/bid/318277/Boosting-OpenStack-s-Parallel-Performance
The article discusses host configuration changes and patches (upstream
and in progress) that give a 74% speedup in
As for the scalability issue, Boris, are you talking about the VF number issue,
i.e. that a physical PCI device can have at most 256 virtual functions?
I think we have discussed this before. We should have the compute node
manage the same VFs, so that VFs belonging to the same PF will hav
Hi,
In Portland, we discussed a somewhat related issue of having multiple
replication levels in one Swift cluster.
It may be that a provider would not wish to expose the use of EC or the
level of replication used. For example a provider may offer a predefined
set of services such as "Gold", "Silve
The laggy thing is that the resource tracker currently updates the usage
information whenever a resource changes, not only in periodic tasks. If you
really want to get the current result with periodic update, you have to do some
in-memory management and you even need sync between different scheduler
I updated the REST API draft -
https://etherpad.openstack.org/savanna_API_draft_EDP_extensions. New
methods related to job source and data discovery components were added;
also the job object was updated.
On Fri, Jul 19, 2013 at 12:26 AM, Trevor McKay wrote:
> fyi, updates to the diagram based
Hi, Boris
I'm surprised that you want to postpone the PCI support
(https://blueprints.launchpad.net/nova/+spec/pci-passthrough-base) to the I
release. You and our team have been working on this for a long time, and the
patches have been through several rounds of review. And we have been waiting for
On 07/22/2013 11:17 AM, Jiang, Yunhong wrote:
> Hi, Boris
> I'm surprised that you want to postpone the PCI support
> (https://blueprints.launchpad.net/nova/+spec/pci-passthrough-base) to the I
> release. You and our team have been working on this for a long time, and the
> patches have been
Hi there,
I've been working on the changes that would need to be done to make the
default config generator work for Neutron.
However, the current default config generator doesn't support generating
different configuration files (e.g. one per service/plugin). I can imagine
two opt
Per the last summit, there are many interested parties waiting on PCI
support. Boris (who unfortunately wasn't there) jumped in with an
implementation before the rest of us could get a blueprint up, but I
suspect he's been stretched rather thinly and progress has been much
slower than I was hopin
As a heads up I filed bugs with each of these projects (with the exception
of netifaces, which doesn't appear to have a tracker). The dnspython
maintainer has already uploaded the package to PyPi and disabled scraping!
Alex
On Fri, Jul 19, 2013 at 8:04 PM, Monty Taylor wrote:
> Hey guys!
>
> P
There is a bug in the requests package versions. The current version, 1.2.3,
is stable. According to PyPI, the eggs for versions 1.2.1 and 1.2.2 are
broken. Please fix this in the cinder requirements.
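As a sketch, the fix could be a pin along these lines in the requirements
file (the exact entry in cinder's requirements is an assumption on my part):

```
# avoid the broken 1.2.1/1.2.2 eggs on PyPI
requests>=1.2.3
```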
___
OpenStack-dev mailing list
[email protected]
http://lists.ope
On 22 July 2013 13:23, Boris Pavlovic wrote:
> I see only one race condition (the current solution has the same
> situation):
> Between the request to the compute node and the data being updated in the
> DB, we could use the wrong state of the compute node.
> By the way, it is fixed by the retry.
This race turns out to be a
Hi All,
I am following the Barbican project and I have some questions around it; I
would appreciate it if someone could answer them or point me to the right
resources.
1. What is the state of the project? Is it in a state where it can be
utilized in production deployments?
2. Does Barbi
I think most folks are on the same page wrt EC being considered a "level" or
"storage policy", as we've been discussing in other forums. I saw the previous
note on account versus container and was actually thinking it made more sense,
to me at least, to enable billing per container as opposed to tr
On Jul 22, 2013, at 9:34 AM, David Hadas wrote:
> Hi,
>
> In Portland, we discussed a somewhat related issue of having multiple
> replication levels in one Swift cluster.
> It may be that a provider would not wish to expose the use of EC or the level
> of replication used. For example a pro
On 07/22/2013 12:51 PM, John Garbutt wrote:
> On 22 July 2013 13:23, Boris Pavlovic wrote:
>> I see only one race condition (the current solution has the same
>> situation):
>> Between the request to the compute node and the data being updated in the
>> DB, we could use the wrong state of the compute node.
>> By the way
Greetings,
havana-1: 16 blueprints implemented
havana-2: 25 blueprints implemented
havana-3: currently 96 blueprints targeted [1]
The number of blueprints targeted at havana-3 is completely unrealistic.
As a result, there are a number of important points and actions we need
to take:
* If you have
Thx for helping corral it all, Russell :)
Sent from my really tiny device...
On Jul 22, 2013, at 11:03 AM, "Russell Bryant" wrote:
> Greetings,
>
> havana-1: 16 blueprints implemented
> havana-2: 25 blueprints implemented
> havana-3: currently 96 blueprints targeted [1]
>
> The number of bluep
Ian,
I don't like to write anything personally.
But I have to write some facts:
1) I see tons of hands and only two solutions: mine and one more that is
based on code.
2) My code was published before the session (18 Apr 2013).
3) Blueprints from the summit were published (03 Mar 2013).
4) My Blueprints were p
Hi!
While I was at the Community Leadership Summit conference this weekend, I met
the community manager for the Xen hypervisor project. He told me that there
are *no* OpenStack talks submitted to the upcoming XenCon conference.
The CFP closes this Friday.
Allow me to suggest that any of us wh
A while back (just before the summit, as I recall), there was a patch
submitted to remove the constraints on being able to connect multiple
interfaces of the same VM to the same Neutron network. [1]
It was unclear at the time whether this is a bug being fixed or a
feature being added, which rather
On 07/22/2013 01:38 PM, Miller, Mark M (EB SW Cloud - R&D - Corvallis)
wrote:
Hello,
I have been reading source code in an attempt to figure out how to use
the new split backend feature, specifically how to split the identity
data between an ldap server and the standard Keystone sql database.
Hi one more time.
I will refactor the DB layer tomorrow. As I said, I don't want to be a blocker.
Best regards,
Boris Pavlovic
---
Mirantis Inc.
On Mon, Jul 22, 2013 at 11:08 PM, Boris Pavlovic wrote:
> Ian,
>
> I don't like to write anything personally.
> But I have to write some facts:
>
> 1)
Sylvain,
Something like this would require no marking:
# iptables -N test2
# iptables -N test3
# iptables -A test3
# iptables -A test2 -d 9.9.9.9/32 -j RETURN
# iptables -A test2 -d 10.10.10.10/32 -j RETURN
# iptables -A test2 -j test3
# iptables -A OUTPUT -j test2
# ping -I eth0 -r 9.9.9.9
PING
I'm the product owner for Barbican at Rackspace. I'll take a shot at
answering your questions.
> 1. What is the state of the project, is it in the state where it can be
> utilized in production deployments?
We are currently in active development, pushing for our 1.0 release for Havana.
As to pr
08:47 <@openstack> Meeting ended Mon Jul 22 20:47:18 2013 UTC.
Information about MeetBot at
http://wiki.debian.org/MeetBot . (v 0.1.4)
08:47 <@openstack> Minutes:
http://eavesdrop.openstack.org/meetings/tripleo/2013/tripleo.2013-07-22-20.01.html
08:47 -!- esker [~es...@nat-216-2
On 22 July 2013 21:08, Boris Pavlovic wrote:
> Ian,
>
> I don't like to write anything personally.
> But I have to write some facts:
>
> 1) I see tons of hands and only 2 solutions my and one more that is based on
> code.
> 2) My code was published before session (18. Apr 2013)
> 3) Blueprints fro
Dear all,
Following the initial discussions at the last design summit, we have
published the design [2] and the first take on the implementation [3] of
the blueprint adding support for multiple active scheduler
policies/drivers [1].
In a nutshell, the idea is to allow overriding the 'default'
Ian, your suggestion of retrieving changes since a timestamp is good. When a
scheduler first comes online (in an HA context), it requests compute node
status, providing a null timestamp to retrieve everything.
It also paves the way for a full in-memory record of all compute node
status because
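A minimal sketch of that since-timestamp protocol (all names here are
hypothetical; `fetch_updates` stands in for whatever RPC or DB call would
actually return the changed records plus a new watermark):

```python
def sync_compute_state(fetch_updates, cache, last_seen=None):
    """Pull compute-node records changed since last_seen (None = everything).

    Updates the in-memory cache host by host and returns the new watermark
    to pass on the next poll.
    """
    records, new_stamp = fetch_updates(last_seen)
    for rec in records:
        cache[rec["host"]] = rec  # keep a full in-memory view per host
    return new_stamp
```

On first start a scheduler would call this with `last_seen=None` to pull
everything, then pass the returned watermark on each subsequent poll so only
deltas travel over the wire.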
On 07/22/2013 05:15 PM, Alex Glikson wrote:
> Dear all,
>
> Following the initial discussions at the last design summit, we have
> published the design [2] and the first take on the implementation [3] of
> the blueprint adding support for multiple active scheduler
> policies/drivers [1].
> In a nu
On 22/07/13 16:52 +0200, Bartosz Górski wrote:
Hi folks,
I would like to start a discussion about the blueprint I raised about
multi region support.
I would like to get feedback from you. If something is not clear or
you have questions do not hesitate to ask.
Please let me know what you think
Also for your information,
I created a bug : https://bugs.launchpad.net/neutron/+bug/1199963
and a first patchset for generating the new configuration file with Oslo
config scripts :
https://review.openstack.org/#/c/36546/
Emilien Macchi
# Open
On Mon, Jul 22, 2013 at 5:16 AM, Boris Pavlovic wrote:
> Joe,
>
> >> Speaking of Chris Beherns "Relying on anything but the DB for current
> memory free, etc, is just too laggy… so we need to stick with it, IMO."
> http://lists.openstack.org/pipermail/openstack-dev/2013-June/010485.html
>
> It d
On Mon, Jul 22, 2013 at 3:04 PM, Russell Bryant wrote:
> On 07/22/2013 05:15 PM, Alex Glikson wrote:
> > Dear all,
> >
> > Following the initial discussions at the last design summit, we have
> > published the design [2] and the first take on the implementation [3] of
> > the blueprint adding sup
An interesting idea; I'm not sure how useful it is, but it could be.
If you think of the compute node capability information as an 'event stream'
then you could imagine using something like apache flume
(http://flume.apache.org/) or storm (http://storm-project.net/) to be able to
sit on this str
A couple of weeks ago, after a really *fun* night, we started down this road
of uncapping all the python clients to ensure that we're actually
testing the git clients in the gate. We're close, but we need the help
of the horizon and ceilometerclient teams to get us there:
1) we need a rebase on thi
On 07/22/2013 07:43 PM, Miller, Mark M (EB SW Cloud - R&D - Corvallis)
wrote:
Adam,
You wrote:
[identity]
driver = keystone.identity.backends.ldap.Identity
[assignment]
driver = keystone.assignment.backends.sql.Identity
Did you mean to write:
[assignment]
driver = keystone.assignment.ba
Hi!
I am interested to know whether the topic of surfacing networking statistics
through the Neutron APIs has been discussed, and whether there are any
existing blueprints for this feature. Specifically, the current APIs,
https://wiki.openstack.org/wiki/Neutron/APIv2-specification, do not support
Hi Mellquist
I'm also interested in the feature.
Could you write a blueprint for the proposal?
(Maybe Havana is overloaded, so it may go to Icehouse.)
Best
Nachi
2013/7/22 Mellquist, Peter :
> Hi!
>
>
>
> I am interested to know if the topic of surfacing networking statistics
> through the Neu
** No meeting this week **
I have a conflict and can't run the meeting this week. We'll be back
http://www.timeanddate.com/worldclock/fixedtime.html?iso=20130731T1700
Two of us ran into a problem with an odd pep8 failure:
> E: nova.conf.sample is not up to date, please run
> tools/conf/generat
Sounds like a deafening silence.
--
Joshua McKenty
Chief Technology Officer
Piston Cloud Computing, Inc.
+1 (650) 242-5683
+1 (650) 283-6846
http://www.pistoncloud.com
"Oh, Westley, we'll never survive!"
"Nonsense. You're only saying that because no one ever has."
On Jul 22, 2013, at 12:19 PM,
On Tue, Jul 23, 2013 at 5:19 AM, Atwood, Mark wrote:
> Hi!
>
> While I was at the Community Leadership Summit conference this weekend, I met
> the community manager for the Xen hypervisor project. He told me that there
> are *no* OpenStack talks submitted to the upcoming XenCon conference.
>
>
Speaking of deafening silence…
We launched the CFP for OpenStack on Ales http://openstack.onales.com and
have only received a handful of proposals so far. The event is September
30 and October 1, 2013. CFP is set to close August 15.
On Mon, Jul 22, 2013 at 7:39 PM, Michael Still wrote:
> O
On Tue, Jul 23, 2013 at 6:55 AM, Emilien Macchi wrote:
> Also for your information,
>
> I created a bug : https://bugs.launchpad.net/neutron/+bug/1199963
> and a first patchset for generating the new configuration file with Oslo
> config scripts :
> https://review.openstack.org/#/c/36546/
>
> Em
Russell Bryant wrote on 23/07/2013 01:04:24 AM:
> > [1]
https://blueprints.launchpad.net/nova/+spec/multiple-scheduler-drivers
> > [2] https://wiki.openstack.org/wiki/Nova/MultipleSchedulerPolicies
> > [3] https://review.openstack.org/#/c/37407/
>
> Thanks for bringing this up. I do have some c
Looks like some interesting threads started up on the mailing list that we can
talk about if people are available:
1) Ceilometer vs. Nova metrics collector
2) A simple way to improve Nova scheduler
3) Multiple active scheduler policies/drivers
--
Don Dugger
"Censeo Toto nos in Kansa esse deci
On 07/23/2013 10:46 AM, Angus Salkeld wrote:
> On 22/07/13 16:52 +0200, Bartosz Górski wrote:
>> Hi folks,
>>
>> I would like to start a discussion about the blueprint I raised about
>> multi region support.
>> I would like to get feedback from you. If something is not clear or
>> you have question