Daniel, thank you very much for the extensive and detailed email.
The plan looks good to me and makes sense; the OVS option will still be
tested, and available when selected.
On Wed, Oct 24, 2018 at 4:41 PM Daniel Alvarez Sanchez
wrote:
> Hi Stackers!
>
> The purpose of this email is
>
> On Oct 9, 2018, at 03:17, Miguel Angel Ajo Pelayo
> wrote:
>
> Hello
>
> Yesterday, during the Oslo meeting we discussed [6] the possibility of
> creating a new Special Interest Group [1][2] to provide a home and release
> means for operator-related tools [3]
Hello
Yesterday, during the Oslo meeting we discussed [6] the possibility of
creating a new Special Interest Group [1][2] to provide a home and release
means for operator-related tools [3] [4] [5]
I continued the discussion with M. Hillsman later, and he made me aware
of the operator working
Have a look at the dragonflow project; maybe it's similar to what you're
trying to accomplish.
On Fri, Oct 5, 2018, 1:56 PM Niket Agrawal wrote:
> Hi,
>
> Thanks for the help. I am trying to run a custom Ryu app from the nova
> compute node and have all the openvswitches connected to this new
> cont
That's fantastic.
I believe we could add some of the networking-ovn jobs; we need to
decide which ones would be most beneficial.
On Tue, Oct 2, 2018 at 10:02 AM wrote:
> Hi Miguel, all,
>
> The initiative is very welcome and will help make it more efficient to
> develop in stadium projects.
>
Hi Jirka & Daniel, thanks for your answers... more inline.
On Wed, Oct 3, 2018 at 10:44 AM Jiří Stránský wrote:
> On 03/10/2018 10:14, Miguel Angel Ajo Pelayo wrote:
> > Hi folks
> >
> >I was trying to deploy neutron with networking-ovn via
> tripleo-quickstar
Hi folks
I was trying to deploy neutron with networking-ovn via tripleo-quickstart
scripts on master, and this config file [1]. It doesn't work, overcloud
deploy cries with:
1) trying to deploy ovn I end up with a 2018-10-02 17:48:12 | "2018-10-02
17:47:51,864 DEBUG: 26691 -- Error: image
tripl
Thanks for the info Doug.
On Mon, Oct 1, 2018 at 6:25 PM Doug Hellmann wrote:
> Miguel Angel Ajo Pelayo writes:
>
> > Thank you for the guidance and ping Doug.
> >
> > Was this triggered by [1] ? or By the 1.1.0 tag pushed to gerrit?
>
> The release jobs are alway
Oh, OK: the 1.1.0 tag didn't have 'venv' in tox.ini, but master has it since:
https://review.openstack.org/#/c/548618/7/tox.ini@37
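For reference, a minimal 'venv' environment of the kind the release jobs
invoke (tox -e venv -- <command>) looks roughly like this; a sketch, not
the exact content of that review:

    # tox.ini -- hypothetical minimal 'venv' testenv: it just runs
    # whatever command the release job passes after "--".
    [testenv:venv]
    commands = {posargs}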
On Mon, Oct 1, 2018 at 10:01 AM Miguel Angel Ajo Pelayo
wrote:
> Thank you for the guidance and ping Doug.
>
> Was this triggered by [1] ? or
Thank you for the guidance and ping Doug.
Was this triggered by [1] ? or By the 1.1.0 tag pushed to gerrit?
I'm working to make os-log-merger part of the OpenStack governance
projects, and to make sure we release it as a tarball.
It's a small tool I've been using for years making my life easier
Good luck Gary, thanks for all those years on Neutron! :)
Best regards,
Miguel Ángel
On Wed, Sep 19, 2018 at 9:32 PM Nate Johnston
wrote:
> On Wed, Sep 19, 2018 at 06:19:44PM +0000, Gary Kotton wrote:
>
> > I have recently transitioned to a new role where I will be working on
> other parts of O
; address them as suitable for the specific plugin.
>
> Thanks
>
> Gary
>
>
>
> *From: *Miguel Angel Ajo Pelayo
> *Reply-To: *OpenStack List
> *Date: *Saturday, April 7, 2018 at 8:56 AM
> *To: *OpenStack List
> *Subject: *Re: [openstack-dev] [neutron] [OVN] Tempest
This issue isn't only for networking-ovn; please note that it happens with
a few other vendor plugins (like NSX). At least this is something we have
found in downstream certifications.
Cheers,
On Sat, Apr 7, 2018, 12:36 AM Daniel Alvarez wrote:
>
>
> > On 6 Apr 2018, at 19:04, Sławek Kapłoński
You can run as many as you want; generally HAProxy is used in front of
them to balance load across neutron servers.
Also, keep in mind that the DB backend is a single MySQL; you can also
distribute that with Galera.
That is the configuration you will get by default when you deploy in HA
with
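As an illustration, balancing the neutron API across several servers with
HAProxy looks roughly like this (a sketch; names and addresses are
hypothetical):

    # haproxy.cfg fragment -- round-robin across two neutron-server
    # instances listening on the default API port 9696.
    listen neutron-api
        bind 10.0.0.10:9696
        balance roundrobin
        server neutron-1 10.0.0.11:9696 check
        server neutron-2 10.0.0.12:9696 check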
Right, that's a little absurd, 1TB? :-) I completely agree.
They could live with anything, but I'd try to estimate minimums across
distributions.
For example, an RDO test deployment with containers looks like:
(undercloud) [stack@undercloud ~]$ ssh heat-admin@192.168.24.8 "sudo df -h
; sudo free
Very good summary, thanks for leading the PTG and neutron so well. :)
On Mon, Mar 12, 2018 at 11:25 PM fumihiko kakuma
wrote:
> Hi Miguel,
>
> > * As part of the neutron-lib effort, we have found networking projects
> that
> > are very inactive. Examples are networking-brocade (no updates since
I'm moving this to the openstack-dev list
> Ihar
>
> On Mon, Feb 12, 2018 at 12:37 AM, Miguel Angel Ajo Pelayo
> wrote:
> > Hi folks :)
> >
> >We were talking this morning about the change for the new engine
> facade
> > in neutron [1],
> >
I have created an etherpad for networking-ovn at
https://etherpad.openstack.org/p/networking-ovn-ptg-rocky with some topics
I thought were relevant.
But please feel free to add anything you believe could be interesting,
and fill in attendance so it's easier to sync & meet. :)
That may help, of course, but I guess it could also be capacity-related.
On Wed, Dec 20, 2017 at 11:42 AM Takashi Yamamoto
wrote:
> On Wed, Dec 20, 2017 at 7:18 PM, Lucas Alvares Gomes
> wrote:
> > Hi,
> >
> >>> Hi all,
> >>>
> >>> Just sending this email to try to understand the model for stabl
If we could have one member from networking-ovn on the neutron-stable-maint
team, that would be great. That means the member would have to be trusted
not to handle neutron patches without knowing what he's doing and, of
course, to follow the stable guidelines, which are absolutely important. But I
bel
That adds more latency; I believe some vendor plugins do it like that
(service VM).
Have you checked out networking-ovn? It's all done in OpenFlow, and you
get HA (A/P) for free without extra namespaces, just flows and BFD
monitoring.
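For a flavor of how that A/P gateway scheduling looks on the OVN side,
something along these lines (names are hypothetical; the priorities pick
the active chassis, and BFD detects failure so the next one takes over):

    # Bind a router's external port to two chassis with priorities:
    ovn-nbctl lrp-set-gateway-chassis lrp-router1-public chassis-1 20
    ovn-nbctl lrp-set-gateway-chassis lrp-router1-public chassis-2 10
    ovn-nbctl lrp-get-gateway-chassis lrp-router1-public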
On Dec 4, 2017 4:22 PM, "Jaze Lee" wrote:
> Hello,
> C
Hi Folks,
I wanted to raise this topic; I have been wanting to do it for a long
time, but preferred to wait until the zuulv3 stuff was a little bit more
stable. Maybe now is a good time.
We were thinking about the option of having a couple of non-voting jobs
on the neutron check for netwo
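In zuulv3 terms, adding such a non-voting job to neutron's check queue
would look roughly like this (a sketch; the job name is illustrative):

    # .zuul.yaml fragment in the neutron repo:
    - project:
        check:
          jobs:
            - networking-ovn-tempest-dsvm-ovn-release:
                voting: false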
Welcome Daniel! :)
On Fri, Dec 1, 2017 at 5:45 PM, Lucas Alvares Gomes
wrote:
> Hi all,
>
> I would like to welcome Daniel Alvarez to the networking-ovn core team!
>
> Daniel has been contributing with the project for a good time already
> and helping *a lot* with reviews and code.
>
> Welcome o
"+1" I know, I'm not active, but I care about neutron, and slaweq is a
great contributor.
On Nov 29, 2017 8:37 PM, "Ihar Hrachyshka" wrote:
> YES, FINALLY.
>
> On Wed, Nov 29, 2017 at 11:29 AM, Kevin Benton wrote:
> > +1! ... even though I haven't been around. :)
> >
> > On Wed, Nov 29, 2017 at
Thank you very much :-)
On Tue, Oct 10, 2017 at 4:09 PM, Lucas Alvares Gomes Martins <
lmart...@redhat.com> wrote:
> Hi,
>
> On Tue, Oct 10, 2017 at 2:25 PM, Russell Bryant
> wrote:
> > Hello, everyone. I'd like to welcome two new members to the
> > networking-ovn-core team: Miguel Angel Ajo an
I'll definitely dig more into this.
> Having a lot of messages broadcasted to all the neutron agents is not
> something you want especially in the context of femdc[1].
>
> Best,
>
> Matt
>
> [1]: https://wiki.openstack.org/wiki/Fog_Edge_Massively_Distributed_Clouds
It could be that too; TBH I'm not sure. :)
On Fri, Sep 22, 2017 at 11:02 AM, Sławomir Kapłoński
wrote:
> Isn't OVS automatically setting the MTU for a bridge to the lowest value
> from the ports connected to this bridge?
>
>
> > Wiadomość napisana przez Miguel Angel Ajo Pelayo
I believe that one of the problems is that if you set a certain MTU on an
OVS switch, newly connected ports will be automatically assigned that MTU
by the ovs-vswitchd daemon.
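If you want to pin it instead of letting ovs-vswitchd derive it, recent
OVS lets you request an MTU per interface; a sketch (bridge name
hypothetical):

    # Ask ovs-vswitchd for an explicit MTU, then verify what it applied:
    ovs-vsctl set Interface br-ex mtu_request=9000
    ovs-vsctl get Interface br-ex mtu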
On Wed, Sep 20, 2017 at 10:45 PM, Ian Wells wrote:
> Since OVS is doing L2 forwarding, you should be fine setting the MT
Thanks! :)
On Thu, Sep 21, 2017 at 3:16 AM, Kevin Benton wrote:
> https://photos.app.goo.gl/Aqa51E2aVkv5b4ah1
>
I wrote those lines.
At that time, I tried a couple: a publisher and a receiver at that scale. It
was the receiver side that crashed while trying to subscribe; the sender was
completely fine.
Sadly I didn't keep the test examples; I should have stored them in GitHub
or something. It shouldn't be hard to
+1! Thanks for organizing
On Wed, Sep 13, 2017 at 10:11 AM, Sandhya Dasu (sadasu)
wrote:
> +1
>
> Thanks for organizing.
>
> On 9/13/17, 7:28 AM, "Thomas Morin" wrote:
>
> +1
>
> -Thomas
>
>
> Takashi Yamamoto, 2017-09-13 03:05:
> > +1
> >
> > On Wed, Sep 13, 2017 at 2:5
Kevin! Thank you for all the effort and energy you have put into
openstack-neutron during the last few years. It's been great to have you on
the project.
On Mon, Sep 11, 2017 at 5:18 PM, Ihar Hrachyshka
wrote:
> It's very sad news for the team, but I hope that Kevin will still be
> able to
I'm also interested in this topic. :)
On Mon, Sep 11, 2017 at 11:12 AM, Jay Pipes wrote:
> I'm interested in this. I get in to Denver this evening so if we can do
> this session tomorrow or later, that would be super.
>
> Best,
> -jay
>
>
> On 09/11/2017 01:11 PM, Mooney, Sean K wrote:
>
>> Hi e
A big +1 for Miguel Lavalle from me. Miguel, thank you for taking this
responsibility on behalf of the Neutron/OpenStack community.
On Fri, Sep 8, 2017 at 8:59 PM, Kevin Benton wrote:
> Hi everyone,
>
> Due to a change in my role at my employer, I no longer have time to be the
> PTL of Neutron. Eff
Thank you Kevin & Miguel! ;)
On Thu, Sep 7, 2017 at 4:04 PM, Kevin Benton wrote:
> Hello everyone,
>
> With the help of Miguel we have a tentative schedule in the PTG. Please
> check the etherpad and if there is anything missing you wanted to see
> discussed, please reach out to me or Miguel rig
I wonder if it makes sense to provide a helper script to do what's
explained in the document.
So we could run ~/devstack/tools/run_locally.sh n-sch.
If yes, I'll send the patch.
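A rough sketch of what such a helper could look like, assuming devstack's
systemd units (devstack@<service>.service); the script name and details
are hypothetical:

    #!/bin/bash
    # run_locally.sh: stop the systemd unit of a devstack service and
    # re-run its command in the foreground for easy debugging.
    SERVICE=$1
    sudo systemctl stop "devstack@${SERVICE}"
    # Extract the command line the unit normally runs:
    CMD=$(systemctl cat "devstack@${SERVICE}" | sed -n 's/^ExecStart=//p')
    echo "Running: ${CMD}"
    exec ${CMD}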
On Fri, Sep 8, 2017 at 3:00 PM, Eric Fried wrote:
> Oh, are we talking about the logs produced by CI jobs? I thought
it a bit more future-proof, and able to easily
integrate with vendor plugins without the need to modify the service file.
On Tue, Sep 5, 2017 at 9:27 AM, Miguel Angel Ajo Pelayo wrote:
> Why do we need to put all the configuration in a single file?
>
> That would be a big big change to
Why do we need to put all the configuration in a single file?
That would be a big, big change for deployers. It'd be great if we could
think of an alternative solution (I'm not sure how that's being handled for
other services, though).
Best regards,
Miguel Ángel.
On Mon, Sep 4, 2017 at 3:01 PM, Kevin Bent
Good (amazing) job folks. :)
On Aug 10, 2017 9:43, "Thierry Carrez" wrote:
> Oh, that's good for us. Should still be fixed, if only so that we can
> test properly :)
>
> Kevin Benton wrote:
> > This is just the code simulating the conntrack entries that would be
> > created by real traffic in
On Mon, May 8, 2017 at 2:48 AM, Michael Still wrote:
> It would be interesting for this to be built in a way where other
> endpoints could be added to the list that have extra headers added to them.
>
> For example, we could end up with something quite similar to EC2 IAMS if
> we could add header
Hi everybody,
Some of you already know, but I wanted to make it official.
Recently I moved to work on the networking-ovn component,
and OVS/OVN itself, and while I'll stick around and will be available
on IRC for any questions, I'm already not doing a good job with
neutron reviews,
Thank you for the patches. I merged them, released 1.1.0 and proposed [1]
Cheers!,
[1] https://review.openstack.org/445884
On Wed, Mar 15, 2017 at 10:14 AM, Gorka Eguileor
wrote:
> On 14/03, Ihar Hrachyshka wrote:
> > Hi all,
> >
> > the patch that started to produce log index file for logstash [1]
Nate, it was a pleasure working with you; you and your team made great
contributions to OpenStack and neutron. I'll be very happy if we ever have
the chance to work together again.
Best regards, and very good luck, my friend.
On Tue, Mar 7, 2017 at 4:55 AM, Kevin Benton wrote:
> Hi Nate,
>
> Th
On Wed, Feb 22, 2017 at 1:53 PM, Thomas Morin
wrote:
> Wed Feb 22 2017 11:13:18 GMT-0500 (EST), Anil Venkata:
>
>
> While relevant, I think this is not possible until br-int allows to match
> the network a packet belongs to (the ovsdb port tags don't let you do that
> until the packet leaves br-i
I have updated the spreadsheet. In the case of RH/RDO we're using the same
architecture.
In the case of HA, pacemaker is not taking care of those anymore since the
HA-NG implementation.
We let systemd take care of restarting the services that die, and we worked
with the community to make sure that age
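The systemd side of that is essentially a restart policy on each service
unit; a sketch of the idea (unit name and values hypothetical):

    # /etc/systemd/system/neutron-server.service.d/restart.conf
    # Drop-in so systemd restarts the service whenever it dies:
    [Service]
    Restart=on-failure
    RestartSec=5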
+1 :-)
On Mon, Feb 20, 2017 at 9:16 AM, John Davidge
wrote:
> +1
>
> On 2/20/17, 4:48 AM, "Carlos Gonçalves" wrote:
>
> >+1
> >
> >On Mon, Feb 20, 2017 at 9:17 AM, Kevin Benton
> > wrote:
> >
> >No problem. Keep sending in RSPVs if you haven't already.
> >
> >On Mon, Feb 20, 2017 at 2:59 AM, Fu
Lol, ack :)
On Mon, Feb 20, 2017 at 2:37 AM, Kevin Benton wrote:
> Clothes are strongly recommended as far as I understand it.
>
> On Mon, Feb 20, 2017 at 1:47 AM, Gary Kotton wrote:
>
>> What is the dress code? :)
>>
>>
>>
>> *From: *"Das, Anindita"
>> *Reply-To: *OpenStack List
>> *Date: *Mon
I believe those are traces left by the reference implementation of cinder
setting a very high debug level on tgtd. I'm not sure if that's related or
the culprit at all (probably the culprit is a mix of things).
I wonder if we could disable such verbosity on tgtd, which certainly is
going to slow dow
Jeremy Stanley wrote:
> It's an option of last resort, I think. The next consistent flavor
> up in most of the providers donating resources is double the one
> we're using (which is a fairly typical pattern in public clouds). As
> aggregate memory constraints are our primary quota limit, this wou
On Fri, Feb 3, 2017 at 7:55 AM, IWAMOTO Toshihiro
wrote:
> At Wed, 1 Feb 2017 16:24:54 -0800,
> Armando M. wrote:
> >
> > Hi,
> >
> > [TL;DR]: OpenStack services have steadily increased their memory
> > footprints. We need a concerted way to address the oom-kills experienced
> in
> > the openstac
Armando, thank you very much for all the work you've done as PTL,
my best wishes, and happy to know that you'll be around!
Best regards,
Miguel Ángel.
On Wed, Jan 11, 2017 at 1:52 AM, joehuang wrote:
> Sad to know that you will step down from Neutron PTL. Had several f2f talk
> with you, and g
+1 Good work. :)
On Fri, Dec 16, 2016 at 11:59 AM, Rossella Sblendido
wrote:
> +1
>
> On 12/16/2016 09:25 AM, Ihar Hrachyshka wrote:
> > Armando M. wrote:
> >
> >> Hi neutrinos,
> >>
> >> I would like to propose Ryan and Nate as the go-to fellows for
> >> service-related patches.
> >>
> >> Both
+1 :)
On Fri, Dec 16, 2016 at 2:44 AM, Vasudevan, Swaminathan (PNB Roseville) <
swaminathan.vasude...@hpe.com> wrote:
> +1
>
>
>
> *From:* Armando M. [mailto:arma...@gmail.com]
> *Sent:* Thursday, December 15, 2016 3:15 PM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> o
It's been an absolute pleasure working with you on every single interaction.
Very good luck Henry,
On Fri, Dec 2, 2016 at 8:14 AM, Andreas Scheuring <
scheu...@linux.vnet.ibm.com> wrote:
> Henry, it was a pleasure working with you! Thanks!
> All the best for your further journey!
>
>
> --
> --
Sad to see you go Carl,
Thanks for so many years of hard work, as Brian said, OpenStack /
Neutron is better thanks to your contributions through the last years.
My best wishes for you.
On Fri, Nov 18, 2016 at 9:51 AM, Vikram Choudhary wrote:
> It was really a good experience working wi
I could be wrong, but I suspect we're doing it this way to be able to make
changes to several objects atomically, and roll back the transaction if at
some point what we're trying to accomplish is not possible.
Thoughts?
On Tue, Nov 15, 2016 at 10:06 AM, Gary Kotton wrote:
> Hi,
>
> It se
I probably won't be able to go, but if you plan to hang out in any
other place around before/after dinner, maybe I'll join.
Cheers & Enjoy! :)
On Mon, Oct 17, 2016 at 12:56 PM, Nate Johnston wrote:
> I responded to Miguel privately, but I'll be there as well!
>
> --N.
>
> On Fri, Oct 14, 2016 a
+1!, even if my vote does not count :-)
On Tue, Oct 11, 2016 at 12:00 AM, Eichberger, German
wrote:
> +1 (even if it doesn’t matter)
>
>
>
> From: Stephen Balukoff
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>
> Date: Monday, October 10, 2016 at 4:39 PM
> To: "Ope
Hi Sergey!,
This was my point of view on a possible solution:
https://bugs.launchpad.net/neutron/+bug/1403455/comments/12
"""
After much thinking (and quite little doing) I believe the option "2"
I proposed is a rather reasonable one:
2) Before cleaning a namespace blindly in the end, identify
I just found this one created recently, and I will try to build on top of it:
https://review.openstack.org/#/c/371807/12
On Wed, Sep 28, 2016 at 1:52 PM, Miguel Angel Ajo Pelayo
wrote:
> Refloating this thread.
>
> I posted this rfe/bug [1], and I'm planning to come up with an
de controllers)" - Rally is suitable for many kind of tests=)
> Especially for testing at scale! If you have any question how to use Rally
> feel free to ask Rally team!
>
> - Best regards, Roman Vasylets. Rally team member
>
> On Thu, Aug 11, 2016 at 11:46 AM, Miguel Angel Ajo
Ack, and thanks for the summary Ihar.
I will have a look at it tomorrow morning; please update this thread
with any progress.
On Tue, Sep 27, 2016 at 8:22 PM, Ihar Hrachyshka wrote:
> Hi all,
>
> so we started getting ‘Address already in use’ when trying to start dnsmasq
> after the previous i
Congratulations Ihar!, well deserved through hard work! :)
On Mon, Sep 19, 2016 at 8:03 PM, Brian Haley wrote:
> Congrats Ihar!
>
> -Brian
>
>
> On 09/17/2016 12:40 PM, Armando M. wrote:
>>
>> Hi folks,
>>
>> I would like to propose Ihar to become a member of the Neutron drivers
>> team [1].
>>
>
Option 2 sounds reasonable to me too. :)
On Tue, Sep 6, 2016 at 2:39 PM, Akihiro Motoki wrote:
> What releases should we support in API references?
> There are several options.
>
> 1. The latest stable release + master
> 2. All supported stable releases + master
> 3. more older releases too?
>
>
Hi Armando,
Thanks for the report, I'm adding some notes inline (OSC/SDK)
On Sat, Aug 27, 2016 at 2:13 AM, Armando M. wrote:
> Hi Neutrinos,
>
> For those of you who couldn't join in person, please find a few notes below
> to capture some of the highlights of the event.
>
> I would like to thank
vf 7 MAC 00:00:00:00:00:00, vlan 4095, spoof checking off, link-state auto
>
> I guess the problem is with the SR-IOV NIC/ driver you are using maybe you
> should contact them
>
>
> -Original Message-
> From: Moshe Levi
> Sent: Wednesday, August 10, 2016 5:59 PM
penstack.org/#/q/status:open+project:openstack/octavia+branch:master+topic:octavia_basic_lb_scenario
[2] https://review.openstack.org/#/c/172199/66..75/.testr.conf
> Stephen
>
> On Tue, Aug 9, 2016 at 5:40 AM, Miguel Angel Ajo Pelayo
> wrote:
>>
>> On Mon, Aug 8, 2016 at
@moshe, any insight on this?
I guess that'd depend on the NIC internal switch implementation and
how the switch ARP tables are handled there (per network, or globally
per switch).
If that's the case for some SR-IOV vendors (or all), would it make
sense to have a global switch to create globally uni
the current options/tools we're considering?
>
> Cheers,
> Lubosz Kosnik
> Cloud Software Engineer OSIC
> lubosz.kos...@intel.com
>
>> On Aug 8, 2016, at 7:04 AM, Miguel Angel Ajo Pelayo
>> wrote:
>>
>> Recently, I sent a series of patches [1] to make it
Thank you!! :)
On Mon, Aug 8, 2016 at 5:49 PM, Michael Johnson wrote:
> Miguel,
>
> Thank you for your work here. I would support an effort to setup a
> multi-node gate job.
>
> Michael
>
>
> On Mon, Aug 8, 2016 at 5:04 AM, Miguel Angel Ajo Pelayo
> wrote:
Answers inline.
On Tue, Aug 9, 2016 at 8:08 AM, Antonio Ojea wrote:
> What do you think about openwrt images?
>
> They are small, have documentation to build your custom images, have a
> packaging system and have tons of networking features (ipv6, vlans, ...) ,
> also seems that someone has done
Recently, I sent a series of patches [1] to make it easier for
developers to deploy a multi-node octavia controller with
n_controllers x [api, cw, hm, hk] and HAProxy in front of the API.
Since this is the way the service is designed to work (with horizontal
scalability in mind), and we want t
Awesome Sean!,
Keep us posted!! :)
On Sat, Aug 6, 2016 at 8:16 PM, Mooney, Sean K wrote:
> Hi just a quick fyi,
>
> About 2 weeks ago I did some light testing with the conntrack security group
> driver and the newly
>
> Merged userspace conntrack support in OVS.
>
>
>
> I can confirm that a
The problem with the other projects' image builds is that they are
aimed at bigger systems, while cirros is an embedded-device-like
image which boots in a couple of seconds.
Couldn't we contribute to cirros to have such a module loaded by default [1]?
Or maybe it's time for OpenStack to build their
Ohhh, yikes, even though I'm late my vote would have been super +1!!
On Tue, Jul 26, 2016 at 5:04 PM, Jakub Libosvar wrote:
> On 26/07/16 16:56, Assaf Muller wrote:
>>
>> We've hit critical mass from cores interesting in the testing area.
>>
>> Welcome Jakub to the core reviewer team. May you en
Oh yikes, I was "hit by a plane" (delay) plus a huge jet lag and
didn't make it to the meeting, I'll be there next week. Thank you.
On Tue, Jul 12, 2016 at 9:48 AM, Miguel Angel Ajo Pelayo
wrote:
> I'd like to ask for some prioritization on this RFE [1], since it
I'd like to ask for some prioritization of this RFE [1], since it's blocking
one of the already existing RFEs (ingress bandwidth limiting),
and we're trying to enhance the operator experience of the QoS service.
It's been discussed in previous drivers meetings, and it seems to have
some con
at 8:10 PM, Kevin Benton wrote:
> Yeah, no meetings in #openstack-neutron please. It leaves us nowhere to
> discuss development stuff during that hour.
>
> On Tue, May 17, 2016 at 2:54 AM, Miguel Angel Ajo Pelayo <
> majop...@redhat.com> wrote:
>
>> I agree, let&
I agree, let's try to find a timeslot that works.
Using #openstack-neutron with the meetbot works, but it's going to generate
a lot of noise.
On Tue, May 17, 2016 at 11:47 AM, Ihar Hrachyshka
wrote:
>
> > On 16 May 2016, at 15:47, Takashi Yamamoto
> wrote:
> >
> > On Mon, May 16, 2016 at 10:25
Sounds good,
I started by opening a tiny RFE that may help in the organization
of flows inside the OVS agent, for interoperability of features (SFC,
TaaS, OVS firewall, and even port trunking with just OpenFlow). [1] [2]
[1] https://bugs.launchpad.net/neutron/+bug/1577791
[2] http://paste.openstack.o
Does the Governors Ballroom in the Hilton sound OK?
We can move to somewhere else if necessary.
Please add me on WhatsApp or Telegram if you use them: +34636522569
On 27/4/2016 12:50, majop...@redhat.com wrote:
> Trying to find you folks. I was late
> On 27/4/2016 12:04, "Paul Carver" wrote:
>
>> SFC team and anybody else dealing with flow selection/classification
>> (e.g. QoS),
>>
>
Trying to find you folks. I was late
On 27/4/2016 12:04, "Paul Carver" wrote:
> SFC team and anybody else dealing with flow selection/classification (e.g.
> QoS),
>
> I just wanted to confirm that we're planning to meet in salon C today
> (Wednesday) to get lunch but then possibly move to a qu
Flow
Classifiers, while we need to make the full pipeline of features
(externally pluggable) work together.
> On Thu, Apr 21, 2016 at 12:58 PM, IWAMOTO Toshihiro
> wrote:
>>
>> At Wed, 20 Apr 2016 14:12:07 +0200,
>> Miguel Angel Ajo Pelayo wrote:
>> >
>> > I th
Inline update.
On Mon, Apr 11, 2016 at 4:22 PM, Miguel Angel Ajo Pelayo
wrote:
> On Mon, Apr 11, 2016 at 1:46 PM, Jay Pipes wrote:
>> On 04/08/2016 09:17 AM, Miguel Angel Ajo Pelayo wrote:
[...]
>> Yes, Nova's conductor gathers information about the requested networks
I think this is an interesting topic.
What do you mean exactly by FC? (feature chaining?)
I believe we have three things to look at: (sorry for the TL)
1) The generalization of traffic filters / traffic classifiers. Having
common models, some sort of common API or common API structure
availabl
Sorry, I just saw: FC = flow classifier :-), I made it a multi-purpose
abbreviation now ;)
On Wed, Apr 20, 2016 at 2:12 PM, Miguel Angel Ajo Pelayo
wrote:
> I think this is an interesting topic.
>
> What do you mean exactly by FC ? (feature chaining?)
>
> I believe we have three th
On Fri, Apr 15, 2016 at 7:32 AM, IWAMOTO Toshihiro
wrote:
> At Mon, 11 Apr 2016 14:42:59 +0200,
> Miguel Angel Ajo Pelayo wrote:
>>
>> On Mon, Apr 11, 2016 at 11:40 AM, IWAMOTO Toshihiro
>> wrote:
>> > At Fri, 8 Apr 2016 12:21:21 +0200,
>> > Miguel An
On Mon, Apr 11, 2016 at 1:46 PM, Jay Pipes wrote:
> Hi Miguel Angel, comments/answers inline :)
>
> On 04/08/2016 09:17 AM, Miguel Angel Ajo Pelayo wrote:
>>
>> Hi!,
>>
>> In the context of [1] (generic resource pools / scheduling in nova)
>> and [2] (m
On Mon, Apr 11, 2016 at 11:40 AM, IWAMOTO Toshihiro
wrote:
> At Fri, 8 Apr 2016 12:21:21 +0200,
> Miguel Angel Ajo Pelayo wrote:
>>
>> Hi, good that you're looking at this,
>>
>>
>> You could create a lot of ports with this method [1] and a bit of ex
On Sun, Apr 10, 2016 at 10:07 AM, Moshe Levi wrote:
>
>
>
>
> *From:* Miguel Angel Ajo Pelayo [mailto:majop...@redhat.com]
> *Sent:* Friday, April 08, 2016 4:17 PM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.o
Hi!,
In the context of [1] (generic resource pools / scheduling in nova) and
[2] (minimum bandwidth guarantees -egress- in neutron), I had a talk a few
weeks ago with Jay Pipes.
The idea was to leverage the generic resource pools and scheduling
mechanisms defined in [1] to find the right hos
Hi, good that you're looking at this.
You could create a lot of ports with this method [1] and a bit of extra
bash, without the extra expense of instance RAM; see the sketch below.
[1]
http://www.ajo.es/post/89207996034/creating-a-network-interface-to-tenant-network-in
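Something along these lines, with the era's neutron CLI (network name and
count hypothetical):

    # Create many ports on a tenant network without booting instances:
    for i in $(seq 1 200); do
        neutron port-create private --name "test-port-$i"
    done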
This effort is going to be still more relevan
On Fri, Apr 8, 2016 at 11:28 AM, Ihar Hrachyshka
wrote:
> Kevin Benton wrote:
>
>> I don't know if my vote counts in this area, but +1!
>
> What the gentleman said ^, +1.
"me too ^" , +1 !
On Mon, Mar 21, 2016 at 3:17 PM, Jay Pipes wrote:
> On 03/21/2016 06:22 AM, Miguel Angel Ajo Pelayo wrote:
>>
>> Hi,
>>
>> I was doing another pass on this spec, to see if we could leverage
>> it as-is for QoS / bandwidth tracking / bandwidth guaran
Hi,
I was doing another pass on this spec, to see if we could leverage
it as-is for QoS / bandwidth tracking / bandwidth guarantees, and I
have a question [1]
I guess I'm just missing some detail, but looking at the 2nd scenario,
why wouldn't availability zones allow exactly the same, if we
On Wed, Mar 9, 2016 at 4:16 PM, Doug Hellmann wrote:
> Excerpts from Armando M.'s message of 2016-03-08 15:43:05 -0700:
> > On 8 March 2016 at 15:07, Doug Hellmann wrote:
> >
> > > Excerpts from Armando M.'s message of 2016-03-08 12:49:16 -0700:
> > > > Hi folks,
> > > >
> > > > There's a featur
> On 26 Feb 2016, at 02:38, Sean McGinnis wrote:
>
> On Thu, Feb 25, 2016 at 04:13:56PM +0800, Qiming Teng wrote:
>> Hi, All,
>>
>> After reading through all the +1's and -1's, we realized how difficult
>> it is to come up with a proposal that makes everyone happy. When we are
>> discussing thi
Hi Masco!,
Thanks a lot for working on this. I'm not following the [Horizon] tag, so I
missed this. I've added the Neutron and QoS tags.
this. I’ve added the Neutron and QoS tags.
I will give it a try as soon as I can.
Keep up the good work!,
Cheers,
Miguel Ángel.
> On 10 Feb 2016, at 13:04, masco wrote:
>
>
> Hello All,
>
Regarding this conversation about QoS [1]: as Nate said, we
have every feature x4 (x [API, OVS, LB, SR-IOV]), and I'd add: we
should avoid writing RFEs for any missing piece in the reference
implementations; if any of those is missing, that's just a bug.
I guess I haven't been communicating the sta