On 29 May 2018 at 14:53, Jeremy Stanley wrote:
> On 2018-05-29 15:25:01 -0500 (-0500), Jay S Bryant wrote:
> [...]
> > Maybe it would be different now that I am a Core/PTL but in the past I
> > had been warned to be careful as it could be misinterpreted if I was
> > changing other people's patches.
If your nitpick is a spelling mistake or the need for a comment where
you've pretty much typed the text of the comment in the review comment
itself, then I have personally found it easiest to use the Gerrit online
editor to actually update the patch yourself. There's nothing magical
about the original…
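For what it's worth, the same fix-and-repush flow also works from the command
line if you prefer it to the web editor; a minimal sketch with git-review (the
change number is a placeholder):

    # fetch the change locally, fix the nit, push a new patchset
    git review -d 123456
    # ...edit the offending file...
    git commit -a --amend --no-edit
    git review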
On 28 December 2017 at 06:57, CARVER, PAUL wrote:
> It was a gating criterion for stadium status. The idea was that for a
> stadium project the neutron team would have review authority over the API
> but wouldn't necessarily review or be overly familiar with the
> implementation.
>
> A project
Hey,
Can someone explain how the API definition files for several service
plugins ended up in neutron-lib? I can see that they've been moved there
from the plugins themselves (e.g. networking-bgpvpn has
https://github.com/openstack/neutron-lib/commit/3d3ab8009cf435d946e206849e85d4bc9d149474#diff-
In conjunction with the release of VPP 17.10, I'd like to invite you all to
try out networking-vpp 17.10(*) for VPP 17.10. VPP is a fast userspace
forwarder based on the DPDK toolkit, and uses vector packet processing
algorithms to minimise the CPU time spent on each packet and maximise
throughput.
Since OVS is doing L2 forwarding, you should be fine setting the MTU to as
high as you choose, which would probably be the segment_mtu in the config,
since that's what it defines - the largest MTU that (from the Neutron API
perspective) is usable and (from the OVS perspective) will be used in the
s
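To make that concrete, here's roughly what that looks like (Mitaka-era option
names, from memory - segment_mtu later became global_physnet_mtu, and path_mtu
lives in the ml2 section):

    # neutron.conf on the neutron-server host
    [DEFAULT]
    # the largest MTU any underlying L2 segment can carry
    segment_mtu = 9000

    # ml2_conf.ini
    [ml2]
    # cap for tunnelled paths, if those are more constrained
    path_mtu = 9000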
In conjunction with the release of VPP 17.07, I'd like to invite you all to
try out networking-vpp 17.07.1 for VPP 17.07. VPP is a fast userspace
forwarder based on the DPDK toolkit, and uses vector packet processing
algorithms to minimise the CPU time spent on each packet and maximise
throughput.
On 7 July 2017 at 12:14, Ihar Hrachyshka wrote:
> > That said: what will you do with existing VMs that have been told the
> > MTU of their network already?
>
> Same as we do right now when modifying configuration options defining
> underlying MTU: change it on API layer, update data path with t
OK, so I should read before writing...
On 5 July 2017 at 18:11, Ian Wells wrote:
> On 5 July 2017 at 14:14, Ihar Hrachyshka wrote:
>
>> Heya,
>>
>> we have https://bugs.launchpad.net/neutron/+bug/1671634 approved for
>> Pike that allows setting MTU for network on creation.
On 5 July 2017 at 14:14, Ihar Hrachyshka wrote:
> Heya,
>
> we have https://bugs.launchpad.net/neutron/+bug/1671634 approved for
> Pike that allows setting MTU for network on creation.
This was actually in the very first MTU spec (in case no one looked),
though it never got implemented. The spec…
I'm coming to this cold, so apologies when I put my foot in my mouth. But
I'm trying to understand what you're actually getting at, here - other than
helpful simplicity - and I'm not following the detail of your thinking,
so take this as a form of enquiry.
On 14 May 2017 at 10:02, Monty Taylor wrote:
There are two steps to how this information is used:
Step 1: create a network - the type driver config on the neutron-server
host will determine which physnet and VLAN ID to use when you create it.
It gets stored in the DB. No networking is actually done, we're just
making a reservation here. Th
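To make that concrete, a sketch with illustrative values (the physnet name and
VLAN range are made up):

    # ml2_conf.ini on the neutron-server host
    [ml2_type_vlan]
    network_vlan_ranges = physnet1:100:199

    $ neutron net-create demo-net
    # the VLAN type driver picks, say, physnet1/VLAN 142 and records it in
    # the DB; nothing is programmed on any compute host yet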
In conjunction with the release of VPP 17.04, I'd like to invite you all to
try out networking-vpp for VPP 17.04. VPP is a fast userspace forwarder
based on the DPDK toolkit, and uses vector packet processing algorithms to
minimise the CPU time spent on each packet and maximise throughput.
+1
On 21 February 2017 at 16:18, Ichihara Hirofumi wrote:
> +1
>
> 2017-02-17 14:18 GMT-05:00 Kevin Benton :
>
>> Hi all,
>>
>> I'm organizing a Neutron social event for Thursday evening in Atlanta
>> somewhere near the venue for dinner/drinks. If you're interested, please
>> reply to this email
On 25 January 2017 at 18:07, Kevin Benton wrote:
> >Setting aside all the above talk about how we might do things for a
> moment: to take one specific feature example, it actually took several
> /years/ to add VLAN-aware ports to OpenStack. This is an example of a
> feature that doesn't affect o
In conjunction with the release of VPP 17.01, I'd like to invite you all to
try out networking-vpp for VPP 17.01. VPP is a fast userspace forwarder
based on the DPDK toolkit, and uses vector packet processing algorithms to
minimise the CPU time spent on each packet and maximise throughput.
On 25 January 2017 at 14:17, Monty Taylor wrote:
> > Adding an additional networking project to try to solve this will only
> > make things worse. We need one API. If it needs to grow features, it
> > needs to grow features - but they should be features that all of
> > OpenStack users get.
>
> WOR
I would certainly be interested in discussing this, though I'm not currently
signed up for the PTG. Obviously this is close to my interests, and I see
Kevin's raised Gluon as the bogeyman (which it isn't trying to be).
Setting aside all the above talk about how we might do things for a moment:
to take one specific feature example, it actually took several /years/ to
add VLAN-aware ports to OpenStack.
I see this changes a function's argument types without changing the
function's name - for instance, in the proposed networking-cisco change,
https://review.openstack.org/#/c/409045/ . This makes it hard to detect
that there's been a change and react accordingly. What's the recommended
way to write…
+1
On 14 October 2016 at 11:30, Miguel Lavalle wrote:
> Dear Neutrinos,
>
> I am organizing a social event for the team on Thursday 27th at 19:30.
> After doing some Google research, I am proposing Raco de la Vila, which is
> located in Poblenou: http://www.racodelavila.com/en/index.htm. The menu…
On 6 October 2016 at 10:43, Jay Pipes wrote:
> On 10/06/2016 11:58 AM, Naveen Joy (najoy) wrote:
>
>> It’s primarily because we have seen better stability and scalability
>> with etcd over rabbitmq.
>>
>
> Well, that's kind of comparing apples to oranges. :)
>
> One is a distributed k/v store. Th
We'd like to introduce the VPP mechanism driver, networking-vpp[1], to the
developer community.
networking-vpp is an ML2 mechanism driver to control DPDK-based VPP
user-space forwarders on OpenStack compute nodes. The code does what
mechanism drivers do - it connects VMs to each other and to other…
On 5 September 2016 at 17:08, Flavio Percoco wrote:
> We should probably start by asking ourselves who's really being bitten by
> the
> messaging bus right now? Large (and please, let's not bikeshed on what a
> Large
> Cloud is) Clouds? Small Clouds? New Clouds? Everyone?
> Then we can start asking…
On 1 September 2016 at 06:52, Ken Giusti wrote:
> On Wed, Aug 31, 2016 at 3:30 PM, Ian Wells wrote:
>
> > I have opinions about other patterns we could use, but I don't want to
> > push my solutions here, I want to see if this is really as much of a
> > problem as
On 31 August 2016 at 10:12, Clint Byrum wrote:
> Excerpts from Duncan Thomas's message of 2016-08-31 12:42:23 +0300:
> > On 31 August 2016 at 11:57, Bogdan Dobrelya
> wrote:
> >
> > > I agree that RPC design pattern, as it is implemented now, is a major
> > > blocker for OpenStack in general. It
On 29 August 2016 at 03:48, Jay Pipes wrote:
> On 08/27/2016 11:16 AM, HU, BIN wrote:
>
>> So telco use cases is not only the innovation built on top of OpenStack.
>> Instead, telco use cases, e.g. Gluon (NFV networking), vCPE Cloud, Mobile
>> Cloud, Mobile Edge Cloud, brings the needed requirements…
On 11 July 2016 at 12:52, Sam Yaple wrote:
> After lots of fun on IRC I have given up this battle. I am giving up
> quickly because frickler has proposed a workaround (or better solution
> depending on who you ask). So for all of you keeping track at home, if you
> want your vxlan and your vlan n
On 11 July 2016 at 11:49, Sean M. Collins wrote:
> Sam Yaple wrote:
> > In this situation, since you are mapping real-ips and the real world runs
> > on 1500 mtu
>
> Don't be so certain about that assumption. The Internet is a very big
> and diverse place
OK, I'll contradict myself now - th
On 11 July 2016 at 11:12, Chris Friesen wrote:
> On 07/11/2016 10:39 AM, Jay Pipes wrote:
>
>> Out of curiosity, in what scenarios is it better to limit the instance's
>> MTU to a value lower than that of the maximum path MTU of the
>> infrastructure? In other words, if the infrastructure supports…
On 18 April 2016 at 04:33, Ihar Hrachyshka wrote:
> Akihiro Motoki wrote:
>
> 2016-04-18 15:58 GMT+09:00 Ihar Hrachyshka :
>>
>>> Sławek Kapłoński wrote:
>>>
>>> Hello,
What MTU have you got configured on the VMs? I had an issue with performance
on a vxlan network with the standard MTU (1500)
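For reference, the usual arithmetic: VXLAN encapsulation costs 50 bytes on
IPv4 (outer IP 20 + UDP 8 + VXLAN header 8 + the encapsulated Ethernet header
14), so on a 1500-byte underlay:

    1500 - (20 + 8 + 8 + 14) = 1450 bytes of MTU usable by the instance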
In general, while you've applied this to networking (and it's not the first
time I've seen this proposal), the same technique will work with any device
- PF or VF, networking or other:
- notify the VM via an accepted channel that a device is going to be
temporarily removed
- remove the device
- mi
On 27 January 2016 at 11:06, Flavio Percoco wrote:
> FWIW, the current governance model does not prevent competition. That's
> not to
> be understood as we encourage it but rather than there could be services
> with
> some level of overlap that are still worth being separate.
>
There should always…
As I recall, network_device_mtu sets up the MTU on a bunch of structures
independently of whatever the correct value is. It was a bit of a
workaround back in the day and is still a bit of a workaround now. I'd
sooner we actually fix up the new mechanism (which is kind of hard to do
when the close
On 25 January 2016 at 07:06, Matt Kassawara wrote:
> Overthinking and corner cases led to the existing implementation which
doesn't solve the MTU problem and arguably makes the situation worse
because options in the configuration files give operators the impression
they can control it.
We are giv
> …it's a behavior change considering the current behavior is annoying. :)
> On Jan 24, 2016 23:31, "Ian Wells" wrote:
>
>> On 24 January 2016 at 22:12, Kevin Benton wrote:
>>
>>> >The reason for that was in the other half of the thread - it's not
Actually, I note that that document is Juno and there doesn't seem to be
anything at all in the Liberty guide now, so the answer is probably to add
settings for path_mtu and segment_mtu in the recommended Neutron
configuration.
On 24 January 2016 at 22:26, Ian Wells wrote:
> On 24 Janu
…one using the 1550+hacks and other methods of today will find their
system changes behaviour if we started setting that specific default.
Regardless, we need to take that documentation and update it. It was a
nasty hack back in the day and not remotely a good idea now.
> On Jan 24, 2016 23:
On 22 January 2016 at 10:35, Neil Jerram wrote:
> * Why change from ML2 to core plugin?
>
> - It could be seen as resolving a conceptual mismatch.
> networking-calico uses
> IP routing to provide L3 connectivity between VMs, whereas ML2 is
> ostensibly
> all about layer 2 mechanisms.
You've
…us in a mixed environment, at least if everything is working as
intended.
--
Ian.
[1]
https://github.com/openstack/neutron/blob/544ff57bcac00720f54a75eb34916218cb248213/releasenotes/notes/advertise_mtu_by_default-d8b0b056a74517b8.yaml#L5
> On Jan 24, 2016 20:48, "Ian Wells" wrote:
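For anyone following along, that release note is about the default of a
(Liberty/Mitaka-era, since removed) option; set explicitly it looked like:

    # neutron.conf
    [DEFAULT]
    # have dnsmasq hand the network MTU to instances via DHCP option 26
    advertise_mtu = true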
On 23 January 2016 at 11:27, Adam Lawson wrote:
> For the sake of over-simplification, is there ever a reason to NOT enable
> jumbo frames in a cloud/SDN context where most of the traffic is between
> virtual elements that all support it? I understand that some switches do
> not support it and tr
I wrote the spec for the MTU work that's in the Neutron API today. It
haunts my nightmares. I learned so many nasty corner cases for MTU, and
you're treading that same dark path.
I'd first like to point out a few things that change the implications of
what you're reporting in strange ways. [1] p
On 12 October 2015 at 21:18, Clint Byrum wrote:
> We _would_ keep a local cache of the information in the schedulers. The
> centralized copy of it is to free the schedulers from the complexity of
> having to keep track of it as state, rather than as a cache. We also don't
> have to provide a way
On 11 October 2015 at 00:23, Clint Byrum wrote:
> I'm in, except I think this gets simpler with an intermediary service
> like ZK/Consul to keep track of this 1GB of data and replace the need
> for 6, and changes the implementation of 5 to "updates its record and
> signals its presence".
>
OK, s
On 10 October 2015 at 23:47, Clint Byrum wrote:
> > Per before, my suggestion was that every scheduler tries to maintain a
> > copy of the cloud's state in memory (in much the same way, per the
> > previous example, as every router on the internet tries to make a route
> > table out of what i
On 9 October 2015 at 18:29, Clint Byrum wrote:
> Instead of having the scheduler do all of the compute node inspection
> and querying though, you have the nodes push their stats into something
> like Zookeeper or consul, and then have schedulers watch those stats
> for changes to keep their in-memory…
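A minimal sketch of that shape, assuming ZooKeeper via the kazoo library
(paths and payloads are made up):

    from kazoo.client import KazooClient

    zk = KazooClient(hosts='zk1:2181')
    zk.start()

    # compute node: publish stats as an ephemeral znode, so the entry
    # disappears automatically if the host dies
    zk.ensure_path('/compute')
    zk.create('/compute/node1', b'{"free_ram_mb": 2048}', ephemeral=True)

    # scheduler: an in-memory cache that the watch keeps fresh
    cache = {}

    @zk.DataWatch('/compute/node1')
    def _refresh(data, stat):
        if data is not None:
            cache['node1'] = data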
On 9 October 2015 at 12:50, Chris Friesen
wrote:
>> Has anybody looked at why 1 instance is too slow and what it would take
>> to make 1 scheduler instance work fast enough? This does not preclude the
>> use of concurrency for finer grain tasks in the background.
>
> Currently we pull data
On 8 October 2015 at 13:28, Ed Leafe wrote:
> On Oct 8, 2015, at 1:38 PM, Ian Wells wrote:
> > Truth be told, storing that data in MySQL is secondary to the correct
> functioning of the scheduler.
>
> I have no problem with MySQL (well, I do, but that's not relevant to
On 8 October 2015 at 09:10, Ed Leafe wrote:
> You've hit upon the problem with the current design: multiple, and
> potentially out-of-sync copies of the data.
Arguably, this is the *intent* of the current design, not a problem with
it. The data can never be perfect (ever) so go with 'good enough'…
On 7 October 2015 at 22:17, Chris Friesen
wrote:
> On 10/07/2015 07:23 PM, Ian Wells wrote:
>
>>
>> The whole process is inherently racy (and this is inevitable, and
>> correct),
>>
>>
> Why is it inevitable?
>
It's inevitable because everything…
On 7 October 2015 at 16:00, Chris Friesen
wrote:
> 1) Some resources (RAM) only require tracking amounts. Other resources
> (CPUs, PCI devices) require tracking allocation of specific individual host
> resources (for CPU pinning, PCI device allocation, etc.). Presumably for
> the latter we would…
Can I ask a different question - could we reject a few simple-to-check
things on the push, like bad commit messages? For things that take 2
seconds to fix and do make people's lives better, it's not that they're
rejected, it's that the whole rejection cycle via gerrit review (push/wait
for tests to…
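As an illustration of how cheap such a check is, a client-side sketch (Gerrit
could do the equivalent server-side; the 72-character limit is just an
example):

    #!/bin/sh
    # .git/hooks/commit-msg - reject over-long subject lines before review
    subject=$(head -n 1 "$1")
    if [ ${#subject} -gt 72 ]; then
        echo "commit subject exceeds 72 characters" >&2
        exit 1
    fi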
Neutron already offers a DNS server (within the DHCP namespace, I think).
It does forward on non-local queries to an external DNS server, but it
already serves local names for instances; we'd simply have to set one
aside, or perhaps use one in a 'root' but nonlocal domain
(metadata.openstack e.g.).
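The forwarding side is already configurable, too; a sketch of pointing it at
specific upstream resolvers (addresses are just examples):

    # dhcp_agent.ini
    [DEFAULT]
    # dnsmasq answers local instance names itself and forwards the rest here
    dnsmasq_dns_servers = 8.8.8.8,8.8.4.4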
It is useful, yes; and posting diffs on the mailing list is not the way to
get them reviewed and approved. If you can get this on gerrit it will get
a proper review, and I would certainly like to see something like this
incorporated.
On 21 July 2015 at 15:41, John Nielsen wrote:
> I may be in a
…ion of routing for floating IPs is also a scheduling
> problem, though one that would require a lot more changes to how FIP are
> allocated and associated to solve.
>
> John
>
> [1] https://review.openstack.org/#/c/180803/
> [2] https://bugs.launchpad.net/neutron/+bug/1458890/c
On 21 July 2015 at 07:52, Carl Baldwin wrote:
> > Now, you seem to generally be thinking in terms of the latter model,
> > particularly since the provider network model you're talking about fits
> > there. But then you say:
>
> Actually, both. For example, GoDaddy assigns each vm an ip from the
> l
There are two routed network models:
- I give my VM an address that bears no relation to its location and ensure
the routed fabric routes packets there - this is very much the routing
protocol method for doing things where I have injected a route into the
network and it needs to propagate. It's a
On 20 July 2015 at 10:21, Neil Jerram wrote:
> Hi Ian,
>
> On 20/07/15 18:00, Ian Wells wrote:
>
>> On 19 July 2015 at 03:46, Neil Jerram wrote:
>>
>> The change at [1] creates and describes a new 'rout
On 19 July 2015 at 03:46, Neil Jerram wrote:
> The change at [1] creates and describes a new 'routed' value for
> provider:network_type. It means that a compute host handles data
> to/from the relevant TAP interfaces by routing it, and specifically
> that those TAP interfaces are not bridged.
On 11 June 2015 at 02:37, Andreas Scheuring
wrote:
> > Do you happen to know how data gets routed _to_ a VM, in the
> > type='network' case?
>
> Neil, sorry no. Haven't played around with that, yet. But from reading
> the libvirt man, it looks good. It's saying "Guest network traffic will
> be forwarded…
On 11 June 2015 at 12:37, Richard Raseley wrote:
> Andrew Laski wrote:
>
>> There are many reasons a deployer may want to live-migrate instances
>> around: capacity planning, security patching, noisy neighbors, host
>> maintenance, etc... and I just don't think the user needs to know or
>> care t
On 11 June 2015 at 15:34, Michael Still wrote:
> On Fri, Jun 12, 2015 at 7:07 AM, Mark Boo wrote:
> > - What functionality is missing (if any) in config drive / metadata
> service
> > solutions to completely replace file injection?
>
> None that I am aware of. In fact, these two other options provide…
I don't see a problem with this, though I think you do want plug/unplug
calls to be passed on to Neutron so that it has the opportunity to set up the
binding from its side (usage goes above 0) and tear it down when you're done
with it (usage drops back to 0).
There may be a set of races you need to deal with, too - what happens…
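To illustrate the usage-count idea (purely a sketch - _bind/_unbind stand in
for the real Neutron calls, and the races live between the counter update and
those calls):

    class VifBinding(object):
        # reference-counted binding: set up on first user, tear down on last
        def __init__(self, port_id):
            self.port_id = port_id
            self.usage = 0

        def plug(self):
            self.usage += 1
            if self.usage == 1:
                self._bind()    # first user: ask Neutron to set up the binding

        def unplug(self):
            self.usage -= 1
            if self.usage == 0:
                self._unbind()  # last user gone: tear the binding down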
> …dency.
> Thank you for sharing this,
> Irena
> [1] https://review.openstack.org/#/c/162468/
>
> On Tue, Jun 2, 2015 at 10:45 AM, Ian Wells wrote:
>
>> VIF plugging, but not precisely libvirt VIF plugging, so I'll tout this
>> to a hopefully interested audience
VIF plugging, but not precisely libvirt VIF plugging, so I'll tout this to
a hopefully interested audience.
At the summit, we wrote up a spec we were thinking of doing at [1]. It
actually proposes two things, which is a little naughty really, but hey.
Firstly we propose that we turn binding into
The fix should work fine. It is technically a workaround for the way
checksums work in virtualised systems, and the unfortunate fact that some
DHCP clients check checksums on packets where the hardware has checksum
offload enabled. (This doesn't work due to an optimisation in the way QEMU
treats…
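For reference, the workaround usually takes the shape of a checksum-fill rule
on the host running the DHCP agent (exact placement varies by deployment):

    # recompute checksums on DHCP replies that left hardware offload unsummed
    iptables -t mangle -A POSTROUTING -p udp --dport 68 -j CHECKSUM --checksum-fill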
On 13 May 2015 at 10:30, Vinod Pandarinathan (vpandari)
wrote:
> - Traditional monitoring tools (Nagios, Zabbix, ) are necessary anyway
> for infrastructure monitoring (CPU, RAM, disks, operating system, RabbitMQ,
> databases and more) and diagnostic purposes. Adding OpenStack service
> check
On 20 April 2015 at 17:52, David Kranz wrote:
> On 04/20/2015 08:07 PM, Ian Wells wrote:
>
> Whatever your preference might be, I think it's best we lose the
> ambiguity. And perhaps advertise that page a little more widely, actually
> - I hadn't come across it in
On 20 April 2015 at 07:40, Boris Pavlovic wrote:
> Dan,
>
> IMHO, most of the test coverage we have for nova's neutronapi is more
>> than useless. It's so synthetic that it provides no regression
>> protection, and often requires significantly more work than the change
>> that is actually being a
On 20 April 2015 at 15:23, Matthew Treinish wrote:
> On Mon, Apr 20, 2015 at 03:10:40PM -0700, Ian Wells wrote:
> > It would be nice to have a consistent policy here; it would make future
> > decision making easier and it would make it easier to write specs if we
> > knew
On 20 April 2015 at 13:02, Kevin L. Mitchell
wrote:
> On Mon, 2015-04-20 at 13:57 -0600, Chris Friesen wrote:
> > > However, minor changes like that could still possibly break clients
> > > that are not expecting them. For example, a client that uses the json
> > > response as arguments to a
This puts me in mind of a previous proposal, from the Neutron side of
things. Specifically, I would look at Erik Moe's proposal for VM ports
attached to multiple networks:
https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms .
I believe that you want logical ports hiding behind a conventional…
…7:48, Guo, Ruijing wrote:
> I am trying to understand how a guest OS uses a trunking network.
>
>
>
> If the guest OS uses a bridge like Linux bridge or OVS, how do we launch it
> and how does libvirt support it?
>
>
>
> Thanks,
>
> -Ruijing
>
>
>
>
>
> *From:* Ian
On 24 March 2015 at 11:45, Armando M. wrote:
> This may be besides the point, but I really clash with the idea that we
> provide a reference implementation on something we don't have CI for...
>
Aside from the unit testing, it is going to get a test for the case we can
test - when using the standard…
That spec ensures that you can tell what the plugin is doing. You can ask
for a VLAN transparent network, but the cloud may tell you it can't make
one.
The OVS driver in Openstack drops VLAN tagged packets, I'm afraid, and the
spec you're referring to doesn't change that. The spec does ensure that…
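So the useful part is the negotiation. From the client side that looks
roughly like this (assuming a neutronclient new enough to expose the
attribute):

    $ neutron net-create trunk-net --vlan-transparent True
    # a plugin that can't deliver transparency refuses the request instead
    # of silently eating tagged frames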
On 22 March 2015 at 07:48, Jay Pipes wrote:
> On 03/20/2015 05:16 PM, Kevin Benton wrote:
>
>> To clarify a bit, we obviously divide lots of things by tenant (quotas,
>> network listing, etc). The difference is that we have nothing right now
>> that has to be unique within a tenant. Are there obj
On 20 March 2015 at 15:49, Salvatore Orlando wrote:
> The MTU issue has been a long-standing problem for neutron users. What
> this extension is doing is simply, in my opinion, enabling API control over
> an aspect users were dealing with previously through custom made scripts.
>
Actually, versi
> [3]
> https://review.openstack.org/#/q/project:openstack/neutron+branch:master+topic:bp/mtu-selection-and-advertisement,n,z
> [4]
> https://review.openstack.org/#/q/project:openstack/neutron+branch:master+topic:bp/nfv-vlan-trunks,n,z
> [5] https://review.openstack.org/#/c/136760/
On 19 March 2015 at 11:44, Gary Kotton wrote:
> Hi,
> Just the fact that we did this does not make it right. But I guess that we
> are starting to bend the rules. I think that we really need to be far more
> diligent about this kind of stuff. Having said that we decided the
> following on IRC:
>
Per the other discussion on attributes, I believe the change walks in
historical footsteps and it's a matter of project policy choice. That
aside, you raised a couple of other issues on IRC:
- backward compatibility with plugins that haven't adapted their API - this
is addressed in the spec, which…
There are precedents for this. For example, the attributes that currently
exist for IPv6 advertisement are very similar:
- added during the run of a stable Neutron API
- properties added on a Neutron object (MTU and VLAN affect network, but
IPv6 affects subnet - same principle though)
- settable,
On 18 March 2015 at 03:33, Duncan Thomas wrote:
> On 17 March 2015 at 22:02, Davis, Amos (PaaS-Core) <
> amos.steven.da...@hp.com> wrote:
>
>> Ceph/Cinder:
>> LVM or other?
>> SCSI-backed?
>> Any others?
>>
>
> I'm wondering why any of the above matter to an application.
>
The Neutron requirement…
On 12 March 2015 at 05:33, Fredy Neeser wrote:
> 2. I'm using policy routing on my hosts to steer VXLAN traffic (UDP
> dest. port 4789) to interface br-ex.12 -- all other traffic from
> 192.168.1.14 is source routed from br-ex.1, presumably because br-ex.1 is a
> lower-numbered interface than
On 11 March 2015 at 10:56, Matt Riedemann
wrote:
> While looking at some other problems yesterday [1][2] I stumbled across
> this feature change in Juno [3] which adds a config option
> "allow_duplicate_networks" to the [neutron] group in nova. The default
> value is False, but according to the s
On 11 March 2015 at 04:27, Fredy Neeser wrote:
> 7: br-ex.1: mtu 1500 qdisc noqueue state
> UNKNOWN group default
> link/ether e0:3f:49:b4:7c:a7 brd ff:ff:ff:ff:ff:ff
> inet 192.168.1.14/24 brd 192.168.1.255 scope global br-ex.1
>    valid_lft forever preferred_lft forever
>
> 8: br-
On 6 March 2015 at 13:16, Sławek Kapłoński wrote:
> Hello,
>
> Today I found bug https://bugs.launchpad.net/neutron/+bug/1314614 because
> I have such a problem on my infra.
>
(For reference, if you delete a port that a Nova VM is using - it just goes
ahead and deletes the port from Neutron and leaves…
With apologies for derailing the question, but would you care to tell us
what evil you're planning on doing? I find it's always best to be informed
about these things.
--
Ian.
(Why yes, it *is* a Saturday morning.)
On 6 March 2015 at 12:23, Michael Krotscheck wrote:
> Heya!
>
> So, a while ago…
On 2 February 2015 at 09:49, Chris Friesen
wrote:
> On 02/02/2015 10:51 AM, Jay Pipes wrote:
>
>> This is a bug that I discovered when fixing some of the NUMA related nova
>> objects. I have a patch that should fix it up shortly.
>>
>
> Any chance you could point me at it or send it to me?
>
> T
On 28 January 2015 at 17:32, Robert Collins
wrote:
> E.g. its a call (not cast) out to Neutron, and Neutron returns when
> the VIF(s) are ready to use, at which point Nova brings the VM up. If
> the call times out, we error.
>
I don't think this model really works with distributed systems, and i
Lots of open questions in here, because I think we need a long conversation
on the subject.
On 23 January 2015 at 15:51, Kevin Benton wrote:
> It seems like a change to using internal RPC interfaces would be pretty
> unstable at this point.
>
> Can we start by identifying the shortcomings of t
Once more, I'd like to revisit the VIF_VHOSTUSER discussion [1]. I still
think this is worth getting into Nova's libvirt driver - specifically
because there's actually no way to distribute this as an extension; since
we removed the plugin mechanism for VIF drivers, it absolutely requires a
code change…
Sukhdev,
Since the term is quite broad and has meant many things in the past, can
you define what you're thinking of when you say 'L2 gateway'?
Cheers,
--
Ian.
On 2 January 2015 at 18:28, Sukhdev Kapur wrote:
> Hi all,
>
> HAPPY NEW YEAR.
>
> Starting Monday (Jan 5th, 2015) we will be kicking
Let me write a spec and see what you both think. I have a couple of things
we could address here, and while it's a bit late, it wouldn't be a dramatic
thing to fix and might be acceptable.
On 15 December 2014 at 11:28, Daniel P. Berrange
wrote:
>
> On Mon, Dec 15, 2014 at 11:15
Hey Ryota,
A better way of describing it would be that the bridge name is, at present,
generated in *both* Nova *and* Neutron, and the VIF type semantics define
how it's calculated. I think you're right that in both cases it would make
more sense for Neutron to tell Nova what the connection endpoint…
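For the record, the Nova-side calculation is tiny - roughly this, from memory
(the real constant lives in nova/network/model.py):

    NIC_NAME_LEN = 14

    def get_bridge_name(vif_id):
        # 'qbr' prefix plus a truncated port UUID, capped for IFNAMSIZ
        return ('qbr' + vif_id)[:NIC_NAME_LEN]

which is exactly the sort of duplicated convention that would be better
computed once in Neutron and handed over in the binding details.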
On 10 December 2014 at 01:31, Daniel P. Berrange
wrote:
>
> So the problem of Nova review bandwidth is a constant problem across all
> areas of the code. We need to solve this problem for the team as a whole
> in a much broader fashion than just for people writing VIF drivers. The
> VIF drivers a
…NFV for example? Neutron provides low-level hooks
> and the rest is defined elsewhere. Maybe this could work, but there would
> probably be other issues if the actual implementation is not on the edge or
> outside Neutron.
>
>
>
> /Erik
>
>
>
>
>
> *From:* Ian W
On 4 December 2014 at 08:00, Neil Jerram wrote:
> Kevin Benton writes:
> I was actually floating a slightly more radical option than that: the
> idea that there is a VIF type (VIF_TYPE_NOOP) for which Nova does
> absolutely _nothing_, not even create the TAP device.
>
Nova always does something
On 1 December 2014 at 21:26, Mohammad Hanif wrote:
> I hope we all understand how edge VPN works and what interactions are
> introduced as part of this spec. I see references to neutron-network
> mapping to the tunnel, which is not at all the case, and the edge-VPN spec
> doesn’t propose it. At a very…
On 1 December 2014 at 09:01, Mathieu Rohon wrote:
This is an alternative that would say : you want an advanced service
> for your VM, please stretch your l2 network to this external
> component, that is driven by an external controller, and make your
> traffic go to this component to take benefit…
On 1 December 2014 at 04:43, Mathieu Rohon wrote:
> This is not entirely true, as soon as a reference implementation,
> based on existing Neutron components (L2agent/L3agent...) can exist.
>
The specific thing I was saying is that that's harder with an edge-id
mechanism than one incorporated into…
On 27 November 2014 at 12:11, Mohammad Hanif wrote:
> Folks,
>
> Recently, as part of the L2 gateway thread, there was some discussion on
> BGP/MPLS/Edge VPN and how to bridge any overlay networks to the neutron
> network. Just to update everyone in the community, Ian and I have
> separately s