ugh all the user's networks and filter client-side
>>
>> How is the user supposed to be assembling this giant UUID list? I'd think
>> it would be easier for them to specify a query (e.g. "get usage data for
>> all my production subnets" or something).
>
> On Jan 19, 2016, at 4:59 PM, Shraddha Pandhe
> wrote:
>
> Hi folks,
>
>
> I am writing a Neutron extension which needs to take 1000s of network-ids
> as argument for filtering. The CURL call is as follows:
>
> curl -i -X GET '
> http://hostname:port
Hi folks,
I am writing a Neutron extension which needs to take 1000s of network IDs
as arguments for filtering. The curl call is as follows:
curl -i -X GET
'http://hostname:port/neutron/v2.0/extension_name.json?net-id=fffecbd1-0f6d-4f02-aee7-ca62094830f5&net-id=fffeee07-4f94-4cff-bf8e-a2aa7be59e
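With thousands of IDs, a single query string like the one above can exceed practical URL length limits. A minimal client-side sketch of one workaround is to split the ID list across several requests; `build_urls` and the endpoint path here are illustrative placeholders, not part of any Neutron API:

```python
# Hypothetical sketch: chunk a large list of network IDs into several
# smaller GET requests so each URL stays under typical length limits.

def chunk_ids(net_ids, max_per_request=50):
    """Yield successive slices of net_ids, max_per_request at a time."""
    for i in range(0, len(net_ids), max_per_request):
        yield net_ids[i:i + max_per_request]

def build_urls(base_url, net_ids, max_per_request=50):
    """Build one URL per chunk, repeating the net-id query parameter."""
    urls = []
    for chunk in chunk_ids(net_ids, max_per_request):
        query = "&".join("net-id=%s" % nid for nid in chunk)
        urls.append("%s?%s" % (base_url, query))
    return urls
```

The client then issues each request and merges the results, at the cost of several round trips.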
On Tue, Nov 24, 2015 at 7:39 AM, Jim Rollenhagen
wrote:
> On Mon, Nov 23, 2015 at 03:35:58PM -0800, Shraddha Pandhe wrote:
> > Hi,
> >
> > I would like to know how everyone is using maintenance mode and what is
> > expected from admins about nodes in maintenance. The
>
> 2015-11-24 11:20 GMT+07:00 Shraddha Pandhe :
>
>> Hi John,
>>
>> Thanks for letting me know. How do I setup fresh devstack with pluggable
>> IPAM enabled in the meantime?
>>
>> On Mon, Nov 23, 2015 at 5:34 PM, John Belamaric
>> wrote:
frame.
>
> John
>
> [1] https://bugs.launchpad.net/neutron/+bug/1516156
>
>
>
> On Nov 23, 2015, at 8:05 PM, Shraddha Pandhe
> wrote:
>
> Hi folks,
>
> What is the right way to use ipam reference implementation with devstack?
> When setup devstack, I didnt have
Hi folks,
What is the right way to use ipam reference implementation with devstack?
When I set up devstack, I didn't have the setting
ipam_driver = internal
I changed it afterwards. But now when I try to create a port, I get this
error:
2015-11-23 21:23:00.078 ERROR neutron.ipam.drivers.neutron
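For reference, enabling the reference IPAM driver is a one-line change in neutron.conf; a sketch, assuming the value `internal` from this thread (the expected driver name may differ by release):

```ini
# neutron.conf -- enable the pluggable IPAM reference driver.
# "internal" is the value used in this thread; check your release's
# configuration reference for the exact driver name.
[DEFAULT]
ipam_driver = internal
```

One likely cause of the error above is switching the driver after ports already exist, since earlier allocations are not migrated into the new ipam tables automatically.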
Hi,
I would like to know how everyone is using maintenance mode and what is
expected of admins for nodes in maintenance. The reason I am bringing
up this topic is that most Ironic operations, including manual
cleaning, are not allowed for nodes in maintenance. That's a problem for us.
Hi Carl,
Please find my reply inline.
On Mon, Nov 9, 2015 at 9:49 AM, Carl Baldwin wrote:
> On Fri, Nov 6, 2015 at 2:59 PM, Shraddha Pandhe <
> spandhe.openst...@gmail.com> wrote:
>>
>> We have a similar requirement where we want to pick a network thats
>>
k-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam
> db tables
>
>
>
> On 4 November 2015 at 13:21, Shraddha Pandhe
> wrote:
>
>> Hi Salvatore,
>>
>> Thanks for the feedback. I agree with you that arbitrary JSON blobs will
>> make IPAM much more powerful.
at 10:55 AM, Jay Pipes wrote:
>>
>>> On 11/04/2015 04:21 PM, Shraddha Pandhe wrote:
>>>
>>>> Hi Salvatore,
>>>>
>>>> Thanks for the feedback. I agree with you that arbitrary JSON blobs will
>>>> make IPAM much more powerful. Som
ion
[2]
http://www.dorm.org/blog/openstack-architecture-at-go-daddy-part-2-neutron/#Customizations_to_Abstract_Away_Layer_2
> Regards,
>Neil
>
>
>
> *From: *Shraddha Pandhe
> *Sent: *Friday, 6 November 2015 20:23
> *To: *OpenStack Development Mailing List (not for
Bumping this up :)
Folks, does anyone else have a similar requirement to ours? Are folks
making scheduling decisions based on networking?
On Thu, Nov 5, 2015 at 12:24 PM, Shraddha Pandhe <
spandhe.openst...@gmail.com> wrote:
> Hi,
>
> I agree with all of you about the REST
-cases are always
shared with the community.
On Thu, Nov 5, 2015 at 9:37 AM, Kyle Mestery wrote:
> On Thu, Nov 5, 2015 at 10:55 AM, Jay Pipes wrote:
>
>> On 11/04/2015 04:21 PM, Shraddha Pandhe wrote:
>>
>>> Hi Salvatore,
>>>
>>> Thanks for
inding the right solution.
> On Nov 4, 2015 3:58 PM, "Shraddha Pandhe"
> wrote:
>
>>
>>
>> On Wed, Nov 4, 2015 at 1:38 PM, Armando M. wrote:
>>
>>>
>>>
>>> On 4 November 2015 at 13:21, Shraddha Pandhe <
>>>
On Wed, Nov 4, 2015 at 1:38 PM, Armando M. wrote:
>
>
> On 4 November 2015 at 13:21, Shraddha Pandhe
> wrote:
>
>> Hi Salvatore,
>>
>> Thanks for the feedback. I agree with you that arbitrary JSON blobs will
>> make IPAM much more powerful. Some other
properties
> to allocation pools.
>
> Salvatore
>
> On 4 November 2015 at 21:46, Shraddha Pandhe
> wrote:
>
>> Hi folks,
>>
>> I have a small question/suggestion about IPAM.
>>
>> With IPAM, we are allowing users to have their own IPAM drivers so th
Hi folks,
I have a small question/suggestion about IPAM.
With IPAM, we are allowing users to have their own IPAM drivers so that
they can manage IP allocation. The problem is, the new ipam tables in the
database have the same columns as the old tables. So, as a user, if I want
to have my own logi
Hi folks,
James Penick from Yahoo! presented a talk on Thursday about how Yahoo uses
Neutron for Ironic. I would like to follow up on one particular use case
that was discussed: Multi-IP support.
Here's our use-case for Multi-ip:
For Ironic, we want the user to specify the number of IPs on boot. Now
Hi Ionut,
I am working on a similar effort: Adding driver for neutron-dhcp-agent [1]
& [2]. Is it similar to what you are trying to do? My approach doesn't need
any extra database. There are two ways to achieve HA in my case:
1. Run multiple neutron-dhcp-agents and set agents_per_network >1 so mo
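The first option maps to a server-side scheduling setting; a sketch, assuming the thread's "agents_per_network" refers to Neutron's `dhcp_agents_per_network` option:

```ini
# neutron.conf on the Neutron server -- schedule every network onto
# more than one DHCP agent, so losing a single agent does not break DHCP.
[DEFAULT]
dhcp_agents_per_network = 2
```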
Hi,
I have added few more questions to the bug [1]. Please confirm my
understanding.
[1] https://bugs.launchpad.net/neutron/+bug/1470612/comments/12
On Tue, Jul 28, 2015 at 12:14 PM, Shraddha Pandhe <
spandhe.openst...@gmail.com> wrote:
> Hi,
>
> I started working on this p
#L293-L302
On Wed, Jul 1, 2015 at 11:28 AM, Shraddha Pandhe <
spandhe.openst...@gmail.com> wrote:
> Hi,
>
> I had a discussion about this with Kevin Benton on IRC. Filed a bug:
> https://bugs.launchpad.net/neutron/+bug/1470612
>
> Thanks!
>
>
> On Wed, Jul 1, 2015
Hi,
I had a discussion about this with Kevin Benton on IRC. Filed a bug:
https://bugs.launchpad.net/neutron/+bug/1470612
Thanks!
On Wed, Jul 1, 2015 at 11:03 AM, Shraddha Pandhe <
spandhe.openst...@gmail.com> wrote:
> Hi Shihan,
>
> I think the problem is slightly different.
Hi Shihan,
I think the problem is slightly different. Does your patch take care of the
scenario where a port was deleted AFTER agent restart (not when agent was
down)?
My problem is that, when the agent restarts, it loses its previous network
cache. As soon as the agent starts, as part of "__ini
Hi folks,
I have a question about the neutron dhcp agent restart scenario. It seems
that when the agent restarts, it recovers the known network IDs in cache,
but we don't recover the known ports [1].
So if a port that was present before the agent restarted is deleted after
the restart, the agent won
er
> that.
> >
> > On Thu, Jun 11, 2015 at 3:34 PM, Shraddha Pandhe <
> > spandhe.openst...@gmail.com > wrote:
> >
> >
> >
> > The idea is to round-robin between gateways by using some sort of mod
> > operation
> >
> > So logical
Hi everyone,
Any thoughts on supporting multiple gateway IPs for subnets?
On Thu, Jun 11, 2015 at 3:34 PM, Shraddha Pandhe <
spandhe.openst...@gmail.com> wrote:
> The idea is to round-robin between gateways by using some sort of mod
> operation
>
> So logically it can l
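The mod-based round-robin described above can be sketched like this; it is purely illustrative, since Neutron subnets currently store a single gateway_ip and `pick_gateway` is a hypothetical helper:

```python
# Hedged sketch of round-robin gateway selection via a mod operation:
# convert the allocated IP to an integer and index into the list of
# configured gateways modulo its length, so consecutive addresses are
# spread evenly across the gateways.
import ipaddress

def pick_gateway(ip, gateways):
    """Pick a gateway for ip by taking its integer value mod len(gateways)."""
    host = int(ipaddress.ip_address(ip))
    return gateways[host % len(gateways)]
```

Consecutive hosts then land on consecutive gateways, which is the load-spreading behavior the thread is describing.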
> What gateway address do you give to regular clients via dhcp when you have
> multiple?
>
> On Jun 11, 2015 12:29 PM, "Shraddha Pandhe"
> wrote:
> >
> > Hi,
> > Currently, the Subnets in Neutron and Nova-Network only support one
> gateway. For provider ne
Hi,
Currently, the Subnets in Neutron and Nova-Network only support one
gateway. For provider networks in large data centers, quite often, the
architecture is such that multiple gateways are configured per
subnet. These multiple gateways are typically spread across backplanes so
that the prod
Hi Daniel,
I see following in your command
--dhcp-range=set:tag0,192.168.110.0,static,86400s
--dhcp-range=set:tag1,192.168.111.0,static,86400s
Is this expected? Was this command generated by the agent itself, or was
Dnsmasq manually started?
On Tuesday, June 9, 2015 4:41 AM, Kevin B
Hi folks,
I am working on nova-network in Havana. I have a unique use case where I
need to add duplicate VLANs in nova-network. I am trying to add multiple
networks in nova-network with same VLAN ID. The reason is as follows:
The cluster that I have has an L3 backplane. We have been give