Re: [Openstack] Virtual Firewall Appliance

2016-02-16 Thread Georgios Dimitrakakis

Mark and Martinx, thank you both for your suggestions.

I had tried to build PFSense in the past but without success.

Indeed, my goal is to run the virtual firewall as an instance, since I am 
on an older OpenStack version (IceHouse) with nova-networking and 
therefore have no control over outgoing connections.


Regards,

G.



For running it as an Instance?

You can try:

- PFSense;

- Zentyal;

However, you'll need to make use of the Neutron feature called
"port_security_enabled = false" for the vNIC attached to the
"internal" subnet (behind the firewall).

Just out of curiosity, why don't you use the Neutron native firewall that
resides on each L3 Router?

On 15 February 2016 at 15:56, Georgios Dimitrakakis  wrote:


Hi!

Can anyone suggest a virtual firewall appliance that is
compatible with OpenStack?

Best regards,

G.

___
Mailing list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [1]
Post to     : openstack@lists.openstack.org [2]
Unsubscribe :
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [3]




Links:
--
[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
[2] mailto:openstack@lists.openstack.org
[3] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
[4] mailto:gior...@acmac.uoc.gr


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Error while launching instance | unexpected error ['NoneType' object has no attribute 'status_code']

2016-02-16 Thread Tomas Vondra
John Depp  writes:

> Hello Team,
>
> While trying to spawn a new CentOS instance on OpenStack, the following
> error is being logged:
>
> url_helper.py[WARNING]: Calling
> 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [0/120s]:
> unexpected error ['NoneType' object has no attribute 'status_code']
>
> and then:
>
> DataSourceEc2.py[CRITICAL]: Giving up on md from
> ['http://169.254.169.254/2009-04-04/meta-data/instance-id'] after 126
> seconds
>
> After some time, once the DataSourceEc2.py error appears, the instance can
> be connected to, but before that no services are being launched and hence a
> connection cannot be established for a considerable time after spawning a
> new instance. Kindly suggest remedial steps to rectify the same. Thank you.
>

Hi!
Have you tried calling the metadata service manually with curl after you log
in to an instance? Do other images work? Do you have
metadata_proxy_shared_secret set to the same value in nova.conf on all
nodes, and also in neutron's metadata_agent.ini?
Tomas
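
For reference, a rough sketch of those checks; the paths are the usual
defaults and may differ on your distribution:

    # from inside a booted instance: does the metadata service answer at all?
    curl -v http://169.254.169.254/2009-04-04/meta-data/instance-id

    # on the controller/network nodes: the shared secret must match everywhere
    grep metadata_proxy_shared_secret /etc/nova/nova.conf
    grep metadata_proxy_shared_secret /etc/neutron/metadata_agent.ini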
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] how to find the code of consumer in rabbitmq?

2016-02-16 Thread 郝启臣
Hello guys, I want to understand the process of starting a new instance, and I
found this code in nova/conductor/rpcapi.py:

cctxt = self.client.prepare(version=version)
cctxt.cast(context, 'build_instances', **kw)

I understand that this code does a cast, sending the message to an exchange,
but which exchange is it, and where can I find the definition of that
exchange? The exchange then delivers the message to a queue, and I can use the
command 'rabbitmqctl list_bindings' to find the queue that is bound to the
exchange, but where is the code that consumes that queue, and how does it
handle the messages?

Can anybody help me? Thanks.
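
As a rough illustration (this is not Nova's actual code): the exchange and
topic come from the oslo.messaging Target (Nova's default control_exchange is
'nova', and the conductor listens on the 'conductor' topic), and the consumer
is the RPC server that the nova-conductor service starts; its endpoint methods
(in nova/conductor/manager.py) match the string passed to cast(). A minimal
sketch of such a consumer, assuming the oslo.messaging API of this era:

    import oslo_messaging
    from oslo_config import cfg

    # the transport reads the usual rabbit_* options from the config
    transport = oslo_messaging.get_transport(cfg.CONF)
    target = oslo_messaging.Target(topic='conductor', server='myhost')

    class ConductorEndpoint(object):
        # the method name matches the string given to cctxt.cast()
        def build_instances(self, ctxt, **kwargs):
            print('got a build_instances request with args: %s' % list(kwargs))

    server = oslo_messaging.get_rpc_server(transport, target,
                                           [ConductorEndpoint()],
                                           executor='blocking')
    server.start()
    server.wait()

The exchange itself is declared lazily by the rabbit driver from
control_exchange plus the topic, which is why there is no explicit exchange
definition to find in the Nova code, as far as I understand it.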
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [OpenStack] [CINDER] how to get updated pool info when multi users create volumes on pool configured?

2016-02-16 Thread yang, xing
Sounds good.  Let me know how it goes.  

Thanks Dilip,
Xing


> On Feb 16, 2016, at 1:21 AM, Dilip Sunkum Manjunath 
>  wrote:
> 
> Hi Xing,
> 
> 
> Thanks for the reply,
> 
> 
> 
> I tried it because the use case was to support both in a single pool.
> 
> I was thinking the same, to read the volume type in the scheduler; however, 
> since it is a new requirement that affects everyone it might not be good to 
> change it now.
> 
> I shall try the other approach, separate pools for thin/thick, and update you.
> 
> 
> Thanks
> Dilip
> 
> 
> 
> 
> 
> 
> 
> 
> -Original Message-
> From: yang, xing [mailto:xing.y...@emc.com] 
> Sent: Friday, February 12, 2016 12:42 PM
> To: Dilip Sunkum Manjunath
> Cc: openstack@lists.openstack.org; itzdi...@gmail.com
> Subject: Re: [OpenStack] [CINDER] how to get updated pool info when multi 
> users create volumes on pool configured?
> 
> Hi Dilip,
> 
> I see.  If thin_provisioning is true and max_over_subscription_ratio is 
> valid, the scheduler will treat it as thin provisioning.  We do not prevent 
> a driver from reporting both thin and thick support as true.  However, I 
> think we need to make a change.
> 
> I suggest that you have one pool for thin and the other one for thick but 
> don't report both thin and thick support from the same pool.  That will avoid 
> this problem.
> 
> Another possible alternative is to require thin/thick provisioning to be in 
> extra specs and use that info in the scheduler; however, that would be a new 
> requirement that affects everyone, so I am not in favor of that approach.
> 
> Can you use one pool for thin and another for thick in your testing?
> 
> Thanks,
> Xing
> 
> 
> 
>> On Feb 12, 2016, at 12:05 AM, Dilip Sunkum Manjunath 
>>  wrote:
>> 
>> max_over_subscription_ratio
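
For illustration, roughly the shape of the pool stats a driver reports from
get_volume_stats() when following the one-thin-pool/one-thick-pool suggestion
(pool names and numbers below are made up):

    pools = [
        {
            # thin pool: treated as thin when this flag is true and
            # max_over_subscription_ratio is valid
            'pool_name': 'pool_thin',
            'total_capacity_gb': 1000,
            'free_capacity_gb': 800,
            'provisioned_capacity_gb': 1500,
            'thin_provisioning_support': True,
            'thick_provisioning_support': False,
            'max_over_subscription_ratio': 20.0,
        },
        {
            # thick pool: no over-subscription
            'pool_name': 'pool_thick',
            'total_capacity_gb': 1000,
            'free_capacity_gb': 600,
            'thin_provisioning_support': False,
            'thick_provisioning_support': True,
        },
    ]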

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] Guest networking and magic IP

2016-02-16 Thread Andre Goree
I have some questions regarding the way that networking is handled via 
qemu/kvm+libvirt, namely I'm trying to replicate OpenStack's use of the 
magic IP on newly spun-up instances.  My apologies in advance if this is 
not the proper mailing list for such a question.  I've already been to 
the libvirt mailing list, but to no avail.


I am trying to determine how exactly I can manipulate traffic from a 
_guest's_ NIC using iptables on the _host_.  On the host, there is a 
bridged virtual NIC that corresponds to the guest's NIC.  That interface 
does not have an IP configured on it on the host; however, within the VM 
itself the IP is configured and everything works as expected.  I was 
told on the libvirt list that nwfilter handles things like this, but 
after further discussion I was able to determine that nwfilter does NOT 
handle a situation in which one would redirect traffic destined for one 
IP to another IP -- a situation that iptables would normally handle.


I'm wondering, in that case, how OpenStack is (seemingly) "magically" 
making this happen.  Since libvirt (via nwfilter) handles outbound 
traffic produced by a guest system (and thus that traffic does not 
traverse iptables), it would seem there is no way to facilitate this... 
but as we all know, OpenStack does it :)
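
For what it's worth, bridged guest traffic does traverse the host's iptables
once net.bridge.bridge-nf-call-iptables is enabled, and the "magic IP" is
handled by an ordinary DNAT rule in the nat table. A sketch of the kind of
rule nova-network/neutron install (the metadata host and port 8775 below are
illustrative defaults; adjust to your deployment):

    # make bridged traffic visible to iptables on the host
    sysctl -w net.bridge.bridge-nf-call-iptables=1

    # redirect guest requests for the magic IP to the metadata API
    iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp --dport 80 \
        -j DNAT --to-destination 10.0.0.1:8775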


Any insight or pointing in the right direction would be so helpful, 
thanks in advance!



--
Andre Goree
-=-=-=-=-=-
Email - andre at drenet.net
Website   - http://www.drenet.net
PGP key   - http://www.drenet.net/pubkey.txt
-=-=-=-=-=-

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] Nexus 9K - Nexus: Segment is an invalid type or not supported by this driver??

2016-02-16 Thread Michael Gale
Hello,

I am having issues getting my Liberty environment working with VXLAN
and N9K.

Currently I am getting the following errors in the logs on startup:
--snip--
2016-02-16 13:18:42.097 595 WARNING
networking_cisco.plugins.ml2.drivers.cisco.nexus.mech_cisco_nexus
[req-825a9891-0467-4958-86ca-c98486a7bf52 - - - - -] Nexus: Segment is an
invalid type or not supported by this driv
er. Network type = vxlan Physical network = None. Event not processed.
--snip--

When trying to launch an instance:
--snip--
ERROR neutron.plugins.ml2.managers
[req-d15ab080-7aa4-46e5-a5c3-b62a13c5646d d2b4e18cf27d41418845439f5d788523
eaa185709c79477fa1e3edfffa4e4c7f - - -] Failed to bind port
9b32f0e7-6b5b-4ced-84b7-262ea12e090c on host compute1

Nexus: Segment is None, Event not processed
--snip--

I am assuming I am missing something in the configuration file; however, I
can't figure it out. Any help is greatly appreciated.

Thanks
Michael

Here is my ml2_conf.ini

--snip--
# ML2 general
[ml2]
type_drivers = flat,vlan,nexus_vxlan,local
tenant_network_types = nexus_vxlan
mechanism_drivers = linuxbridge,l2population,cisco_nexus
extension_drivers = port_security
path_mtu = 0
segment_mtu = 0



# ML2 VLAN networks
[ml2_type_vlan]
network_vlan_ranges = physeth1:100:163

[ml2_mech_cisco_nexus:10.92.192.45]
infra1_neutron_agents_container-ee5293cb=1/17
infra1_neutron_server_container-ed083568=1/17
infra2_neutron_agents_container-65f32f70=1/18
infra2_neutron_server_container-1e0b996b=1/18
infra3_neutron_agents_container-2faafbe7=1/19
infra3_neutron_server_container-9eabc975=1/19
compute1=1/21
compute2=1/22
username=openstack
password=foo123
ssh_port=22
physnet=physeth1

[ml2_mech_cisco_nexus:10.92.192.46]
infra1_neutron_agents_container-ee5293cb=1/17
infra1_neutron_server_container-ed083568=1/17
infra2_neutron_agents_container-65f32f70=1/18
infra2_neutron_server_container-1e0b996b=1/18
infra3_neutron_agents_container-2faafbe7=1/19
infra3_neutron_server_container-9eabc975=1/19
compute1=1/21
compute2=1/22
username=openstack
password=foo123
ssh_port=22
physnet=physeth1

# ML2 VXLAN networks
[ml2_type_vxlan]
vxlan_group =
vni_ranges = 1:1000

[ml2_type_nexus_vxlan]
# Comma-separated list of <vni_min>:<vni_max> tuples enumerating
# ranges of VXLAN VNI IDs that are available for tenant network allocation.
vni_ranges=5:55000

# Multicast groups for the VXLAN interface. When configured, will
# enable sending all broadcast traffic to this multicast group. Comma
separated
# list of min:max ranges of multicast IP's
# NOTE: must be a valid multicast IP, invalid IP's will be discarded
mcast_ranges=225.1.1.1:225.1.1.2

# Security groups
[securitygroup]
enable_security_group = True
enable_ipset = True

--snip--


and my linuxbridge_agent.ini:
--snip--
# Linux bridge agent physical interface mappings
[linux_bridge]

physical_interface_mappings = physeth1:eth11

# Linux bridge agent VXLAN networks
[vxlan]

enable_vxlan = True
vxlan_group =
# VXLAN local tunnel endpoint
local_ip = 10.96.2.141
l2_population = True


# Agent
[agent]
prevent_arp_spoofing = False

# Security groups
[securitygroup]
firewall_driver =
neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = True

--snip--
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Nexus 9K - Nexus: Segment is an invalid type or not supported by this driver??

2016-02-16 Thread Anthony T CHOW
Michael,

Are you using Linux Bridge or OvS?

There is a bug report: Linux bridge does not work with cisco_nexus ml2 plugins

https://bugs.launchpad.net/networking-cisco/+bug/1421024

anthony.

From: Michael Gale [mailto:gale.mich...@gmail.com]
Sent: Tuesday, February 16, 2016 12:42 PM
To: openstack@lists.openstack.org
Subject: [Openstack] Nexus 9K - Nexus: Segment is an invalid type or not 
supported by this driver??

Hello,

I am having issues getting my Liberty environment working with VXLAN and 
N9K.

Currently I am getting the following errors in the logs on startup:
--snip--
2016-02-16 13:18:42.097 595 WARNING 
networking_cisco.plugins.ml2.drivers.cisco.nexus.mech_cisco_nexus 
[req-825a9891-0467-4958-86ca-c98486a7bf52 - - - - -] Nexus: Segment is an 
invalid type or not supported by this driv
er. Network type = vxlan Physical network = None. Event not processed.
--snip--

When trying to launch an instance:
--snip--
ERROR neutron.plugins.ml2.managers [req-d15ab080-7aa4-46e5-a5c3-b62a13c5646d 
d2b4e18cf27d41418845439f5d788523 eaa185709c79477fa1e3edfffa4e4c7f - - -] Failed 
to bind port 9b32f0e7-6b5b-4ced-84b7-262ea12e090c on host compute1

Nexus: Segment is None, Event not processed
--snip--

I am assuming I am missing something in the configuration file however I can't 
figure it out. Any help is greatly appreciated.

Thanks
Michael


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Nexus 9K - Nexus: Segment is an invalid type or not supported by this driver??

2016-02-16 Thread Michael Gale
Hello,

I am using Linux Bridge. I did see that bug report; however, it is marked
as a duplicate of https://bugs.launchpad.net/neutron/+bug/1433461, which
indicates the issue was fixed in Kilo, if I understand the report correctly.
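
One quick way to confirm whether the installed neutron tree actually contains
that fix (the path below is the usual Ubuntu location; adjust for your
packaging):

    # the fixed code passes port.id (the full UUID) rather than a truncated port_id
    grep -n "get_binding_levels(session, port" \
        /usr/lib/python2.7/dist-packages/neutron/plugins/ml2/plugin.py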

Michael

On Tue, Feb 16, 2016 at 1:53 PM, Anthony T CHOW <
anthony.c...@al-enterprise.com> wrote:

> Michael,
>
>
>
> Are you using Linux Bridge or OvS?
>
>
>
> There is a bug report: *Linux bridge does not work with cisco_nexus ml2
> plugins*
>
>
>
> https://bugs.launchpad.net/networking-cisco/+bug/1421024
>
>
>
> anthony.



-- 

“The Man who says he can, and the man who says he can not.. Are both
correct”
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Nexus 9K - Nexus: Segment is an invalid type or not supported by this driver??

2016-02-16 Thread Anthony T CHOW
Michael,

I am not a neutron expert, but this bug 1433461 does not seem to be a
duplicate of 1421024.

Bug 1433461 is for port binding while 1421024 is for Nexus switch not 
configured at all.

This is the fix for 1433461:

@@ -1337,7 +1337,7 @@ class Ml2Plugin(db_base_plugin_v2.NeutronDbPluginV2,
         updated_port = self._make_port_dict(port)
         network = self.get_network(context,
                                    original_port['network_id'])
-        levels = db.get_binding_levels(session, port_id,
+        levels = db.get_binding_levels(session, port.id,
                                        port.port_binding.host)
         mech_context = driver_context.PortContext(
             self, context, updated_port, network, port.port_binding,

It correctly passes the full port id to db.get_binding_levels, rather than
just the first 11 characters of the port id.

I am interested to find out too.

Anthony.

From: Michael Gale [mailto:gale.mich...@gmail.com]
Sent: Tuesday, February 16, 2016 12:57 PM
To: Anthony T CHOW
Cc: openstack@lists.openstack.org
Subject: Re: [Openstack] Nexus 9K - Nexus: Segment is an invalid type or not 
supported by this driver??

Hello,

I am using Linux Bridge, I did see that bug report however it is marked as 
a duplicate of: https://bugs.launchpad.net/neutron/+bug/1433461 which indicates 
the issue was fixed in kilo. If I understand the report correctly.

Michael


Re: [Openstack] Virtual Firewall Appliance

2016-02-16 Thread Martinx - ジェームズ
I don't think that you'll be able to do that in IceHouse, nor on Juno.

Only Kilo and Liberty have a native way to disable port_security per port.
Without it, OpenStack Neutron (and also Nova Network, I guess) will not allow
the firewall Instance to work correctly: it will not see any packets that are
not destined to it, and it will not be able to forward packets either, because
Neutron (and Nova Network) will drop the packets as soon as they leave the
firewall Instance.

I'm not aware of a nice solution for IceHouse...
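
A quick way to tell whether a given cloud exposes that capability at all (a
sketch; run it with your usual credentials):

    # the 'port-security' extension must be listed for per-port disabling to work
    neutron ext-list | grep -i "port-security"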

On 16 February 2016 at 06:26, Georgios Dimitrakakis 
wrote:

> Mark and Martinx thank you both for your suggestions.
>
> I had tried to build PFSense in the past but without success.
>
> Indeed my goal is to run the virtual firewall as an instance since I am on
> an older OpenStack version (IceHouse) with nova-networking and therefore I
> cannot have control over the outgoing connections.
>
> Regards,
>
> G.
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] add an extra external network

2016-02-16 Thread Priyanka

Hi,

I have a multinode OpenStack Juno setup with VXLAN tunneling. I have an 
external network, ext-net, through which I assign floating IPs to the 
VMs. I have limited IPs in the external network subnet. I want to add an 
additional external network so that I can assign IPs from this new 
external network to the new VMs that I create. The VMs are attached 
to the same internal network demo-net and router demo-router. How can I 
do this?


Thanks,


Priyanka

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] add an extra external network

2016-02-16 Thread Erik McCormick
Is the additional IP block contiguous with the existing one, or at least in
the same neighborhood?

-Erik
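
If the new block is routable on the same external segment, a sketch of the
simplest option (the CIDR, allocation pool, and names below are illustrative)
is to add a second subnet to the existing ext-net instead of creating a whole
new external network:

    neutron subnet-create ext-net 203.0.113.0/24 --name ext-subnet-2 \
        --disable-dhcp --gateway 203.0.113.1 \
        --allocation-pool start=203.0.113.10,end=203.0.113.200

Floating IPs should then be allocatable from either subnet; if the block is on
a different segment, a separate external network (and router gateway) would be
needed instead.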
On Feb 17, 2016 12:06 AM, "Priyanka"  wrote:

> Hi,
>
> I have an multinode openstack juno setup with VXLAN tunneling. I have an
> external network ext-net through which I assign  floating IPs to the VMs. I
> have limited IPs in the external network subnet. I want to assign an
> additional external network so that I can assign the IPs from this new
> external network to the new VMs that I create. The VMs are attached to the
> same internal network demo-net and router demo-router.
>
> Thanks,
>
>
> Priyanka
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack