The issue seems to be related to a neutron misconfiguration.
--> Unexpected vif_type=binding_failed
Please have a look at your neutron server config file in the network
node(s) and the l2 agent config files (ovs?). You should find additional
information there.
If this doesn't help, please prov
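(For reference, a sketch of where to look on a typical Ubuntu install of that era; exact paths are an assumption, not from the thread:

  /etc/neutron/neutron.conf
  /etc/neutron/plugins/ml2/ml2_conf.ini                  # ML2 / OVS agent config
  grep -i error /var/log/neutron/server.log              # neutron server log
  grep -i error /var/log/neutron/openvswitch-agent.log   # l2 agent log
)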
Hi Yngvi,
Can you please send us the nova-cpu logs?
On Mon, Jun 29, 2015 at 12:28 PM, Andreas Scheuring <
scheu...@linux.vnet.ibm.com> wrote:
> The issue seems to be related to a neutron misconfiguration.
> --> Unexpected vif_type=binding_failed
>
> Please have a look at your neutron server co
Hi Andreas
I've attached those files from a network node:
Ml2_conf.ini
Neutron.conf
L3_agent.ini
Nova-compute.log
I've setup 3 vlans on one interface and configured bonding
It works and I have good connection between the servers:
root@network2:/# cat /proc/net/vlan/config
VLAN Dev name| VLAN
The problem is the same, the Unexpected vif_type=binding_failed error, because of
which the instance failed to spawn.
On Mon, Jun 29, 2015 at 2:42 PM, Yngvi Páll Þorfinnsson
wrote:
> Hi Andreas
>
> I've attached those files from a network node:
>
> Ml2_conf.ini
> Neutron.conf
> L3_agent.ini
> Nova-compute.l
Hi,
:~# glance image-list
Error finding address for
http://controler:9292/v1/images/detail?sort_key=name&sort_dir=asc&limit=20:
('Connection aborted.', error(111, 'Connection refused'))
:~# tail -f /var/log/glance/glance-api.log
.
WARNING oslo_config.cfg [-] Option "userna
Hi
I should add to this information:
I'm using VLANs on the compute, network and swift nodes,
but not on the controller node.
I'm not sure if that causes problems?
Best regards
Yngvi
Hi Andreas
I've attached those files from a network node:
Ml2_conf.ini
Neutron.conf
L3_agent.ini
Nova-compute.lo
Hi
Sorry for all this spamming.
I have an ERROR in the log file
/var/log/nova-conductor.log
on the controller server.
2015-06-29 11:09:47.638 2326 ERROR nova.scheduler.utils
[req-483ca3ce-c41b-4342-8f47-ef993ca03d85 None] [instance:
871c6af2-1673-4eb0-94a1-1ad07eb77ce5] Error from last host: comp
Great, thanks Avishay and Maish.
On Sun, Jun 28, 2015 at 7:36 PM, Maish Saidel-Keesing
wrote:
> Even easier - automate it.
>
> youtube-dl -ci -f best --dateafter 20150518 --datebefore 20150529
> https://www.youtube.com/user/OpenStackFoundation/videos.
>
> Full explanation can be found here [1]
Yes, that's correct, this was from the network node config.
Now, I've attached log file from the neutron server (controller node)
/var/log/neutron/server.log
I've attached the configuration file from the compute node:
/etc/neutron/plugins/ml2/ml2_conf.ini
It's not the same configuration file from
This message is a result of some actions failing on the compute nodes.
Could you please provide the log of the neutron-server as well? It
should provide additional information why the binding failed. You should
find it on the network (!!) node or on the controller node (!!). In
addition the lo
Hello all,
I ran tests under the following settings to measure the IO performance of a MySQL
database. I used Sysbench as the client workload generator. I found that the
performance of MySQL (both resource utilization and application) has degraded
by more than 50% after switching from setting a)
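(For context, a typical Sysbench OLTP run of that era might look like the following; all hostnames and parameters here are hypothetical, not taken from the thread:

  # prepare a test table, then run the OLTP workload (sysbench 0.4.x syntax)
  sysbench --test=oltp --mysql-host=db-vm --mysql-user=sbtest --mysql-password=secret \
    --oltp-table-size=1000000 prepare
  sysbench --test=oltp --mysql-host=db-vm --mysql-user=sbtest --mysql-password=secret \
    --num-threads=8 --max-time=300 --max-requests=0 run
)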
Does your test use the network? In setting b), what neutron agent do you use for
networking?
OpenStack and libvirt are, at different levels, control plane. You should look
at the differences in the data plane.
On June 29, 2015 7:19:45 PM GMT+08:00, "Narayanan, Krishnaprasad"
wrote:
>Hello all,
>
>I ran
Hi,
I ran into a similar problem. Make sure that you include
[ml2_type_vlan]
tenant_network_types = vlan
network_vlan_ranges = physnet1:1501:1509
on your controller node (as the controller needs this info to make a proper
decision on which VLAN IDs are available for
tenant networks).
O
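(For the change to take effect, neutron-server has to be restarted; on Ubuntu of that era that would be something like, service name assumed:

  service neutron-server restart
)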
Thanks for your reply.
The workload is a client-server model where the two VMs are running on
different hosts. Yes, it involves the network, as they are on different hosts, but its
impact is negligible. The networking agent used is "neutron-openvswitch-agent"
on the compute node.
I will have a look
Hi
I'm now attaching the Open vSwitch logs from the compute node.
I.e.
/var/log/openvswitch/ovs-vswitchd.log
/var/log/openvswitch/ovsdb-server.log
Yngvi
-Original Message-
From: Andreas Scheuring [mailto:scheu...@linux.vnet.ibm.com]
Sent: 29. júní 2015 11:17
To: Yngvi Páll Þorfinnsson
C
Oh, and one other thing:
please do a
# neutron agent-list
and for each entry with agent_type == "Open vSwitch agent" do a
# neutron agent-show <agent-id>
and provide that information. This way we can see the actual running
configuration…
Uwe
Am 29.06.2015 um 13:01 schrieb Yngvi Páll Þorfi
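(A convenience sketch for the above; the awk field positions assume the default table output of neutron agent-list:

  neutron agent-list | awk -F'|' '/Open vSwitch agent/ {gsub(/ /,"",$2); print $2}' | \
    while read id; do neutron agent-show "$id"; done
)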
On 06/29/2015 07:19 AM, Narayanan, Krishnaprasad wrote:
Hello all,
I ran tests under the following settings to measure the IO performance
of a MySQL database. I used Sysbench as the client workload generator. I
found that the performance of MySQL (both resource utilization and
application) has deg
OK Uwe
On the controller node, this is present in the [ml2_type_vlan] section of
/etc/neutron/plugins/ml2/ml2_conf.ini:
[ml2_type_vlan]
tenant_network_types = vlan,gre
network_vlan_ranges = external:1101:2000
i.e. I added the line
tenant_network_types = vlan,gre
I rebooted the controller server, but I ge
Your configuration on host compute5 is missing the bridge mapping.
Am 29.06.2015 um 14:19 schrieb Yngvi Páll Þorfinnsson:
> OK Uwe
>
> On the controller node , this is present in the [ml2_type_vlan]
> /etc/neutron/plugins/ml2/ml2_conf.ini
>
> [ml2_type_vlan]
> tenant_network_types = vlan,gre
>
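(For illustration, a bridge mapping on the compute node would look something like this in the [ovs] section of the agent's ml2_conf.ini; the bridge name br-eth1 is hypothetical, the physical network name "external" is taken from the thread:

  [ovs]
  bridge_mappings = external:br-eth1

followed by a restart of the neutron OVS agent. As discussed below, with GRE tenant networks this mapping turned out not to be needed on the compute node.)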
On 06/29/2015 08:25 AM, Narayanan, Krishnaprasad wrote:
Hi Jay,
The MySQL installations on both VMs share the same version, which is "mysql
Ver 14.14 Distrib 5.5.40, for debian-linux-gnu (x86_64) using readline 6.3". The
my.cnf settings are the same on both VMs.
Please see my comm
Hi Jay,
The MySQL installations on both VMs share the same version, which is
"mysql Ver 14.14 Distrib 5.5.40, for debian-linux-gnu (x86_64) using readline
6.3". The my.cnf settings are the same on both VMs.
Regards,
Krishnaprasad
-Original Message-
From: Jay Pipes [mailto:ja
Oh, I think I see where your confusion comes from.
You have to think completely separately about the way internal (tenant)
networks and external (provider) networks are managed.
Internal (tenant) networks are managed by Neutron. External (provider) networks
are outside of the configuration a
Uwe,
according to the configuration files, Yngvi is using GRE networking.
So no bridge mapping should be required at all for spawning an
instance, right?
The VLAN tagging is done via a VLAN device on the bond. So in fact it's
a statically configured VLAN. OpenStack does GRE tunneling on this VLAN.
Y
Andreas,
Am 29.06.2015 um 15:07 schrieb Andreas Scheuring:
> Uwe,
> according to the configuration files, Yngvi is using GRE networking.
> So no bridge mapping should be required at all for spawning an
> instance, right?
You are probably right about that.
> The VLAN tagging is done via a VLAN de
You wouldn't want to configure br-ex on a compute host. br-ex is the bridge
that connects a network node to the outside provider
network.
If you wanted to use VLAN based tenant networks, then you would have to
configure a new, separate bridge. But that is not your case.
Please also make sure th
Correct; as the error messages indicated, get rid of the bridge mapping,
fine. Your agent is now running without error messages.
Could you please enable debug=true in your neutron.conf, in order to
also get the debug logs (restart neutron server and ovs-agent).
Can you trigger the start of anothe
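(That is, a sketch, in /etc/neutron/neutron.conf:

  [DEFAULT]
  debug = True

then restart; e.g. on Ubuntu, service names assumed:

  service neutron-server restart
  service neutron-plugin-openvswitch-agent restart
)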
Hi,
When live-migrating an instance with an attached volume, it seems that the QoS
associated with
the volume type is not applied on the target host. The correct iotune block is
in the target XML
and a reboot fixes things, but I guess that’s not the idea. Is that a known
issue? Anything I misse
Attempting to bind port 2bf4a49b-2ad6-4ead-a656-65814ad0724e on network
7a344656-815c-4116-b697-b52f9fdc6e4c
bind_port
/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/mech_agent.py:57
2015-06-29 14:28:55.924 5328 DEBUG
neutron.plugins.ml2.drivers.mech_agent
[req-9fe66e60-1a70-4ad6-b
OK,
This is the network list
root@controller2:/# neutron net-list
+----+------+---------+
| id | name | subnets |
+----+------+---------+
Yes, I'm migrating from a non-VLAN setup to a VLAN setup.
But the old setup was on another controller server, i.e. another database.
But I can easily drop this database and re-create it, yes ;-)
Best regards
Yngvi
-Original Message-
From: Uwe Sauter [mailto:uwe.sauter...@gmail.
Yes, it's strange, this error is now reported:
2015-06-29 16:03:38.779 5328 DEBUG neutron.plugins.ml2.drivers.mech_openvswitch
[req-60dac17f-5280-4e0a-988f-ee763df18632 None] Checking segment:
{'segmentation_id': 1102L, 'physical_network': u'external', 'id':
u'cf6489c4-7ed6-43dc-85aa-f4b8c6b501ca
If this is just a test setup, it might be worth dropping and recreating the neutron
database, then recreating the external and tenant
networks. This way you could get rid of any left-overs from before your switch.
Am 29.06.2015 um 18:06 schrieb Yngvi Páll Þorfinnsson:
> Yes it's strange, this error is no
Thank you, Andreas, for the information. I am not familiar with availability
zones. It is good to know we have this option.
Thanks,
Yang
> On Jun 26, 2015, at 9:07 AM, Andreas Scheuring
> wrote:
>
> One way would be to achieve this via "Availability zones". Just create 2
> host aggregates (and
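(A sketch of what that could look like with the nova CLI of that time; all names are made up:

  nova aggregate-create agg-rack1 az-rack1      # aggregate exposed as availability zone az-rack1
  nova aggregate-add-host agg-rack1 compute1
  nova boot --availability-zone az-rack1 --image <image> --flavor <flavor> vm1
)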
Thanks so much, James, for the detailed explanation. This all makes sense to me now.
Thanks,
Yang
On Jun 27, 2015, at 1:10 PM, James Denton
mailto:james.den...@rackspace.com>> wrote:
Hi Yang,
Another confusion I have is about network_vlan_ranges. Is this network VLAN id
range?
Yes, it is. But the r
Thank you, Uwe. Our provider network actually is untagged, but I did specify a
VLAN ID when I created our external network and everything still works. Will
this cause issues later on?
neutron net-create --provider:network_type=vlan --provider:segmentation_id= --provider:physical_network=physnet1 --r
Once you dropped the database, follow the installation instructions on
docs.openstack.org and create a new database for neutron, grant the
privileges and sync it. Then restart your neutron-server service and
recreate the provider and tenant networks.
You don't have to recreate the user, service an
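(A sketch of the database part, following the install guide; NEUTRON_DBPASS is a placeholder:

  mysql -u root -p
  DROP DATABASE neutron;
  CREATE DATABASE neutron;
  GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
  GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
)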
It depends on your switch… some drop tagged packets on an access port, others
allow tagged packets if the packet VLAN
ID equals the configured VLAN ID.
I'd reconfigure the provider network type to "flat" but that's personal taste.
You could also reconfigure the switch
port to be a trunking port
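(A sketch of the flat variant; the section names follow ML2 conventions and physnet1 is taken from earlier in the thread:

  [ml2]
  type_drivers = flat,vlan,gre

  [ml2_type_flat]
  flat_networks = physnet1

and then recreate the external network with:

  neutron net-create ext-net --provider:network_type=flat \
    --provider:physical_network=physnet1 --router:external=True
)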
Hi Uwe
No, I didn't drop the keystone ;-)
But is this the correct way to resync neutron?
# neutron-db-manage --config-file /etc/neutron/neutron.conf \
# --config-file /etc/neutron/plugins/ml2/ml2_plugin.ini
I mean, how many config files are necessary to have in the command?
best regards
Yngvi
---
Go to http://docs.openstack.org/ , select the OpenStack version, then the
installation guide
for your distribution and navigate to
6. Add a networking component
- OpenStack Networking (neutron)
- Install and configure controller node
and follow the database related stuff:
- create DB
- grant privileges
- sync the database
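(I.e., the sync step would be something like the following; the upgrade target depends on the release, the Juno guide used "upgrade juno":

  su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
)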
Hi Uwe
I just ran this a few minutes ago, i.e. did the "population of the DB" again,
according to the manual.
Shouldn't this be enough?
root@controller2:/# su -s /bin/sh -c "neutron-db-manage --config-file
/etc/neutron/neutron.conf \
> --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade jun
Yes. Just keep in mind that if you extend your configuration with a new config
file, then you must change your init script / unit file to reference that file.
And it would probably be a good idea to re-sync the DB with that additional
file as an option. Or you keep your plugin configuration in a
Thanks a lot for your effort Uwe ;-)
It's really helpful!
Now, I keep creating instances, and have the same error.
I still get a strange comment in the
Neutron server.log file when I try to create an instance:
2015-06-29 21:11:11.576 1960 DEBUG neutron.plugins.ml2.drivers.mech_openvswitch
[req
Can you check for ERRORs in:
Network node: neutron server log, neutron openvswitch agent log, openvswitch log
Nova controller node: nova api log, nova scheduler log
Compute node: nova compute log, neutron openvswitch agent log, openvswitch log
Also please list again neutron agent-show for the diff
OK, I only found one fresh error.
Compute node, nova-compute.log (as usual, when I create an instance):
grep ERR nova-compute.log
2015-06-29 21:11:11.801 4166 ERROR nova.compute.manager [-] [instance:
af901a2b-2462-4c19-b1f1-237371fd8177] Instance failed to spawn
I've attached the neutron agent-sh
But maybe I'm just having some wrong ideas about this whole thing.
I created the following networks (i.e. the external network, and then a
subnet for it):
neutron net-create ext_net1101 --provider:network_type vlan
--provider:physical_network external --provider:segmentation_id 1101
--router
In fact, I don't think I'll need more than one "external network".
So, am I on the wrong path here, i.e. when I'm configuring the external network
as a VLAN?
Best regards
Yngvi
From: Yngvi Páll Þorfinnsson
Sent: 29. júní 2015 21:57
To: Uwe Sauter; YANG LI
Cc: openstack@lists.openstack.org
Subject
As usual: it depends. But first things first: is there a reason why you didn't
configure your external network as shared?
Then to the question about several provider networks. Depending on your
company's network it can totally make sense to have different networks or just
one.
Think of differe
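(For illustration, creating the external network as shared would be something like the following, reusing the flags from earlier in the thread:

  neutron net-create ext_net1101 --shared --provider:network_type vlan \
    --provider:physical_network external --provider:segmentation_id 1101 \
    --router:external=True
)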
Network node:
root@network2:/# iptables -L -nv --line-numbers
Chain INPUT (policy ACCEPT 1286 packets, 351K bytes)
num  pkts bytes target                 prot opt in out source    destination
1    1171  338K neutron-openvswi-INPUT all  --  *  *   0.0.0.0/0 0.0.0.0/0
OK, I was just following the manual when creating the external network.
I really don't know what it would imply to create it as shared?
From: Uwe Sauter [mailto:uwe.sauter...@gmail.com]
Sent: 29. júní 2015 22:43
To: Yngvi Páll Þorfinnsson; YANG LI
Cc: openstack@lists.openstack.org
Subject: RE: [O
I'm not sure if there is something wrong but on both hosts I don't see any rule
that accepts GRE traffic. You need to allow GRE traffic on your internal
network so that tunneling can actually work. Without that it's like having your
network configured but not plugged in...
Am 30. Juni 2015 00:5
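(A sketch of such a rule, to be added on both the network and compute nodes; the tunnel subnet 172.22.15.0/24 is inferred from addresses later in the thread, and the rule has to be made persistent via your distribution's mechanism:

  iptables -I INPUT -p gre -j ACCEPT
  # or, restricted to the tunnel network:
  iptables -I INPUT -p gre -s 172.22.15.0/24 -j ACCEPT
)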
One more thing. Please provide
iptables -L -nv --line-numbers
for network and compute nodes.
Am 30. Juni 2015 00:25:45 MESZ, schrieb "Yngvi Páll Þorfinnsson"
:
>In fact, I don't think I'll need more than one "external network"
>so, am I on the wrong path here, i.e. when I'm configuring the exter
Shared means that tenants can create a router between their own network and the
external one to
A) allow instances to access the internet via NAT'ing
B) allow instances to be reached when associated with a floating IP that
belongs to the external network
Am 30. Juni 2015 00:52:01 MESZ, schrieb "Yn
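(On an existing network, that would be something like the following; the attribute syntax of the generic net-update parser is an assumption:

  neutron net-update ext_net1101 --shared=True
)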
Hm, I'm running out of ideas. Can you run those two commands to verify that GRE
traffic can pass the firewalls:
Network node: nmap -sO <compute node IP>
Compute node: nmap -sO <network node IP>
In both cases, that's a big o, not a zero.
Am 30. Juni 2015 01:07:04 MESZ, schrieb "Yngvi Páll Þorfinnsson"
:
>OK, so I ran the comma
Hi,
It does not work ...
ON network node:
root@network2:/# nmap -sO 172.22.14.17
Starting Nmap 6.40 ( http://nmap.org ) at 2015-06-29 23:19 GMT
Warning: 172.22.14.17 giving up on port because retransmission cap hit (10).
root@network2:/# nmap -sO 172.22.15.17
Starting Nmap 6.40 ( http://nmap.or
Well, this one finally finished.
Should I use the tunnel or the mgmt IP?
root@compute5:/# nmap -sO 172.22.15.14
Starting Nmap 6.40 ( http://nmap.org ) at 2015-06-29 23:21 GMT
Warning: 172.22.15.14 giving up on port because retransmission cap hit (10).
Nmap scan report for 172.22.15.14
Host is up (0.0
It's taking up to 5 minutes to finish
root@compute5:/# nmap -sO 172.22.14.14
Starting Nmap 6.40 ( http://nmap.org ) at 2015-06-29 23:28 GMT
Warning: 172.22.14.14 giving up on port because retransmission cap hit (10).
Nmap scan report for network2.siminn.is (172.22.14.14)
Host is up (0.91s lat
root@network2:/# nmap -sO compute5
Starting Nmap 6.40 ( http://nmap.org ) at 2015-06-29 23:29 GMT
Nmap scan report for compute5 (172.22.14.17)
Host is up (0.00014s latency).
rDNS record for 172.22.14.17: compute5.siminn.is
Not shown: 245 closed protocols
PROTOCOL STATE SERVICE
1ope
You should use the same IP addresses that are configured for the tunnel. It is
expected that this scan takes some time as it iterates over all available
network protocols.
It is a bit concerning that GRE is listed as filtered. But now we are moving
into uncertain/unknown terrain. Perhaps somebo
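(One way to look further, a sketch; the interface name eth1 is hypothetical:

  # on the compute node, while spawning an instance, watch for GRE packets:
  tcpdump -ni eth1 ip proto gre
)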
I'm attaching a file,
neutron-server-log.
It's the logging while creating an instance (which fails),
in case it gives any lead on this problem.
-Original Message-
From: Uwe Sauter [mailto:uwe.sauter...@gmail.com]
Sent: 29. júní 2015 23:49
To: Yngvi Páll Þorfinnsson; YANG LI
Cc: openstack@lists.openstac
Now it all makes absolute sense:
+---+--+
| Field | Value|
+---+--+
| admin_state_up| True