Public bug reported:
I enabled L3 HA in the Neutron configuration, and I regularly see the
following log message in l3_agent.log:
2015-10-14 22:30:16.397 21460 ERROR neutron.agent.linux.external_process [-]
default-service for router with uuid 59de181e-8f02-470d-80f6-cb9f0d46f78b not
found. The process s
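For reference, this is how L3 HA is enabled on the server side; a minimal neutron.conf sketch (the two agent-count options are shown with their usual defaults and are not taken from this report):

[DEFAULT]
l3_ha = True
max_l3_agents_per_router = 3
min_l3_agents_per_router = 2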
Public bug reported:
The configuration
neutron.conf:
[DEFAULT]
l3_ha = True
service_plugins = router
l3_agent.ini:
[DEFAULT]
...
agent_mode = legacy
...
The current code decides whether to call "get_dvr_sync_data" based on
whether the plugin supports DVR; it would be better to check the agent
mode here:
ht
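A minimal sketch of the suggested check, deciding from the agent's reported mode rather than from plugin capability; the wrapper function is illustrative, and 'agent' stands for a dict of what the L3 agent reports, with a parsed 'configurations' mapping carrying 'agent_mode':

def get_router_sync_data(plugin, context, host, agent, router_ids=None):
    # Prefer the agent's own mode over "does the plugin support DVR":
    # a legacy agent behind a DVR-capable plugin needs legacy sync data.
    agent_mode = agent.get('configurations', {}).get('agent_mode', 'legacy')
    if agent_mode in ('dvr', 'dvr_snat'):
        return plugin.get_dvr_sync_data(context, host, agent,
                                        router_ids=router_ids)
    return plugin.get_sync_data(context, router_ids=router_ids)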
Public bug reported:
When I rebooted the operating system that hosted the Linuxbridge
agent, the agent tried to recreate all the bridges according to the
updated tap devices.
However, I found that some bridges were not created, along with this
odd log message:
In l3-agent.log: T
Public bug reported:
The configuration:
/etc/neutron/neutron.conf
[DEFAULT]
l3_ha = True
/etc/neutron/l3_agent.ini
[DEFAULT]
enable_metadata_proxy = False
There was an error in the l3_agent log:
2015-10-27 01:18:46.833 5494 TRACE neutron.agent.l3.agent
self.metadata_driv
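A self-contained sketch of the guard the agent arguably needs; all class names here are illustrative stand-ins, not the real neutron classes, but the pattern is the point: never touch the metadata driver unless enable_metadata_proxy is set:

class FakeConf(object):
    enable_metadata_proxy = False

class MetadataDriverSketch(object):
    def destroy_proxy(self, router_id):
        print('stopping metadata proxy for %s' % router_id)

class L3AgentSketch(object):
    def __init__(self, conf):
        self.conf = conf
        # Only build the driver when the proxy is enabled ...
        self.metadata_driver = (MetadataDriverSketch()
                                if conf.enable_metadata_proxy else None)

    def cleanup_router(self, router_id):
        # ... and guard every use, so enable_metadata_proxy = False
        # cannot raise AttributeError in teardown paths.
        if self.metadata_driver is not None:
            self.metadata_driver.destroy_proxy(router_id)

L3AgentSketch(FakeConf()).cleanup_router('router-1')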
Public bug reported:
Currently, when the Linuxbridge agent needs to find the bridge for a
tap device, it iterates over all the bridges and all the tap devices
on each bridge to check which bridge the tap device is bound to. This
takes too much time. Code:
https://github.com/openstack/neutron/blob/master/neutron/plugi
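A sketch of a constant-time alternative: on Linux, a bridge port exposes its bridge through sysfs, so the agent can resolve the bridge directly instead of scanning every bridge (the path layout assumed here is the standard Linux bridge sysfs layout):

import os

def bridge_for_tap(tap_name):
    # /sys/class/net/<dev>/brport/bridge is a symlink to the sysfs
    # directory of the bridge the device is enslaved to.
    link = '/sys/class/net/%s/brport/bridge' % tap_name
    try:
        return os.path.basename(os.readlink(link))
    except OSError:
        # Device does not exist or is not attached to any bridge.
        return None

print(bridge_for_tap('tapXXXXXXXX'))  # e.g. 'brq<net-id>' or None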
Public bug reported:
This log message:
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py#L975-L977
would be better at "WARNING" or "ERROR" level; currently it is only
"DEBUG".
** Affects: neutron
Importance: Undecided
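A minimal sketch of the proposed change, assuming oslo.log as used throughout the neutron tree; the message text is paraphrased from the linked lines, not copied:

from oslo_log import log as logging

LOG = logging.getLogger(__name__)

def report_unknown_tap(tap_device_name):
    # Was LOG.debug in the linked code; WARNING makes the condition
    # visible to operators running with default log levels.
    LOG.warning("Unable to find a bridge for tap device %s",
                tap_device_name)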
Public bug reported:
This issue introduces a performance problem for the L2 agents (both
LinuxBridge and OVS) on a compute node when there are lots of networks
and instances on that compute node (e.g. 500 instances).
The performance problem shows up in two aspects:
1. When LinuxBridge age
Public bug reported:
When we try to boot multiple instances from volume (with a large
image source) at the same time, we usually get a block device
allocation error, as shown in nova-compute.log:
2015-03-30 23:22:46.920 6445 WARNING nova.compute.manager [-] Volume id:
551ea616-e1c4-4ef2-9bf3-b0
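A common mitigation, assuming the standard Nova options for waiting on volume allocation (the values below are illustrative, not taken from this report), is to give nova-compute more time before it gives up, in nova.conf:

[DEFAULT]
block_device_allocate_retries = 120
block_device_allocate_retries_interval = 3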
Public bug reported:
When we use "lioadm" as the Cinder iscsi_helper, it enables
authentication for each volume at this line:
https://github.com/openstack/cinder/blob/master/cinder/cmd/rtstool.py#L56
But currently the VMware driver uses "Dynamic Discovery" to discover
the iSCSI target, which won't
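For reference, this is the cinder.conf setting that selects the code path above (a minimal excerpt; all other target options left at their defaults):

[DEFAULT]
iscsi_helper = lioadm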
Public bug reported:
When I tried to use a Kilo controller to manage Juno compute nodes,
the Juno nova-compute service started with the following two errors:
1. 2015-03-10 06:37:18.525 18900 TRACE nova.openstack.common.threadgroup
return self._update_available_resource(context, resources)
2015
Public bug reported:
When I attached a GPFS volume to an instance, the volume was
successfully attached, but there were some error logs in the
nova-compute log file, as below:
2014-12-22 21:52:10.863 13396 ERROR nova.openstack.common.threadgroup [-]
Unexpected error while runnin
2014-10-20 10:50:07.917 21540 TRACE oslo.messaging.rpc.dispatcher
self.f(*self.args, **self.kw)
2014-10-20 10:50:07.917 21540 TRACE oslo.messaging.rpc.dispatcher File
"/usr/lib/python2.6/site-packages/oslo/vmware/api.py", line 423, in _poll_task
2014-10-20 10:50:07.917 21540 TRACE osl