Public bug reported:
In large-scale environments, instances can fail to get their metadata.
Tests were performed in a 100-compute-node environment creating 4000
VMs. 15-20 VMs will fail all 20 metadata request attempts. This has
been reproduced multiple times with similar results. All of the v
scale.
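For context, a minimal sketch of the kind of in-guest probe that exercises this path, assuming the standard EC2-compatible metadata endpoint; the attempt count mirrors the 20 attempts mentioned above, and the retry delay is an assumption rather than part of the actual test harness:

import time
import urllib.request

METADATA_URL = "http://169.254.169.254/latest/meta-data/instance-id"
ATTEMPTS = 20      # mirrors the 20 request attempts described above
RETRY_DELAY = 5    # seconds between attempts (assumption)

def fetch_instance_id():
    """Return the instance id, or None if every attempt fails."""
    for attempt in range(1, ATTEMPTS + 1):
        try:
            with urllib.request.urlopen(METADATA_URL, timeout=10) as resp:
                return resp.read().decode()
        except Exception as exc:
            print("attempt %d failed: %s" % (attempt, exc))
            time.sleep(RETRY_DELAY)
    return None

if __name__ == "__main__":
    instance_id = fetch_instance_id()
    print("all %d attempts failed" % ATTEMPTS if instance_id is None
          else "instance-id: %s" % instance_id)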
** Affects: neutron
Importance: Undecided
Assignee: Ed Bak (ed-bak2)
Status: New
** Changed in: neutron
Assignee: (unassigned) => Ed Bak (ed-bak2)
** Changed in: neutron
Status: New => Invalid
https://bugs.launchpad.net/bugs/1422890
Title:
get_l3_agent_candidates returning incorrect agents for dvr_snat
router
Public bug reported:
get_l3_agent_candidates is returning incorrect l3_agents for the
dvr_snat router case. This allows l3-agent-router-add to incorrectly
succeed when attempting to add a router to a compute node.
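For illustration, a minimal sketch of the filtering one would expect for the dvr_snat case, assuming each agent record exposes an agent_mode of legacy, dvr or dvr_snat in its configurations dict; the helper and the sample data are illustrative, not the actual get_l3_agent_candidates code:

def snat_candidates(l3_agents):
    """Keep only agents that can host the SNAT part of a DVR router.

    L3 agents on compute nodes report agent_mode 'dvr' and should not be
    returned as candidates; only 'dvr_snat' agents qualify.  This mirrors
    the expected behaviour, not the buggy code path described above.
    """
    return [agent for agent in l3_agents
            if agent.get("configurations", {}).get("agent_mode") == "dvr_snat"]

# Hypothetical agent records as they might appear via the agents API.
agents = [
    {"host": "network-node-1", "configurations": {"agent_mode": "dvr_snat"}},
    {"host": "compute-node-7", "configurations": {"agent_mode": "dvr"}},
]

print([a["host"] for a in snat_candidates(agents)])
# ['network-node-1'] -- so an l3-agent-router-add targeting the agent on
# compute-node-7 should be rejected rather than succeed.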
** Affects: neutron
Importance: Undecided
Assignee: Ed Bak (ed-bak2)
ports is the biggest
contributor to the poor scalability.
** Affects: neutron
Importance: Undecided
Assignee: Ed Bak (ed-bak2)
Status: New
** Changed in: neutron
Assignee: (unassigned) => Ed Bak (ed-bak2)
Public bug reported:
With DVR enabled, performing a 'neutron router-gateway-clear' creates a
router namespace and a binding on every compute node.
root@Linux:~# neutron router-create router1
Created a new router:
+-------+-------+
| Field | Value |
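A rough reproduction sketch driving the same CLI steps from Python; the external network name ext-net is an assumption about the environment, and the final namespace listing has to be run on each compute node to see the unexpected qrouter namespaces and bindings:

import subprocess

def run(cmd):
    print("+ %s" % cmd)
    return subprocess.check_output(cmd, shell=True, text=True)

# Create a router, then set and clear its external gateway.
# 'ext-net' is an assumed external network name.
run("neutron router-create router1")
run("neutron router-gateway-set router1 ext-net")
run("neutron router-gateway-clear router1")

# Run this part on each compute node: after the gateway-clear, a
# qrouter-<router-id> namespace (and an L3 agent binding) shows up on
# every compute node, not just where the router is actually needed.
print(run("ip netns | grep qrouter || true"))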
Public bug reported:
create_port can fail with a Lock wait timeout error.
The transaction performed in create_port makes a call to
_process_port_bindings, which then calls dvr_update_router_addvm. The
notification made within dvr_update_router_addvm can hang. The
transaction lock is held through
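In other words, an RPC notification is issued while the create_port database transaction is still open, so its row locks are held for as long as the notification blocks. A minimal sketch of that pattern, and one way to restructure it, using stand-in names (FakeSession, notify_l3_agents) rather than the real plugin internals:

import contextlib

class FakeSession:
    """Stand-in for the SQLAlchemy session used by the plugin."""
    @contextlib.contextmanager
    def begin(self, subtransactions=True):
        print("BEGIN   -- port rows written, row locks taken")
        yield
        print("COMMIT  -- row locks released")

session = FakeSession()

def notify_l3_agents(port_id):
    # Stand-in for the dvr_update_router_addvm notification.  In the bug,
    # this call can block; while it blocks, the surrounding transaction
    # keeps its locks and concurrent create_port calls time out with
    # "Lock wait timeout exceeded".
    print("rpc notify for port %s" % port_id)

def create_port_problematic(port_id):
    with session.begin(subtransactions=True):
        # ... write the port, process port bindings ...
        notify_l3_agents(port_id)    # RPC while the DB lock is still held

def create_port_restructured(port_id):
    with session.begin(subtransactions=True):
        pass                         # DB work only
    notify_l3_agents(port_id)        # notify after commit, locks released

create_port_restructured("port-1")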
Public bug reported:
When deleting a VM, port_delete sometimes fails with a
RouterNotHostedByL3Agent exception. This error is triggered by a script
which boots a VM, associates a floating IP, tests that the VM is
pingable, disassociates the floating IP, and then deletes the VM. The
following stack trace ha
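A sketch of the kind of script described, driving the nova and neutron CLIs; the image, flavor, network id and floating IP values are placeholders, and the wait for the VM to become ACTIVE is deliberately crude:

import subprocess
import time

def run(cmd):
    print("+ %s" % cmd)
    return subprocess.check_output(cmd, shell=True, text=True)

VM = "fip-test-vm"
NET_ID = "00000000-0000-0000-0000-000000000000"   # placeholder tenant net id
FIP = "203.0.113.10"    # placeholder: address returned by floatingip-create

# Boot a VM; image, flavor and network are placeholders for the environment.
run("nova boot --image cirros --flavor m1.tiny --nic net-id=%s %s" % (NET_ID, VM))
time.sleep(30)          # crude wait for the VM to become ACTIVE

# Associate a floating IP, verify reachability, then tear everything down.
run("neutron floatingip-create ext-net")
run("nova floating-ip-associate %s %s" % (VM, FIP))
run("ping -c 3 %s" % FIP)
run("nova floating-ip-disassociate %s %s" % (VM, FIP))
# Deleting the VM triggers the port delete that intermittently raises
# RouterNotHostedByL3Agent.
run("nova delete %s" % VM)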
etwork-add" and
observing that the DHCP port IP address will change.
** Affects: neutron
Importance: Undecided
Assignee: Ed Bak (ed-bak2)
Status: New
** Changed in: neutron
Assignee: (unassigned) => Ed Bak (ed-bak2)
Public bug reported:
Instances appear to be pingable for a short time after a floating IP is
associated, even though there is no ingress ICMP security group rule. A
tcpdump of the instance's tap device shows that the instance isn't
actually responding to the ping. It appears that the router gateway
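A sketch of that tcpdump check, with placeholder values for the floating IP and the tap device name; in practice the ping comes from an external host while the capture runs (as root) on the compute node hosting the instance:

import subprocess

FLOATING_IP = "203.0.113.10"    # placeholder floating IP
TAP_DEV = "tapXXXXXXXX-XX"      # placeholder tap device for the instance port

# Capture ICMP on the instance's tap device while the floating IP is pinged.
# If ping replies come back but nothing shows up in this capture, the replies
# are coming from something other than the instance.
capture = subprocess.Popen(
    ["timeout", "15", "tcpdump", "-n", "-i", TAP_DEV, "icmp"],
    stdout=subprocess.PIPE, text=True)
subprocess.call(["ping", "-c", "5", FLOATING_IP])
print(capture.communicate()[0])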
pool.spawn(self.safe_configure_dhcp_for_network, network)
pool.waitall()
LOG.info(_('Synchronizing state complete'))
** Affects: neutron
Importance: Undecided
Assignee: Ed Bak (ed-bak2)
Status: New
** Changed in: neutron
Assignee: (unassigned) => Ed Bak (ed-bak2)