Btw, after enabling:
sql_dbpool_enable=True
The timeout issues go away.
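For anyone reproducing this, a minimal sketch of where that flag goes, assuming
the Grizzly-era layout where the DB options live in the plugin ini under a
[DATABASE] section (the file path, sql_connection value and pool sizes below are
illustrative assumptions; adjust to your deployment):

    # e.g. /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
    [DATABASE]
    sql_connection = mysql://quantum:secret@127.0.0.1/ovs_quantum
    sql_dbpool_enable = True
    # optional pool tuning, values purely illustrative
    sql_min_pool_size = 1
    sql_max_pool_size = 5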
On Tue, Mar 26, 2013 at 1:52 PM, Aaron Rosen wrote:
Nope,
Here's the full exception in a more readable way from nova-api (no errors
on the quantum-server side):
2013-03-26 13:46:13.422 DEBUG nova.compute.api
[req-f6710ce4-26f1-4a64-839b-3868719f16c4 admin demo] Searching by:
{'deleted': False, 'project_id': u'756c1f42b59743d694a8cc7501ce53b3'} fr
Is db_pooling in Quantum enabled?
On Mar 26, 2013, at 4:44 PM, Aaron Rosen wrote:
2013-03-26 13:25:01.268 ERROR nova.scheduler.filter_scheduler
[req-5240e1e5-448f-4b96-8cc6-5021a78afc1d admin demo] [instance:
37fb6bae-70e3-4e6f-951c-b6e05368c729] Error from last host: arosen-desktop
(node arosen-desktop): [u'Traceback (most recent call last):\n', u' File
"/opt/stack/nova/nova/c
Thanks Sumit for reporting the libvirt XML!
I asked the original reporter of the bug the same thing.
If the XML has two interfaces, this means that two ports are present
in network_info - which is produced by _allocate_network.
In that case we can exclude the re-scheduling issue, as the problem
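A quick way to confirm the duplicate-port theory from the Quantum side: a
sketch assuming python-quantumclient v2 is installed; the credentials are
illustrative, and the device_id is the instance from the scheduler error above.

    from quantumclient.v2_0 import client

    qc = client.Client(username='admin',
                       password='secret',          # illustrative credentials
                       tenant_name='demo',
                       auth_url='http://127.0.0.1:5000/v2.0/')

    # list the ports bound to the affected instance
    ports = qc.list_ports(device_id='37fb6bae-70e3-4e6f-951c-b6e05368c729')['ports']
    print len(ports)   # more than 1 means the VM really does own duplicate ports
    for p in ports:
        print p['id'], [ip['ip_address'] for ip in p['fixed_ips']]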
We saw this issue in Grizzly as well. I investigated the Quantum logs
and did not find anything suspicious. The VM actually does get two
interfaces in this case, so it seemed like some race condition on the
nova side:
Thanks,
~Sumit.
On Tue, Mar 26, 2013 at 10:24 AM, Salvato
The reschedule process is apparently safe (at least from my experience).
I'm not sure whether the fact that the IPs are sequential points to a
different problem, as the lp answer includes a case where the
duplicated addresses are not sequential.
Also, the script that is launching these VMs might
On Tue, Mar 26, 2013 at 9:36 AM, Gary Kotton wrote:
Hi,
I have seen something like this with stable folsom. We have yet to be
able to reproduce it. In our setup we saw that there were timeouts with
the quantum service. In addition to this we had 2 compute nodes. My gut
feeling was that one of the nodes had a failure and the scheduler selects
ano
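If the timeouts here are on the nova-to-quantum path, the knob that usually
matters (to the best of my recollection of Folsom/Grizzly nova; treat the exact
option names and values as assumptions) is the quantum client timeout in
nova.conf:

    [DEFAULT]
    quantum_url = http://127.0.0.1:9696
    quantum_url_timeout = 30    # seconds; raising this is a mitigation, not a fix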
This is interesting. I'll be in customer meetings and flying for the next
few hours, so I thought I'd send it out in case anyone else has time to
investigate first.
https://bugs.launchpad.net/quantum/+bug/1160442
For details, see the associated question:
https://answers.launchpad.net/quantum/+qu