The bug for the Windows timezone issues is bug 1231254.
--
https://bugs.launchpad.net/bugs/1026621
Title:
nova-network gets release_fixed_ip events from someplace, but the
database still keeps them associated
Hi! There seem to be two issues here in the one bug -- DHCP refreshes
are causing problems, and Windows instances are causing extra pain
because of how they handle timezones, which triggers additional DHCP
refreshes. I'll talk about each of those separately.
Windows timezones
=================
I'm sorry to hear y
** Tags added: libvirt
--
https://bugs.launchpad.net/bugs/1026621
Title:
nova-network gets release_fixed_ip events from someplace, but the
database still keeps them associated
** Changed in: nova
Assignee: Michael Still (mikalstill) => (unassigned)
** Changed in: nova
Status: In Progress => Triaged
--
Does the console log file exist at all?
** Summary changed:
- empty console log output with grizzley on centOS distribution
+ empty console log output with grizzly on centOS distribution
** Changed in: nova
Status: New => Incomplete
--
** Changed in: nova
Assignee: (unassigned) => Michael Still (mikalstill)
--
https://bugs.launchpad.net/bugs/832507
Title:
console.log grows indefinitely
** Changed in: nova
Status: New => Confirmed
** Changed in: nova
Importance: Undecided => Critical
--
https://bugs.launchpad.net/bugs/1182624
Title:
Uncached ins
@Matthew -- are you still working on this one?
--
https://bugs.launchpad.net/bugs/1155458
Title:
500 response when trying to create a server from a deleted image
@Adam -- can we therefore remove the upstream tasks from this bug?
--
https://bugs.launchpad.net/bugs/1158563
Title:
After grizzly upgrade, EC2 API requests fail:Could not find: credential
** Tags added: ec2
--
https://bugs.launchpad.net/bugs/1158563
Title:
After grizzly upgrade, EC2 API requests fail:Could not find:
credential
** Changed in: nova
Assignee: (unassigned) => Matthew Sherborne (msherborne+openstack)
** Changed in: nova
Status: New => Triaged
** Changed in: nova
Importance: Undecided => Low
--
Closing due to lack of activity.
** Changed in: nova
Status: Confirmed => Won't Fix
** Changed in: nova
Importance: Low => Undecided
--
nova-volumes is gone now, so this is just a cinder bug.
** No longer affects: nova
--
https://bugs.launchpad.net/bugs/1028718
Title:
nova volumes are inappropriately cling
Yes, if there are more device files than that they will now be used as
well.
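As a sketch of what enumerating the available device files looks like
(illustrative only, not the actual change -- it assumes the nbd module
exposes nodes as /dev/nbd0, /dev/nbd1, ...):

    import glob
    import re

    def available_nbd_devices():
        # Sort numerically so /dev/nbd10 comes after /dev/nbd9.
        devices = glob.glob('/dev/nbd[0-9]*')
        return sorted(devices,
                      key=lambda d: int(re.search(r'\d+$', d).group(0)))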
--
https://bugs.launchpad.net/bugs/861504
Title:
nova-compute-lxc limited by available nbd devices
It certainly seems like we should only send the last N lines of the
console to the user (although that might be computationally expensive to
generate on such a large file). That's a separate bug though I suspect.
I've filed bug 1081436 for that.
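(For what it's worth, a "last N lines" read doesn't have to scan the
whole file -- seeking backwards in chunks keeps the cost roughly
proportional to N. An illustrative sketch only, not the eventual fix:)

    # Read the last n lines of a possibly huge file by seeking
    # backwards in fixed-size chunks until enough newlines are seen.
    # Returns bytes; callers can decode as needed.
    def tail(path, n=50, chunk=4096):
        with open(path, 'rb') as f:
            f.seek(0, 2)              # jump to the end of the file
            remaining = f.tell()
            data = b''
            while remaining > 0 and data.count(b'\n') <= n:
                step = min(chunk, remaining)
                remaining -= step
                f.seek(remaining)
                data = f.read(step) + data
        return b'\n'.join(data.splitlines()[-n:])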
--
What release was this canonistack region running at the time the problem
was seen?
** Changed in: nova
Status: New => Incomplete
--
** Changed in: nova
Importance: Undecided => High
** Changed in: nova/essex
Importance: Undecided => High
** Changed in: nova/essex
Importance: High => Medium
** Changed in: nova/folsom
Importance: Undecided => High
--
Upstream has chosen not to backport this fix to essex. Can we please
consider carrying this patch ourselves?
--
https://bugs.launchpad.net/bugs/1062314
Title:
do_refresh_se
** Also affects: nova
Importance: Undecided
Status: New
** Changed in: nova
Assignee: (unassigned) => Michael Still (mikalstill)
--
I think the issue here is that IptablesFirewallDriver.instance_rules()
in nova/virt/firewall.py is calling get_instance_nw_info(), which
causes RPCs to be fired off _while_still_holding_the_iptables_lock_. I
suspect that the RPCs need to happen outside the lock.
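In other words: do the RPC first, then take the lock. A minimal,
library-free sketch of the shape (nova's real code uses
utils.synchronized('iptables', external=True); the helper arguments
here are hypothetical stand-ins):

    import threading

    # Stand-in for nova's external iptables lock.
    _iptables_lock = threading.Lock()

    def refresh_filters(instance, get_nw_info, write_rules):
        # The RPC round-trip happens before the critical section...
        nw_info = get_nw_info(instance)
        # ...so the lock is never held across a network call.
        with _iptables_lock:
            write_rules(instance, nw_info)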
From yet more instrumented code:
A
Public bug reported:
This is a bug against stable essex. I have made no attempt to determine
if this is still a problem in Folsom at this stage.
During a sprint this week we took a nova region which was previously
relatively idle and started turning up large numbers of instances using
juju. We st
** Tags added: ops
--
https://bugs.launchpad.net/bugs/1059899
Title:
nova fails to configure dnsmasq, resulting in DNS timeouts in
instances
Public bug reported:
Hi. We have two regions configured in
/etc/openstack-dashboard/local_settings.py.
A user changed regions with the drop down, logged into the new region,
and started an instance. The instance started in the _previous_ region.
I'm not sure what debugging information to provide.
** Attachment added: "login-bug.png"
https://bugs.launchpad.net/bugs/1036918/+attachment/3261487/+files/login-bug.png
--
https://bugs.launchpad.net/bugs/1036918
Public bug reported:
I have two regions configured in
/etc/openstack-dashboard/local_settings.py. If I switch between them
in the drop down
at the top right of the screen, a login dialog appears at the bottom of
the page which is quite confusing. Some thoughts:
- credentials from the previous re
Another option would be to create a vhost for the dashboard.
** Tags added: canonistack
--
https://bugs.launchpad.net/bugs/1020313
Title:
openstack-dashboard hijacks th
Looking at the ec2 api code, this is pretty consistent for all these
calls -- you'll get the uuid (with keystone) or the project id (without
keystone) in all cases. This is consistent with the ec2 api
specification, which says this field should be:
"ownerId The ID of the AWS account that owns the
Bolke -- that's not currently the case. If you want this functionality
you should file a separate bug for it. However, with a shared instances
directory you're best off disabling the cache manager entirely at the
moment.
--
We have now observed this error on two testing clusters, so I don't
think this is because we're running precise-proposed in one any more.
--
Public bug reported:
Running proposed on one of our clusters, I see the following with
instances started via juju. I have been unable to re-create the problem
with raw ec2 commands.
[instance: d6c8c7e9-aa9d-461c-b7a5-92b993382bb0] Ensuring static filters
[instance: d6c8c7e9-aa9d-461c-b7a5-92b9933
Public bug reported:
Before we turned on keystone, euca-describe-instances used to include
the names of users' projects in its output. It now lists the uuid of the
tenant instead, which isn't super helpful when trying to work out who
owns what. Can we please translate this back to a human readable
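The translation being asked for is essentially a keystone lookup,
something like the sketch below (client call details from memory --
treat them as assumptions, not the exact fix):

    from keystoneclient.v2_0 import client as keystone_client

    def tenant_name(endpoint, admin_token, tenant_id):
        # Map a tenant uuid back to its human readable name, falling
        # back to the uuid if the tenant has been deleted.
        ks = keystone_client.Client(token=admin_token, endpoint=endpoint)
        try:
            return ks.tenants.get(tenant_id).name
        except Exception:
            return tenant_id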
Public bug reported:
The latest nova-compute adds a new command which needs sudo privs:
2012-01-13 22:24:27,385 DEBUG nova.utils [-] Running cmd (subprocess):
sudo guestmount --rw -a /var/lib/nova/instances/instance-000c/disk
-m /dev/sda1 /tmp/tmpXrLkev from (pid=2743) execute /data/backups-
Public bug reported:
Nova now requires a policy.json file in /etc/nova/.
Update the packages to install this file, which is in the source tree.
(This is an attempt to move https://bugs.launchpad.net/nova/+bug/915614
to the right place).
** Affects: nova (Ubuntu)
Importance: Undecided
I have just sent a patch for review which implements the _base cleanup
aspects of the blueprint. It's integrated into the nova compute manager,
as opposed to being a separate script.
https://review.openstack.org/#change,2902
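For those following along, the shape of the change is roughly this (a
simplified sketch, not the submitted patch -- the grace period and the
names are assumptions):

    import os
    import time

    GRACE_SECONDS = 24 * 3600  # assumed one-day safety margin

    def clean_unused_base_images(base_dir, referenced_paths):
        # Delete _base images that no local instance references and
        # that haven't been touched within the grace period.
        now = time.time()
        for name in os.listdir(base_dir):
            path = os.path.join(base_dir, name)
            if path in referenced_paths:
                continue
            if now - os.path.getmtime(path) > GRACE_SECONDS:
                os.unlink(path)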
--
Public bug reported:
The nova base instance directory $instances_path/_base is never cleaned
up. This caused one of my compute nodes to run out of disk recently,
even though a bunch of the images there were no longer in use. There
appear to be homebrew cleanup scripts online, such as
https://githu