I was hitting problems with lxc on just normal package upgrades in vivid
earlier, but I was able to dpkg --configure -a this problem away and
finish out the release upgrade by hand (apt-get dist-upgrade (noop),
apt-get autoremove, reboot)
--
You received this bug notification because you are a me
Public bug reported:
A dist-upgrade failed in the maintainer scripts while trying to restart
the lxc networking, I think:
Reading package lists... Done
The following NEW packages will be installed:
linux-image-3.19.0-31-generic{a} linux-image-extra-3.19.0-31-generic{a}
linux-signed-image-3.1
Public bug reported:
When a MAAS cluster controller is configured to talk to a central region
controller, it should be trivial to do in one place and have everything
else fall out from that. Right now the "generator" setting is a URL
that includes the string "localhost".
I just had to run around
Public bug reported:
MAAS's separation of regions and clusters allows network admins to
centralise policy and interface while erecting management presence in
isolated networks. This feature is most useful in situations where the
firewall policy walls off clusters more strictly.
MAAS 1.5 seems to
I just filed bug 1327202 upon discovering that 0.4.1 is a beta release,
and I have encountered bugs as a result of using the packaged version
that do not appear in 0.4 proper. We should probably check to make sure
that we haven't seen any nova bugs that can be traced back to the pre-
release versi
Public bug reported:
While trying to make use of code that relied on suds 0.4, I hit a bug
that was reproducible in 0.4.1 but not in 0.4. It appears that pip
only carries 0.4, and upstream lists 0.4.1 as "beta".
It would appear that this is another upstream who uses the classic 1990s
Linux kerne
Getting rid of the '-' in the above 'su -' fixes this neatly for me. It
was caused partly by the /etc/login containing some bashisms, because I
never expected dash to be used as an interactive shell on that system
--
It would appear most likely that this is caused by the line that runs:
su - $OWNER -c "sa-update --gpghomedir /var/lib/spamassassin/sa-
update-keys --import /usr/share/spamassassin/GPG.KEY"
--
Public bug reported:
When upgrading my precise mail server to trusty, spamassassin errored out
the upgrade by running a login shell somewhere. I could tell it
happened because it ran /usr/games/fortune and tried to use some bash
login files sourced from the official ones:
Setting up spamassassin
In addition, this seemed to hit three compute nodes (the full set in a
test deployment) in relatively quick succession after a few days of
normal operation.
--
We just saw this with havana on precise.
ii  libvirt0          1.1.1-0ubuntu8.5~cloud0    library for interfacing with different virtualization systems
ii  nova-compute-kvm  1:2013.2.2-0ubuntu1~cloud
** Tags added: canonical-is
--
https://bugs.launchpad.net/bugs/834930
Title:
Need a way to manage SSH keys in a juju environment.
If anything, this looks related to bug #1178745
--
https://bugs.launchpad.net/bugs/1022612
Title:
private instance IPs can only reach public IPs in other regions, not
the
Joe Gordon: Are you unable to reproduce this? We found documentation of
this behaviour in OpenStack's official web pages. Is that not enough?
--
Julian Edwards:
> The fixed package should be in the cloud archive.
Great! When will it make it to the LTS?
--
Nick Moffitt
--
How do we mark that this is still broken in the LTS? I can't figure out
how through the drop-downs.
--
Title:
HP ProLiant
** Also affects: nova (Ubuntu)
Importance: Undecided
Status: New
--
https://bugs.launchpad.net/bugs/1064663
Title:
bash completion for /usr/bin/nova shipped incor
This bug describes an interaction between OpenStack and Ubuntu, for the
most part.
** Project changed: nova => nova (Ubuntu)
** Changed in: nova (Ubuntu)
Status: Opinion => New
--
The precise-proposed maas packages did fix this issue for me. Looking
forward to their promotion to updates!
--
https://bugs.launchpad.net/bugs/1204507
Title:
MAAS rejects
Sure enough, python-kombu 2.1.1-2ubuntu1~0.IS.12.04 came from a private
archive, as did python-celery. I'll track this down from here.
--
I get this on a fresh install of maas on precise:
Setting up maas-region-controller (1.2+bzr1373+dfsg-0ubuntu1~12.04.1) ...
Considering dependency proxy for proxy_http:
Enabling module proxy.
Enabling module proxy_http.
To activate the new configuration, you need to run:
service apache2 restart
Public bug reported:
When upgrading from quantal to raring, postfix fails to configure
because it lost its main.cf. This seems like it ought to be a file
installed by the package, but:
dpkg-query: no path found matching pattern /etc/postfix/main.cf.
ProblemType: Package
DistroRelease: Ubuntu 1
Right, so after a day spent with Daviey and a bunch of 30MB pcap files,
we think we've figured this out.
The key exchange that failed happens here:
7418 112.051626 10.55.200.99 10.55.200.1 TFTP Read Request,
File: amd64/generic/quantal/commissioning/initrd.gz, Transfer type: o
This bug had me stymied for far too long. When using juju, hostnames
can get quite long. For instance, I have instances with names like
"juju-nick-testopenstack-lhr01-instance-24", and ntp in precise will
just hang on installation and prevent juju from even installing the
charm. This can be a ra
Public bug reported:
Daviey asked me to put a quick description of this in a bug:
Currently the options for MAAS reboot management are extremely limited.
You can do virsh, IPMI, or wake-on-lan. Real server environments tend
to have remotely controllable Power Distribution Units and Integrated
Li
** Description changed:
The quantal version of maas does not include quantal in the
import_pxe_files RELEASES variable by default
#RELEASES="oneiric precise"
+ RELEASES="precise"
#ARCHES="amd64/generic i386/generic armhf/highbank"
#LOCALE="en_US"
#IMPORT_EPHEMERALS=1
#IMPORT_SQUAS
but that does not appear to be in precise.
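The change described above amounts to listing quantal in the RELEASES
default; a minimal sketch of the edited fragment, treating it as a plain
shell snippet that the import script would source (the file name is not
shown in the excerpt; variable names and other values are taken from the
defaults quoted above):

```shell
# Sketch of the import configuration with quantal added to RELEASES.
# Variable names and defaults come from the excerpt above; the file
# itself is sourced as shell by the import script.
RELEASES="precise quantal"
ARCHES="amd64/generic i386/generic armhf/highbank"
LOCALE="en_US"
IMPORT_EPHEMERALS=1
```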
--
Nick Moffitt
--
https://bugs.launchpad.net/bugs/1044351
Title:
upgrade from lucid to precise removes most of my gsm so
Public bug reported:
One of our PBXes uses the files in /usr/share/asterisk/*.gsm to provide
user feedback. When we upgraded from lucid to precise, many of these
files simply vanished!
-- Executing [s@macro-agentauth:2] BackGround("SIP/gubble-0242",
"please-enter-the&number&astcc-followed-
It happened again, just now.
--
https://bugs.launchpad.net/bugs/924105
Title:
integer out of range errors for fact_values
I should probably clarify that there are no API requests matching this
release_fixed_ip event, and the instance owner explicitly did not want
the event to take place.
--
Public bug reported:
We've been seeing a lot of instances simply vanish from the network.
Usually people have been willing to work around this by simply rebooting
or re-creating their instances, but it's troubling for long-running
instances (especially those that have volumes associated).
Here's
The logs in the nova-api.log indicate that he's getting permission
denied for an AMI he created. We recently migrated glance from sqlite3
to mysql, and this may be caused by a data migration error or a more
subtle keystone/glance uuid problem.
--
Public bug reported:
Take three instances in two regions (A and B) such that they are named
A1, A2, and B1. A1 is the only instance with a public IP: A2 and B1
only have the standard private IPs they were given. In this situation:
A1 can reach both of its own interfaces.
B1 can reach A1's p
Our current hypothesis for how this situation happened in the first
place is that because nova-api returns success early, it's possible to
run the attach before the volume has actually been successfully created.
It looks like the attach needs to block internally to wait for the
volume creation to
Public bug reported:
root@novamanager:~# /usr/sbin/rabbitmqctl list_queues | awk '$1 ~
/^compute/ && $2 != 0 { print }'
compute.nodexyzzy 12
Occasionally on canonistack, we find that a compute node simply stops
processing its rabbit queues. A check of the logs will show n
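The awk filter in the report keeps queues whose name starts with
"compute" and whose message count is non-zero; here is a self-contained
sketch against sample rabbitmqctl-style output (the queue names are
invented):

```shell
# Sample "rabbitmqctl list_queues"-style output; queue names are made up.
sample='compute.nodexyzzy 12
compute.nodequiet 0
network.dispatcher 3'

# Same filter as in the report: compute queues with a non-zero backlog.
printf '%s\n' "$sample" | awk '$1 ~ /^compute/ && $2 != 0 { print }'
# → compute.nodexyzzy 12
```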
This does not cause libvirtd to hang, by the way. "sudo virsh list"
does fine, and I'm able to kill instances manually with virsh destroy.
--
** Attachment added: "19 March 2012 Precise amd64 cloud-images.ubuntu.com image"
https://bugs.launchpad.net/bugs/960276/+attachment/2904109/+files/precise-server-cloudimg-amd64.tar.gz
--
Public bug reported:
Using the attached image (and others) causes the entire compute node to
hang between the booting of the image and the configuration of
networking. The running image has a console ring buffer output file
(however problematic--often it looks like it never got a proper root
file
Public bug reported:
I've got a system where all the controller-type services (glance, api,
scheduler, rabbit, etc) are all on the same machine as the mysql DB, and
the compute nodes are on the same network.
When I reboot my precise machine, the logs are full of "could not
connect to mysql" error
That could work, provided that these hooks allow for the situation where
we don't want puppet to start up immediately after the upgrade has
completed.
--
Presumably even if there is i18n, you could force LANG=C or similar?
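Forcing the C locale would pin the status output to a known format before
parsing; a sketch under that assumption (the status line below is a sample
in upstart's usual goal/state format, not captured from a real system):

```shell
# In a provider one would run something like:  LC_ALL=C LANG=C status puppet
# Here we parse a sample line in the format upstart normally prints.
status_line="puppet start/running, process 1234"

case "$status_line" in
  *start/running*) state=running ;;
  *)               state=stopped ;;
esac
echo "$state"   # → running
```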
--
puppet in lucid does not support upstart status
https://bugs.launchpad.net/bugs/551544
i18n aside, that seems like an elegant workaround given the deadlines
involved.
--
puppet in lucid does not support upstart status
https://bugs.launchpad.net/bugs/551544
** Attachment added: "Dependencies.txt"
http://launchpadlibrarian.net/42477382/Dependencies.txt
** Description changed:
Binary package hint: puppet
- Puppet does not currently have an "upstart" provider for the server
+ Puppet does not currently have an "upstart" provider for the service
Public bug reported:
Binary package hint: puppet
Puppet does not currently have an "upstart" provider for the service
resource. As such, it relies on upstart's sysV compatibility, which is
somewhat limited.
The key problem here is that features such as "ensure => running" cannot
rely on "hassta
It works, so you can resolve/reject this bug as user error.
--
karmic netboot UEC install fails at registering nodes
https://bugs.launchpad.net/bugs/480125
Once I narrowed it down to debconf, Colin Watson immediately clued me to
the fact that I most likely had the wrong owner for my eucalyptus-cc
settings. In fact, I had just cut&wasted the d-i from the eucalyptus-
udeb settings and not even thought twice about it. Changing the owner
to "eucalyptus-
On a whim, I decided to query debconf with debconf-show eucalyptus-cc
before and after the purge/reinstall:
Nov 18 17:04:03 in-target: eucalyptus/mode: MANAGED-NOVLAN
Nov 18 17:04:03 in-target: * eucalyptus/publicips:
Nov 18 17:04:03 in-target: eucalyptus/dns: 172.24.1.1
Nov 18 17:04:03 in-tar
I just added a purge/install of eucalyptus-cc (which works when done
after first boot) into my late_command, and it does this:
Nov 18 15:39:12 in-target: The following NEW packages will be installed:
Nov 18 15:39:12 in-target: eucalyptus-cc
Nov 18 15:39:12 in-target: 0 upgraded, 1 newly installe
(also, my system comes up in much the same state as if I hadn't done the
reinstall in my late_command script)
--
karmic netboot UEC install fails at registering nodes
https://bugs.launchpad.net/bugs/480125
Ignoring for the moment that the subnet is all wrong for the IP range I
have chosen, here is the diff of /etc/eucalyptus before and after I
purged/reinstalled eucalyptus-cc
** Attachment added: "post-purge.diff"
http://launchpadlibrarian.net/35802128/post-purge.diff
--
I've attached a snippet of /var/log/installer/syslog from a system just after
PXE installation attempted to make a CC of it via the eucalyptus-udeb. You'll
notice that it's Setting up version 1.6~bzr931-0ubuntu7 of eucalyptus-cc. It
then runs ssh-keygen via an su to the eucalyptus user, and p
I pulled everything out that I thought could be interfering (mostly by
not specifying a preseed/late_command *anywhere*) but still no luck.
I'll poke around in the maintainer scripts to see if I can spot any
obvious reason why this stuff would fail based on the install
environment like this.
--
After debugging with Dustin in IRC some this evening, it looks like this
is the result of some failure of the eucalyptus-udeb within my netboot
environment (or of the ordinary eucalyptus packages) to make the cloud
controller. If I purge and reinstall everything works as you'd expect
(but without
And to add insult to injury, it would appear that I have no NC lines in
euca-describe-availability-zones verbose after all that. It seems that
autodiscovery only *appeared* to complete.
--
karmic cloud install fails at autodiscovery: wants both worlds (eucalyptus/root
users)
Recently I've stopped clobbering the node-preseed.conf quite as much,
and even with the CC's late_command the whole setup still has this
problem.
--
https://bugs.launchpad.net/bugs/480125
To clarify, I never managed to get the --no-rsync versions working.
Ultimately I managed to hack around this by symlinking the eucalyptus
user's id_rsa into root's .ssh/ dir and re-running "sudo euca_conf
--discover-nodes".
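The symlink workaround described above can be sketched as follows;
scratch directories stand in for the real eucalyptus and root home
directories (hypothetical layout) so the sketch is safe to run:

```shell
# Scratch stand-ins for the two home directories (hypothetical layout).
scratch=$(mktemp -d)
mkdir -p "$scratch/eucalyptus/.ssh" "$scratch/root/.ssh"
: > "$scratch/eucalyptus/.ssh/id_rsa"   # stands in for the real private key

# The workaround: point root's key at the eucalyptus user's key, after
# which one would re-run "sudo euca_conf --discover-nodes" (not done here).
ln -s "$scratch/eucalyptus/.ssh/id_rsa" "$scratch/root/.ssh/id_rsa"
```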
** Description changed:
I have just installed two systems using karmic
Public bug reported:
We ran a slapd on Dapper for a long time, and it relied on an SSL cert
that we made root-owned 0400 for reasons of our own internal security.
Apache happily opens these certs as root and passes the file descriptor
along for after it drops privilege to the www-data user. The d