Versions were LXD 2.3.0 and OpenStack Mitaka. The nova-compute-lxd nodes
have since been removed, so I am not able to replicate the exact issue.
--
https://bugs.launchpad.net/bugs/1630891
Title:
unable to start lxd container instances after a reboot
Feedback regarding why this bug has been filed:

UnixBench parallel-test final "System Benchmarks Index Score", run as
root on different size/flavor instances/containers, with different
numbers of "copies" of UnixBench:
                   instance cores
  parallel tests   c20       c40
  x10              2967.7    2844.6
  x20              3802.4    3152.3
  x4
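For context, a minimal sketch of how scores like these are typically
produced with UnixBench (the repository URL and exact copy counts are
assumptions, not taken from this report; -c sets the number of parallel
copies and may be repeated):

  # Fetch and build UnixBench
  git clone https://github.com/kdlucas/byte-unixbench.git
  cd byte-unixbench/UnixBench
  make

  # Run the suite with 10 and then 20 parallel copies as root; each
  # run prints its own final "System Benchmarks Index Score"
  sudo ./Run -c 10 -c 20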
Public bug reported:
When LXD containers are launched, CPU constraints are either not being
applied or not being honored by the server/hypervisor.
** Affects: nova-lxd (Ubuntu)
Importance: Undecided
Status: New
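The report does not show where the limit is lost, so a first check is
to compare the config LXD holds with what the kernel actually enforces.
A minimal sketch, assuming a typical LXD 2.x host (the instance name
and cgroup path are illustrative and vary by setup):

  # Effective LXD config for the instance, with profiles expanded
  lxc config show instance-00000001 --expanded | grep limits.cpu

  # CPUs the kernel actually pins the container to via cgroups
  cat /sys/fs/cgroup/cpuset/lxc/instance-00000001/cpuset.cpus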
--
This issue still exists. It also affects instances starting
automatically after a reboot.
--
Tested the package nova-compute-lxd
13.0.0.0b3.dev712.201610311523.xenial-0ubuntu1 via the PPA. I have
tested deleting, starting, and stopping instances, etc., and all
appears to be working.
** Tags removed: verification-needed
** Tags added: verification-done
--
Public bug reported:
After deploying OpenStack and spinning up LXD instances, we are unable
to delete those instances.

The following error appears in Horizon and in the nova-compute logs:

  Exception during message handling: Failed to communicate with LXD API
  instance-005f: Error 400 - Profile is
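When this happens it is worth checking the profile on the LXD side,
since nova-lxd keeps one profile per instance. A minimal sketch (the
instance name is taken from the error above):

  # List the profiles LXD knows about
  lxc profile list

  # Inspect the profile belonging to the stuck instance
  lxc profile show instance-005f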
The fix has been tested and is working. Several machines have been
booted and commissioned successfully.
** Tags removed: verification-needed
** Tags added: verification-done-xenial verification-needed-trusty
--
I've found a workaround that lets me launch LXC containers via Juju.
After the first container fails to launch, SSH to the hypervisor,
edit /var/lib/lxc/juju-trusty-lxc-template/config, and
change br0 -> lxcbr0.

After editing the template config, all subsequent LXC containers will
launch via Juju; a minimal way to script the edit follows.
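A sketch of that edit, assuming the bridge is set through the standard
lxc.network.link key in the template config:

  # Point the Juju template at the default LXC bridge instead of br0
  sudo sed -i 's/^lxc.network.link = br0$/lxc.network.link = lxcbr0/' \
      /var/lib/lxc/juju-trusty-lxc-template/config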
Using 14.04, MAAS in a KVM, and Juju in a KVM. Same issue: the LXC
container launches with br0 instead of lxcbr0.

juju status -> https://pastebin.canonical.com/119755/
Unable to find failed container log output.
juju machine agent log -> https://pastebin.canonical.com/119746/
cloudinit log -> https:
This is still a bug. Much like Yionel, I had to kill everything related
to telepathy/keyring/empathy, and then when I restarted Empathy it
worked. I'd say this is quite a serious bug and one that has made
Empathy unusable for me personally.
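For anyone hitting the same hang, the restart described above amounts
to roughly the following (the pkill patterns are approximate, not an
exact recipe):

  # Kill everything related to telepathy, the keyring, and Empathy
  pkill -f telepathy
  pkill -f gnome-keyring
  pkill empathy

  # Relaunch Empathy once the stale processes are gone
  empathy &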
--
** Attachment added: "Panel now white after taking screenshot"
https://bugs.launchpad.net/ubuntu/+source/unity/+bug/1096954/+attachment/3478148/+files/screenshot-of-white-panel.png
--
** Attachment added: "Screenshot with 'patched' Launcher.qml"
https://bugs.launchpad.net/ubuntu/+source/unity/+bug/1096954/+attachment/3478147/+files/screenshot-with-patched-Launcher.qml.png
--
xdpyinfo:
  name of display:    :0
  version number:    11.0
  vendor string:    The X.Org Foundation
  vendor release number:    11103000
  X.Org version: 1.11.3
  maximum request size:  16777212 bytes
  motion buffer size:  256
  bitmap unit, bit order, padding:    32, LSBFirst, 32
  image byte order:    LSBFirst
I can confirm the bug as well. I followed all the defaults from the
wiki, however I am using public IPs and not private IPs. Upon launch,
all nodes got the same 'bad mirror archive' error.

After editing the squid-deb-proxy ACL, the nodes were able to connect
and update; a sketch of the edit follows. It would be great if the wiki
could cover this case.
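A minimal sketch of that ACL edit, assuming the stock squid-deb-proxy
layout on Ubuntu (the subnet is a placeholder; substitute the nodes'
real public range):

  # Allow the nodes' public subnet through squid-deb-proxy
  # (203.0.113.0/24 is a placeholder, not a real deployment value)
  echo "203.0.113.0/24" | sudo tee -a /etc/squid-deb-proxy/allowed-networks-src.acl

  # Restart the proxy so the new ACL takes effect
  sudo service squid-deb-proxy restart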