The updated proposed fix is now working for me.

-Heather


> On Oct 21, 2016, at 12:42 PM, Heather Lanigan <hmlani...@gmail.com> wrote:
> 
> Update based on IRC conversation:  I tried the fix.  Turns out that (my) 
> Xenial install doesn’t have the systemd-detect-virt command.
> 
> -Heather
> 
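[For reference: when the systemd virtualization-detection helper is unavailable, a rough fallback is to check the markers that systemd and LXC themselves leave behind. This is a sketch based on standard conventions, not the charm's actual detection logic:]

```python
import os

def in_container():
    """Best-effort container check when systemd-detect-virt is absent.

    Sketch only: relies on two standard conventions, not on the
    charm's real detection code.
    """
    # systemd records the container type here when PID 1 runs inside one
    if os.path.exists('/run/systemd/container'):
        return True
    # LXC/LXD set "container=lxc" in PID 1's environment
    try:
        with open('/proc/1/environ', 'rb') as environ:
            return b'container=' in environ.read()
    except OSError:
        # /proc/1/environ is root-only; treat "unknown" as not-a-container
        return False
```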
>> On Oct 21, 2016, at 10:54 AM, Chuck Short <chuck.sh...@canonical.com> wrote:
>> 
>> Hi,
>> 
>> I proposed a fix:
>> 
>> https://review.openstack.org/#/c/389740/
>> 
>> chuck
>> 
>> On Fri, Oct 21, 2016 at 10:27 AM, Adam Stokes <adam.sto...@canonical.com> wrote:
>> This looks like it's due to the way we deploy OpenStack with NovaLXD in all 
>> containers; this effectively breaks anyone wanting to do an all-in-one 
>> install on their system.
>> 
>> On Fri, Oct 21, 2016 at 10:22 AM Adam Stokes <adam.sto...@canonical.com> wrote:
>> So it looks like it's due to a recent change to the LXD charm; see here:
>> 
>> https://github.com/openstack/charm-lxd/commit/017246768e097c5fcd5283e23f19f075ff9f9d4e
>> 
>> Chuck, are you aware of this issue?
>> 
>> On Fri, Oct 21, 2016 at 10:19 AM Heather Lanigan <hmlani...@gmail.com> wrote:
>> Adam,
>> 
>> The entire container is not read-only, just /sys, the mount point for 
>> /dev/.lxc/sys.  I chose another charm (neutron-api) to look at; /sys on 
>> that unit is read-only as well.  Is that normal?
>> 
>> What would be different in my config?  My Xenial install is on a VM, but 
>> I’ve been running that way for weeks.  I did have the openstack-novalxd 
>> bundle successfully deployed on it previously using juju 2.0_rc1.
>> 
>> -Heather
>> 
>>> On Oct 20, 2016, at 11:30 PM, Adam Stokes <adam.sto...@canonical.com> wrote:
>>> 
>>> Odd, it looks like the container has a read-only file system? I ran through 
>>> a full openstack-novalxd deployment today, and one of the upstream 
>>> maintainers ran through the same deployment and didn't run into any issues.
>>> 
>>> 
>>> On Thu, Oct 20, 2016, 10:02 PM Heather Lanigan <hmlani...@gmail.com> wrote:
>>> 
>>> I used conjure-up to deploy openstack-novalxd on a Xenial system.  Before 
>>> deploying, the operating system was updated.  LXD init was set up with dir, 
>>> not xfs.  All but one of the charms has a status of “unit is ready”.
>>> 
>>> The lxd/0 subordinate charm has a status of: hook failed: “config-changed”. 
>>> See details below.
>>> 
>>> I can boot an instance within this OpenStack deployment.  However, deleting 
>>> the instance fails.  A side effect of the lxd/0 issues?
>>> 
>>> Juju version 2.0.0-xenial-amd64
>>> conjure-up version 2.0.2
>>> lxd charm version 2.0.5
>>> 
>>> Any ideas?
>>> 
>>> Thanks in advance,
>>> Heather
>>> 
>>> ++++++++++++++++++++++++++++++++++++++++++++++
>>> 
>>> The /var/log/juju/unit-lxd-0.log on the unit reports:
>>> 2016-10-21 01:09:33 INFO config-changed Traceback (most recent call last):
>>> 2016-10-21 01:09:33 INFO config-changed   File "/var/lib/juju/agents/unit-lxd-0/charm/hooks/config-changed", line 140, in <module>
>>> 2016-10-21 01:09:33 INFO config-changed     main()
>>> 2016-10-21 01:09:33 INFO config-changed   File "/var/lib/juju/agents/unit-lxd-0/charm/hooks/config-changed", line 134, in main
>>> 2016-10-21 01:09:33 INFO config-changed     hooks.execute(sys.argv)
>>> 2016-10-21 01:09:33 INFO config-changed   File "/var/lib/juju/agents/unit-lxd-0/charm/hooks/charmhelpers/core/hookenv.py", line 715, in execute
>>> 2016-10-21 01:09:33 INFO config-changed     self._hooks[hook_name]()
>>> 2016-10-21 01:09:33 INFO config-changed   File "/var/lib/juju/agents/unit-lxd-0/charm/hooks/config-changed", line 78, in config_changed
>>> 2016-10-21 01:09:33 INFO config-changed     configure_lxd_host()
>>> 2016-10-21 01:09:33 INFO config-changed   File "/var/lib/juju/agents/unit-lxd-0/charm/hooks/charmhelpers/core/decorators.py", line 40, in _retry_on_exception_inner_2
>>> 2016-10-21 01:09:33 INFO config-changed     return f(*args, **kwargs)
>>> 2016-10-21 01:09:33 INFO config-changed   File "/var/lib/juju/agents/unit-lxd-0/charm/hooks/lxd_utils.py", line 429, in configure_lxd_host
>>> 2016-10-21 01:09:33 INFO config-changed     with open(EXT4_USERNS_MOUNTS, 'w') as userns_mounts:
>>> 2016-10-21 01:09:33 INFO config-changed IOError: [Errno 30] Read-only file system: '/sys/module/ext4/parameters/userns_mounts'
>>> 2016-10-21 01:09:33 ERROR juju.worker.uniter.operation runhook.go:107 hook "config-changed" failed: exit status 1
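[For context: the write that fails here, at lxd_utils.py line 429, could be guarded instead of assumed writable. A minimal sketch of that defensive pattern, using the path from the traceback; the guard and return convention are illustrative only, not the fix proposed in the Gerrit review:]

```python
import os

# Path taken from the traceback above; inside a container /sys is the
# host's sysfs mounted read-only, so open(..., 'w') raises IOError.
EXT4_USERNS_MOUNTS = '/sys/module/ext4/parameters/userns_mounts'

def enable_ext4_userns_mounts():
    """Enable ext4 userns mounts, skipping when /sys is not writable.

    Returns True if the setting was written, False if it was skipped
    (file missing, or read-only /sys as in a container).
    """
    if not os.access(EXT4_USERNS_MOUNTS, os.W_OK):
        # Leave the setting to the host rather than crash the hook.
        return False
    with open(EXT4_USERNS_MOUNTS, 'w') as userns_mounts:
        userns_mounts.write('Y')
    return True
```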
>>> 
>>> root@juju-456efd-13:~# touch /sys/module/ext4/parameters/temp-file
>>> touch: cannot touch '/sys/module/ext4/parameters/temp-file': Read-only file system
>>> root@juju-456efd-13:~# df -h /sys/module/ext4/parameters/userns_mounts
>>> Filesystem      Size  Used Avail Use% Mounted on
>>> sys                0     0     0    - /dev/.lxc/sys
>>> root@juju-456efd-13:~# touch /home/ubuntu/temp-file
>>> root@juju-456efd-13:~# ls /home/ubuntu/temp-file
>>> /home/ubuntu/temp-file
>>> root@juju-456efd-13:~# df -h
>>> Filesystem                   Size  Used Avail Use% Mounted on
>>> /dev/mapper/mitaka--vg-root  165G   47G  110G  30% /
>>> none                         492K     0  492K   0% /dev
>>> udev                          16G     0   16G   0% /dev/fuse
>>> tmpfs                         16G     0   16G   0% /dev/shm
>>> tmpfs                         16G   49M   16G   1% /run
>>> tmpfs                        5.0M     0  5.0M   0% /run/lock
>>> tmpfs                         16G     0   16G   0% /sys/fs/cgroup
>>> tmpfs                        3.2G     0  3.2G   0% /run/user/112
>>> tmpfs                        3.2G     0  3.2G   0% /run/user/1000
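[For reference: the touch/df probing above can also be done directly in code; statvfs exposes the read-only mount flag, so a script can test a mount without needing write permission. A small sketch, standard library only:]

```python
import os

def is_readonly_mount(path):
    # ST_RDONLY is set in f_flag when the filesystem holding `path`
    # is mounted read-only (the condition df shows for /dev/.lxc/sys)
    return bool(os.statvfs(path).f_flag & os.ST_RDONLY)

# On the failing unit this would report True for /sys/module;
# the result depends on where it is run.
print(is_readonly_mount('/'))
```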
>>> 
>>> +++++++++++++++++++++++++++++++++++++++++
>>> 
>>> heather@mitaka:~$ nova boot --image d2eba22a-e1b1-4a2b-aa87-450ee9d9e492 --flavor d --nic net-name=ubuntu-net --key-name keypair-admin xenial-instance
>>> heather@mitaka:~/goose-work/src/gopkg.in/goose.v1$ nova list
>>> +--------------------------------------+-----------------+--------+------------+-------------+-----------------------+
>>> | ID                                   | Name            | Status | Task State | Power State | Networks              |
>>> +--------------------------------------+-----------------+--------+------------+-------------+-----------------------+
>>> | 80424b94-f24d-45ff-a330-7b67a911fbc6 | xenial-instance | ACTIVE | -          | Running     | ubuntu-net=10.101.0.8 |
>>> +--------------------------------------+-----------------+--------+------------+-------------+-----------------------+
>>> 
>>> heather@mitaka:~$ nova delete 80424b94-f24d-45ff-a330-7b67a911fbc6
>>> Request to delete server 80424b94-f24d-45ff-a330-7b67a911fbc6 has been accepted.
>>> heather@mitaka:~$ nova list
>>> +--------------------------------------+-----------------+--------+------------+-------------+----------+
>>> | ID                                   | Name            | Status | Task State | Power State | Networks |
>>> +--------------------------------------+-----------------+--------+------------+-------------+----------+
>>> | 80424b94-f24d-45ff-a330-7b67a911fbc6 | xenial-instance | ERROR  | -          | Running     |          |
>>> +--------------------------------------+-----------------+--------+------------+-------------+----------+
>>> heather@mitaka:~$ nova show 80424b94-f24d-45ff-a330-7b67a911fbc6
>>> …
>>> | fault                                | {"message": "Failed to communicate with LXD API instance-00000006: Error 400 - Profile is currently in use.", "code": 500, "details": "  File \"/usr/lib/python2.7/dist-packages/nova/compute/manager.py\", line 375, in decorated_function |
>>> ...
>>> 
>>> 
>> 
>> 
> 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju
