Hi again,

Sorry for the late reply.

The problem below was solved by killing the controller and installing the LTS
enablement stack, as pointed out by Nicholas at the link below.
Thank you, Nicholas.

That kernel does include the ZFS module, which is natively supported in Xenial.

Installing the enablement stack pulled in the generic 4.8 kernel along with
zfs-dkms and all of its dependencies.
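
For anyone hitting the same issue, the enablement stack install is roughly the
following (a sketch for a xenial server, per the wiki page linked below; on a
desktop the matching xserver-xorg-hwe-16.04 package may also be wanted):

$ sudo apt-get update
$ sudo apt-get install --install-recommends linux-generic-hwe-16.04
$ sudo reboot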

The low-latency kernel was then installed on top of it with:
$ sudo dpkg -i linux-headers-<ver>-lowlatency linux-image-<ver>-lowlatency
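
After rebooting into the new kernel, a quick sanity check that DKMS actually
built the zfs module for it (assuming the zfs-dkms route described above):

$ uname -r
$ dkms status
$ sudo modprobe zfs && lsmod | grep zfs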


Thanks again,

BR,
NS
On Jun 28, 2017 8:15 PM, "N. S." <n5pas...@hotmail.com> wrote:

Hi Nicholas and Andrew,


Thank you both for your help.

@Nicholas,
I downloaded the 4.8 and 4.11 low-latency kernels (a requirement of the
application) and installed them:
sudo dpkg -i linux-headers-4.8* linux-image-4.8*
sudo update-grub

But I am facing a problem: ZFS does not start up during boot (cf. attached).

Digging further, the status of the services is as follows:

$ systemctl status zfs-import-cache.service
● zfs-import-cache.service - Import ZFS pools by cache file
   Loaded: loaded (/lib/systemd/system/zfs-import-cache.service; static; vendor preset: enabled)
   Active: failed (Result: exit-code) since Wed 2017-06-28 19:54:33 EEST; 12min ago
  Process: 1032 ExecStartPre=/sbin/modprobe zfs (code=exited, status=1/FAILURE)

Jun 28 19:54:33 ns-HP systemd[1]: Starting Import ZFS pools by cache file...
Jun 28 19:54:33 ns-HP modprobe[1032]: modprobe: FATAL: Module zfs not found in directory /lib/modules/4.8.0-040800-lowlatency
Jun 28 19:54:33 ns-HP systemd[1]: zfs-import-cache.service: Control process exited, code=exited status=1
Jun 28 19:54:33 ns-HP systemd[1]: Failed to start Import ZFS pools by cache file.
Jun 28 19:54:33 ns-HP systemd[1]: zfs-import-cache.service: Unit entered failed state.
Jun 28 19:54:33 ns-HP systemd[1]: zfs-import-cache.service: Failed with result 'exit-code'.




ns@ns-HP:/usr/src/linux-headers-4.4.0-81$ systemctl status zfs-mount.service
● zfs-mount.service - Mount ZFS filesystems
   Loaded: loaded (/lib/systemd/system/zfs-mount.service; static; vendor preset: enabled)
   Active: failed (Result: exit-code) since Wed 2017-06-28 19:54:34 EEST; 13min ago
  Process: 1042 ExecStart=/sbin/zfs mount -a (code=exited, status=1/FAILURE)
 Main PID: 1042 (code=exited, status=1/FAILURE)

Jun 28 19:54:33 ns-HP systemd[1]: Starting Mount ZFS filesystems...
Jun 28 19:54:34 ns-HP zfs[1042]: The ZFS modules are not loaded.
Jun 28 19:54:34 ns-HP zfs[1042]: Try running '/sbin/modprobe zfs' as root to load them.
Jun 28 19:54:34 ns-HP systemd[1]: zfs-mount.service: Main process exited, code=exited, status=1/FAILURE
Jun 28 19:54:34 ns-HP systemd[1]: Failed to start Mount ZFS filesystems.
Jun 28 19:54:34 ns-HP systemd[1]: zfs-mount.service: Unit entered failed state.
Jun 28 19:54:34 ns-HP systemd[1]: zfs-mount.service: Failed with result 'exit-code'.
ns@ns-HP:/usr/src/linux-headers-4.4.0-81$

I tried to run modprobe zfs as advised, but:

$ sudo -i
[sudo] password for ns:
root@ns-HP:~# /sbin/modprobe zfs
modprobe: FATAL: Module zfs not found in directory /lib/modules/4.8.0-040800-lowlatency
root@ns-HP:~#
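
(A quick way to confirm the module was simply never built for this kernel,
assuming the zfs-dkms route should be building it, is:

$ dkms status
$ ls /lib/modules/$(uname -r)/updates/dkms

If dkms never built zfs against 4.8.0-040800-lowlatency, that directory will
not contain a zfs.ko.)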



I know this might not be directly related to Juju but rather to the Ubuntu
kernel, but I would appreciate it if you could help.


Thanks,

BR,

NS



Nicholas Skaggs <nicholas.skaggs at canonical.com>
Tue Jun 27 19:51:08 UTC 2017

If it's possible, I would simply run the HWE kernel on xenial, which
provides 4.8+. Read more about running an updated stack here:

https://wiki.ubuntu.com/Kernel/LTSEnablementStack

This would solve your specific problem without worrying about running
KVMs.


________________________________
From: Andrew Wilkins <andrew.wilk...@canonical.com>
Sent: Sunday, June 25, 2017 10:42 PM
To: N. S.; juju@lists.ubuntu.com
Subject: Re: Running KVM in addition to LXC on local LXD CLOUD

On Sat, Jun 24, 2017 at 9:14 PM N. S. <n5pas...@hotmail.com> wrote:
Hi,


I am running 10 machines on a local LXD cloud, and it works fine.

My host is Ubuntu 16.04, kernel 4.4.0-81.

However, I have the following challenge:
one of the machines (M0) requires kernel 4.7+.


As is well known, unlike KVM, LXC uses the same kernel as the host system,
in this case 4.4.0-81, thus violating M0's requirement of 4.7+.
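
This is easy to confirm (a sketch; the container name here is illustrative):
the kernel reported inside a container matches the host's:

$ uname -r
4.4.0-81-generic
$ lxc exec m0 -- uname -r
4.4.0-81-generic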


I have read that starting with Juju 2.0, KVM is no longer supported.

Juju still supports KVM, but the old "local" provider, which supported
LXC/KVM, is gone.

You could run a KVM guest from within an LXD machine with the right AppArmor
settings. Probably the most straightforward thing to do, though, would be to
create a KVM VM yourself, install Ubuntu on it, and then manually provision it
using "juju add-machine ssh:<hostname>".


How could I satisfy the requirement of M0?

Thanks for your help
BR,
Nazih

--
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
