The current plan for SELinux/OverlayFS support is to have the OverlayFS
parts merged into the 4.8 kernel.  The SELinux parts missed that cutoff
and should be merged in 4.9.  Paul Moore, the SELinux kernel maintainer,
will create test kernels in Copr with the OverlayFS support as soon as
the 4.9 merge window opens (next week, since he is on vacation).  At that
point we could start testing with his kernels.  He wants to test for two
weeks, and then we could pull the patches into Rawhide and perhaps the
CentOS upstream kernel.  If everything goes well, the goal would then be
to get these patches into the RHEL kernel some time after 7.3 in a
z-stream release, hopefully before the end of the year.
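
For anyone who wants to follow along once those test kernels exist, pulling
one in from Copr would look roughly like the sketch below.  The repository
name and baseurl are placeholders until Paul announces the actual project:

    # Sketch only: the Copr project name and baseurl below are placeholders.
    cat > /etc/yum.repos.d/overlayfs-selinux-test.repo <<'EOF'
    [overlayfs-selinux-test]
    name=SELinux/OverlayFS test kernels (placeholder)
    baseurl=https://copr-be.cloud.fedoraproject.org/results/EXAMPLE_USER/EXAMPLE_PROJECT/epel-7-x86_64/
    gpgcheck=0
    EOF
    yum install kernel
    # Reboot into the test kernel before exercising docker with the overlay driver.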


On 08/08/2016 09:22 AM, Tim St. Clair wrote:
> Umm yes please.  
>
> It's really difficult to describe how much this would help the OOTB
> experience for folks getting started.  
>
> On Mon, Aug 8, 2016 at 8:06 AM, Colin Walters <walt...@verbum.org> wrote:
>
>     Upstream docker has a decent page:
>     https://docs.docker.com/engine/userguide/storagedriver/selectadriver/
>
>     (One thing they don't mention explicitly though is the page cache
>     sharing that overlayfs has over devicemapper or btrfs, which can be
>     substantial.)
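
As a quick aside, you can check which storage driver a given host is
currently using straight from docker itself; nothing here is specific to
the SELinux patches:

    # Prints e.g. "Storage Driver: devicemapper" today, "overlay" after switching.
    docker info 2>/dev/null | grep -i 'storage driver'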
>
>     Though the patches to use overlayfs with SELinux are still
>     experimental, and
>     not yet in our CentOS stream, I'd like to lay the groundwork for it.
>
>     In particular, overlayfs has a significant reduction in
>     "administrative
>     cognitive overhead", since we can default to one LV+filesystem for
>     the OS that encompasses both the OS (yum/ostree) data and container
>     images, and hence not have to juggle LV sizes.
>
>     Another way to look at this is it makes "yum install docker" on
>     CentOS 7
>     work with a single disk default.
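
One easy way to see the difference on an existing host is to look at the
volume group: with the devicemapper default there is a separate
atomicos/docker-pool to size and monitor, while with overlay everything
can live on atomicos/root.  For example:

    # On a devicemapper-based Atomic Host this lists both atomicos/root and
    # atomicos/docker-pool; with overlay on the root filesystem only the
    # root LV needs to exist.
    lvs atomicos
    lsblk -o NAME,SIZE,TYPE,MOUNTPOINT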
>
>     This is all related to single node - and there are a lot of
>     potentially
>     better ways to manage images in a cluster, but the single node
>     experience
>     is important too.  It's relevant both for desktop systems that use
>     Docker and
>     Vagrant boxes, etc.
>
>     To that end:
>     https://github.com/CentOS/sig-atomic-buildscripts/pull/134
>     landed, which ensures that newly formatted xfs filesystems are
>     compatible.
>
>     Our CI then updated the installer and cloud images, so I verified that
>     the vagrant-libvirt box here:
> https://ci.centos.org/artifacts/sig-atomic/centos-continuous/images/cloud/latest/images/
>     has:
>
>     # xfs_info /|grep ftype
>     naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
>
>     There's a number of TODOs here like making it easier to default to
>     overlayfs from Anaconda/kickstart.
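
To make that TODO a bit more concrete, the kind of layout a kickstart could
lay down today is a single root LV plus a docker-storage-setup override.
The snippet below is a sketch using stock kickstart syntax; names and sizes
are illustrative, and --mkfsoptions/STORAGE_DRIVER should be double-checked
against the Anaconda and docker-storage-setup versions in the tree:

    # Illustrative kickstart storage section: one VG, one root LV for both
    # the OS and container images.
    zerombr
    clearpart --all --initlabel
    part /boot --size=300 --fstype=xfs
    part pv.01 --size=1 --grow
    volgroup atomicos pv.01
    logvol / --vgname=atomicos --name=root --fstype=xfs --size=3000 --grow --mkfsoptions="-n ftype=1"

    %post
    # Ask docker-storage-setup for the overlay driver instead of carving out
    # a devicemapper thin pool.
    echo "STORAGE_DRIVER=overlay" >> /etc/sysconfig/docker-storage-setup
    %end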
>
>     But I think the important thing for us to flesh out better is
>     transition
>     paths.  Obviously, one can just reinstall a node.  Many
>     environments will
>     be set up to do that, but we should also support transitioning a
>     dm to overlay
>     when one doesn't want to reinstall.
>
>      I've verified that from a current CentOS AH Alpha, I can:
>
>     systemctl stop docker
>     rm /var/lib/docker/* -rf
>     # (configure docker to use overlay)
>     lvm lvremove atomicos/docker-pool
>     lvm lvcreate -n docker-images -L 10G atomicos
>     # (TODO: also tweak this to auto-grow?)
>     mkfs.xfs -n ftype=1 /dev/mapper/atomicos-docker--images
>     mount /dev/mapper/atomicos-docker--images /var/lib/docker
>     # (add that to /etc/fstab too)
>     systemctl start docker
>
>     This keeps the atomicos/root LV with an old-format XFS filesystem and
>     won't give you a unified storage pool, but does give you the runtime
>     advantages of overlay.
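
For the "(configure docker to use overlay)" step above, on CentOS 7 that
usually amounts to something like the following; treat it as a sketch,
since the exact config file depends on the docker package in use:

    # Point the daemon at the overlay graph driver.  The CentOS docker
    # package reads DOCKER_STORAGE_OPTIONS from /etc/sysconfig/docker-storage.
    cat > /etc/sysconfig/docker-storage <<'EOF'
    DOCKER_STORAGE_OPTIONS="--storage-driver overlay"
    EOF

    # And the fstab entry mentioned above, so the new LV mounts at boot:
    echo "/dev/mapper/atomicos-docker--images /var/lib/docker xfs defaults 0 0" >> /etc/fstab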
>
>
>
>
> -- 
> Cheers,
> Timothy St. Clair
> tstcl...@redhat.com
