Quoting Pavel Emelyanov (xe...@parallels.com):
> On 06/03/2014 09:26 PM, Serge Hallyn wrote:
> > Quoting Pavel Emelyanov (xe...@parallels.com):
> >> On 05/29/2014 07:32 PM, Serge Hallyn wrote:
> >>> Quoting Marian Marinov (m...@1h.com):
> >>>>
> >>>> On 05/29/2014 01:06 PM, Eric W. Biederman wrote:
> >>>>> Marian Marinov <m...@1h.com> writes:
> >>>>>
> >>>>>> Hello,
> >>>>>>
> >>>>>> I have the following proposition.
> >>>>>>
> >>>>>> The number of currently running processes is accounted in the root
> >>>>>> user namespace. The problem I'm facing is that multiple containers
> >>>>>> in different user namespaces share the process counters.
> >>>>>
> >>>>> That is deliberate.
> >>>>
> >>>> And I understand that very well ;)
> >>>>
> >>>>>
> >>>>>> So if containerX runs 100 processes with UID 99, containerY needs an
> >>>>>> NPROC limit above 100 in order to execute any processes with its own
> >>>>>> UID 99.
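
To make the accounting concrete, a minimal sketch, assuming an unprivileged
user (the limit of 1 is arbitrary): RLIMIT_NPROC is checked against the
kernel-wide per-UID process count, so fork() fails with EAGAIN even from
inside a freshly created user namespace.

#define _GNU_SOURCE
#include <errno.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    /* Allow this UID one process only; arbitrary, just to trip the
     * limit immediately. */
    struct rlimit rl = { .rlim_cur = 1, .rlim_max = 1 };

    if (setrlimit(RLIMIT_NPROC, &rl) < 0) {
        perror("setrlimit");
        return 1;
    }

    /* Entering a new user namespace does not reset the accounting:
     * processes are still counted against the UID in the initial
     * namespace.  (May fail where unprivileged user namespaces are
     * disabled; the fork() check below stands either way.) */
    if (unshare(CLONE_NEWUSER) < 0)
        perror("unshare");

    pid_t pid = fork();
    if (pid < 0)
        printf("fork: %s (EAGAIN: per-UID count is shared)\n",
               strerror(errno));
    else if (pid == 0)
        _exit(0);
    else
        printf("fork succeeded; run as an unprivileged user\n");
    return 0;
}
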
> >>>>>>
> >>>>>> I know that some of you will tell me that I should not provision all 
> >>>>>> of my containers with the same UID/GID maps,
> >>>>>> but this brings another problem.
> >>>>>>
> >>>>>> We are provisioning the containers from a template. The template has
> >>>>>> a lot of files, 500k and more, and chowning these causes a lot of I/O
> >>>>>> and slows down provisioning considerably.
> >>>>>>
> >>>>>> The other problem is that when we migrate one container from one host
> >>>>>> machine to another, the IDs may already be in use on the new machine
> >>>>>> and we need to chown all the files again.
> >>>>>
> >>>>> You should have the same uid allocations for all machines in your fleet 
> >>>>> as much as possible.   That has been true
> >>>>> ever since NFS was invented and is not new here.  You can avoid the 
> >>>>> cost of chowning if you untar your files inside
> >>>>> of your user namespace.  You can have different maps per machine if you 
> >>>>> are crazy enough to do that.  You can even
> >>>>> have shared uids that you use to share files between containers as long 
> >>>>> as none of those files is setuid.  And map
> >>>>> those shared files to some kind of nobody user in your user namespace.
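
Eric's untar suggestion, sketched in C under stated assumptions: run with
privilege on the host (a multi-entry uid_map needs CAP_SETUID in the parent
namespace), a sub-id base of 100000, and illustrative template/rootfs paths.

#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void write_file(const char *path, const char *buf)
{
    int fd = open(path, O_WRONLY);

    if (fd < 0 || write(fd, buf, strlen(buf)) < 0)
        perror(path);
    if (fd >= 0)
        close(fd);
}

int main(void)
{
    if (unshare(CLONE_NEWUSER) < 0) {
        perror("unshare");
        return 1;
    }

    /* On newer kernels setgroups must be denied before gid_map can be
     * written by a process without CAP_SETGID in the parent ns. */
    write_file("/proc/self/setgroups", "deny");

    /* Map ns ids 0..65535 onto host ids 100000..165535 (assumed base). */
    write_file("/proc/self/uid_map", "0 100000 65536");
    write_file("/proc/self/gid_map", "0 100000 65536");

    /* Become root inside the namespace so tar may create files owned
     * by any mapped id; on disk they come out shifted by 100000, with
     * no separate chown pass needed. */
    if (setgid(0) < 0 || setuid(0) < 0)
        perror("set[ug]id");

    /* Paths are illustrative. */
    execlp("tar", "tar", "-xpf", "/srv/template.tar",
           "-C", "/srv/containers/c1/rootfs", (char *)NULL);
    perror("execlp tar");
    return 1;
}
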
> >>>>
> >>>> We are not using NFS. We are using shared block storage that offers us
> >>>> snapshots, so provisioning new containers is extremely cheap and fast.
> >>>> Comparing that with untar is comparing a race car with a Smart car.
> >>>> Yes, it can be done, and no, I do not believe we should go backwards.
> >>>>
> >>>> We do not share filesystems between containers; we offer them block
> >>>> devices.
> >>>
> >>> Yes, this is a real nuisance for OpenStack-style deployments.
> >>>
> >>> One nice solution to this imo would be a very thin stackable filesystem
> >>> which does uid shifting, or, better yet, a non-stackable way of shifting
> >>> uids at mount.
> >>
> >> I vote for a non-stackable way too. Maybe at the generic VFS level, so
> >> that individual filesystems don't have to bother with it. From what I've
> >> seen, even simple stacking is quite a challenge.
> > 
> > Do you have any ideas for how to go about it?  It seems like we'd have
> > to have separate inodes per mapping for each file, which is of course why
> > stacking seems "natural" here.
> 
> I was thinking about a "lightweight mapping" which is simple shifting. Since
> we're trying to make this work together with user-ns mappings, a simple
> uid/gid shift should be enough. Please correct me if I'm wrong.
> 
> If I'm not, then it looks to be enough to have two per-sb or per-mnt values
> for the uid and gid shift. Per-mnt looks more promising for now, since a
> container's FS may be just a bind-mount from a shared disk.
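
As a userspace model of those two per-mnt values (the names and the base of
100000 are assumptions, not an existing API): the shift is added to owner ids
on their way from disk to the stat() family and subtracted again on chown().

#include <stdio.h>
#include <sys/types.h>

struct mnt_shift {
    uid_t uid_shift;
    gid_t gid_shift;
};

/* on-disk owner -> what stat() in the container should report */
static uid_t shift_uid_up(const struct mnt_shift *s, uid_t on_disk)
{
    return on_disk + s->uid_shift;
}

/* what the container passes to chown() -> what is written to disk */
static uid_t shift_uid_down(const struct mnt_shift *s, uid_t from_user)
{
    return from_user - s->uid_shift;
}

int main(void)
{
    /* Template files owned by 0..65535 on disk; this container's
     * mount shifts them up by 100000. */
    struct mnt_shift m = { .uid_shift = 100000, .gid_shift = 100000 };

    printf("on-disk uid 99  -> stat() reports %u\n",
           (unsigned)shift_uid_up(&m, 99));
    printf("chown to 100099 -> written to disk as %u\n",
           (unsigned)shift_uid_down(&m, 100099));
    return 0;
}
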

Per-sb would work.  Per-mnt would, as you say, be nicer, but I don't see how
it can be done, since parts of the VFS which get inodes but no mnt information
would not be able to figure out the shifts.

> > Trying to catch the uid/gid at every kernel-userspace crossing seems
> > like a design regression from the current userns approach.  I suppose we
> > could continue in the kuid theme and introduce an iuid/igid for the
> > in-kernel inode uid/gid owners.  Then allow a user privileged in some
> > ns to create a new mount associated with a different mapping for any
> > ids over which he is privileged.
> 
> User-space crossing? From my point of view it would be enough if we just
> turned the uid/gid read from disk (well, from wherever the FS gets them)
> into uids that match the user-ns's ones. This should cover the VFS layer
> and the related syscalls only, which are, IIRC, the stat family and chown.
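
Putting the two mappings side by side, a toy model of the path described
here, with illustrative numbers: the on-disk id is shifted per mount into a
kernel-internal id, which the existing user-ns mapping then translates at the
boundary, so stat() inside the container reports the container's own ids.

#include <stdio.h>
#include <sys/types.h>

/* modeled on from_kuid(): kernel ids in [base, base + range) appear
 * as [0, range) inside the namespace, anything else as overflowuid */
static uid_t kuid_to_ns_uid(uid_t kuid, uid_t base, uid_t range)
{
    return (kuid >= base && kuid - base < range) ? kuid - base : 65534;
}

int main(void)
{
    uid_t disk_uid = 99;        /* owner recorded on disk */
    uid_t mnt_shift = 100000;   /* hypothetical per-mnt shift */
    uid_t kuid = disk_uid + mnt_shift;

    /* this container's userns maps host 100000..165535 to 0..65535 */
    printf("disk %u -> kernel %u -> container sees %u\n",
           (unsigned)disk_uid, (unsigned)kuid,
           (unsigned)kuid_to_ns_uid(kuid, 100000, 65536));
    return 0;
}
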
> 
> Ouch, and the whole quota engine :\
> 
> Thanks,
> Pavel