Improving object mobility within the Hurd

2009-01-07 Thread Carl Fredrik Hammar
Hello,

I'll be starting my bachelor's thesis this semester and have decided to
pick up where I left off with libchannel.  (I'm not 100% sure it's called
``bachelor's thesis'' in English; it's similar to a master's thesis except
that it is for a bachelor's degree and gives half the credits.)

I've been planning this for a while, but didn't announce it until now
because I was reluctant to start a discussion without the ability to
properly formulate the problem.  I felt that the previous discussions
were marred with bad terminology and that my formulations caused more
confusion than clarity.  And the problems continued when trying to
formulate a spec for this project, which has taken me some time.

But I feel much more confident in those areas now, especially since my
mentor has given the go to the project after reading my spec and because
I've got a proper start on the report.

Since the spec is in Swedish I won't post it here.  Instead I'll sum it
up and take advantage of the fact that you're already acquainted with the
Hurd to leave out most of the background.  ;-)


The central thesis is improving support for mobile Hurd objects, where a
``Hurd object'' is an object used through any of the Hurd's interfaces
and ``mobility'' is the ability to transfer copies of an object from
one process to another for direct use, à la libstore.

``improving'' because libstore and libchannel already provide object
mobility, but they are limited in the types of objects that can be
implemented and in the object configurations they work for.  In contrast,
I will give special consideration to cases where the sender, the receiver,
or both are in chroots or sub-Hurds.

I have dropped the channel concept and have opted to focus on arbitrary
Hurd objects.  I've adopted the term ``mobile object'' from similar
frameworks such as Java's RMI, where it is usually used when the transfer
may load the object's required code base.  It's short and sweet.

I've split the project into three main parts: improving authority
verification, improving code transfer, and an object system that can
emulate Hurd objects.  This is because, while they reinforce each other
with respect to mobility, they are orthogonal and might be useful in
other areas.

By ``authority verification'' I mean the ability of the sender of an
object to verify that the recipient has the authority required to also
receive the object's dependencies.  E.g., having access to a file does not
mean you should get access to the entire backing store.  This problem gets
much more interesting if you also consider chroots and sub-Hurds.

By ``code transfer'' I mean how the code base of an object is specified
so that it can be found and loaded by the receiver.  The receiver must also
be able to determine whether it can trust the code.  Again, chroots and
sub-Hurds make this more fun, e.g. what if the two parties use the same
name for different code bases?

By ``emulating Hurd objects'' I mean a framework for implementing
objects that are similar to Hurd objects but can be used without RPCs.
Most notably, this object system must support objects with an arbitrary
number of arbitrary interfaces.

It must also be possible to wrap a port around an object so that it can
be used through RPCs as well.  The opposite must also be possible, so that
a port can be used as a fall-back if a transfer fails.  Interfaces that
aren't possible through RPCs, because they exploit the fact that the
receiver is in the same address space, should also be possible, though
naturally these can't be used through a wrapper port.
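
To make the emulation part a bit more concrete, here is a rough C sketch
of what such an object might look like.  All of the names (mob,
mob_interface, and so on) are hypothetical, since libmob doesn't exist
yet; this is only meant to illustrate ``arbitrary number of arbitrary
interfaces'' plus a port wrapper, not an actual design:

    /* Hypothetical sketch only -- none of these names exist anywhere yet.  */
    #include <hurd.h>
    #include <stddef.h>

    struct mob_interface
    {
      const char *name;        /* E.g. "io", "store", or a local-only one.  */
      const void *operations;  /* Table of function pointers for this interface.  */
    };

    struct mob
    {
      struct mob_interface *interfaces;  /* Arbitrary number of interfaces.  */
      size_t num_interfaces;
      void *state;                       /* Implementation-private state.  */
    };

    /* Look up an interface by name, returning its operation table or NULL.  */
    const void *mob_find_interface (struct mob *mob, const char *name);

    /* Wrap a port around MOB so its RPC-capable interfaces can also be used
       remotely; the reverse (a mob backed by a port, as a fall-back when a
       transfer fails) would be a separate constructor.  */
    error_t mob_make_port (struct mob *mob, mach_port_t *port);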

Lastly, these parts will be tied together into a library, which I think
I'll call ``libmob''.  I'll also do some simple performance benchmarks,
which should show when, if at all, using mobile objects instead of IPC is
a good idea.  This will hopefully give a substantial result without
committing myself to finding a concrete use case.  Though optimization is
probably one of mobility's least interesting possibilities, it's the one
that instantly springs to mind.


What a long mail this turned out to be, I hope you're still with me.  ;-)

All comments are welcome of course, but don't feel pressured to comment
on every detail, because I'll be sending out mails to discuss the
individual parts as I get to them, most likely in the order they were
mentioned.

Regards,
  Fredrik




Report in Swedish, English or both?

2009-01-07 Thread Carl Fredrik Hammar
Hi,

As I was writing my previous mail, I suddenly realized something that
should have occurred to me sooner.  I've started a bit on the report for
my bachelor's thesis, mostly writing up an introduction.  However, since
I based it on my original specification, it's all in Swedish.

I'm sure you all would prefer that I write it in English, which would
have the added bonus of providing me with additional reviewers.

I'm slightly torn, however, partly because I'm more proficient in Swedish,
and partly because Swedish is in such a poor state when it comes to
computer science.  Terminology is often borrowed from English even though
there are viable Swedish words that could replace it.  It would be nice
to make a contribution in this area.

One possibility would be writing one report in English and one in Swedish,
but I fear that would be far too much work given the time frame.

I figure it's either English or both, given the greater audience.
What do you think?

Regards,
  Fredrik




Re: Improving object mobility within the Hurd

2009-01-10 Thread Carl Fredrik Hammar
Hi,

On Sat, Jan 10, 2009 at 08:50:28AM +0100, olafbuddenha...@gmx.net wrote:
> > I have dropped the channel concept and have opted to focus on
> > arbitrary Hurd objects.  I've adopted term ``mobile object'' from
> > similar frameworks such as Java's RMI, where it is usually used when
> > the transfer may load the objects' required code base.  It's short and
> > sweet.
> 
> I don't know the term from other contexts, but indeed it seems pretty
> nice :-)

Perhaps I should add that the context in question is distributed systems.
The main difference from what I'm doing is that they usually assume
communication over a network.

> > I've split the project into three main parts: improving authority
> > verification, improving code transfer and an object system that can
> > emulate Hurd objects.  This because while they reinforce each-other in
> > respect to mobility, they are orthogonal and might be useful in other
> > areas.
> 
> I must admit that I don't see it (yet)... Can you please explain how you
> imagine these to be useful in different contexts?

The first two parts could be used to improve mobility in libstore
directly, without porting stores to the new object system.  They could
also be useful for alternative mobility frameworks, which might want to
use a tailored object system.  Most likely they would consider my object
system overkill if they don't actually want to emulate Hurd objects.

The object system itself can be used in contexts outside of mobility,
for instance libstore's mechanism for specifying stores on the command
line.  That way the process doesn't need another translator to provide the
store for it (by using it remotely or transferring it).  This is useful for
the root file system translator during boot, e.g. if the file system is on
some RAID configuration.

The command line mechanism can ignore many of the issues that arise in
mobility, e.g. consistency between different copies.  Looser restrictions
mean a larger class of objects can be implemented.

> > By ``code transfer'' I mean how the code base of an object is
> > specified so it can be found and loaded by the receiver.  The receiver
> > must also be able to determine if it can trust the code.  Again
> > chroots and sub-Hurds makes this more fun, e.g. what if the two
> > parties use the same name for different code bases?
> 
> As I said before, I believe it is best for the server to provide the
> object code directly (by an RPC), rather than letting the client look it
> up somewhere. This would entirely remove the naming problem, along with
> some others.

This is one of the methods I'm considering, i.e. ``naming'' the code with
a port to the .so file.  I do not have high hopes for this method though,
mostly because it's hard for the recipient to determine if it can trust
the code.  More on this in later mails.

> Regarding trust, I think this is complementary to authority. Probably
> they should be considered together.

Really?  Being allowed to do something is quite distinct from trusting
that it really does what it claims to do IMHO.

> > Interfaces that aren't possible through RPCs that utilize the fact
> > that the receiver is in the same address space should also be
> > possible, though naturally these can't be used through a wrapper port.
> 
> That means the objects can't migrate in a transparent fashion. What use
> cases do you see for non-transparent object migration?...

I thought you'd be cheering this feature.  It allows for interfaces that
don't have to assume that the object is remote, e.g. they can use
pointers, globals, etc.

This is done at the cost of transparency.  I don't know of a particular
use case where it would be worth losing it; I figure it's mostly a
question of not introducing any artificial limitations.

> > Lastly, these parts will be tied together into a library, which I
> > think I'll call ``libmob''.
> 
> LOL... Is the pun intended? :-) (herd vs. mob)

It is now!  ;-)

I was mostly associating it with its use in MUDs for monsters which can
roam from room to room, where it's short for Mobile OBject.  I like the
idea of little monsters running through the translators by themselves,
though technically this would be a mobile agent, not an object.

It is interesting to note that ``mob'' is derived from ``mobile vulgus'',
which is Latin for ``the easily movable crowd''.  (Thanks go to
Wikipedia.)  So it's also associated with mobility.

> > And I'll do some simple performance benchmarks, which should show when
> > using mobile objects instead of IPC is a good idea if at all.  This
> > will hopefully give a substantial result without committing myself to
> > find concrete use case.  Though optimization is probably one of
> > mobility's least interesting possibilities, it's the one that
> > instantly springs to mind.
> 
> What other possibilities do you see?

Maybe my statement was a bit premature, as currently my thoughts on this
are a bit vague.  But I have high hopes  :-)

One idea I have been toying with is using it 

Re: Report in Swedish, English or both?

2009-01-10 Thread Carl Fredrik Hammar
Hi,

My previous mail was a bit of a knee-jerk reaction.  I have come to
terms with writing the report in English and have started the translation
of the previous text.  Thanks to everyone for their input on the subject.

On Sat, Jan 10, 2009 at 08:57:02AM +0100, olafbuddenha...@gmx.net wrote:

> Obviously it will be much more useful if written in English -- it will
> actually be of use to the Hurd project, rather than just gathering dust
> on some shelve... If you are allowed, you should write it in English by
> all means. (Check with your advisor.)

My adviser was very positive about changing it to English.

> You can write it is Swedish too of course, if you really care so much...
> I for my part wouldn't bother :-)

I don't think I will.

Perhaps I'll redirect my efforts to improving localization of some free
software or something at a later date.

On Sat, Jan 10, 2009 at 09:58:28AM +0100, Michael Banck wrote:
> Thus, I would advise you to write this in english even if your english
> is not perfect and it will take longer; it will certainly be fruitful
> for the inevitable further publications from you in english, and the
> sooner you become familiar with writing scientific publications in
> english, the better for your scientific output, I think.

Yes, I agree with what you're saying.  But I'm not too sure about those
inevitable further publications.  ;-)

On Sat, Jan 10, 2009 at 12:26:03PM +0100, massimo s. wrote:
> Perfect, it's your chance to become more proficient in English too.

That's a nice way to spin it.  :-)

Regards,
  Fredrik




Re: Improving object mobility within the Hurd

2009-01-22 Thread Carl Fredrik Hammar
Hi,

On Fri, Jan 16, 2009 at 01:11:09PM +0100, olafbuddenha...@gmx.net wrote:
> BTW, it's rather sad that we already run into terminology issues
> again... We need to sort this out, to make a clear distinction between
> abstract objects and RPC objects.
> 
> (Judging by the terminology used in libstore, it seems that my use of
> "object" as a more abstract entity is closer to traditional use in Hurd:
> The store classes are clearly classes of abstract objects, not of RPC
> objects...)

Let's sort out this terminology business first, then.

I actually think we agree on what an object is: a bundle of state and
code with a specific interface, i.e. what you call abstract objects.
The interface can be RPCs, function calls, direct state manipulation,
or some other way of using the object.

A /remote/ object is an object that can be called remotely.  A /local/
object is one that can be called locally.  These are the terms used by
Java's RMI framework.

The biggest disadvantage is that they overload the terms local and remote,
which can also be used for location.  However, the two usually coincide,
and if not it can be clarified, e.g. `a remote object that is local to
the process' or `a local object that is used remotely'.

They are the best /pair/ of terms I've found so far.  `RPC object'
is more specific than remote, but I haven't been able to find a good
substitute for /local/, the best I have mustered is /C/ object.

A /Hurd object/ is a remote object implementing one of the Hurd's
interfaces.  This is the way it's used in the critique.  It is somewhat
confusing as it could be taken to mean /any/ object in the Hurd, e.g.
including stores.  I will try to avoid it in favor of remote object.

A mobile object is one that can be copied from one process to another,
code and all.  Note that both local and remote objects can be mobile
or not.

Some might object (he he) to the fact that mobile objects are copied and
not moved.  However, movement is typically trivial to implement on top
of a copy: just remove the original object, and in the case of RPC
objects, transfer the port receive right.

An /object system/ is a framework for implementing objects and controls
how they may be formed.  libstore is a trivial object system where all
objects have the same single interface.  Mach's IPC and MiG form the
object system for remote objects, which allows objects with several
interfaces.

A /mob/ is an object implemented through my future object system.
Unless otherwise mentioned, a mob is assumed to be mobile, as mobility
is the framework's primary purpose.

/Transparent/ in this context means that either a local or remote object
can be used through the same interface (using a wrapper).  This is to make
it possible to fall back on using the object remotely if the object
can't be transferred.

> On Sat, Jan 10, 2009 at 06:56:15PM +0100, Carl Fredrik Hammar wrote:
> > On Sat, Jan 10, 2009 at 08:50:28AM +0100, olafbuddenha...@gmx.net
> > wrote:
> 
> > > > I've split the project into three main parts: improving authority
> > > > verification, improving code transfer and an object system that
> > > > can emulate Hurd objects.  This because while they reinforce
> > > > each-other in respect to mobility, they are orthogonal and might
> > > > be useful in other areas.
> > > 
> > > I must admit that I don't see it (yet)... Can you please explain how
> > > you imagine these to be useful in different contexts?
> > 
> > The first two parts could be used to improve mobility in libstore
> > directly, without porting stores to the new object system.  It could
> > also be useful for alternative mobility framework, which might want to
> > use a tailored object system.  Most likely they might consider my
> > object system overkill if they don't actually want to emulate Hurd
> > objects.
> 
> I see. It all hinges on the definition of "Hurd object"... I wasn't
> paying enough attention: By object, you mean strictly the server code
> serving RPCs on a port it seems. 

By Hurd object I mean remote object as explained above.

Also, by `emulate' I mostly mean `look like', not full emulation that
would make it possible to plug in a Hurd object implementation and have
it work directly (as I aimed for in our last discussion).

This mostly means that objects should be dynamically typed and be able
to implement several interfaces.

> Object migration would simply mean loading the code in the client
> instead, and pretending to do real RPCs on it.

This would be one possibility.  I'm trying to avoid making assumptions
about what interfaces might look like.

> I was thinking of objects in a somewhat more abstract sense: The actual
> functionality provided by t

Re: Improving object mobility within the Hurd

2009-01-27 Thread Carl Fredrik Hammar
Hi,

this is just a small correction to my previous mail.

On Thu, Jan 22, 2009 at 10:54:53AM +0100, Carl Fredrik Hammar wrote:
> Take copy store as an example.  The copy store makes a copy-on-write
> copy of another store and discards changes when closed.  For instance,
> a copy store over a zero store is useful for backing /tmp.
> 
> If a copy store where to migrate, then all modifications would also be
> copied.  Writes made to the copy would not be reflected in the original
> and vice versa.  Because of this, the copy store has the enforced flag set,
> which makes storeio refuse migration requests.

It seems I read a bit too much into a comment regarding the enforced
flag.  The actual mechanism that prevents migration is its marshalling
method that always returns EOPNOTSUPP.
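
For the record, here is roughly what that looks like; I'm recalling the
hook names in libstore's struct store_class from memory, so treat this as
an approximation of the copy store rather than its actual source:

    #include <hurd/store.h>
    #include <errno.h>

    /* A store class can veto marshalling simply by failing in its encoding
       hook, so storeio never hands the store's contents to a client for
       migration.  (Hook name and signature recalled from memory.)  */
    static error_t
    copy_allocate_encoding (const struct store *store, struct store_enc *enc)
    {
      return EOPNOTSUPP;
    }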

Just so nobody gets hung up on this little mistake.  ;-)

Regards,
  Fredrik




Re: Improving object mobility within the Hurd

2009-02-28 Thread Carl Fredrik Hammar
Hi,

sorry for the late reply.  I had written most of the mail quite a while
ago, except for the terminology discussion.  Then I fell ill and was
unable to complete that part, which I felt required a lot of concentration.
Then it took additional time to get back on the horse and into the
nitty-gritty details.  In hindsight, I should have broken up the mail and
replied to the different parts as I finished them.

On Fri, Jan 30, 2009 at 10:39:51AM +0100, olafbuddenha...@gmx.net wrote:
> On Thu, Jan 22, 2009 at 10:54:53AM +0100, Carl Fredrik Hammar wrote:
> > On Fri, Jan 16, 2009 at 01:11:09PM +0100, olafbuddenha...@gmx.net
> > wrote:
> > > On Sat, Jan 10, 2009 at 06:56:15PM +0100, Carl Fredrik Hammar wrote:
> 
> > I actually think we agree on what an object is: a bundle of state and
> > code with a specific interface, i.e. what you call abstract objects.
> > The interface can be RPCs, function calls, direct state manipulation,
> > or some other way of using the object.
> 
> I'm not sure we are talking about the same... By "abstract object", I
> mean a bundle of state and code, but not necessarily bound to a specific
> interface. It could have multiple interfaces, or a single internal one
> that can be mapped do different external interfaces (RPC, local function
> call etc.).

OK, so it was a distinct concept.  I see how it could be useful.  However,
abstract objects seem to be more of a policy, and I'm more interested in
the underlying mechanism; we still need to be able to discuss how an
abstract object is to be implemented concretely.

Unless otherwise stated, I'm referring to ``concrete'' objects.

> Or rather, there is a single interface at an abstract level, but this
> can be implemented using different transport mechanisms or containers or
> whatever we call them.

Ah, an abstract interface.  I guess you can see related local and remote
interfaces as instances of a single abstract interface.

What you call transports, I have called wrapper objects.  I prefer
your terminology, but maintain that transports are distinct objects.

As an example: on the server side, a transparent object would be
implemented using a remote transport around a local object.  On the client
side, it would be a local transport connected to the remote transport.
After migration, the client would use a copy of the server's local object
directly.

> > A /remote/ object is an object that can be called remotely.  A /local/
> > object is one that can be called locally.
> 
> I'm not sure remote object vs. local object is a meaningful distinction.
>
> We have the normal Hurd servers, where the objects are hard-wired to the
> RPC transport. We have the store framework (and hopefully a more generic
> framework in the future) for mobile objects, which can reside in the
> server and be accessed through the RPC transport, or be loaded into the
> client and accessed through local function calls. And we discussed the
> possibility of objects that can only reside in the client, and could be
> hard-wired to a local function call transport.

It isn't meaningful to distinguish objects that can be called remotely
using RPCs from objects that cannot?  Perhaps local is a bit redundant,
since almost all abstract objects can at least be called locally from
the implementing process.  More on this below.

> > They are the best /pair/ of terms I've found so far.  `RPC object' is
> > more specific than remote, but I haven't been able to find a good
> > substitute for /local/, the best I have mustered is /C/ object.
> 
> I'm not sure whether this was clear: By "RPC object", I did not mean a
> specific subclass of some larger class I happened to call "abstract
> objects", but rather *any* abstract object, when accessed through the
> RPC transport/container. The *same* abstact object can be access as an
> RPC object, or as something else...

I've been quite torn over this issue.  This is part of the abstract object
discussion.  The question is where to draw the line of what constitutes
a single object.  While I consider transports clearly separate objects at
the concrete level, it's less clear when considering normal Hurd objects.

Is the RPC handling part an additional interface or a separate proxy
object?  Or if preference is given to the first view, is there any point
in regarding the proxy as separate from the proxied object?

An object depends on its interfaces but is independent of its proxies.
While it controls which interfaces it implements, it has no control
over its proxies.  This makes proxies more flexible and dynamic than
interfaces: you can have several different or equivalent proxies to the
same object, and even proxies of proxies.

Consider an object bound to several ports, i.e. messages to e

Re: Unionmount. Basic details

2009-04-06 Thread Carl Fredrik Hammar
Hi,

On Mon, Apr 06, 2009 at 04:26:25PM +0300, Sergiu Ivanov wrote:
> Hello,
> 
> I would like to start a discussion about some basic details
> implementation of the unionmount project.
>
> Firstly, the implementation was suggested in two ways: as a stand-alone
> translator and as a series of extensions to lib{net,disk}fs
> libraries. These two approaches have there advantages and
> disadvantages. Implementing unionmount functionality in a stand-alone
> translator will involve an extra layer of translation (which would often
> mean an extra context switch in each operation), but will be more
> flexible in the meaning that to modify parts of functionality will
> require rebuilding a single translator. OTOH, implementing unionmount as
> extensions to translator libraries would mean faster operation and
> automatic inclusion of the functionality in *every* existing
> translators, but modifying something would require more effort. I am
> generally inclined to implement the functionality as a stand-alone
> translator first (though things might well show that this variant of
> implementation would be best (my personal opinion)), and moving things
> to lib{net,disk}fs later on.

I think the first approach is the better of the two.  However, you
might want to consider reusing unionfs instead of rewriting it.

> Let me first expose my understanding of the term ``unionmount
> functionality''. Usually (when doing settrans) the translator being set
> on a node (directory) foo/ obscures the directory structure lying under
> foo/. The essence of the unionmount idea here is to mount the translator
> is such a way that the filesystem the translator makes public *merge*
> with the underlying filesystem.
> 
> As far as the stand-alone implementation is concerned, I think we should
> borrow as much ideas as possible from unionfs. Firstly, unionmount
> should most probably be a libnetfs-based translator. Now let us go
> further: unionmount is expected to merge the filesystem on which it sits
> with the filesystem exposed by the translator it is asked to start in
> unionmount mode (further referred to as ``the Translator''). When
> unionmount is starting, it has (of course) a port to the underlying
> node, which means that it has full access to its underlying
> filesystem. Now, it can create a shadow node, mirroring the underlying
> node and then set the Translator on this shadow node. The purpose of
> this is to keep the Translator away from the real underlying node,
> giving it at the same time all the information it should require.

So the only difference between unionmount and unionfs is the setup and
the shadow node, right?

Then it might be possible to implement the shadow node in unionmount
and pass it to unionfs.  Just wrap a file descriptor around the port
and let unionfs inherit it; to make unionfs use it, pass `/dev/fd/$FD'
as an argument.

If you're uncomfortable keeping around a process just to implement
a shadow node, consider implementing a dedicated shadow node server
that just sits on e.g. `/server/shadow' and passes out shadow nodes in
response to RPCs of a new kind.

I think such a server might be a good idea in any case.  Shadow nodes are
already needed by nsmux (right?) and seem generally useful for creating
``anonymous'' file systems.  The only real question is whether they
are independent enough to be put in their own server.

I have only followed the discussions on nsmux lightly, so there might be
some misunderstandings.  Please keep this in mind.

> [snip]
> 
> As I have already mentioned, I am personally more inclined to implement
> the unionmount functionality as a stand-alone translator first, because
> this approach preserves modularity. I am aware of the performance issue
> about extra context switches, but if the unionmount translator will not
> give off ports to its own nodes, but ports to *external* nodes
> (underlying filesystem nodes or those published by the Translator), it
> will not take part in the frequent (and most time-critical) I/O
> operations and act as an initial source of ports only. I think it is
> reasonable for unionmount not to create proxy nodes (in nsmux
> terminology), because I cannot presently invent a use case where it will
> need control over the ports it gave off to the client.

Your reasoning here seems solid to me, so I say drop the library approach.
:-)

Regards,
  Fredrik




Re: Unionmount. Basic details

2009-04-09 Thread Carl Fredrik Hammar
Hi!

On Tue, Apr 07, 2009 at 12:20:40AM +0300, Sergiu Ivanov wrote:
> > Then it might be possible to implement the shadow node in unionmount
> > and pass it to unionfs.  Just wrap a file descriptor around the port
> > and let unionfs inherit it, to make unionfs use it pass `/dev/fd/$FD'
> > as an argument.
> >
> > If you're uncomfortable keeping around a process just to implement
> > a shadow node, consider implementing a dedicated shadow node server.
> > That just sitts on e.g. `/server/shadow' and passes out shadow nodes in
> > responce to RPCs of a new kind.
> >
> > I think such a server might be a good idea in any case.  Shadow nodes are
> > already needed by nsmux (right?) and seem generally useful for creating
> > ``anonymous'' file systems.  The only real question is wheather they
> > are independant enough to be put in their own server.
> 
> Yep, I'm really uncomfortable about keeping around processes, but in
> this case I'm inclined to think that spawning an extra process would be
> an overkill. I'm more inclined to the simpler variant of borrowing the
> code from unionfs, stripping off the unnecessary features and modifying
> the startup sequence to get a new translator.

Well, it isn't simpler in the sense that we'd need to maintain two very
similar yet different code bases.  Improvements to one would likely need
to be ported to the other.

> This would yield faster code (no extra context switch, the shadow
> node is within the main translator) and more control over the merging
> functionality.

Isn't the shadow node used mostly by the mountee?  If so it will be used
through RPCs in any case.

I don't think there would be any extra context switches, they would
just be divided between different processes.  An extra process per mount
is no big deal IMHO.

> You see, I suppose that some time later we will be adding some specific
> merging rules, which would be very difficult (if not impossible)
> with the approach you are suggesting (about reusing unionfs as a whole).

OK, here we have a more concrete reason to fork unionfs's code.  However,
I can't think of any rules that wouldn't also be useful in unionfs.
Have you got anything in particular in mind?

I'm guessing that it's easier to turn a unionmount that uses unionfs into
one that uses code from unionfs than the other way around.  I wouldn't
start by forking the code base unless I *knew* that I would do it
eventually.

> I like the idea of a shadow-node server :-) However, I would rather keep
> shadow-nodes inside processes, because they are cheap as compared to
> RPCs and you have full control over them. Moreover, some translators
> (like unionfs and hence unionmount) would like to keep a list of nodes
> they own and drop nodes when they don't need them. This policy would
> require more effort if a shadow-node server is involved.

Does unionmount (or nsmux) need to do anything except keep them around and
destroy them when they aren't needed anymore?  It seems easy enough to
destroy a node through RPCs: just unlink it or something.  If not, it
should be easy enough to create a shadow_node_destroy RPC.

Also unionfs itself should only be interested in the mountee and the
underlying filesystem, not the shadow nodes (AFAICT).

Thinking about it, clean-up presents some problems with ditching a
unionmount process.

The mountee becomes effectively orphaned when unionfs dies.  With no
way to reach it from the file system, the user would have to look up its
PID to kill it.

Handling fsys_goaway requires a proxy for the ports that receive it,
which forwards the request to the mountee and, if that doesn't fail,
forwards it to unionfs.  You probably know better than I do, but I'm
hoping there's only one such port.  In a pinch, this could also be
implemented by a server shared by all unionmounts.

Now the question is what to do if unionfs crashes or is killed.
If there were a unionmount process it could detect this and ask
the mountee to go away.  Doing without one is harder.  The only solution
I can come up with is letting unionfs inherit a port to a server that
kills the mountee when there are no send rights left to the port.  But
that seems like a really weird hack.
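
For what it's worth, the ``weird hack'' could be built on Mach's
no-senders notifications.  A minimal sketch of the watchdog side
(hypothetical, not part of any existing server): allocate a receive right,
hand a send right to unionfs to inherit, and ask the kernel to tell us
once all send rights are gone; on receiving MACH_NOTIFY_NO_SENDERS the
watchdog would then ask the mountee to go away.

    #include <mach.h>
    #include <mach/notify.h>
    #include <hurd.h>
    #include <error.h>

    /* Create the port whose send right unionfs will inherit; when the last
       send right dies with unionfs, a no-senders notification arrives on
       the port itself and the watchdog's message loop can shut down the
       mountee.  */
    static mach_port_t
    make_watch_port (void)
    {
      mach_port_t watch, old;
      error_t err;

      err = mach_port_allocate (mach_task_self (),
                                MACH_PORT_RIGHT_RECEIVE, &watch);
      if (err)
        error (1, err, "mach_port_allocate");

      err = mach_port_request_notification (mach_task_self (), watch,
                                            MACH_NOTIFY_NO_SENDERS, 0,
                                            watch,
                                            MACH_MSG_TYPE_MAKE_SEND_ONCE,
                                            &old);
      if (err)
        error (1, err, "mach_port_request_notification");

      return watch;
    }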

Of course, implementing such servers would be a long term goal.  This is
just to convince you that it's possible to reuse unionfs without an
additional process per unionmount.  Admittedly, these solutions aren't
very pretty, but then again I don't think an extra process per
unionmount is a problem at all.  ;-)

Regards,
  Fredrik




Re: Unionmount. Basic details

2009-04-09 Thread Carl Fredrik Hammar
Hi,

> > unionmount is expected to merge the filesystem on which it sits with
> > the filesystem exposed by the translator it is asked to start in
> > unionmount mode (further referred to as ``the Translator'').
> 
> Nah, I think there are various clearer ways to name it: e.g. "target
> translator", or perhaps "inferior" (like in a debugger), or "mountee"...
> :-)

My vote is on ``mountee'', as you might have noticed in my other mail.

> I don't think we should call it "shadow node": although there are some
> similarities, it seems to me that it's not quite the same as the shadow
> nodes in nsmux -- it would be confusing.
> 
> For now, I suggest calling it "internal node" or "hidden node". We can
> still change the name later when the exact role becomes clearer.

How about ``wedge node''?  I like the image it gives of prying apart the
mountee from the mount point.  :-)

I'll stick to ``shadow node'' until a decision is made.

> It is not fully clear right now -- I realized that there is another
> decision to make: should the unionmount translator be directly visible
> as the translator attached to the mount node; or should it serve as a
> proxy, forwarding all requests on the filesystem port to the target
> translator -- thus making itself more or less transparent, so it appears
> as if the target was attached to the mount node directly?
> 
> I tend towards the latter.

I think the latter makes a lot more sense.  I can't think of any reason
to let the mountee be aware that it's detached from the underlying
file system.  If anything it would just confuse it.

> One question is, how does the user request the unioning functionality? A
> possible way would be adding options to the actual translators -- but
> this would probably have to be handled explicitely by each translator,
> making it rather ugly.
> 
> Adding an option to settrans seems a more logical approach. However, we
> will need a way to pass this information to the translator somehow -- it
> might require changes to the translator startup procedure...

I'd go with a settrans switch or a new but similar command, i.e.
unionmount.

Regards,
  Fredrik




Re: Unionmount. Basic details

2009-04-09 Thread Carl Fredrik Hammar
Hi,

On Wed, Apr 08, 2009 at 07:10:26PM +0200, olafbuddenha...@gmx.net wrote:
> On Mon, Apr 06, 2009 at 06:58:23PM +0200, Carl Fredrik Hammar wrote:
> > On Mon, Apr 06, 2009 at 04:26:25PM +0300, Sergiu Ivanov wrote:
> 
> > So the only difference between unionmount and unionfs is the setup and
> > the shadow node, right?
> 
> Well, not quite. unionmount needs additional functionality for the
> internal node and related stuff; but on the other hand, the actual
> merging (unioning) requirements are simpler: while unionfs must be able
> to handle an arbitrary number of merged locations, for unionmount it's
> always exactly two. This simplifies the implementation and policy
> decisions, and might even call for different optimization approaches --
> so it seems quite possible that the code base will diverge considerably
> after the fork...

Why stop at two?  Mounting several file systems at once seems perfectly
reasonable to me.

> > If you're uncomfortable keeping around a process just to implement a
> > shadow node, consider implementing a dedicated shadow node server.
> > That just sitts on e.g. `/server/shadow' and passes out shadow nodes
> > in responce to RPCs of a new kind.
> 
> While there are some servers in the current Hurd that work like that, I
> don't think this is a good approach. We really want to move away from
> such centralization -- it limits scalability, lowers robustness, and
> increases complexity...

Yes, you are right.  I got a bit caught up in the moment, and didn't think
it through.

> If we really feel the need to run several instances of a translator in a
> single process (to reduce overhead), this should happen as transparently
> as possible, and probably be handled by a generic framework -- similar
> to translator stacking...

Seems like a potential use case for libmob.  We have discussed similar use
cases, but I must admit that I haven't considered a generalized ``hosting
service'' for mobile objects.

Not sure it's a good idea, but I like it.  :-)

> > I think such a server might be a good idea in any case.  Shadow nodes
> > are already needed by nsmux (right?) and seem generally useful for
> > creating ``anonymous'' file systems.
> 
> As I pointed out in the other mail, the requirements here are really
> different from the shadow nodes in nsmux. Yet the observation about
> anonymous file systems is extremely to the point!

Yes, I'm a bit surprised it hasn't come up before.  The concept seems
obvious in hindsight.  I haven't been able to keep my mind off it since.
:-)

> I actually realized a couple of days ago that unionmount could probably
> be done by a combination of nsmux and unionfs: I think it should be
> possible to do something like
> 
>settrans veth /hurd/unionfs veth veth,,eth-multiplexer
> 
> (I didn't want to bring this up yet, to avoid further confusion :-) )

Bootstrapping a unionmount of nsmux like that might be tricky.  ;-)

> The funny thing is that while I was thinking about something like
> anonymous file systems multiple times in the past; and also had a vague
> realization that nsmux can serve most (if not all) use cases for that;
> and even was aware that this is what the example above uses -- somehow I
> never really consciously thought about the "anonymous" aspect as the
> common underlying idea up till now...

My first thoughts of anonymous file systems got me thinking of an
``anonfs'' translator that launched translators encoded in the paths.
But the thought of encoding paths in a path gave me shudders.

Then it got me thinking of a shell utility, e.g.:

   letfs foo "/hurd/nsmux foo" -- ... -- \
  settrans /hurd/unionfs foo %1 %2 ... %n

Where %i is replaced with /dev/fd/${fd to %i's root}.

This leaves the question of how and when to make the anonymous file
systems go away.  The only thing I can come up with is to do it when
the command exits.  But this doesn't seem appropriate in general,
e.g. what if the command makes a server open an anonymous file system
and then exits?

However, for a unionmount it seems to make sense for them to go away
when unionfs does.  Just add a wait for unionfs to finish to make the
example above work this way.

Given that, it seems natural for anonymous file system dependencies to go
away when the dependent file system does.  Perhaps it would be natural to
add such a ``letfs'' feature to settrans itself (or a settrans variant).

> > The only real question is wheather they are independant enough to be
> > put in their own server.
> 
> Indeed, while we always try to think of the shadow nodes as separate
> translators, they are really quite intertwined with the rest of nsmux.
> But perhaps with anonymous translators as a more ge

Re: Unionmount. Basic details

2009-04-10 Thread Carl Fredrik Hammar
On Fri, Apr 10, 2009 at 07:18:34PM +0300, Sergiu Ivanov wrote:
> Carl Fredrik Hammar  writes:
> >> I actually realized a couple of days ago that unionmount could probably
> >> be done by a combination of nsmux and unionfs: I think it should be
> >> possible to do something like
> >> 
> >>settrans veth /hurd/unionfs veth veth,,eth-multiplexer
> >> 
> >> (I didn't want to bring this up yet, to avoid further confusion :-) )
> >
> > Bootstrapping a unionmount of nsmux like might be tricky.  ;-)
> 
> Hm, browsing through my memory I cannot remember why setting a
> unionmount translator via nsmux would be tricky. It's no trickier that
> setting it through settrans, IMHO :-)

It was just intended as a quick comment, but perhaps I should have
explained further.  In the example, nsmux is used to do a unionmount,
but nsmux itself is a translator you'd want to unionmount.  So you'd
need a unionmount of nsmux to unionmount nsmux, in which case you'd
already have an nsmux instance you can use.  :-)

Regards,
  Fredrik




Re: Unionmount. Basic details

2009-04-11 Thread Carl Fredrik Hammar
Hello,

On Fri, Apr 10, 2009 at 08:35:07PM +0300, Sergiu Ivanov wrote:
> Carl Fredrik Hammar  writes:
> > On Tue, Apr 07, 2009 at 12:20:40AM +0300, Sergiu Ivanov wrote:
> >> > [...]
> >> > If you're uncomfortable keeping around a process just to implement
> >> > a shadow node, consider implementing a dedicated shadow node server.
> >> > That just sitts on e.g. `/server/shadow' and passes out shadow nodes in
> >> > responce to RPCs of a new kind.
> >> > [...]
> >> 
> >> Yep, I'm really uncomfortable about keeping around processes, but in
> >> this case I'm inclined to think that spawning an extra process would be
> >> an overkill. I'm more inclined to the simpler variant of borrowing the
> >> code from unionfs, stripping off the unnecessary features and modifying
> >> the startup sequence to get a new translator.
> >
> > Well it isn't simpler in the sense that we'd need to maintain two very
> > similar yet different code bases.  Improvements to one would likely get
> > ported to the other.
> 
> I understand this, but I'd rather say that reusing the whole unionfs
> when we just need to join 2 (or maybe more, if antrik says it's all
> right ;-) ) together is an overkill: unionfs implements a couple of
> extended functions (not required by unionmount) and modifications in
> which are also likely to get in the way of normal operation of some
> basic functions (as it sometimes happens). Also, the operation of
> launching unionfs in the way you suggest is fairly tricky, it seems to
> me.
> 
> Anyway, I'm not sure whether bringing some details in the code in
> unionmount up to date will require ``porting'', since I'm going to touch
> only a very small portion of the code, leaving the bulk of it intact.
> 
> Also, I'm not aware of anybody still doing any changes to unionfs :-)

I don't know the current state of unionfs myself, but I'm assuming it
still has bugs.  And I'm not (yet) convinced that any rule you'd add to
unionmount would not also be useful in unionfs, or the other way around.
If unionmount uses unionfs, it benefits from improvements to it
automatically.

Also, in many ways unionfs seems like a good candidate to make use of
libmob, which I'm working on.  Making that change would hopefully not be
too extensive, but it would not be trivial.

> >> This would yield faster code (no extra context switch, the shadow
> >> node is within the main translator) and more control over the merging
> >> functionality.
> >
> > Isn't the shadow node used mostly by the mountee?  If so it will be used
> > through RPCs in any case.
> >
> > I don't think there would be any extra context switches, they would
> > just be divided between different processes.  An extra process per mount
> > is no big deal IMHO.
> 
> This is true that in the greater part of the operation of unionmount
> there will be no difference in speed (the difference will be at startup,
> of course). However, I still don't have sufficiently compelling reasons
> to considering making the startup sequence more sophisticated.

I don't think the start-up sequence will be very complicated:

  1) open a port to the mountee's root
  2) wrap it in a file descriptor (make sure it will be inherited)
  3) fork and exec
     "settrans $st_args $mount_point \
      /hurd/unionfs $unionfs_args /dev/fd/$fd $mount_point"
  4) close the port and file descriptor
  5) stack the go_away interceptor over (the new) $mount_point

Of course, you'll be the one stuck with handling the details.  In the end
it might be a lot more complicated than I think it is.
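
For what it's worth, here's a rough C sketch of steps 2) and 3), using
glibc's openport() (declared in <hurd.h>) to wrap the port in a file
descriptor, if I remember its behavior correctly; the settrans arguments
and the fcntl call are only illustrative:

    #include <hurd.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Wrap the mountee's root port in a file descriptor and hand it to
       unionfs via /dev/fd.  MOUNTEE_ROOT and the argument list are
       placeholders; error handling is elided.  */
    static void
    start_unionfs (file_t mountee_root, const char *mount_point)
    {
      char fd_path[64];
      int fd = openport (mountee_root, O_RDONLY);

      fcntl (fd, F_SETFD, 0);   /* Make sure it survives the exec.  */
      snprintf (fd_path, sizeof fd_path, "/dev/fd/%d", fd);

      if (fork () == 0)
        {
          execlp ("settrans", "settrans", "-a", mount_point,
                  "/hurd/unionfs", fd_path, mount_point, NULL);
          _exit (127);
        }

      close (fd);   /* Parent: drop our copy once the child has it.  */
    }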

> Also, as antrik pointed out in some other mail, the node in unionmount
> is not really a shadow node conceptually.

Well, I've never known exactly what a shadow node is in nsmux, so I
wouldn't know the difference.  No need to explain it to me though; I
don't think it has much bearing on this discussion.

> [snip]
>
> >> I like the idea of a shadow-node server :-) However, I would rather keep
> >> shadow-nodes inside processes, because they are cheap as compared to
> >> RPCs and you have full control over them. Moreover, some translators
> >> (like unionfs and hence unionmount) would like to keep a list of nodes
> >> they own and drop nodes when they don't need them. This policy would
> >> require more effort if a shadow-node server is involved.
> >
> > Does unionmount (or nsmux) need to do anything except keep them around and
> > destroy them when not needed anymore?  It seems easy enough to destroy
> >

Re: Unionmount. Basic details

2009-04-17 Thread Carl Fredrik Hammar
Hi!

On Mon, Apr 13, 2009 at 12:41:07AM +0300, Sergiu Ivanov wrote:
> [snip]
>
> >> This is true that in the greater part of the operation of unionmount
> >> there will be no difference in speed (the difference will be at startup,
> >> of course). However, I still don't have sufficiently compelling reasons
> >> to considering making the startup sequence more sophisticated.
> >
> > I don't think the start up sequence will be very complicated:
> >
> >   1) open a port to the mountee's root
> >   2) wrap it in a file descriptor (make sure it will be inherited.)
> >   4) fork and exec
> >      "settrans $st_args $mount_point \
> >       /hurd/unionfs $unionfs_args /dev/fd/$fd $mount_point"
> >   5) close port and file descriptor
> >   6) stack the go_away interceptor over (the new) $mount_point
> >
> > Of course, you'll be the one stuck with handling the details.  In the end
> > it might be a lot more complicated than I think it is.
> 
> Hm... I'm not sure I can fully assess the complexity of ``go_away
> interceptor'' (whose structure is a bit obscure to me). Also, I'm afraid
> that the interceptor might introduce an extra context switch in each
> RPC, what do you think?

That depends.  You'd need to proxy the control port, and any port from
which you can get the control port.  I was assuming that it's only
possible to get it from its mount point or perhaps its root.  But if it's
possible to get it from any file node, you'd need to proxy them all.  And
the existence of the file_getcontrol RPC seems to imply the latter.  :-(
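
To illustrate: as far as I can tell, a client holding a port to any node
can simply ask for the control port (given sufficient authority), so every
node handed out would have to be proxied.  Roughly:

    #include <hurd.h>
    #include <hurd/fs.h>

    /* Any node can betray the real control port: a client holding FILE,
       a port to some node of the filesystem, can just ask for it.
       Sketch only; error handling elided.  */
    static mach_port_t
    control_port_of (file_t file)
    {
      mach_port_t control = MACH_PORT_NULL;
      file_getcontrol (file, &control);
      return control;
    }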

> >> When unionmount is a fork off unionfs code base, it has *full* control
> >> over the mountee and can do whatever it wants to it. When unionmount is
> >> requested to quit, it can gracefully shut down the mountee, thus doing
> >> all the cleanup required.
> >
> > You'd still have to handle fsys_goaway differently than unionfs.  The
> > extra work in reusing unionfs would be to forward all other RPCs to the
> > node and setting it up.
> 
> I'm not sure I can understand completely your idea... What node do you
> refer to?

Any node involved in getting the control port.

> >> Having the mountee killed after unionmount is (forcibly) killed may not
> >> always be the desired effect, you know. I would rather have unionmount
> >> die on its own, but this is just an inclination, not a founded
> >> opinion. Personally, when I kill -9 a program, I am very much prepared
> >> to go after it and to collect all the garbage.
> >
> > Oh I agree.  The problem concerned with crashes and non-KILL signals to
> > unionfs, without a unionmount to clean up.  A unionmount process could
> > trap non-KILL signals and handle them gracefully.
> 
> Yes; and this is another thing that makes me an adept of the
> ``fork-the-code-base'' approach :-)

It is possible to handle them as long as there is a unionmount process.
This really has nothing to do with forking versus not; it was only a
problem when considering pushing the implementation into system servers
so that no extra unionmount process is needed.

> [snip]
>
> > There is one more route you may want to consider.  As I mentioned in
> > my replies to antrik, unionmount is basically anonymous file systems +
> > unionfs.  You could write a utility to handle anonymous file systems
> > instead. Even if it turns out that a specific unionmount with special
> > rules is needed, we'd still get a very useful utility out of the
> > process.
> 
> Yes, this is true. The idea of anonymous filesystems is very
> interesting; but I must acknowledge that I don't know many of the
> involved details. I guess there will be a need for a discussion about
> anonymous filesystems soon :-)

Well, it all comes down to whether you are comfortable with doing it
or not.  We can always return to the idea later.  I would take it up
myself, but that will have to wait until I've done my thesis on libmob.

Regards,
  Fredrik




Authority verification

2009-04-23 Thread Carl Fredrik Hammar
Hi,

I guess this mail turned out to be more of a report on my findings than
the start of a discussion.  So I'm mostly looking for feedback, e.g. if
what I'm saying doesn't make sense, if one of my assumptions is wrong,
or if I have missed something.  Anyway, here comes a long mail about
what I like to call ``authority verification''.

Though I am open to suggestions for alternatives to the name.  ;-)

Most objects in the Hurd depend on other objects to function.  In order
for a process to receive a mobile object from a sending process it must
gain access to those same, or equivalent, objects.  Simply bundling
the dependencies with the mobile object could grant the receiver direct
access to objects it wouldn't otherwise be able to gain access to.

The sender is of course free to do this.  However, that could effectively
punch a hole through the normal security setup.  Overriding security in
this manner could be part of the object's intended functionality, but if
that is not the case, it would make users wary of loading mobile objects,
making them less useful overall.

However, if it's only done when the receiver already has the necessary
authority, there wouldn't be any new security concerns, except perhaps
due to the increased complexity.

Also, the sender shouldn't make assumptions based on the UIDs or GIDs
of the receiver, as its authority is based just as much on context, i.e.
which capabilities it holds.  It is easy enough for the sender to send the
dependencies if the receiver is run by the same user or by root.  However,
for each such rule we make up, a security-conscious user would have to
make sure it can't be exploited, on a case-by-case basis.  I believe we
should avoid constructing new access policies as much as possible.

The server that implements a dependency is also responsible for
implementing the access control for its objects.  Even if the
server advertises that it permits access, e.g. through file permission
bits, the fact remains that it may disallow access on other grounds.
For instance, there's nothing stopping a server from disallowing access
according to an ACL, or during certain times of day, or even completely
at random.

The only way to make /sure/ that the receiver has the necessary access
would be to request an equivalent object from the dependency server,
using only data and capabilities the receiver already has access to.

Note also that it's possible for the sender to trick the receiver into
using objects the sender itself does not have access to, in ways they
are not expected to be used, possibly leading to precious data being
overwritten or sensitive data being transmitted.  Thus we also need to
make sure the sender does indeed have the access it claims to have.

So we need to determine whether two objects are equal.  And this test
should be done in a server trusted by both parties, similar to the auth
server, so that neither party exposes its port to the other.

If they are used through the same port, they are the same object and
therefore equal.  However, most objects seem to be referenced indirectly
through session handles that remember the credentials of the client,
not to mention the cursor state of normal file handles.  Given this,
it is unlikely that a server would give out the same port twice.
Task ports and IO identity ports are exceptions to this.  And I suspect
translator control ports and ports to devices implemented by Mach
might also be exceptions, though I'll need to look it up to be sure.
So comparing ports does have some uses.

Other than port comparison, the Hurd offers no general mechanism
for testing object equality.  The IO identity ports do offer a way to
test whether two handles use the same underlying io object.  However,
this says nothing about whether the handles are identical.  The two
handles might permit different operations, or, like /dev/random, send
different data to both parties, leaving open the question of whether they
can truly be considered equivalent (which happens to be the case for
/dev/random).
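
For completeness, this is the kind of check identity ports allow; it only
establishes that two handles refer to the same underlying io object, not
that the handles are interchangeable.  (Sketch only: I don't remember
off-hand whether the fileno argument is ino_t or ino64_t in the current
headers, and the error path leaks ports for brevity.)

    #include <hurd.h>
    #include <hurd/io.h>
    #include <sys/types.h>

    /* Return 1 if A and B name the same underlying io object, 0 if not,
       -1 on error.  Says nothing about whether A and B permit the same
       operations.  */
    static int
    same_io_object (io_t a, io_t b)
    {
      mach_port_t id_a, fsid_a, id_b, fsid_b;
      ino_t fileno_a, fileno_b;
      int same;

      if (io_identity (a, &id_a, &fsid_a, &fileno_a)
          || io_identity (b, &id_b, &fsid_b, &fileno_b))
        return -1;

      /* Send rights to the same port get the same name in our IPC space,
         so comparing the names is enough.  */
      same = (id_a == id_b);

      mach_port_deallocate (mach_task_self (), id_a);
      mach_port_deallocate (mach_task_self (), fsid_a);
      mach_port_deallocate (mach_task_self (), id_b);
      mach_port_deallocate (mach_task_self (), fsid_b);

      return same;
    }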

The short of it is that existing Hurd servers are not equipped to deal
with the problem at hand.  We could try reopening, using io_restrict_auth
on a duplicate, or some other trick.  In the end, given the dynamic
nature of the Hurd, we can't be certain we end up with an equivalent
object in all cases.

The only way to make this work, AFAICS, would be to add an equality
interface, or to redesign interfaces to give out the same port for the
same object.  Both require changes to the servers involved.  The latter
would be a monumental task.  The former can be added one server at a
time, but I still consider it out of the question until libmob has
stabilized.

It seems we're out of luck when it comes to re-obtaining the dependencies
from their implementing servers.  However, there is one source from which
the receiver may be able to get the very same ports used by the sender:
the sender itself.

It's not uncommon that processes already have access to memory

Re: Authority verification

2009-04-24 Thread Carl Fredrik Hammar
Hi,

On Fri, Apr 24, 2009 at 08:48:34AM +0200, olafbuddenha...@gmx.net wrote:
> [snip]
>
> > Note also that it's possible for the sender to trick the receiver to
> > use objects the sender itself does not have access to, in ways they
> > are not expected to be used.  Possibly leading to precious data being
> > overwritten or sensitive data being transmitted.  Thus we also need to
> > make sure the sender does indeed have the access it claims to have.
> 
> I think you are trying to do too much here.
> 
> The problem you described is exatly what is known as the "firmlink
> problem": the same situation can occur whenever an untrusted server
> gives out unauthenticated ports, and the client reauthenticates them
> against it's own permissions. It is a fundamental limitation of the Hurd
> auth mechanism.
> 
> While it would be nice to find a solution to this problem, this
> shouldn't and in fact can't be fixed within the mobility framework.
> Fixing it here doesn't help at all, if the problem is still open
> elsewhere. All we need to ensure is that the mobility framework doesn't
> make it worse.
> 
> You have considered this stuff very thoroughly, and it's quite possible
> that you might be able to find a generic solution to this issue. But it
> is orthogonal to the mobility framework.

Well, you're right, I shouldn't try to fix the firmlink problem.
I'm actually confused as to why I included that paragraph in the first
place, since the firmlink problem isn't really relevant.  :-(

A more appropriate problem to describe would have been that if the
equality test were done in the sender (or the receiver for that matter),
the sender could make the receiver send it capabilities that it didn't
have access to before.  Really, it's the same reason we have an auth
server and don't send auth ports directly to the server during
authentication.

I think I managed to confuse the two problems at the time.  Sorry about
that.

> Anyways, this consideration made me realize that most probably we should
> just do *exacly* what is done for authentication in general: Let the
> sender give out unauthenticated capabilities for the dependencies, and
> the receiver reauthenticate them. I'm pretty sure this is the most
> useful approach: It is straigtforward; uses existing mechanisms; is
> guaranteed to align with the rest of the system design; the security
> implications are known.

Ignoring that this would undermine the confinement of processes through
chroot and (partial) sub-Hurds (your suggestion makes it seem as if
you're fine with that), there is no way to know that the reauthenticated
object is equal to the original; in particular, we can't determine that
it permits the same operations.

My suggestion to get capabilities from the sender's task port, however,
seems both extremely secure and robust.  It also naturally reflects the
case where the receiver dominates the sender.  The only downside is that
it's a much stricter requirement.
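
Concretely, if the receiver holds the sender's task port, it can pull the
very same send rights out of the sender's port name space with plain Mach
calls; nothing here is specific to libmob:

    #include <mach.h>

    /* Given TASK (the sender's task port) and the name the sender knows a
       capability under, obtain a copy of exactly that send right.  */
    static kern_return_t
    extract_sender_right (task_t task, mach_port_name_t name,
                          mach_port_t *right)
    {
      mach_msg_type_name_t acquired;
      return mach_port_extract_right (task, name, MACH_MSG_TYPE_COPY_SEND,
                                      right, &acquired);
    }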

> In general, I fear that you are a bit too ambitious with this project...
> You are trying to consider things in a too general, too abstract way.
> This requires too much consideration; opens a field that is too wide --
> I don't think it can be finished in any useful time frame. I strongly
> suggest you try to focus more on specific situations. It's not realistic
> nor useful, trying to be more generic than the rest of the existing
> system design is...
> 
> I already hinted at this in various other discussions (not only with
> you): I'm generally of the opinion that it's better to start with
> specifics, and generalize later, than trying to think about all possible
> implications up front. Same reason I was always sceptical about the
> ngHurd stuff...

I agree with your critique.  I'm planning to refocus my effort, but
perhaps not in the way you'd want ;-).  However, this will be the
topic of another mail.

I do believe I've come to a simple and robust means of authority
verification, even if I didn't really set out to do it initially, and
only after I realized that what I initially wanted was impossible
(without modifications throughout the Hurd, which I consider to be out
of the question).

> > Another possibility which I have considered, is ports that are
> > considered safe for sending directly.  For instance, libstore does
> > this when a store is opened read-write and the entire underlying
> > device is used. The reasoning being that accessing the store remotely
> > still makes it possible to access the entire store.
> [...]
> > Second, it would no longer be possible to revoke the access to the
> > underlying device provided by the store by killing the store
> > translator.
> 
> This is a general problem of object mobility though, not just the
> specific authentication approach, isn't it?

No, if the client has access to the underlying store independent of the
server, then the server can't revoke that access.

From a transparency point of view you are correct.

Re: Unionmount. Basic details

2009-05-02 Thread Carl Fredrik Hammar
Hello,

On Sun, Apr 26, 2009 at 06:28:51AM +0200, olafbuddenha...@gmx.net wrote:
> On Thu, Apr 09, 2009 at 10:09:04PM +0200, Carl Fredrik Hammar wrote:
> > On Wed, Apr 08, 2009 at 07:10:26PM +0200, olafbuddenha...@gmx.net
> > wrote:
> 
> > > while unionfs must be able to handle an arbitrary number of merged
> > > locations, for unionmount it's always exactly two. This simplifies
> > > the implementation and policy decisions, and might even call for
> > > different optimization approaches -- so it seems quite possible that
> > > the code base will diverge considerably after the fork...
> > 
> > Why stop at two?  Mounting several file systems at once seems
> > perfectly reasonable to me.
> 
> I do not see much point in mounting several file systems at the same
> time. Just mount them one after the other. This is less efficient of
> course (as you get a whole stack of unionmount translators), but
> probably not very common anyways...

It's easier to manipulate a single translator than a stack.  Consider the
case of unioning three file systems.  To remove the middle one you'd have
to unmount the two top ones and then remount the top.  With a single
translator you can get away with using fsysopts, which could possibly
be atomic.

I'm not so sure it's an uncommon case.  Though I guess time will tell.

> [snip]
> 
> > My first thoughts of anonymous file systems got me thinking of an
> > ``anonfs'' translator that launched translators encoded in the paths.
> > But the thought of encoding paths in a path gave me shudders.
> 
> Isn't that more or less what nsmux does?...

Almost.  I was thinking more of a /anonfs directory used to create
anonymous file systems, e.g. `/anonfs/unionfs foo bar/baz'.  But it
would get ugly if foo or bar contained `/'.

> > Then it got me thinking of a shell utility, e.g.:
> > 
> >    letfs foo "/hurd/nsmux foo" -- ... -- \
> >      settrans /hurd/unionfs foo %1 %2 ... %n
> > 
> > Where %i is replaced with /dev/fd/${fd to %i's root}.
> 
> I don't really get the idea...

It would launch a number of translators and pass file descriptors to
their root nodes to a command, where the variables are expanded into
paths that name these file descriptors using glibc's /dev/fd
functionality.  In my example, the translators are separated by `--',
each using settrans-like syntax (not sure why I put quotes in there),
and after the last `--' comes the command with the replacement variables.

It's reminiscent of bash's process substitution where you can pass a
pipe as an argument, e.g. ``ls|head'' becomes ``head <(ls)''.  Indeed,
it's where I got the idea in the first place.

> Anyways, what I want is basically a way to launch a translator as part
> of another command which is using it -- a bit like traditional UNIX
> pipes. I think nsmux can cover many of the use cases; though it probably
> would get rather icky for the more complicated ones... Some more generic
> anonymous translator mechanism could be useful -- but I have no clue how
> it would work exactly.

I think letfs would do exactly what you want.  :-)

> > This leaves the question of how and when to make the anonymous file
> > systems go away.
> 
> This is actually quite easy for anonymous translators: They can go away
> as soon as all clients have closed the connection (and any outstanding
> processing is finished) -- as it is anonymous, nobody other than the
> initial client should be able to connect to it, so we know it won't be
> needed afterwards. In this regard too it is very similar to programs
> connected with anonymous pipes...
> 
> Deciding when a translator should go away is much more tricky for normal
> static translators -- we don't know whether they will be still needed
> after all previous clients closed the connection.

Well, yes.  But all current translators assume that they are static
and thus don't go away when there are no connections (though some do
time out eventually).  Extending all translators to also support this is
not a good option IMHO.

We could try to detect when it's no longer used.  But this seems tricky,
and requires that ports are proxied.  Worse, since we don't know which
interfaces a translator implements, we'd have to intercept all messages
and proxy all incoming and outgoing ports.  And with such a heavy-handed
approach, the translator might never die if some ports are used as weak
references that should not keep the translator from quitting.

Another, much more basic option might be to stop it at a predictable
event.  For letfs the natural event would be when the sub-commands exit.
This isn't perfect, e.g. the command might pass a port to another process.
This might seem like an extravagant use case, but this would happen in
my letfs example with settrans.

A third option might be to continually ask it to go away until it does.
I'm no fan of polling like this, but it might be acceptable in addition
to the previous option.

Regards,
  Fredrik




Re: Unionmount. Basic details

2009-05-04 Thread Carl Fredrik Hammar
Hi,

On Sun, Apr 26, 2009 at 10:48:13PM +0200, olafbuddenha...@gmx.net wrote:
> On Sat, Apr 11, 2009 at 03:03:45PM +0200, Carl Fredrik Hammar wrote:
> > On Fri, Apr 10, 2009 at 08:35:07PM +0300, Sergiu Ivanov wrote:
> 
> > > Also, I'm not aware of anybody still doing any changes to unionfs
> > > :-)
> [...]
> > Also, in many ways unionfs seems like a good candidate to make use of
> > libmob which I'm working on.  Making that change would hopefully
> > not be too extensive, but it would not be trivial.
> 
> The changes necessary to handle mobility most likely won't touch the
> actual merging code, but rather all the other stuff regarding startup
> etc. -- i.e. exactly the stuff that will differ between unionmount and
> unionfs anyways.

I was actually referring to making the merging code mobile, as well
as loading mobile objects from the unioned translators.  Both changes
would touch the merging code: loading objects would be fairly
superficial, but making the code mobile might require more substantial
changes, for instance avoiding the use of non-constant globals.

> Also, as a general rule, it is a *very* bad idea to base current design
> decisions on possible future prospects... A typical case of YAGNI.

Oh, I agree.  I only mentioned it to counter the assumption that no
changes will be made to unionfs.  Sharing code is a good thing, perhaps
not initially, but certainly eventually.

Regards,
  Fredrik




Re: Networking Problems

2009-05-05 Thread Carl Fredrik Hammar
Hi,

> However, when I try to ping www.google.com, I get the following:
> 
> # ping www.google.com
> PING www.l.google.com (74.125.79.99): 56 data bytes
> --- www.l.google.com ping statistics --
> 4 packets transmitted, 0 packets received, 100% packet loss

I'm no expert on networking, but I did notice that it manages to resolve
the address.

You've probably done so already, but I think pings are handled specially,
so you might also want to try using a more ``normal'' networking program
such as lynx or wget.

At least I know that something that appears to be broken but has
actually worked all along is just the kind of thing I'd run into.  ;-)

Regards,
  Fredrik




[bug #26585] Assertion failure when signaling zombie pthread

2009-05-15 Thread Carl Fredrik Hammar

URL:
  

 Summary: Assertion failure when signaling zombie pthread
 Project: The GNU Hurd
Submitted by: hammy
Submitted on: Fri 15 May 2009 02:35:18 PM CEST
Category: Hurd
Severity: 3 - Normal
Priority: 5 - Normal
  Item Group: None
  Status: None
 Privacy: Public
 Assigned to: None
 Originator Name: 
Originator Email: 
 Open/Closed: Open
 Discussion Lock: Any
 Reproducibility: Every Time
  Size (loc): None
 Planned Release: None
  Effort: 0.00
Wiki-like text discussion box: 

___

Details:

Handling a signal in a thread that has exited and has not been
detached or joined with leads to an assertion failure in glibc:


hurdsig.c:804: _hurd_internal_post_signal: Unexpected error: (ipc/send)
invalid destination port.


After this, the program hangs and doesn't respond to any signals
other than SIGKILL.  My guess is that signal handling has been
disabled and it hangs while waiting for the abort signal to be
handled.

Also ps stops working after this and hangs without outputting
anything.  So the proc server might also be affected.

A test has been attached.  Be sure to background it because ^C etc.
don't work, leaving your terminal useless.
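
In outline, the test does something along these lines (a sketch only,
the attached sig.c is the authoritative version and may differ in its
details):

  /* Hypothetical reconstruction of a test for the described bug;
     the actual sig.c attached to this report may differ.  */
  #include <pthread.h>
  #include <signal.h>
  #include <unistd.h>

  static void *
  thread_func (void *arg)
  {
    /* Exit immediately, without being joined or detached.  */
    return NULL;
  }

  int
  main (void)
  {
    pthread_t thread;

    pthread_create (&thread, NULL, thread_func, NULL);
    sleep (1);                       /* Let the thread become a zombie.  */
    pthread_kill (thread, SIGUSR1);  /* Signal the zombie thread.  */
    sleep (10);                      /* The assertion should trip here.  */
    return 0;
  }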




___

File Attachments:


---
Date: Fri 15 May 2009 02:35:18 PM CEST  Name: sig.c  Size: 1kB   By: hammy



___

Reply to this item at:

  

___
  Message sent via/by Savannah
  http://savannah.gnu.org/





Re: [bug #26585] Assertion failure when signaling zombie pthread

2009-05-17 Thread Carl Fredrik Hammar
Hi,

On Sun, May 17, 2009 at 04:51:32AM +0200, olafbuddenha...@gmx.net wrote:
> 
> On Fri, May 15, 2009 at 12:35:20PM +, Carl Fredrik Hammar wrote:
> 
> > After this, the program hangs and doesn't respond to any signals other
> > than SIGKILL.  My guess is that signal handling has been disabled and
> > it hangs while waiting for the abort signal to be handled.
> > 
> > Also ps stops working after this and hangs without outputting
> > anything.  So the proc server might also be affected.
> 
> That last part is expected: ps queries the processes for certain
> information; if the signal thread of a process doesn't work, this query
> hangs.
> 
> Use ps -M to avoid the query.

Ok, thanks for the info.  This makes the test a little less dangerous,
in that you can find out the PID of the test and kill it from another
terminal if you forget to background it.

Regards,
  Fredrik




Re: Hurd Mission Statement

2009-05-27 Thread Carl Fredrik Hammar
Hi,

On Tue, May 26, 2009 at 02:40:34PM +0200, olafbuddenha...@gmx.net wrote:
> What do you think? Is this a good mission statement? If so, it should go
> on the Hurd web front page.

It's good, and your reasoning seems pretty spot-on to me.  I agree that
it should be on the front page.

Regards,
  Fredrik




Initial target(s) for libmob (was Re: Improving object mobility within the Hurd)

2009-05-29 Thread Carl Fredrik Hammar
Hi,

I was going to refute your suggestion of ioctl handlers, but in arguing
about it back and forth with myself, I ended up agreeing with you.  ;-)
As such this mail became a good bit longer than intended, feel free to
just skim through most of this first part.

I will answer the rest of your mail separately.

On Thu, May 21, 2009 at 06:44:59PM +0200, olafbuddenha...@gmx.net wrote:
>
> > > These stubs are individual for every ioctl: to support a new type of
> > > device, new stubs need to be added to libc -- which is obviously
> > > painful. Would be nice to have a mechanism that loads the stubs
> > > dynamically from some external source.
> > 
> > Yes, but transforming ioctls to RPCs seems totally independent of the
> > server.
> 
> No, not really. The more I think about it, the more I'm convinced that
> it is actually inherently bound to the server.
> 
> ioctl()s are always specific to a particular device class, and thus the
> server(s) implementing (or proxying) it. It makes perfect sense for a
> server implementing a specific device, also to provide the ioctl
> wrappers necessary to access it.

I agree that ioctls are device specific, but not that they are specific
to the server.  If the client makes use of an ioctl, then it must be
aware of the device class as well.  A header must be provided that
defines macros for the ioctl numbers; it does not seem unreasonable to
also provide a module with the corresponding handlers at the same time.

Investigating ioctls a bit closer, it seems that glibc already provides
some provisions to register additional handlers.  I think it's possible
to do everything needed from a linked object file by simply linking to
it, i.e. without any function calls.

(Curiously there is a hurd_register_ioctl_handler function declared to
do this at runtime as well, but it does not actually seem to be defined
anywhere. :-/)
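
To make this concrete, a handler module might look roughly like the
sketch below.  The FOODEV_FROB request and the handler body are made
up, and given the above caveat about hurd_register_ioctl_handler this
is an illustration of the intended interface rather than tested code:

  /* Sketch of an ioctl handler module for a hypothetical "foodev"
     device class.  */

  #include <hurd/ioctl.h>
  #include <sys/ioctl.h>
  #include <errno.h>

  #define FOODEV_FROB  _IO ('f', 42)    /* Hypothetical request.  */

  /* Called by glibc's ioctl() for requests in the registered range.  */
  static int
  foodev_ioctl_handler (int fd, int request, void *arg)
  {
    if (request != FOODEV_FROB)
      {
        errno = ENOTTY;    /* Not one of ours after all.  */
        return -1;
      }

    /* A real handler would translate the request into the
       device-specific RPC on FD's port here.  */
    return 0;
  }

  /* Register the handler when the module is loaded.  A linked-in
     module could instead add itself to the _hurd_ioctl_handler_lists
     symbol set without any function call, as mentioned above.  */
  static void __attribute__ ((constructor))
  foodev_ioctl_init (void)
  {
    hurd_register_ioctl_handler (FOODEV_FROB, FOODEV_FROB,
                                 foodev_ioctl_handler);
  }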

The only real problem here is that we are required to link the
application with the specific handler library, a step that isn't needed
on other OSes.  Rather, it should be enough to link with glibc, and for
easy extensibility no change to glibc should be required to add new
ioctls.  This means glibc has to dynamically load an ioctl handler
module, which leaves the question of how to find it.

Broadly speaking there are only two options: either the device specifies
it or a system service does.  The system service can be quite mundane,
as in my original suggestion of simply loading a library based on the
device class name.  Note that the name would have to be encoded into the
ioctl number, or looked up in a file, e.g. /etc/ioctltab, otherwise
glibc would need to be hacked in order to add new device classes.  It
could also be a more sophisticated system server, possibly giving out
handlers using libmob as in your suggestion.
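
To illustrate the mundane variant, the look-up could boil down to
something like the following sketch, where the module naming scheme,
the extraction of the class letter, and the ioctl_entry symbol are all
hypothetical:

  /* Sketch of loading a handler module named after the device class.  */

  #include <dlfcn.h>
  #include <stdio.h>
  #include <errno.h>

  typedef int (*ioctl_handler_t) (int fd, int request, void *arg);

  int
  dispatch_ioctl (int fd, int request, void *arg)
  {
    /* Pretend the class letter sits in bits 8-12 of the request; the
       real encoding is whatever <bits/ioctls.h> says.  */
    char class_letter = 'f' + ((request >> 8) & 0x1f);

    char module[32];
    snprintf (module, sizeof module, "ioctl-%c.so", class_letter);

    /* dlopen honors LD_LIBRARY_PATH, so a user can override this.  */
    void *handle = dlopen (module, RTLD_NOW);
    if (handle == NULL)
      {
        errno = ENOTTY;
        return -1;
      }

    ioctl_handler_t handler
      = (ioctl_handler_t) dlsym (handle, "ioctl_entry");
    if (handler == NULL)
      {
        errno = ENOTTY;
        return -1;
      }

    return handler (fd, request, arg);
  }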

Let's look at this from the perspective of a non-root user who wants to
provide a device of a new device class.  This is the sort of thing the
Hurd is all about.  ;-)

In the system case, the user would need to override the system service,
which could be as simple as exporting LD_LIBRARY_PATH, or pretty nasty
if the ioctl module must be looked up in a file or a server, which
would involve a chroot if there are no other provisions for overriding
the look-up.

In the device case, the user would need to make sure libmob can determine
that the module can be trusted, e.g. by defining LD_LIBRARY_PATH, or
twiddle with the file permissions, or whatever we decide.

At this point it is at least clear that we can rule out the system
case with name look up, as it would definitely be more complicated to
override it.  And the other two cases are either tied or the device case
is superior, depending on how libmob determines trust.

One issue with encoding the handler module name in the ioctl is that
it requires changing the current numbering convention.  (At least if
we want more than a single letter.)  I believe the numbering should be
changed anyway, as it's currently over-complicated.  But breaking glibc
ABI over ioctls is a hard pill to swallow...

Another interesting issue is ioctl clashes, i.e. that two distinct ioctls
share the same number but differ in semantics and possibly parameters
and their types.

Clashes are inherently bad.  Sending an ioctl to a device that supports a
clashing ioctl will at best confuse the client, and at worst make it
segfault.  Perhaps the latter is preventable, by somehow inspecting the
process' memory mapping or handling the signal, but I'm pretty sure the
confusion is unavoidable.

However, that they should be avoided does not necessarily mean they will
be avoided.  And if we let users provide their own ioctls, it is likely
to happen at some point.  Such a situation should be fixed, but it might
not be discovered immediately, and fixes take time.  This leaves us with
how to handle the interim period.

Currently, glibc seems to assume that there are no clashes; it just uses
the first handler it finds.

Re: [PATCH 1/1] Update the bug-reporting address.

2009-06-04 Thread Carl Fredrik Hammar
Hi,

On Fri, May 29, 2009 at 12:34:23AM +0300, Sergiu Ivanov wrote:
> >From 23763c18399985e62686282ec848c42c3a540066 Mon Sep 17 00:00:00 2001
> From: Sergiu Ivanov 
> Date: Fri, 29 May 2009 00:29:25 +0300
> Subject: [PATCH] Update the bug-reporting address.
> 
> * options.c (argp_program_bug_address): Change the value to
> .
> ---
>  options.c |2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
> 
> diff --git a/options.c b/options.c
> index ef29a02..d04d072 100644
> --- a/options.c
> +++ b/options.c
> @@ -211,7 +211,7 @@ const struct argp_child argp_children_startup[] =
> 
>  const char *argp_program_version = STANDARD_HURD_VERSION (unionfs);
>  const char *argp_program_bug_address =
> -"Gianluca Guida ";
> +"";
> 
>  #define ARGS_DOC "FILESYSTEMS ..."
>  #define DOC  "Hurd unionfs server"
> -- 
> 1.5.2.4

You could do this by simply linking to libhurdbugaddr.  Just drop
the definition of argp_program_bug_address and pass -lhurdbugaddr to
the linker in the Makefile.

Regards,
  Fredrik




Re: [PATCH 5/5] Changed argp parsing policy

2009-06-04 Thread Carl Fredrik Hammar
Hi,

On Sun, May 31, 2009 at 11:01:21AM +0200, olafbuddenha...@gmx.net wrote:
> 
> On Fri, May 29, 2009 at 12:09:04AM +0300, Sergiu Ivanov wrote:
> 
> > +/*---*/
> > +#include "unionmount.h"
> > +/*---*/
> > +
> > +/*---*/
> > +/*-Global 
> > Variables--*/
> > +/*The command line for starting the translator. */
> > +char * trans_argz;
> > +size_t trans_argz_len;
> > +/*---*/
> [...]
> 
> Please drop these crazy comment "lines"... I'm not going to discuss the
> aesthetic value of such ornaments :-) -- but please try to keep
> consistent with the style of the existing code.

Also the normal way to section code in the Hurd is the form feed
character (^L).  There are plenty of examples in the Hurd's source.

> Also, I don't think a comment saying "global variables" is exactly
> useful -- this is really obvious to any C programmer...

I think it's acceptable if used as a ``Global variables go here''
statement.  Though ``Argument parsing variables'' would be more
meaningful.  An ``Argument parsing'' section with all code related to
it would be ideal if possible IMHO.

Regards,
  Fredrik




Re: Initial target(s) for libmob

2009-06-05 Thread Carl Fredrik Hammar
Hi,

On Sat, May 30, 2009 at 06:50:01AM +0200, olafbuddenha...@gmx.net wrote:
> 
> > > ioctl()s are always specific to a particular device class, and thus
> > > the server(s) implementing (or proxying) it. It makes perfect sense
> > > for a server implementing a specific device, also to provide the
> > > ioctl wrappers necessary to access it.
> > 
> > I agree that ioctls are device specific, but not that they are
> > specific to the server.  If the client makes use of an ioctl, then it
> > must be aware of the device class as well.  A header must be provided
> > that defines macros for the ioctl numbers, it does not seem
> > unreasonable to also provide a module with the corresponding handlers
> > at the same time.
> 
> Actually, the header is normally provided by the kernel, isn't it? And
> thus, the entity implementing the drivers... So, my point stands :-)

Normally yes.  But we're in a slightly different position than normal
kernels.  In particular, it is likely that we want to supply the
header before we actually have an implementation in place, so that
applications compile.

> > Broadly speaking there are only two options; either the device
> > specifies it or a system service does.  The system service can be
> > quite mundane, as my original suggestion of simply loading a library
> > based on the device class name.  Note that the name would have to be
> > encoded into the ioctl number, or looked up in a file e.g.
> > /etc/ioctltab, otherwise glibc would need to be hacked in order to add
> > new device classes.  It could also be a more sophisticated system
> > server, possibly giving out handlers using libmob as in your
> > suggestion.
> 
> Err...

This was my conclusion as well.  ;-)

> > Lets look at this from the perspective of a non-root user who wants to
> > provide a device of a new device class.  This is the sort of thing the
> > Hurd is all about.  ;-)
> 
> ...that's just what I was thinking when reading the paragraph above :-)
> 
> As I already said in the unionmount discussion (regarding the hidden
> nodes), we generally want to avoid centralized services, unless really
> necessary. When the thing can be negotiated between the server and the
> client directly, why introduce a central service? It only complicates
> stuff and limits possibilities.

Yes, I have begun to appreciate this property more and more.

> > One issue with encoding the handler module name in the ioctl is that
> > it requires changing the current numbering convention.
> 
> No idea what you mean :-(

Well the current numbering convention has a field for which device
class the ioctl belongs to.  It encodes a single letter, and my
idea was to simply load ``ioctl-$DEVICE_CLASS_LETTER''.  I concluded
that a single letter was pretty confining and that a change would be
necessary.

Though, for some reason I only considered it relevant if we chose to
encode the module name in the ioctl number.  But this affects all
possible implementations.  Looking closer, it seems that the field
only allows for 16 (!) device classes (the letters f to v).  This seems
unacceptably low if we want to support any meaningful number of ioctl
classes.

(Most of the number encodes parameter type information, which isn't very
useful IMHO.)

Changing it would be painful, as all applications that currently
use ioctls will need to be recompiled.  If this is hard to detect,
it might mean that *all* applications will need to be recompiled in
practice.

This is less of a problem if we support clashing ioctls.  Because then
we could support two numbering schemes during an interim period.

Looking through our current ioctls there are some clashes.  I'm unsure
how this is handled currently, especially since they deviate from
the standard numbering convention.  Perhaps they are just aliases.
Perhaps they don't have any parameters and thus don't need handlers.
Or perhaps they aren't handled at all...

> > Another interesting issue is ioctl clashes, i.e. that two distinct
> > ioctls share the same number but differ in semantics and possibly
> > parameters and their types.
> 
> Is that possible on other systems? Otherwise, the only case where I
> could see that happen, would be if we want to emulate ioctl()s from more
> than one other system...

Well, Linux has documented clashes.  This doesn't really affect us
however, since we can just define the ioctl to use a different number.
Unless we want ABI compatibility that is.  But I've been assuming that
we don't.

I guess it's possible that other systems define ioctl macros with the
same name in the same header that have different semantics.  Then we
would be forced to use the same number.  I don't think this is the case,
but who knows?

> > However, I also want to get a thesis out of this and I don't think
> > server loaded ioctl handlers would be enough on their own.
> 
> I think they actually might -- but I don't know the expectations there,
> so I can't really tell...

I was considering a thesis limited t

cmp: the port comparison server

2009-06-10 Thread Carl Fredrik Hammar
Hi,

I've been making some progress code-wise with libmob.  In particular I
have written a working server for secure comparison of ports.  And now
I would like to ask a few questions that have popped up in the duration.

You can get the code here: git://gitorious.org/hurd-cmp/hurd-cmp.git

The repository will probably evolve to hold libmob and all of its
supporting code.  I didn't really have a plan for it originally, I just
went for it.  This means that the individual commits aren't always neatly
divided, and the commit messages usually only have headlines.

The plan is for libmob to be part of the Hurd eventually, but I think
it's best for it to have a standalone repository, as opposed to a clone
of the Hurd's, until we reach a point where we can port existing
translators to use libmob.  Mostly so that I don't feel the need to
follow commit conventions and such, and can continue to just ``go for
it''.  Let me know if this isn't a good idea.

Is the name `cmp' alright, or should it perhaps be expanded to `compare'?
I'm not sure how I feel about using a verb as a name for a server.

For now I use 1234000 as the subsystem number.  We don't need to finalize
this until cmp is included in the Hurd, right?

In the copyright notices I state that the program is part of the Hurd
and that the copyright is assigned to the FSF.  But technically it isn't
part of the Hurd yet; I hope this isn't a problem.

I based the translator on password originally, since it also is a trivial
translator whose main interface isn't IO.  So currently I state this as
well in the relevant copyright notices.  But this code is similar to a
lot of other trivfs translators, has diverged, and it is only a small
portion of the actual functionality.  With that in mind is it really
necessary to state that in the copyright?  (It is also based on auth,
but here I believe attribution is in order.)

Other than that, feel free to nitpick to your heart's content!  :-)

Regards,
  Fredrik




Re: [PATCH 1/1] Update the bug-reporting address.

2009-06-11 Thread Carl Fredrik Hammar
Hi,

On Thu, Jun 11, 2009 at 01:38:58PM +0300, Sergiu Ivanov wrote:
> Sorry for the very late reply :-(

No problem.  :-)

> On Thu, Jun 04, 2009 at 04:02:31PM +0200, Carl Fredrik Hammar wrote:
> > On Fri, May 29, 2009 at 12:34:23AM +0300, Sergiu Ivanov wrote:
> > > >From 23763c18399985e62686282ec848c42c3a540066 Mon Sep 17 00:00:00 2001
> > >
> > >  const char *argp_program_version = STANDARD_HURD_VERSION (unionfs);
> > >  const char *argp_program_bug_address =
> > > -"Gianluca Guida ";
> > > +"";
> > 
> > You could do this be simply linking to libhurdbugaddr.  Just drop
> > the definition of argp_program_bug_address and pass -lhurdbugaddr to
> > the linker in the Makefile.
> 
> Thank you for the information! However, I did exactly what Thomas told
> me to do, so it depends on him which way to choose to handle the bug
> address...

I'm guessing Thomas just forgot about good ol' libhurdbugaddr, but I
might have missed something about when it should be used.  So perhaps
it is best to wait for confirmation from him.

I'm just nitpicking the nitpicker in any case, this is not an important
issue.  ;-)

Regards,
  Fredrik




Code trust by reverse authentication

2009-06-11 Thread Carl Fredrik Hammar
Hi,

To load a mobile object we first need to load its code base that has
been specified by the sender of the object.  The ideal way to do this
would be to send a port to a .so file and then load that.

If we loaded the code module unconditionally, the sender could essentially
inject arbitrary code in the receiver.  So we need to determine who
has control of the module and only load it if it is a trusted user,
e.g. the same user or the root user.

But the FS interface does not provide any means to do this that isn't
easily faked.  Checking who the owner of the file is just gives you a
UID, which is just a plain integer.  And there doesn't seem to be any way
to check who controls the actual translator either.

It hit me that what we want is essentially reverse authentication.
That is, letting the sender authenticate against the receiver, which
would normally be the server and the client respectively.  After this
the receiver will know for sure who controls the module.

Of course implementing this operation in existing translators would be
a chore.  And I don't know if it will ever be useful for anything else,
though the concept is generic enough.

Instead we can provide a translator that provides this reverse
authentication but otherwise proxies its underlying node, or perhaps
just gives out an unproxied port to it directly.

This has some other advantages.  It would be possible to ``bless'' code
modules as appropriate for loading on a case-by-case basis, instead of
loading any old file that happens to be owned by a trusted user.  For a
normal user to provide their own module, it would now simply be a matter
of blessing the module and then pointing the mobile object provider at it.
For instance, settrans libfoo.so /hurd/bless-code.

Another interesting possibility would be to let the code modules be
translators themselves.  It would be kind of nice keeping all the
needed functionality in a single file.  Though I'm not sure how it would
be implemented.  On the flip side it would mean that code would be shared
through a trivfs-like library, instead of in a separate program, which
is usually prettier.

The only real problem with specifying the module by port is that the
receiver needs to load the exact same module and not a copy of it.
This is a reasonable requirement for optimization, where the extra
assurance that the code is exactly the same increases reliability.
But I'm not so sure about ioctls...

Regards,
  Fredrik




Re: [PATCH 1/3] Add the ``--mount'' command line option

2009-06-11 Thread Carl Fredrik Hammar
Hi,

I skimmed through this patch and noticed some issues with the license.

On Thu, Jun 11, 2009 at 09:10:24PM +0300, Sergiu Ivanov wrote:
> >From d0f0f5c41d9046aec765a7264914c19642adead9 Mon Sep 17 00:00:00 2001
> From: Sergiu Ivanov 
> Date: Thu, 11 Jun 2009 15:22:24 +0300
> Subject: [PATCH] Add the ``--mount'' command line option.
> 
> +++ b/unionmount.c
> @@ -0,0 +1,28 @@
> +/* Hurd unionmount
> +   The core of unionmount functionality.
> +
> +   Copyright (C) 2009 Free Software Foundation, Inc.
> +
> +   Written by Sergiu Ivanov .
> +
> +   This program is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU General Public License version
> +   2 as published by the Free Software Foundation.

This should be:

   This program is free software; you can redistribute it and/or
   modify it under the terms of the GNU General Public License
   as published by the Free Software Foundation; either version 2
   of the License, or (at your option) any later version.

Note the ``or any later version'' part.

> +   You should have received a copy of the GNU General Public License
> +   along with this program; if not, write to the Free Software
> +   Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
> +   USA.  */

I remember being told that the address has changed when I was working
on libchannel.  And indeed the address is different in my libchannel
source:

   If not, write to the Free Software Foundation, 675 Mass Ave, Cambridge,
   MA 02139, USA.

I think you're using the old address, but I'm not sure.  Thomas will
have to clarify.  Perhaps there is a definitive source somewhere.

> +++ b/unionmount.h
> @@ -0,0 +1,31 @@
> +/* Hurd unionmount
> +   General information and properties for unionmount/unionfs.
> +
> +   Copyright (C) 2009 Free Software Foundation, Inc.
> +
> +   Written by Sergiu Ivanov .
> +
> +   This program is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU General Public License version
> +   2 as published by the Free Software Foundation.
> +
> +   This program is distributed in the hope that it will be useful, but
> +   WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   General Public License for more details.
> +
> +   You should have received a copy of the GNU General Public License
> +   along with this program; if not, write to the Free Software
> +   Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
> +   USA.  */

Same here.

Regards,
  Fredrik




Re: Code trust by reverse authentication

2009-06-12 Thread Carl Fredrik Hammar
Hi,

On Thu, Jun 11, 2009 at 07:07:35PM +0200, Carl Fredrik Hammar wrote:
> Instead we can provide a translator that provides this reverse
> authentication but otherwise proxies its underlying node, or perhaps
> just gives out a unproxied port to it directly.
> 
> This is has some other advantages.  It would be possible to ``bless'' code
> modules as appropriate for loading on a case-by-case basis.  Instead of
> loading any old file that happens to be owned by a trusted user.  For a
> normal user to provide its own module, it would now be a simply a matter
> of blessing the module and then pointing the mobile object provider at it.
> For instance, settrans libfoo.so /hurd/bless-code.

I didn't think through what would happen when setting it as a passive
translator, which is probably the normal use-case.  Then the translator
will run as the owner of the underlying node.  If the underlying
translator isn't run as root or the same user, then the start-up will
fail, which is as it should be, so this doesn't cause any problems.

This differs slightly from what happens when it is set as an active
translator, as in that case it will run as the setting user, which might
differ from the owner.  Most likely the underlying translator will only
allow the owner or root to set the active translator, but who knows.
I don't think this inconsistency needs fixing, since it's pretty much
the normal inconsistency between passive and active translators.

Regards,
  Fredrik




Re: [PATCH 1/3] Add the ``--mount'' command line option

2009-06-13 Thread Carl Fredrik Hammar
Hi,

This time around I'll review your patch more thoroughly.

On Thu, Jun 11, 2009 at 09:10:24PM +0300, Sergiu Ivanov wrote:
> >From d0f0f5c41d9046aec765a7264914c19642adead9 Mon Sep 17 00:00:00 2001
> From: Sergiu Ivanov 
> Date: Thu, 11 Jun 2009 15:22:24 +0300
> Subject: [PATCH] Add the ``--mount'' command line option.
> 
> * Makefile: Add unionmount.c to the list of compiled
> object files.
> Update copyright information.
> 
> * options.h (OPT_MOUNT): Add the definition.
> (OPT_LONG_MOUNT): Likewise.
> Update copyright information.
> 
> * options.c (argp_common_options): Add option ``--mount''
> to the array of options.
> (argp_parse_common_options): Add the code for handling option
> ``--mount''.
> Update copyright information.
> ---
>  Makefile |8 ++--
>  options.c|   16 +++-
>  options.h|7 ++-
>  unionmount.c |   28 
>  unionmount.h |   31 +++
>  5 files changed, 86 insertions(+), 4 deletions(-)
>  create mode 100644 unionmount.c
>  create mode 100644 unionmount.h
> 
> diff --git a/Makefile b/Makefile
> index b180072..e65f29b 100644
> --- a/Makefile
> +++ b/Makefile
> @@ -1,6 +1,10 @@
>  # Hurd unionfs
> -# Copyright (C) 2001, 2002, 2003, 2005 Free Software Foundation, Inc.
> +# Copyright (C) 2001, 2002, 2003, 2005, 2009 Free Software Foundation,
> +# Inc.

Break the line after the years instead, like so:

# Copyright (C) 2001, 2002, 2003, 2005, 2009
#   Free Software Foundation, Inc.

As is done in other parts of the Hurd.

> +#
>  # Written by Jeroen Dekkers .
> +#
> +# Adapted to unionmount by Sergiu Ivanov 

This sentence is missing a period.

>  # This program is free software; you can redistribute it and/or modify
>  # it under the terms of the GNU General Public License as published by
> @@ -25,7 +29,7 @@ CFLAGS += -Wall -g -O2 -D_FILE_OFFSET_BITS=64 -std=gnu99 \
>  LDFLAGS += -lnetfs -lfshelp -liohelp -lthreads \
> -lports -lihash -lshouldbeinlibc
>  OBJS = main.o node.o lnode.o ulfs.o ncache.o netfs.o \
> -   lib.o options.o pattern.o stow.o update.o
> +   lib.o options.o pattern.o stow.o update.o unionmount.o
>  
>  MIGCOMSFLAGS = -prefix stow_
>  fs_notify-MIGSFLAGS = -imacros ./stow-mutations.h
> diff --git a/options.c b/options.c
> index ef29a02..e2d8521 100644
> --- a/options.c
> +++ b/options.c
> @@ -1,7 +1,10 @@
>  /* Hurd unionfs
> -   Copyright (C) 2001, 2002, 2005 Free Software Foundation, Inc.
> +   Copyright (C) 2001, 2002, 2005, 2009 Free Software Foundation, Inc.
> +
> Written by Moritz Schulte .
>  
> +   Adapted to unionmount by Sergiu Ivanov 

Again missing period.

> This program is free software; you can redistribute it and/or
> modify it under the terms of the GNU General Public License as
> published by the Free Software Foundation; either version 2 of the
> @@ -23,6 +26,7 @@
>  
>  #include 
>  #include 
> +#include 
>  
>  #include "options.h"
>  #include "ulfs.h"
> @@ -33,6 +37,7 @@
>  #include "pattern.h"
>  #include "stow.h"
>  #include "update.h"
> +#include "unionmount.h"
>  
>  /* This variable is set to a non-zero value after parsing of the
> startup options.  Whenever the argument parser is later called to
> @@ -51,6 +56,8 @@ static const struct argp_option argp_common_options[] =
>"send debugging messages to stderr" },
>  { OPT_LONG_CACHE_SIZE, OPT_CACHE_SIZE, "SIZE", 0,
>"specify the maximum number of nodes in the cache" },
> +{ OPT_LONG_MOUNT, OPT_MOUNT, "MOUNTEE", 0,
> +  "start MOUNTEE and add the filesystem it publishes" },

"start the translator MOUNTEE and add it's filesystem" would be clearer
IMHO.

>  { 0, 0, 0, 0, "Runtime options:", 1 },
>  { OPT_LONG_STOW, OPT_STOW, "STOWDIR", 0,
>"stow given directory", 1},
> @@ -124,6 +131,13 @@ argp_parse_common_options (int key, char *arg, struct 
> argp_state *state)
>ulfs_match = 0;
>break;
>  
> +case OPT_MOUNT:
> +  /*TODO: Improve the mountee command line parsing mechanism. */

Please begin comments with a space.  Also use two spaces after each
sentence, even if it's at the end of a comment.  For instance:

/* TODO: Improve the mountee command line parsing mechanism.  */

As is done in the rest of unionfs and the Hurd.

> +  err = argz_create_sep (arg, ' ', &mountee_argz, &mountee_argz_len);
> +  if (err)
> + error (EXIT_FAILURE, err, "argz_create_sep");
> +  break;
> +
>  case OPT_UNDERLYING: /* --underlying  */
>  case ARGP_KEY_ARG:
>  
> diff --git a/options.h b/options.h
> index eb74ce6..126759c 100644
> --- a/options.h
> +++ b/options.h
> @@ -1,7 +1,10 @@
>  /* Hurd unionfs
> -   Copyright (C) 2001, 2002 Free Software Foundation, Inc.
> +   Copyright (C) 2001, 2002, 2009 Free Software Foundation, Inc.
> +
> Written by Moritz Schulte .
>  
> +   Adapted to unionmount by Sergiu Ivanov 

Again add a period.

> This program is free software; you can redistribute it an

Re: [PATCH 1/3] Add the ``--mount'' command line option

2009-06-15 Thread Carl Fredrik Hammar
Hi,

On Mon, Jun 15, 2009 at 09:01:55PM +0300, Sergiu Ivanov wrote:
> On Sat, Jun 13, 2009 at 03:53:27PM +0200, Carl Fredrik Hammar wrote:
> > On Thu, Jun 11, 2009 at 09:10:24PM +0300, Sergiu Ivanov wrote:
> > > diff --git a/unionmount.c b/unionmount.c
> > > new file mode 100644
> > > index 000..e4aa043
> > > --- /dev/null
> > > +++ b/unionmount.c
> > 
> > Given that this is to implement the --mount option, I think mount.c
> > would be a better name.  The context of unionfs establishes that this
> > means unioning.
> > 
> > > @@ -0,0 +1,28 @@
> > > +/* Hurd unionmount
> > > +   The core of unionmount functionality.
> > 
> > Again the purpose of the file isn't really to implement unionmount,
> > but to implement the --mount option.
> 
> The idea is that the ``--mount'' option is just an intermediate step
> towards a ``stand-alone'' unionmount implementation. That's why I call
> this file in a more general way than just ``mount''.

What the final implementation of unionmount will look like is still
under discussion, e.g. whether it will be part of settrans, a helper to
settrans, or a standalone translator.  Even being an option to unionfs
is not totally out of the question, just very unlikely.

Even if it's certain that it will not be part of unionfs, we might still
release it as such, so that people can use and test it until the next
implementation of unionmount is released.  So it should be treated as an
extension to unionfs, unless that would require too much extra work that
will later be thrown away.

That said, I'm making a lot of fuss over a very small issue here.  :-)

> Thanks a lot for your comments! :-)

I'm glad to help out.  :-)

I wanted to review the other patches that have more meat on them.
But I couldn't compile at the time because I messed up my NFS setup.
I'll see if I can get to them later.

Regards,
  Fredrik




Re: cmp: the port comparison server

2009-06-17 Thread Carl Fredrik Hammar
Hi,

On Mon, Jun 15, 2009 at 01:24:05AM +0200, olafbuddenha...@gmx.net wrote:
> On Wed, Jun 10, 2009 at 01:49:02PM +0200, Carl Fredrik Hammar wrote:
> 
> > I've been making some progress code-wise with libmob.  In particular I
> > have written a working server for secure comparison of ports.
> 
> Actually, I'm not sure where the comparison server fits in, in view of
> certain conclusions from the recent IRC discussions?...

The current plan is for the sender to give out the dependencies only if
the receiver holds its task port.  cmp will be used to prove this.  The
sender can send an arbitrary PID to the receiver, so if the receiver
simply gave the task port it got from proc back to the sender as proof,
it could result in privilege escalation for the sender.

> > Mostly so that I don't feel the need to follow commit conventions and
> > such, and continue to just ``go for it''.
> 
> As long as you consider your branch to be unofficial, you can do whatever
> you want -- it really doesn't matter whether you branched it from the
> Hurd repository, or build it standalone. The former is more convenient
> though IMHO.

I'll consider switching to a branch.  How do I go about this in practice
when the Hurd's repository is in migration limbo?  Initialize a git
repository with a CVS checkout?

> > For now I use 1234000 as the subsystem number.
> 
> I'm not sure what numbers new subsystems should use. It seems that the
> existing subsystems are below 10, where a reserved range begins.
> (For ioctls IIRC.) I guess new subsystems should go below the reserved
> range, though technically putting them above the end of the reserved
> range probably also works...

The ioctl range ends at 164200 and the highest subsystem over that is
88.  I'm well above either, so I should be `safe'.  I think I'll
wait to change it until I put it in the Hurd, and then I'll just pick
the next free subsystem number after the normal subsystems.

> > I based the translator on password originally, since it also is a
> > trivial translator whose main interface isn't IO.  So currently I
> > state this as well in the relevant copyright notices.  But this code
> > is similar to a lot of other trivfs translators, has diverged, and it
> > is only a small portion of the actual functionality.  With that in
> > mind is it really necessary to state that in the copyright?  (It is
> > also based on auth, but here I believe attribution is in order.)
> 
> As the copyright holder is the same, it's not really necessary to state
> where the code came from at all.

What about the copyright years?

Regards,
  Fredrik




Re: Moving to git

2009-06-19 Thread Carl Fredrik Hammar
Hi,

On Thu, Jun 18, 2009 at 03:02:41PM +0200, Thomas Schwinge wrote:
> Fetch the whole shebang from .
> Give it a try.  Unless someone finds any issues that really need to be
> corrected, these trees shall be the new basis for our collaboration!

Nicely done!  It worked for me, though I only cloned it and looked
through the log.

> Later, I'll push a few branches containing Hurd patches applied (libpager
> / ext2fs extensions, TLS support, ...), so that these can be easily
> merged into your local working branches.

This sounds like a good idea.

> Also, I'll add branches for the former GSoC projects -- are there any
> former GSoC people (CCed) who already have done their work somewhere
> else than in our CVS repository, or should I take the history contained
> in the CVS repository?

All my libchannel stuff is in CVS.

Regards,
  Fredrik




[patch #6856] Declare trivfs hooks external and provide defaults

2009-06-27 Thread Carl Fredrik Hammar

URL:
  

 Summary: Declare trivfs hooks external and provide defaults
 Project: The GNU Hurd
Submitted by: hammy
Submitted on: Sat 27 Jun 2009 08:47:14 PM CEST
Category: libtrivfs
Priority: 5 - Normal
  Status: Ready For Test
 Privacy: Public
 Assigned to: None
Originator Email: 
 Open/Closed: Open
 Discussion Lock: Any
 Planned Release: None
Wiki-like text discussion box: 

___

Details:

Here comes a patch that declares trivfs hooks as external so they
aren't defined in all files that include ``hurd/trivfs.h''.

I'm not currently set up to run the Hurd compiled with this patch.
But it does compile and individual translators such as hello,
firmlink, and symlink do run.




___

File Attachments:


---
Date: Sat 27 Jun 2009 08:47:14 PM CEST  Name:
0001-Declare-trivfs-hooks-external-and-provide-defaults.patch  Size: 7kB   By:
hammy



___

Reply to this item at:

  

___
  Message sent via/by Savannah
  http://savannah.gnu.org/





Re: cmp: the port comparison server

2009-06-29 Thread Carl Fredrik Hammar
Hi,

On Sun, Jun 28, 2009 at 11:21:09PM +0200, olafbuddenha...@gmx.net wrote:
> > On Mon, Jun 15, 2009 at 01:24:05AM +0200, olafbuddenha...@gmx.net
> > wrote:
> > > On Wed, Jun 10, 2009 at 01:49:02PM +0200, Carl Fredrik Hammar wrote:
> 
> > > Actually, I'm not sure where the comparison server fits in, in view
> > > of certain conclusions from the recent IRC discussions?...
> > 
> > The current plan is for the sender to give the dependencies if the
> > receiver holds its task port.  cmp will be used to prove this.  The
> > sender can send an arbitrary PID to the receiver, so if the receiver
> > simply gave the task port it got from proc to the sender, it could
> > result in privilege escalation for the sender.
> 
> I thought I already said this at some point, but maybe I wasn't really
> clear about it: I don't think that actually checking the task port is
> useful at all.
> 
> If the receiver has the task port, it can obtain the UID capabilities
> from the sender; and AIUI the reverse is also true. In other words,
> having the task port is effectively equivalent to having the same UIDs.
> And this can be safely checked using the existing auth mechanism.

Yes, it is effectively equivalent to the *current* access policy
implemented by the proc server.  However, that policy could change
in the future, and perhaps more importantly, a user-run proxy can
implement a different access policy.

So even if we implement the same access policy for mobile objects,
it would be a new and distinct access policy.  This would make it
harder for users to change it.  Reusing proc's access policy trumps
reusing the authentication mechanism, IMHO.

For example, proc could be proxied to secure a chroot so that the chrooted
processes can't access processes outside.  As has been discussed in
the past:
<http://lists.gnu.org/archive/html/bug-hurd/2005-05/msg00626.html>

> > I'll consider switching to a branch.  How do I go about this in
> > practice when the Hurd's repository is in migration limbo?  Initialize
> > a git repository with a CVS checkout?
> 
> Either that, or use the preliminary Git repository that is already
> online. In either case, you will have to rebase to the official
> repository once it is in place.
> 
> (This is a non-trivial use of rebase, but unless I'm mistaken, a single
> command invocation does the trick. Ask for help when you need to do it.)

I'll use the preliminary Git repository, now that it's in place.

Regards,
  Fredrik




Re: Code trust by reverse authentication

2009-06-29 Thread Carl Fredrik Hammar
Hi,

On Mon, Jun 29, 2009 at 12:05:26AM +0200, olafbuddenha...@gmx.net wrote:
> On Thu, Jun 11, 2009 at 07:07:35PM +0200, Carl Fredrik Hammar wrote:
> 
> > To load a mobile object we first need to load its code base that has
> > been specified by the sender of the object.  The ideal way to do this
> > would be to send a port to a .so file and then load that.
> > 
> > If we loaded the code module unconditionally, the sender could
> > essentially inject arbitrary code in the receiver.  So we need to
> > determine who has control of the module and only load it if it is a
> > trusted user, e.g. the same user or the root user.
> > 
> > But the FS interface does not provide any means to do this that isn't
> > easily faked.  Checking who the owner of the file is just gives you a
> > UID which is just a plain integer.
> 
> Any FS correctly implementing the protocol will ensure that only a user
> actually having the matching ID capability has control over the file...

Yes, but there's nothing stopping the sender from using a filesystem
that doesn't implement it properly.  It can even implement it itself.

> > And there doesn't seem to be anyway to check who controls the actual
> > translator either.
> 
> If that is the case, there is indeed no hope of using the file system to
> derive trust.

Precisely.

> > It hit me that what we want is essentially reverse authentication.
> > That is, letting the sender authenticate against the receiver, which
> > would normally be the server and the client respectively.  After this
> > the receiver will know for sure who controls the module.
> 
> I'm confused now. The way I read this, it sounds just like what I have
> been suggesting all along: Trust the code if we trust the sender...

Whoops, I meant letting the filesystem authenticate against the receiver.

> > Of course implementing this operation in existing translators would be
> > a chore.
> 
> Why? It would just be part of the migration framework...

Not if the functionality needs to be in the filesystem.  :-)

> > Instead we can provide a translator that provides this reverse
> > authentication but otherwise proxies its underlying node, or perhaps
> > just gives out a unproxied port to it directly.
> 
> This OTOH again sounds like a method for trusting code even if it's
> named by an untrusted sender...

Yes, this is what I meant.

> > This is has some other advantages.  It would be possible to ``bless''
> > code modules as appropriate for loading on a case-by-case basis.
> > Instead of loading any old file that happens to be owned by a trusted
> > user.
> 
> I don't see how this would help in any way...
> 
> Only the file owner can set a translator. And passive translators are
> started by the file system. All in all, exactly the same trust
> properties as with a "plain" file for all I can tell...

Well, it's more specific trust.  We can now explicitly determine that
the owner meant that the file can be trusted to load code from it,
rather than, say, a file in `/tmp' that was accidentally made writable
for the sender, which would allow it to inject arbitrary code.  (I assume
the receiver is more careful with permissions on code modules.)

Not something I require of the mechanism; rather, it's a nice side-effect.

> > Another interesting possibility would be to let the code modules be
> > translators themselves.  It would be kind of nice keeping it all the
> > needed functionality in a single file.  Though I'm not sure how it
> > would be implemented.  On the flip side it would mean that code would
> > be shared through a trivfs-like library, instead of in a separate
> > program which is usually prettier.
> 
> No idea what you mean...

Instead of proxying a normal .so file, the translator would allow
the client to load parts of the translator's own binary, which would
implement a mobile object.

As I said, this would be cool in some ways, but also weird.  :-)

> > The only real problem with specifying the module by port is that the
> > receiver needs to load the exact same module and not a copy of it.
> 
> I don't consider that a problem :-)

Consider Alice who wishes to use ioctls provided by Bob.  Now, Alice
trusts Bob enough to use his servers, but not enough to load code provided
by him.

After inspecting the source code of the ioctl handler, she trusts this
particular code provided by Bob.  So she copies the source code and
compiles it to an identical copy of the module used by Bob's server.
However, she still can't use the server since it provides a reference
to Bob's copy, which he can modify at any time.

Now I know you think that using other user's servers is a dubious
use-case.  But code is just plain data, it should be enough to trust
a copy.

I can't think of a way to support this use case if the module is specified
by port.  It can only be done when specified by symbolic name, I think.
If we want to make sure the code is identical, the sender could provide
a SHA1 hash of the code it's using, or something.  Such a check would
make it impossible to use a module compiled with different flags though...

Anyway, I keep going back and forth between symbolic names and port
references.  I'm not really convinced either way yet.

Regards,
  Fredrik

Symbolic names vs. port references (was Re: Code trust by reverse authentication)

2009-06-30 Thread Carl Fredrik Hammar
Hi,

On Mon, Jun 29, 2009 at 11:23:34AM +0200, Carl Fredrik Hammar wrote:
> > > The only real problem with specifying the module by port is that the
> > > receiver needs to load the exact same module and not a copy of it.
> > 
> > I don't consider that a problem :-)
> 
> Consider Alice who wishes to use ioctls provided by Bob.  Now, Alice
> trusts Bob enough to use his servers, but not enough to load code provided
> by him.
> 
> After inspecting the source code of the ioctl handler, she trusts this
> particular code provided by Bob.  So she copies the source code and
> compiles it to an identical copy of the module used by Bob's server.
> However, she still can't use the server since it provides a reference
> to Bob's copy, which he can modify at any time.
> 
> Now I know you think that using other user's servers is a dubious
> use-case.  But code is just plain data, it should be enough to trust
> a copy.
> 
> I can't think of a way to support this use case if the module is specified
> by port.  It can only be done when specified by symbolic name, I think.
> If we want to make sure the code is identical, the sender could provide
> a SHA1 hash of the code it's using, or something.  Such a check would
> make it impossible to use a module compiled with different flags though...
> 
> Anyway, I keep going back and forth between symbolic names and port
> references.  I'm not really convinced either way yet.

After writing and sending this, I realized that I should've resolved
this conflict earlier.  Somehow I had convinced myself that port
references were the way to go without reasoning it through
properly.

To reiterate, the alternatives for specifying a code module are:

Symbolic names: simply load the module with a given name where
libraries are normally found.

Port reference: send a port to the module file and load it.

And I have already established that trust can be determined in both
cases.  So we need to look at other criteria to decide which to go
with.

The first one I have considered is the ability to load different
but compatible versions of the module.  For instance, so that one
can load a module with debugging symbols when debugging.  Or if
the module used by the sender is from a source not trusted by the
receiver, the receiver might locate a compatible version or copy it can
trust.

Using symbolic names handles this naturally, since the receiver's user
can simply install the modules of his choosing where they will be
found and override the system's modules.

Port references can't handle it properly.  In fact the only way would
be to use symbolic names as a fall-back mechanism if the port reference
can't be trusted.

Though this is mostly a convenience issue.  Because the module can be
seen as a part of the server, it is reasonable to state that if the
server can't provide a trusted module, then it's a bug in the server
(or its setup).

The second criterion is whether the receiver can load the module if it
uses a different root directory than the sender.  This might sound like
exotic functionality at first, but it isn't for the ioctl use-case.
An application that chroots itself to an empty directory for security,
and then tries to use an ioctl on a file opened before the chroot, would
fail because it can't load the necessary ioctl handler module.

Port references handle this case naturally, since they never use the
root directory.  The only way to make it work with symbolic names is if
the directories in which modules are searched for are inherited from
parent processes the same way the root directory is, which would be
a very invasive change to the Hurd.

Given this, it seems that port references are indeed the way to go.
If we want to be able to load alternative versions, we can add a
look-up with symbolic names later, using either it or port references
as a fall-back.

Regards,
  Fredrik




[bug #26960] firmlink opens target with client specified flags

2009-07-04 Thread Carl Fredrik Hammar

URL:
  

 Summary: firmlink opens target with client specified flags
 Project: The GNU Hurd
Submitted by: hammy
Submitted on: Sat 04 Jul 2009 06:05:50 PM CEST
Category: Hurd Servers
Severity: 3 - Normal
Priority: 5 - Normal
  Item Group: None
  Status: None
 Privacy: Public
 Assigned to: None
 Originator Name: 
Originator Email: 
 Open/Closed: Open
 Discussion Lock: Any
 Reproducibility: None
  Size (loc): None
 Planned Release: None
  Effort: 0.00
Wiki-like text discussion box: 

___

Details:

firmlink opens its target file with any client specified open
flags, except O_CREAT.  This makes it possible for a client
to read or write to the target of a firmlink using the firmlink's
authority (io_restrict_auth is not enough).  It is also possible
for the client to halt firmlink's look-up midway through, using
O_NOLINK and O_NOTRANS.

A patch that fixes this is attached, along with a program that
exploits the security hole; just run it on a firmlink to a target
that the client should not be permitted to read.




___

File Attachments:


---
Date: Sat 04 Jul 2009 06:05:51 PM CEST  Name:
0001-Don-t-pass-client-flags-to-internal-firmlink-look-up.patch  Size: 1kB  
By: hammy


---
Date: Sat 04 Jul 2009 06:05:51 PM CEST  Name: firmlink-read.c  Size: 757B  
By: hammy



___

Reply to this item at:

  

___
  Message sent via/by Savannah
  http://savannah.gnu.org/





Re: Unionmount: proxying the control port

2009-07-07 Thread Carl Fredrik Hammar
Hi,

On Tue, Jul 07, 2009 at 08:55:37PM +0300, Sergiu Ivanov wrote:
> It was agreed that unionmount should forward some of the RPCs invoked
> on its control port to the mountee.  Most (if not all) of such RPCs
> are the fsys_* ones.  I've made up a list of RPCs which should be
> proxied in my opinion and I've also added a short explanation as to
> how this proxying should work:
> 
> * fsys_goaway: Both the mountee and unionmount should go away.
>
> * fsys_syncfs: Sync both the mountee and the underlying filesystem.

Yeah, these two are no-brainers.  The second one should also be in
unionfs.

> * fsys_set_options: This RPC should be forwarded to the mountee
>   completely.  unionmount does not have any command line switches that
>   would make much sense being altered at run-time.
> 
> * fsys_get_options: This RPC should be forwarded to the mountee
>   completely.  The reasoning is the same as for fsys_set_options.

This makes sense if we have unionmount in settrans or a stand-alone
translator with only a single mountee, but not with the current
unionfs implementation.

So you should wait to implement these until that has been decided.

> There also are some RPCs which I am not certain about:
> 
> * fsys_getfile: I don't really understand what this one does.

This (and file_getfh) enables a client to go back and forth between file
ports and symbolic identifiers unique to the file (for this translator).
Judging from file_getfh's comment this is only used by NFS.

Implementing this correctly is probably tricky, as you'd have three
(at least) sources of files: the underlying filesystem(s), the mountee, and
the unioned directories.  Note that you can only control the file handles
returned by the unioned directories, as the other files aren't proxied.

The question is which file handle goes to which of the unioned file
systems; this probably can't be determined reliably.  Also, you might
not have the authority to obtain the control port of the underlying
file system, and without it this can't be implemented.

You'll want to study how extfs/diskfs implements file_getfh before you
reach any final conclusion though.

This should be implemented in unionfs as well (if it's possible).

> * fsys_forward: From what antrik told me, this RPC should be forwarded
>   to the mountee completely; however, antrik also told me that this
>   RPC could be ignored, so I think I'll try to forward this completely
>   and see if it works.

Given that the only translators that implement this operation are
magic and new-fifo, I'd say you should skip it altogether.

If I'm not mistaken, all this operation does is start a translator and
then forget about it, i.e. there's no need to keep track of it once it's
started.  If this is the case there's no reason not to implement this
in libnetfs itself, and in the other translator libraries.

And extending the translator libraries is not your current task.  :-)

> In any case, I think that the fact that these RPCs are so rarely used
> shows that unionmount can forward them completely to the mountee
> without bothering to handle them specially.

NFS support would be nice.  :-)

Regards,
  Fredrik




Re: How to clone a port right

2009-07-07 Thread Carl Fredrik Hammar
Hi,

On Tue, Jul 07, 2009 at 09:24:27PM +0300, Sergiu Ivanov wrote:
> In the latest working design unionmount creates a proxy node (by
> cloning the netfs_root_node of unionfs translator) and sets the
> mountee on this proxy.  I'm currently trying to implement cfhammar's
> idea about having the mountee run in orphan mode.  To achieve this I
> call only fshelp_start_translator, with no file_set_translator
> following.  When calling fshelp_start_translator, I have to give a
> pointer to a function, open_port, which in my case looks like this:
> 
>   /* Opens the port on which to set the mountee.  */
>   error_t
> open_port (int flags, mach_port_t * underlying,
>mach_msg_type_name_t * underlying_type, task_t task,
>void *cookie)
>   {
> err = 0;
> 
> /* Create a port to `np`.  */
> newpi = netfs_make_protid
>   (netfs_make_peropen (np, flags, NULL), user);
> if (!newpi)
>   {
> iohelp_free_iouser (user);
> return errno;
>   }
> 
> *underlying = underlying_port = ports_get_send_right (newpi);
> *underlying_type = MACH_MSG_TYPE_COPY_SEND;
> 
> ports_port_deref (newpi);
> 
> return err;
>   } /*open_port */
> 
> np is the pointer to the proxy node.  If I want to get rid of the
> proxy node I must somehow avoid keeping references to it.  However, in
> the above code I clearly add a reference to the proxy node by creating
> a port which goes to the mountee and thus, does not get destroyed
> immediately.

Is there any reason to have a proxy at all now that there's no
file_set_translator?  Why not simply pass unionfs's underlying node?

Regards,
  Fredrik




Re: How to clone a port right

2009-07-08 Thread Carl Fredrik Hammar
Hi,

On Wed, Jul 08, 2009 at 02:14:53PM +0300, Sergiu Ivanov wrote:
> My previous mail says that when I simply pass the unionfs's underlying
> node I get an error; the error is given in that mail :-) That's why
> the question in the subject of the post ;-)

I knew I shouldn't have replied when I was so tired.  ;-)

Regards,
  Fredrik




Re: How to clone a port right

2009-07-08 Thread Carl Fredrik Hammar
Hi,

On Tue, Jul 07, 2009 at 09:24:27PM +0300, Sergiu Ivanov wrote:
>   /* Opens the port on which to set the mountee.  */
>   error_t
> open_port (int flags, mach_port_t * underlying,
>mach_msg_type_name_t * underlying_type, task_t task,
>void *cookie)
>   {
> err = 0;
> 
> /* Create a port to `np`.  */
> newpi = netfs_make_protid
>   (netfs_make_peropen (np, flags, NULL), user);
> if (!newpi)
>   {
> iohelp_free_iouser (user);
> return errno;
>   }
> 
> *underlying = underlying_port = ports_get_send_right (newpi);
> *underlying_type = MACH_MSG_TYPE_COPY_SEND;
> 
> ports_port_deref (newpi);
> 
> return err;
>   } /*open_port */
> 
> [...]
>
> Now the question: can the reason for such failure be the fact that the
> port right stored in underlying_node is used both in unionmount and in
> the mountee?  If so, is there a way to clone the port right?

This issue has been resolved on IRC, but I'll answer the question
for the record.  The port right is copied by mach_msg if its type
is MACH_MSG_TYPE_COPY_SEND, and this is the type you passed in
UNDERLYING_TYPE, so it is copied.  To move it you use ..._MOVE_SEND;
you can also create a send right from a receive right with ..._MAKE_SEND.
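
For illustration (this is my own toy example, not from the original
exchange), a self-contained program that uses the same transfer-type
names:

  #include <mach.h>
  #include <stdio.h>

  int
  main (void)
  {
    mach_port_t port;
    kern_return_t kr;

    /* Allocate a fresh receive right.  */
    kr = mach_port_allocate (mach_task_self (),
                             MACH_PORT_RIGHT_RECEIVE, &port);
    if (kr != KERN_SUCCESS)
      return 1;

    /* MACH_MSG_TYPE_MAKE_SEND creates a new send right from the receive
       right.  When sending an existing send right in a message instead,
       MACH_MSG_TYPE_COPY_SEND leaves the sender's right in place, while
       MACH_MSG_TYPE_MOVE_SEND transfers it away from the sender.  */
    kr = mach_port_insert_right (mach_task_self (), port, port,
                                 MACH_MSG_TYPE_MAKE_SEND);
    if (kr != KERN_SUCCESS)
      return 1;

    printf ("port %u now has a receive right and a send right\n",
            (unsigned int) port);
    return 0;
  }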

Regards,
  Fredrik




Re: Unionmount: proxying the control port

2009-07-08 Thread Carl Fredrik Hammar
Hi,

On Wed, Jul 08, 2009 at 03:41:51PM +0300, Sergiu Ivanov wrote:
> On Tue, Jul 07, 2009 at 09:50:12PM +0200, Carl Fredrik Hammar wrote:
> > On Tue, Jul 07, 2009 at 08:55:37PM +0300, Sergiu Ivanov wrote:
> > > * fsys_set_options: This RPC should be forwarded to the mountee
> > >   completely.  unionmount does not have any command line switches that
> > >   would make much sense being altered at run-time.
> > > 
> > > * fsys_get_options: This RPC should be forwarded to the mountee
> > >   completely.  The reasoning is the same as for fsys_set_options.
> > 
> > This makes sense if we have unionmount in settrans or a stand-alone
> > translator with only a single mountee, but not with the current
> > unionfs implementation.
> 
> Well, the fact that currently unionmount functionality is implemented
> as additional option of unionfs should not influence the set of
> use-cases.  I mean that if unionmount is mainly about merging the
> underlying filesystem and the filesystem of the mountee, we don't
> really need to modify run-time options of unionmount, regardless of
> the way it works at the moment.

Perhaps I should clarify: I meant that it doesn't make sense to forward
the RPC completely if this is implemented in unionfs, since unionfs does
have run-time options that users might want to fiddle with.  That is,
forwarding should only be done for the --mount option.

> > > There also are some RPCs which I am not certain about:
> > > 
> > > * fsys_getfile: I don't really understand what this one does.
> > 
> > This (and file_getfh) enables a client to go back and forth between file
> > ports and symbolic identifiers unique to the file (for this translator).
> > Judging from file_getfh's comment this is only used by NFS.
> 
> Yeah, I came to a similar conclusion when I looked at file_getfh and
> fsys.h, but what really embarrasses me is that the file handle from
> which fsys_getfile extracts the port right is data_t; this tells me
> nothing about what this file handle is.  You call this ``symbolic
> identifier'', could you please give some details?

I don't know the details.  :-(

Actually, you should study how nfsd makes use of the handle.  In
particular, find out whether it assumes that the handle is globally
unique; if it does, then you can too.  Then you could pass it to all
file systems and use the first one that accepts the handle.

Thinking about it, nfsd would have a hard time with files from different
translators if the file handle isn't globally unique.  How it handles
this is probably key to how to solve this issue.

> > Implementing this correctly is probably tricky, as you'd have three
> > (at least) sources of files: the underlying filesystem(s), the mountee, and
> > the unioned directories.  Note that you can only control the file handles
> > returned by the unioned directories, as the other files aren't proxied.
> 
> This is true.  Since I don't really have control over anything but the
> unioned filesystem, I guess I should stick with it and drop the other
> two sources, what do you say?  OTOH, unionfs proxies directory nodes
> only and I wonder whether it makes sense to bother at all...

Doing it only for directories makes no sense at all.

> > The question is which file handle goes to which of the unioned file
> > systems, this probably can't be determined reliably.  Also you might
> > not have the authority to obtain the control port of the underlying
> > file system.  Without this, it can't be implemented.
> 
> Hm...  If one doesn't have the authority to obtain the control port of
> the underlying filesystem, how does one have the right to set unionmount
> on that filesystem at all?  Or do I understand something wrong?

You can normally set a translator on files you own, no need to have
a control port to the underlying file system (just to the translator
you're setting).

> > You'll want to study how extfs/diskfs implements file_getfh before you
> > reach any final conclusion though.
> 
> I'd suppose you mean fsys_getfile ;-) Yeah, libdiskfs is a nice source
> of inspiration.

I meant file_getfh, so that you could perhaps gain info on how handles
are typically formed.  But studying how nfsd actually uses the handles
is more authoritative since that's the main use-case.

> > Also this should be implemented in unionfs as well (if its possible).
> 
> Looking at diskfs_S_fsys_getfile, I'm afraid that things are not so
> simple.  The idea is that both {diskfs,netfs}_S_fsys_getfile get a
> parameter char * handle.  diskfs_S_fsys_getfile converts this to a
> const union diskfs_fhandle* .  However libnetfs does not define
> anyt

Re: [PATCH 2/3] Add the code for starting up the mountee

2009-07-10 Thread Carl Fredrik Hammar
Hi,

On Fri, Jul 10, 2009 at 04:17:23AM +0200, olafbuddenha...@gmx.net wrote:
> > > > +  /*Opens the port on which to set the new translator */
> > > > +  error_t
> > > > +open_port
> > > > +(int flags, mach_port_t * underlying,
> > > > + mach_msg_type_name_t * underlying_type, task_t task, void *cookie)
> > > 
> > > AFAIK open_port should not be indented, and the parameter list should
> > > start on the same line.
> > 
> > I read in the GCS that emacs should be considered as an expert in GCS
> > indentation, and it indents things like this.  Which authority should
> > I comply with?
> 
> In general, the existing code is the authority. From what I've seen so
> far, this is handled very consistently in all existing Hurd code, and
> unionfs in fact has many examples.

Also, most code I've seen in the Hurd has the return type on the same
line.  The reasoning for having the return type on a separate line is to
get the function name in column 0 so it can be easily grepped.  But that
obviously can't be applied to nested functions.

Regards,
  Fredrik




Re: [bug #26960] firmlink opens target with client specified flags

2009-07-10 Thread Carl Fredrik Hammar
On Fri, Jul 10, 2009 at 03:27:32PM +0200, olafbuddenha...@gmx.net wrote:
> On Sat, Jul 04, 2009 at 04:05:52PM +0000, Carl Fredrik Hammar wrote:
> 
> > firmlink opens its target file with any client specified open flags,
> > except O_CREAT.  This makes it is possible for a client to read or
> > write to the target of a firmlink using the firmlink's authority
> > (io_restrict_auth is not enough).
> 
> This could be considered a feature in some situations :-)

It definitely shouldn't be the default though.  :-)

Regards,
  Fredrik




Re: Code trust by reverse authentication

2009-07-20 Thread Carl Fredrik Hammar
Hi,

On Thu, Jul 16, 2009 at 05:23:11AM +0200, olafbuddenha...@gmx.net wrote:
> As I already said on IRC, I do see some merit in the idea of reverse
> authentication, i.e. file objects authenticating against clients. It
> makes sense to consider files as active objects, which have access to
> certain user IDs. (The fact that a file system usually provides a whole
> bunch of individual objects, is just an implementation detail.)
> 
> The funny thing is that while there is no standard interface to expose
> the authority information, file systems nodes can indirectly prove they
> have access to certain user IDs, by starting a translator on the node.
> This is what your code blesser makes use of. It is rather hackish
> though, creating some practical problems -- if we ever mean to use
> reverse authentication seriously, we better provide a proper interface
> in the filesystems themselves.
> 
> Anyways, while I see some merit in reverse authentication of files, I
> still don't think it is terribly useful for the migration framework.
> Having to explicitly bless every module is unrealistic; so in the
> standard case, we should just let the sender provide the authentication;
> and reserve explicit code blessing for the few cases where the sender
> can't, if at all.

Unrealistic?  Most likely the blessing would only require some changes
to a makefile's install rule.

Though in my experience mucking about with makefiles can easily turn
into a lot of work with little gain, so I'd rather postpone it as much
as possible.  For this reason I will put the reverse authentication
in the sender for now.

The actual protocol will probably not need changing, except perhaps in
name.  Now that I think about it, the sender can probably forward the
request to a code blesser; I don't think it would even require any more
RPCs than the original protocol.

I would say that it is inefficient to have so many separate translators,
but this could be alleviated using a translator that blesses all files
(or symlinks) under a directory.  This might even be more convenient:
just install the module under /hurd/modules and you're set to go.

> In almost all cases, the sender itself is trusted. As I have said
> before, I don't think it's terribly likely that users really want to
> access servers run by other, untrusted users.  And the case that a user
> we don't trust runs a server which we want to access, but using modules
> we have blessed, *and* still really really needing migration, is even
> less likely -- I tend to consider it purely academical. Certainly not
> worth complicating the framework for it...

Well, ioctl handlers always require migration...

It is unlikely that another user uses a module blessed by oneself.
However, it is likely that the user uses a module blessed by root,
especially if that user wants other users to be able to make use
of a service.

Also consider servers that drop authority for security reasons.
We could simply require that a server retains whatever authority is
needed to implement it, but that seems like an unnecessary restriction.

Regards,
  Fredrik




Re: cmp: the port comparison server

2009-07-20 Thread Carl Fredrik Hammar
Hi,

On Thu, Jul 16, 2009 at 03:48:19AM +0200, olafbuddenha...@gmx.net wrote:
> On Mon, Jun 29, 2009 at 09:59:18AM +0200, Carl Fredrik Hammar wrote:
> > On Sun, Jun 28, 2009 at 11:21:09PM +0200, olafbuddenha...@gmx.net
> > wrote:
> > > If the receiver has the task port, it can obtain the UID
> > > capabilities from the sender; and AIUI the reverse is also true. In
> > > other words, having the task port is effectively equivalent to
> > > having the same UIDs. And this can be safely checked using the
> > > existing auth mechanism.
> > 
> > Yes, it is effectively equivalent to the *current* access policy
> > implemented by the proc server.  However, that policy could change in
> > the future, and perhaps more importantly, a user-run proxy can
> > implement a different access policy.
> 
> I don't think it's really useful to change the policy in proc, unless
> also changing the filesystems, and probably a number of other things...

Seeing as proc and the filesystems are pretty decoupled, I don't see
why you'd have to change the filesystems if you change proc.  But even
if that's really the case, a user still *must* change proc to change
the policy.

> I believe you are trying to be too smart here -- introducing new
> mechanisms in an attempt to be more generic than the Hurd itself is.

It isn't more generic than the Hurd currently is; it's making use of an
existing Hurd server, i.e. proc.  The cmp server is only needed to make
this use possible, nothing else.  Actually, it's only really needed
to handle the possibility that the sender and receiver use different
proc servers.

> I'm sure this creates more problems than it solves. All the existing Hurd
> mechanisms are built around the UNIX user mechanism; and probably even
> more importantly, all existing applications are taking it for granted.

Yes, including proc's access policy.  The receiver is still granted
access based on its owner, at least as long as the standard proc is used.
I don't see how it can create new problems, only that it might expose
existing ones.

> As I have mentioned in other places, I am actually interested in
> restricted subenvironments, where applications can be run without having
> access to everything the user launching them has access to -- but I
> would base this on local subusers, i.e. using the existing user concept
> in a creative way, rather than attempting to change it...

Given that user identities are still used for determining access to
dependencies, I don't see how this applies.

> I think this is another case of YAGNI. You can't cover all possibilities
> anyways, and whatever you come up with now, most likely will *not* cover
> the cases that actually will be used in the future.

The only possibility I'm trying to cover is having the access policy for
a process's memory and port rights in one place.  If I invent my own
access policy, that too will need to be changed in order to change the
overall policy, instead of keeping it in proc where it belongs (or
rather, where it's already needed).

> *If* we create something using different policies one day, we can
> consider how the mobility framework fits in (just as we will have
> to consider all the other parts of the Hurd design); but for now,
> I think we should stick to what the rest of the Hurd does -- which
> is the UNIX user concept.

But this is sticking with what the Hurd does, or with what proc does to
be more precise.  Also, any user is and should be free to implement their
own proc server, so it may not be us who end up implementing a different
policy.

Regards,
  Fredrik




Re: [PATCH 3/4] Conditionally forward some fsys_* RPCs to the mountee.

2009-07-27 Thread Carl Fredrik Hammar
Hi,

This is just a thought that suddenly struck me; I figure I'll get it
out before I forget it.

On Fri, Jul 17, 2009 at 01:58:01PM +0300, Sergiu Ivanov wrote:
> +/* Shutdown the filesystem; flags are as for fsys_goaway.  */
> +error_t
> +netfs_shutdown (int flags)
> +{
> +  int nports;
> +  int err;
> +
> +  if ((flags & FSYS_GOAWAY_UNLINK)
> +  && S_ISDIR (netfs_root_node->nn_stat.st_mode))
> +return EBUSY;
> +
> +  /* Permit all current RPC's to finish, and then suspend any new ones.  */
> +  err = ports_inhibit_class_rpcs (netfs_protid_class);
> +  if (err)
> +return err;
> +
> +  nports = ports_count_class (netfs_protid_class);
> +  if (((flags & FSYS_GOAWAY_FORCE) == 0) && nports)
> +/* There are outstanding user ports; resume operations. */
> +{
> +  ports_enable_class (netfs_protid_class);
> +  ports_resume_class_rpcs (netfs_protid_class);
> +
> +  return EBUSY;
> +}
> +
> +  if (!(flags & FSYS_GOAWAY_NOSYNC))
> +{
> +  err = netfs_attempt_syncfs (0, flags);
> +  if (err)
> +return err;
> +}
> +
> +  /* If `shutting_down` is set, unionfs is going away because the
> + mounee has just died, so we don't need to attempt to shut it
> + down.  */
> +  if (!shutting_down)
> +{
> +  shutting_down = 1;
> +  err = fsys_goaway (mountee_control, flags);
> +  if (err)
> + return err;
> +}
> +
>return 0;
>  }

Shouldn't you resume operations if fsys_goaway returns EBUSY?
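
Something along these lines, perhaps (an untested fragment against the
hunk above, mirroring the FSYS_GOAWAY_FORCE branch):

  if (!shutting_down)
    {
      shutting_down = 1;
      err = fsys_goaway (mountee_control, flags);
      if (err == EBUSY)
        {
          /* The mountee refused to go away; let our own RPCs continue
             again before reporting the error.  */
          shutting_down = 0;
          ports_enable_class (netfs_protid_class);
          ports_resume_class_rpcs (netfs_protid_class);
        }
      if (err)
        return err;
    }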

Regards,
  Fredrik




Re: Hiding nodes with unionmount

2009-07-30 Thread Carl Fredrik Hammar
Hi,

On Thu, Jul 30, 2009 at 10:14:06AM +0200, Arne Babenhauserheide wrote:
> Would it be possible to hide a node via unionmount? 
> 
> For example I might want to remove a file from the underlying node when I 
> replace it with a new but differently named one (i.e. lib.so.1 -> lib.so.3). 

Not really.  The main problem is knowing which files to filter out.
You would need to supply a list of files to exclude, which would
unnecessarily complicate unionmount IMHO.

A cleaner solution would be to first mount a hypothetical ``filterfs''
that removes the files, and then do a unionmount on top of that.
Alternatively, you could simply set a lib.so.1 -> lib.so.3 symlink in
the mountee, which would shadow the underlying lib.so.1.

Regards,
  Fredrik




Server provided ioctl handler details

2009-07-30 Thread Carl Fredrik Hammar
Hi,

I've gotten a basic implementation of server-provided ioctl handlers
working, in which I unconditionally load a server-provided module
containing the handlers on every call to ioctl and then immediately
unload it again.  I would like to discuss some details of the current
implementation.

Currently, attempts to handle an ioctl are done in the following order:

* Use server ioctl handler
* Look up a glibc ioctl handler
* Translate ioctl into an RPC

This allows the server ioctl handler to override glibc's ioctl handlers.
The question is whether this functionality is useful; it could be
potentially confusing if the well-established ioctls defined by glibc
are overridden.  I tend to think that since the ioctl handler is provided
by a trusted user, we can trust it to do the sane thing, whatever that
turns out to be.  ;-)

The server module provides a single handler, which has essentially
the same signature as ioctl() itself, i.e. it takes a file descriptor,
a request number, and a void pointer as arguments and is expected to
return -1 and set errno on error.  This is the same as for the glibc
provided ioctl handlers.
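
Concretely, such a module handler might look something like this (the
exported name and the request number are just placeholders I made up
for illustration):

  #include <errno.h>

  #define EXAMPLE_REQUEST 0x1234  /* Made-up request number.  */

  int
  hurd_ioctl_handler (int fd, int request, void *arg)
  {
    (void) fd;

    if (request != EXAMPLE_REQUEST)
      {
        /* Not ours: fail with ENOTTY so the caller can fall back on
           the other handlers (one plausible convention).  */
        errno = ENOTTY;
        return -1;
      }

    /* A real handler would typically do an RPC on FD's underlying
       port; here we just fill in a result.  */
    *(int *) arg = 42;
    return 0;
  }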

While a module can only provide a single handler, it is easy to see that
it can forward the call to other handlers if necessary.  But I'm going
to try to handle it the same way as glibc's handlers, so that such
handlers could also be used by simply linking them into the application
itself.

I'm also considering allowing the server to specify several modules
containing ioctl handlers.  This would be useful if we decide to use
code-blessers since in that case it would be impossible for an untrusted
user to provide a trusted module that aggregates several trusted handlers.

But then again, a device that implements ioctls from several device
classes is probably very exotic (at least when excluding those already
defined by glibc).  What do you think?

Regards,
  Fredrik




[bug #27184] Memory leak in procfs

2009-08-05 Thread Carl Fredrik Hammar

URL:
  

 Summary: Memory leak in procfs
 Project: The GNU Hurd
Submitted by: hammy
Submitted on: Wed 05 Aug 2009 09:27:23 PM CEST
Category: Hurd Servers
Severity: 3 - Normal
Priority: 5 - Normal
  Item Group: None
  Status: None
 Privacy: Public
 Assigned to: None
 Originator Name: 
Originator Email: 
 Open/Closed: Open
 Discussion Lock: Any
 Reproducibility: None
  Size (loc): None
 Planned Release: None
  Effort: 0.00
Wiki-like text discussion box: 

___

Details:

Ironically, `free' triggers a memory leak in `procfs', which uses
~0.5 MB more memory each time `free' is run.  Seeing one's memory
filling up on every `free' can be quite nerve-wracking.





___

Reply to this item at:

  

___
  Message sent via/by Savannah
  http://savannah.gnu.org/





[PATCH 3/4] Reload fd ioctl handler on each call to ioctl

2009-08-14 Thread Carl Fredrik Hammar
* hurd/hurdioctl.c (_hurd_dummy_ioctl_handler): New function.
* hurd/hurd/ioctl.h (_hurd_dummy_ioctl_handler): Likewise.
* hurd/fd-ioctl-call.c: New file.
* hurd/hurd/fd.h: Update copyright years.
(_hurd_fd_call_ioctl_handler): New function declaration.
* hurd/Makefile: Update copyright years.
(user-interfaces): Add `ioctl_handlers'.
(dtable): Add `fd-ioctl-call'.
* sysdeps/mach/hurd/ioctl.c: Update copyright years.
(__ioctl): Call fd ioctl handler.
---
 hurd/Makefile |7 ++-
 hurd/fd-ioctl-call.c  |  122 +
 hurd/hurd/fd.h|6 ++-
 hurd/hurd/ioctl.h |5 ++
 hurd/hurdioctl.c  |9 +++-
 sysdeps/mach/hurd/ioctl.c |   12 -
 6 files changed, 155 insertions(+), 6 deletions(-)
 create mode 100644 hurd/fd-ioctl-call.c

diff --git a/hurd/Makefile b/hurd/Makefile
index ff6b7cb..b267e50 100644
--- a/hurd/Makefile
+++ b/hurd/Makefile
@@ -1,4 +1,4 @@
-# Copyright (C) 1991,92,93,94,95,96,97,98,99,2001,2002,2004,2006
+# Copyright (C) 1991,92,93,94,95,96,97,98,99,2001,2002,2004,2006,2009
 #  Free Software Foundation, Inc.
 # This file is part of the GNU C Library.
 
@@ -40,7 +40,7 @@ user-interfaces   := $(addprefix hurd/,\
   msg msg_reply msg_request \
   exec exec_startup crash interrupt \
   fs fsys io term tioctl socket ifsock \
-  login password pfinet \
+  login password pfinet ioctl_handlers \
   )
 server-interfaces  := hurd/msg faultexc
 
@@ -67,7 +67,8 @@ sig   = hurdsig hurdfault siginfo hurd-raise preempt-sig \
  thread-self thread-cancel intr-msg catch-signal
 dtable = dtable port2fd new-fd alloc-fd intern-fd \
  getdport openport \
- fd-close fd-read fd-write hurdioctl ctty-input ctty-output
+ fd-close fd-read fd-write hurdioctl ctty-input ctty-output \
+ fd-ioctl-call
 inlines = $(inline-headers:%.h=%-inlines)
 distribute = hurdstartup.h hurdfault.h hurdhost.h sysvshm.h \
 faultexc.defs intr-rpc.defs intr-rpc.h intr-msg.h Notes
diff --git a/hurd/fd-ioctl-call.c b/hurd/fd-ioctl-call.c
new file mode 100644
index 000..c4a6b10
--- /dev/null
+++ b/hurd/fd-ioctl-call.c
@@ -0,0 +1,122 @@
+/* Call descriptors ioctl handler.
+   Copyright (C) 2009 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, write to the Free
+   Software Foundation, Inc., 51 Franklin Street, Fifth Floor,
+   Boston, MA 02110-1301, USA.  */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+
+/* Reauthenticate the port referenced by RESULT, and return the
+   reauthenticated handle in RESULT.  The original port is unconditionally
+   consumed.  */
+
+static error_t
+reauthenticate (io_t *result)
+{
+  error_t err;
+  io_t unauth;
+  mach_port_t ref;
+  error_t reauth (auth_t auth)
+{
+  return __auth_user_authenticate (auth, ref,
+  MACH_MSG_TYPE_MAKE_SEND,
+  result);
+}
+
+  unauth = *result;
+  ref = __mach_reply_port ();
+  err = __io_reauthenticate (unauth, ref, MACH_MSG_TYPE_MAKE_SEND);
+  if (! err)
+err = _hurd_ports_use (INIT_PORT_AUTH, &reauth);
+
+  __mach_port_destroy (__mach_task_self (), ref);
+  __mach_port_deallocate (__mach_task_self (), unauth);
+  return err;
+}
+
+
+/* Get PORT's ioctl handler module and load it, returning the linker map
+   in MAP as returned by `dlopen'.  */
+
+static error_t
+load_ioctl_handler (io_t port, void **map)
+{
+  io_t hio;
+  int hfd;
+  error_t err;
+
+  err = __ioctl_handlers (port, &hio);
+  if (!err)
+{
+  err = reauthenticate (&hio);
+  if (!err)
+   {
+ hfd = _hurd_intern_fd (hio, 0, 1);
+ if (hfd != -1)
+   {
+ char *hfd_name;
+ err = __asprintf (&hfd_name, "/dev/fd/%d", hfd);
+ if (err == -1)
+   err = errno;
+ else
+   {
+ *map = __libc_dlopen (hfd_name);
+ free (hfd_name);
+ err = 0;
+   }
+ __close (hfd);
+   }
+   }

[PATCH 2/4] Cast with ioctl_handler_t instead of its definition

2009-08-14 Thread Carl Fredrik Hammar
* hurd/hurd/ioctl.h (_HURD_HANDLE_IOCTLS_1): Cast to `ioctl_handler_t' type.
---
 hurd/hurd/ioctl.h |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/hurd/hurd/ioctl.h b/hurd/hurd/ioctl.h
index ee156f0..e5ab3dc 100644
--- a/hurd/hurd/ioctl.h
+++ b/hurd/hurd/ioctl.h
@@ -57,7 +57,7 @@ extern int hurd_register_ioctl_handler (int first_request, 
int last_request,
   static const struct ioctl_handler handler##_ioctl_handler##moniker \
__attribute__ ((__unused__)) =\
 { _IOC_NOTYPE (first), _IOC_NOTYPE (last),   \
-   (int (*) (int, int, void *)) (handler), NULL };   \
+   (ioctl_handler_t) (handler), NULL };  \
   text_set_element (_hurd_ioctl_handler_lists,   \
 handler##_ioctl_handler##moniker)
 #define_HURD_HANDLE_IOCTLS(handler, first, last)   
  \
-- 
1.6.3.3





[PATCH 1/4] Don't resolve FD's port and ctty twice for TIOCSCTTY

2009-08-14 Thread Carl Fredrik Hammar
* hurd/hurdioctl.c (tiocsctty): Only get FD ports, do work in...
(tiocsctty_internal): ...this new function.
---
 hurd/hurdioctl.c |   31 ---
 1 files changed, 20 insertions(+), 11 deletions(-)

diff --git a/hurd/hurdioctl.c b/hurd/hurdioctl.c
index 96d910b..13a1a78 100644
--- a/hurd/hurdioctl.c
+++ b/hurd/hurdioctl.c
@@ -246,32 +246,41 @@ _hurd_setcttyid (mach_port_t cttyid)
 /* Make FD be the controlling terminal.
This function is called for `ioctl (fd, TCIOSCTTY)'.  */
 
-static int
-tiocsctty (int fd,
-  int request) /* Always TIOCSCTTY.  */
+static error_t
+tiocsctty_internal (io_t port, io_t ctty)
 {
   mach_port_t cttyid;
   error_t err;
 
-  /* Get FD's cttyid port, unless it is already ours.  */
-  err = HURD_DPORT_USE (fd, ctty != MACH_PORT_NULL ? EADDRINUSE :
-   __term_getctty (port, &cttyid));
-  if (err == EADDRINUSE)
+  if (ctty != MACH_PORT_NULL)
 /* FD is already the ctty.  Nothing to do.  */
 return 0;
-  else if (err)
-return __hurd_fail (err);
+
+  /* Get FD's cttyid port.  */
+  err =__term_getctty (port, &cttyid);
+  if (err)
+return err;
 
   /* Change the terminal's pgrp to ours.  */
-  err = HURD_DPORT_USE (fd, __tioctl_tiocspgrp (port, _hurd_pgrp));
+  err = __tioctl_tiocspgrp (port, _hurd_pgrp);
   if (err)
-return __hurd_fail (err);
+return err;
 
   /* Make it our own.  */
   install_ctty (cttyid);
 
   return 0;
 }
+
+static int
+tiocsctty (int fd,
+  int request) /* Always TIOCSCTTY.  */
+{
+  error_t err;
+
+  err = HURD_DPORT_USE (fd, tiocsctty_internal (port, ctty));
+  return __hurd_fail (err);
+}
 _HURD_HANDLE_IOCTL (tiocsctty, TIOCSCTTY);
 
 /* Dissociate from the controlling terminal.  */
-- 
1.6.3.3





[PATCH 4/4] Save handlers between calls to ioctl

2009-08-14 Thread Carl Fredrik Hammar
* hurd/hurd/ioctl.h (ioctl_handler_t): Move from here...
* hurd/hurd/fd.h (ioctl_handler_t): ...to here.
Change return type.
Add `d', `crit', and `result' arguments.
Change all callers and ioctl handlers.
(hurd_fd): Add ioctl handler members.
* hurd/fd-close.c: Update copyright years.
(_hurd_fd_close): Clear ioctl handler and deallocate linker map.
* hurd/fd-ioctl-call.c (_hurd_fd_call_ioctl_handler):
Change return type.
Add `d', `crit', and `result' arguments.
Handle descriptor locking.
Save ioctl handler between calls.
Change all callers.
* hurd/new-fd.c: Update copyright years.
(_hurd_new_fd): Initialize ioctl handler.
* hurd/port2fd.c: Update copyright years.
(_hurd_port2fd): Initialize ioctl handler.
* hurd/dtable.c: Update copyright years.
(init_dtable): Initialize ioctl handler.
* hurd/fd-ioctl-cleanup.c: New file.
* hurd/Makefile (dtable): Add `fd-ioctl-cleanup'.
* sysdeps/mach/hurd/ioctl.c (__ioctl): Lock descriptor.
---
 hurd/Makefile |2 +-
 hurd/dtable.c |8 ++-
 hurd/fd-close.c   |   13 -
 hurd/fd-ioctl-call.c  |  112 +++-
 hurd/fd-ioctl-cleanup.c   |   31 ++
 hurd/hurd/fd.h|   43 +-
 hurd/hurd/ioctl.h |7 +-
 hurd/hurdioctl.c  |  140 +
 hurd/new-fd.c |7 ++-
 hurd/port2fd.c|   11 +++-
 sysdeps/mach/hurd/ioctl.c |   43 --
 11 files changed, 349 insertions(+), 68 deletions(-)
 create mode 100644 hurd/fd-ioctl-cleanup.c

diff --git a/hurd/Makefile b/hurd/Makefile
index b267e50..bc6718f 100644
--- a/hurd/Makefile
+++ b/hurd/Makefile
@@ -68,7 +68,7 @@ sig   = hurdsig hurdfault siginfo hurd-raise preempt-sig \
 dtable = dtable port2fd new-fd alloc-fd intern-fd \
  getdport openport \
  fd-close fd-read fd-write hurdioctl ctty-input ctty-output \
- fd-ioctl-call
+ fd-ioctl-call fd-ioctl-cleanup
 inlines = $(inline-headers:%.h=%-inlines)
 distribute = hurdstartup.h hurdfault.h hurdhost.h sysvshm.h \
 faultexc.defs intr-rpc.defs intr-rpc.h intr-msg.h Notes
diff --git a/hurd/dtable.c b/hurd/dtable.c
index 125345e..bf5a9c1 100644
--- a/hurd/dtable.c
+++ b/hurd/dtable.c
@@ -1,4 +1,5 @@
-/* Copyright (C) 1991,92,93,94,95,96,97,99 Free Software Foundation, Inc.
+/* Copyright (C) 1991,92,93,94,95,96,97,99,2009
+   Free Software Foundation, Inc.
This file is part of the GNU C Library.
 
The GNU C Library is free software; you can redistribute it and/or
@@ -70,6 +71,11 @@ init_dtable (void)
  _hurd_port_init (&new->port, MACH_PORT_NULL);
  _hurd_port_init (&new->ctty, MACH_PORT_NULL);
 
+ /* Initialize the ioctl handler.  */
+ new->ioctl_handler = NULL;
+ new->ioctl_handler_map = NULL;
+ new->ioctl_handler_users = NULL;
+
  /* Install the port in the descriptor.
 This sets up all the ctty magic.  */
  _hurd_port2fd (new, _hurd_init_dtable[i], 0);
diff --git a/hurd/fd-close.c b/hurd/fd-close.c
index f497d75..f3d0aa5 100644
--- a/hurd/fd-close.c
+++ b/hurd/fd-close.c
@@ -1,4 +1,4 @@
-/* Copyright (C) 1994, 1995, 1997 Free Software Foundation, Inc.
+/* Copyright (C) 1994, 1995, 1997, 2009  Free Software Foundation, Inc.
This file is part of the GNU C Library.
 
The GNU C Library is free software; you can redistribute it and/or
@@ -17,6 +17,7 @@
02111-1307 USA.  */
 
 #include 
+#include 
 
 error_t
 _hurd_fd_close (struct hurd_fd *fd)
@@ -33,10 +34,20 @@ _hurd_fd_close (struct hurd_fd *fd)
 }
   else
 {
+  /* Clear the descriptor's ioctl handler, and close its linker map.  */
+  if (fd->ioctl_handler_map != NULL
+ && _hurd_userlink_clear (&fd->ioctl_handler_users))
+   __libc_dlclose (fd->ioctl_handler_map);
+
+  fd->ioctl_handler = NULL;
+  fd->ioctl_handler_map = NULL;
+  fd->ioctl_handler_users = NULL;
+
   /* Clear the descriptor's port cells.
 This deallocates the ports if noone else is still using them.  */
   _hurd_port_set (&fd->ctty, MACH_PORT_NULL);
   _hurd_port_locked_set (&fd->port, MACH_PORT_NULL);
+
   err = 0;
 }
 
diff --git a/hurd/fd-ioctl-call.c b/hurd/fd-ioctl-call.c
index c4a6b10..28e9680 100644
--- a/hurd/fd-ioctl-call.c
+++ b/hurd/fd-ioctl-call.c
@@ -93,30 +93,112 @@ load_ioctl_handler (io_t port, void **map)
 }
 
 
-/* Call D's ioctl handler, loading it from the underlying port if
-   necessary.  Arguments are the same as ioctl handlers.  */
+/* Load and install D's ioctl handler.  D should be locked and CRIT should
+   point to a critical section lock.  CRIT is unlocked whenever D is
+   unlocked and a new lock is returned in CRIT if D needs to be relocked.
+   D is unlocked while the handler is loaded.  If the underlying port
+   of D changes while it's unlocked the operation is retried with the
+   new port.  This is repeated until the port remains unchanged, or
+   if i

[PATCH 0/4] Load ioctl handlers from server

2009-08-14 Thread Carl Fredrik Hammar
Hi,

here comes a patch series that implements the brunt of the glibc side of
loading ioctl handlers from servers.  The only major missing part is the
reverse authentication that establishes trust in the server.  This is
pretty orthogonal, though, and requires more changes to the Hurd than to
glibc, so I thought this was a nice point to send in what I have so far
for review.

The first patch fixes a bug in one of the ioctl handlers.  The handler
extracts the descriptor's underlying port twice, and in the meantime
the port could change.  The second patch is a very simple clean-up.

The fun starts in patch three, where we load an ioctl handler from the
server each time ioctl is called.  Of note here is how I load the
port-specified module using dlopen, which operates on a path... which
isn't very pretty.  The only way to make it pretty involves making a
Hurd-specific dlopen, which I'm not sure we'd want.

The last patch saves the ioctl handler between ioctl calls.  This
requires that the descriptor be locked until a handler is found that
accepts the ioctl, which means that the lock must be held inside the
handlers and unlocked by the right handler before doing any RPCs.
This is quite complicated and messy.

This can be made much cleaner for handlers that simply send an RPC,
which is the intended use-case for server-provided ioctls.  As long
as ioctl() can tell that such a handler will handle an ioctl before
calling it, it could simply extract the descriptor's ports, unlock the
descriptor, and call a simple handler of type
error_t (*) (io_t port, io_t ctty).
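
A hypothetical sketch of that simpler handler shape (the body is
invented just to show the idea):

  #define _GNU_SOURCE 1
  #include <errno.h>
  #include <mach.h>
  #include <hurd/hurd_types.h>

  typedef error_t (*simple_ioctl_handler_t) (io_t port, io_t ctty);

  static error_t
  example_rpc_only_handler (io_t port, io_t ctty)
  {
    (void) ctty;
    if (port == MACH_PORT_NULL)
      return EBADF;

    /* A handler of this kind would just perform a single RPC on PORT
       (or CTTY) and return the resulting error code.  */
    return 0;
  }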

There is one ioctl handler that operates on the descriptor itself,
though, and I suspect two handlers that operate on the entire descriptor
table, which should hold the descriptor lock for the duration.  But I
don't see a use-case for overriding these ioctls, so perhaps they should
just be handled separately.  What do you think?

Also of note in the patch is that the descriptor is unlocked while the
ioctl handler is loaded.  This is partly because _hurd_intern_fd also
attempts to lock it, but more importantly because holding a lock during
an RPC is a bad idea.  Note that if the underlying port changes during
the unlock, the load is retried.

I'm not sure if it's worthwhile sending in the accompanying Hurd patches,
which just add an interface and a test case.  I think I'll wait until I
have the reverse authentication going, which is my next step.

Regards,
  Fredrik

Carl Fredrik Hammar (4):
  Don't resolve FD's port and ctty twice for TIOCSCTTY
  Cast with ioctl_handler_t instead of its definition
  Reload fd ioctl handler on each call to ioctl
  Save handlers between calls to ioctl

 hurd/Makefile |7 +-
 hurd/dtable.c |8 ++-
 hurd/fd-close.c   |   13 +++-
 hurd/fd-ioctl-call.c  |  204 +
 hurd/fd-ioctl-cleanup.c   |   31 +++
 hurd/hurd/fd.h|   45 ++-
 hurd/hurd/ioctl.h |   12 ++-
 hurd/hurdioctl.c  |  166 +++-
 hurd/new-fd.c |7 ++-
 hurd/port2fd.c|   11 ++-
 sysdeps/mach/hurd/ioctl.c |   51 ++-
 11 files changed, 497 insertions(+), 58 deletions(-)
 create mode 100644 hurd/fd-ioctl-call.c
 create mode 100644 hurd/fd-ioctl-cleanup.c





Re: [PATCH 2/3] Start the mountee in a lazy fashion.

2009-08-17 Thread Carl Fredrik Hammar
Hi,

On Mon, Aug 17, 2009 at 07:15:09PM +0300, Sergiu Ivanov wrote:
> > > +  mountee_node = netfs_make_node (netfs_root_node->nn);
> > > +  if (!mountee_node)
> > > +return ENOMEM;
> > > +
> > > +  /* Set the mountee on the new node.
> > > + Note that the O_READ flag does not actually limit access to the
> > > + mountee's filesystem considerably.  Whenever a client looks up a
> > > + node which is not a directory, unionfs will give off a port to
> > > + the node itself, withouth proxying it.  Proxying happens only for
> > > + directory nodes.  */
> > 
> > Why are you passing O_READ, anyways?...
> 
> The flags which I pass to start_mountee are used in opening the port
> to the root node of the mountee.  (I'm sure you've noticed this; I'm
> just re-stating it to avoid ambiguities).  Inside unionfs, this port
> is used for lookups *only*, so O_READ should be sufficient for any
> internal unionfs needs.  Ports to files themselves are not proxied by
> unionfs (as the comment reads), so the flags passed here don't
> influence that case.
> 
> Also, unionfs itself uses O_READ when opening directory nodes, too
> (well, it actually uses O_READ | O_NOTRANS, but that's unapplicable in
> our case).

You don't need O_READ to do a lookup, only to read the entries of a
directory.  If you don't read the entries you should drop O_READ here,
and in unionfs itself if applicable.

(Note that permission to do lookups is determined entirely by the
*current* permission bits and the UIDs and GIDs the file handle has
been authenticated with, unlike read and write, for which permissions
are checked only once at open.)

Regards,
  Fredrik




[PATCH] Cast to ioctl_handler_t instead of its definition

2009-08-26 Thread Carl Fredrik Hammar
* hurd/hurd/ioctl.h (_HURD_HANDLE_IOCTLS_1): Cast to `ioctl_handler_t'.
---
Hi,

This is an obvious fix to glibc that makes it easier to change
`ioctl_handler_t'.  I sent in this patch before as part of the server
provided ioctl handler patch series.  I'm resending it because I'm going
to resend the rest of the series as well, but this time I'm sending it
stand-alone since this patch is useful in itself.

Regards,
  Fredrik
---
 hurd/hurd/ioctl.h |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/hurd/hurd/ioctl.h b/hurd/hurd/ioctl.h
index ee156f0..e5ab3dc 100644
--- a/hurd/hurd/ioctl.h
+++ b/hurd/hurd/ioctl.h
@@ -57,7 +57,7 @@ extern int hurd_register_ioctl_handler (int first_request, 
int last_request,
   static const struct ioctl_handler handler##_ioctl_handler##moniker \
__attribute__ ((__unused__)) =\
 { _IOC_NOTYPE (first), _IOC_NOTYPE (last),   \
-   (int (*) (int, int, void *)) (handler), NULL };   \
+   (ioctl_handler_t) (handler), NULL };  \
   text_set_element (_hurd_ioctl_handler_lists,   \
 handler##_ioctl_handler##moniker)
 #define_HURD_HANDLE_IOCTLS(handler, first, last)   
  \
-- 
1.6.3.3





[PATCH] Only resolve FD's port and ctty once for TIOCSCTTY

2009-08-26 Thread Carl Fredrik Hammar
* hurd/hurdioctl.c (tiocsctty): Only get FD ports, do work in...
(tiocsctty_internal): ...this new function.
---
Hi,

This is another stand-alone patch I have sent earlier.

This fixes the handler for TIOCSCTTY so that it only resolves the
underlying port of the file descriptor once.  Since the descriptor isn't
locked between the separate resolves, the underlying port can change
mid-call.

Regards,
  Fredrik
---
 hurd/hurdioctl.c |   31 ---
 1 files changed, 20 insertions(+), 11 deletions(-)

diff --git a/hurd/hurdioctl.c b/hurd/hurdioctl.c
index 96d910b..13a1a78 100644
--- a/hurd/hurdioctl.c
+++ b/hurd/hurdioctl.c
@@ -246,32 +246,41 @@ _hurd_setcttyid (mach_port_t cttyid)
 /* Make FD be the controlling terminal.
This function is called for `ioctl (fd, TCIOSCTTY)'.  */
 
-static int
-tiocsctty (int fd,
-  int request) /* Always TIOCSCTTY.  */
+static error_t
+tiocsctty_internal (io_t port, io_t ctty)
 {
   mach_port_t cttyid;
   error_t err;
 
-  /* Get FD's cttyid port, unless it is already ours.  */
-  err = HURD_DPORT_USE (fd, ctty != MACH_PORT_NULL ? EADDRINUSE :
-   __term_getctty (port, &cttyid));
-  if (err == EADDRINUSE)
+  if (ctty != MACH_PORT_NULL)
 /* FD is already the ctty.  Nothing to do.  */
 return 0;
-  else if (err)
-return __hurd_fail (err);
+
+  /* Get FD's cttyid port.  */
+  err =__term_getctty (port, &cttyid);
+  if (err)
+return err;
 
   /* Change the terminal's pgrp to ours.  */
-  err = HURD_DPORT_USE (fd, __tioctl_tiocspgrp (port, _hurd_pgrp));
+  err = __tioctl_tiocspgrp (port, _hurd_pgrp);
   if (err)
-return __hurd_fail (err);
+return err;
 
   /* Make it our own.  */
   install_ctty (cttyid);
 
   return 0;
 }
+
+static int
+tiocsctty (int fd,
+  int request) /* Always TIOCSCTTY.  */
+{
+  error_t err;
+
+  err = HURD_DPORT_USE (fd, tiocsctty_internal (port, ctty));
+  return __hurd_fail (err);
+}
 _HURD_HANDLE_IOCTL (tiocsctty, TIOCSCTTY);
 
 /* Dissociate from the controlling terminal.  */
-- 
1.6.3.3





[PATCH] Reopen file descriptor on lookup

2009-08-26 Thread Carl Fredrik Hammar
* hurd/lookup-retry.c (__hurd_file_name_lookup_retry) :
Reopen file descriptor before returning it.
---
Hi,

Another stand-alone patch, but this one is new.  It fixes what I think is
a bug in glibc.

The problem is that when opening file descriptors using FS_MAGIC_RETRY's
`fd/*' syntax, the descriptor isn't actually reopened; instead it acts as
a `dup'.  That is, it isn't possible to change the open mode, and the file
cursor is shared between the new and old file descriptors.

This could have been intentional; however, reopening seems much more
useful.  It is definitely odd that `open ("/dev/fd/4", O_READ)' can
result in an unreadable file descriptor.  In addition, this change makes
the Hurd consistent with Linux on this subject.

The behavior can easily be tested from the command line:

  echo "Hello world!" > foo
  cat /dev/fd/3 3>> foo

Which currently results in  ``cat: /dev/fd/3: Bad file descriptor'', but
will result in ``Hello world!'' with my fix (and on Linux).

Regards,
  Fredrik
---
 hurd/lookup-retry.c |   15 +++
 1 files changed, 7 insertions(+), 8 deletions(-)

diff --git a/hurd/lookup-retry.c b/hurd/lookup-retry.c
index 96968f8..ce9eaf0 100644
--- a/hurd/lookup-retry.c
+++ b/hurd/lookup-retry.c
@@ -221,15 +221,14 @@ __hurd_file_name_lookup_retry (error_t (*use_init_port)
  errno = save;
  if (err)
return err;
- if (*end == '\0')
-   return 0;
+ /* Do a normal retry on the remaining components,
+or reopen the descriptor.  */
+ if (*end != '\0')
+   file_name = end + 1; /* Skip the slash.  */
  else
-   {
- /* Do a normal retry on the remaining components.  */
- startdir = *result;
- file_name = end + 1; /* Skip the slash.  */
- break;
-   }
+   file_name = end;
+ startdir = *result;
+ break;
}
  else
goto bad_magic;
-- 
1.6.3.3





[PATCH 0/2] Ioctl handler protocol patches

2009-08-26 Thread Carl Fredrik Hammar
Hi,

this patch series adds a MIG interface for getting an ioctl handler module
associated with an io object.  The first patch adds a plain and insecure
protocol, while the second does reverse authentication to establish the
identity of the module provider.

Regards,
  Fredrik

Carl Fredrik Hammar (2):
  Add ioctl-handler interface
  Reverse authenticating ioctl-handler protocol

 hurd/ioctl_handler.defs   |   68 +
 hurd/ioctl_handler_reply.defs |   46 +++
 hurd/subsystems   |1 +
 3 files changed, 115 insertions(+), 0 deletions(-)
 create mode 100644 hurd/ioctl_handler.defs
 create mode 100644 hurd/ioctl_handler_reply.defs





[PATCH 1/2] Add ioctl-handler interface

2009-08-26 Thread Carl Fredrik Hammar
* hurd/ioctl_handler.defs: New file.
* hurd/subsystems: Add ioctl_handler.
---
 hurd/ioctl_handler.defs |   35 +++
 hurd/subsystems |1 +
 2 files changed, 36 insertions(+), 0 deletions(-)
 create mode 100644 hurd/ioctl_handler.defs

diff --git a/hurd/ioctl_handler.defs b/hurd/ioctl_handler.defs
new file mode 100644
index 000..cd59a16
--- /dev/null
+++ b/hurd/ioctl_handler.defs
@@ -0,0 +1,35 @@
+/* Protocol for server provided ioctl handler.
+
+   Written by Carl Fredrik Hammar .
+
+   This file is part of the GNU Hurd.
+
+   Copyright (C) 2009 Free Software Foundation, Inc.
+
+   The GNU Hurd is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 2 of the License, or
+   (at your option) any later version.
+
+   The GNU Hurd is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License along
+   with the GNU Hurd; see the file COPYING.  If not, write to the Free
+   Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
+   MA 02110-1301 USA.  */
+
+subsystem ioctl_handler 39000;
+
+#ifdef IOCTL_HANDLER_IMPORTS
+IOCTL_HANDLER_IMPORTS
+#endif
+
+#include 
+
+routine ioctl_handler_get (
+   io: io_t;
+   RPT
+   out handlers: io_t);
diff --git a/hurd/subsystems b/hurd/subsystems
index c05895c..93abe7a 100644
--- a/hurd/subsystems
+++ b/hurd/subsystems
@@ -36,6 +36,7 @@ tape  35000   Special control operations for magtapes
 login  36000   Database of logged-in users
 pfinet 37000   Internet configuration calls
 password   38000   Password checker
+ioctl_handler  39000   Server provided ioctl handler
   10- First subsystem of ioctl class 'f' (lowest class)
 tioctl156000   Ioctl class 't' (terminals)
 tioctl156200 (continued)
-- 
1.6.3.3





[PATCH 2/2] Reverse authenticating ioctl-handler protocol

2009-08-26 Thread Carl Fredrik Hammar
* hurd/ioctl_handler.defs (ioctl_handler_get): Remove routine.
(ioctl_handler_request): New routine.
(ioctl_handler_reply): Allocate space for this routine.
* hurd/ioctl_handler_reply.defs: New file.
---
 hurd/ioctl_handler.defs   |   37 +++-
 hurd/ioctl_handler_reply.defs |   46 +
 2 files changed, 81 insertions(+), 2 deletions(-)
 create mode 100644 hurd/ioctl_handler_reply.defs

diff --git a/hurd/ioctl_handler.defs b/hurd/ioctl_handler.defs
index cd59a16..2930ea6 100644
--- a/hurd/ioctl_handler.defs
+++ b/hurd/ioctl_handler.defs
@@ -29,7 +29,40 @@ IOCTL_HANDLER_IMPORTS
 
 #include 
 
-routine ioctl_handler_get (
+/* The protocol specified in this file and its server-side equivalent,
+   , is used to securely obtain ioctl
+   handler code that is specific to an io object.  It is used as follows:
+
+* The client sends an `ioctl_handler_request' to the server,
+  with a rendezvous port.
+
+* The server sends an `ioctl_handler_acknowledge' in reply, this
+  is needed so that the client won't wait indefinitely for
+  `auth_server_authenticate' to return if the server does not support
+  this protocol.
+
+* The client sends an `auth_server_authenticate' with the rendezvous
+  port and a reply port to the auth server.  (Note the reversal of
+  the roles of client and server from the normal auth protocol.)
+
+* The server sends an `auth_user_authenticate' with the rendezvous
+  port to the auth server.
+
+* The auth server matches up the requests using the rendezvous port,
+  and returns the reply port to the server and the server's ID block
+  to the client.
+
+* The server sends a port to a file that can be opened with `dlopen'
+  and exports an`ioctl_handler_t' typed function named
+  `hurd_ioctl_handler'.
+
+* The client can now use the ID block to determine whether it can
+  trust the server, e.g. if the server is root or the same user,
+  which is the policy used by `ioctl' in glibc.  */
+
+routine ioctl_handler_request (
io: io_t;
RPT
-   out handlers: io_t);
+   rendezvous: mach_port_send_t);
+
+skip; /* Space for ioctl_handler_reply.  */
diff --git a/hurd/ioctl_handler_reply.defs b/hurd/ioctl_handler_reply.defs
new file mode 100644
index 000..af8595b
--- /dev/null
+++ b/hurd/ioctl_handler_reply.defs
@@ -0,0 +1,46 @@
+/* Replies to ioctl_handler interface.
+
+   Written by Carl Fredrik Hammar .
+
+   This file is part of the GNU Hurd.
+
+   Copyright (C) 2009 Free Software Foundation, Inc.
+
+   The GNU Hurd is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 2 of the License, or
+   (at your option) any later version.
+
+   The GNU Hurd is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License along
+   with the GNU Hurd; see the file COPYING.  If not, write to the Free
+   Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
+   MA 02110-1301 USA.  */
+
+subsystem ioctl_handler 39100;  /* Must be ioctl_handler + 100.  */
+
+#ifdef IOCTL_HANDLER_IMPORTS
+IOCTL_HANDLER_IMPORTS
+#endif
+
+#include 
+
+type reply_port_t = polymorphic | MACH_MSG_TYPE_PORT_SEND_ONCE
+   ctype: mach_port_t;
+
+/* See  on how to use these routines.  */
+
+simpleroutine ioctl_handler_acknowledge (
+   reply_port: reply_port_t;
+   RETURN_CODE_ARG
+);
+
+simpleroutine ioctl_handler_reply (
+   reply_port: mach_port_move_send_t;
+   RETURN_CODE_ARG;
+   handle: mach_port_send_t
+);
-- 
1.6.3.3





[PATCH 0/3] Use server provided ioctl-handler

2009-08-26 Thread Carl Fredrik Hammar
Hi,

Patches that make glibc use the new ioctl_handler protocol.
They should be applied in parallel with the protocol patches; the first
two patches depend on the first protocol patch, and the third one depends
on the second protocol patch.  In addition, all these patches depend on
the three cleanup patches I sent earlier.

There are some minor fixes in the first two patches with respect to the
earlier versions I sent.  Most notably, I now rely on `dlopen' doing
reauthentication when it reopens the ioctl-handler module file descriptor.
I have changed identifiers and file names containing `ioctl_handlers' to
`ioctl_handler'.  In addition there are some fixes I have not bothered
to track, but I don't think anybody has reviewed my earlier patches
enough to be interested in listing those anyway.

Regards,
  Fredrik

Carl Fredrik Hammar (3):
  Reload fd ioctl handler on each call to ioctl
  Save handlers between calls to ioctl
  Use reverse authenticating ioctl-handler protocal

 hurd/Makefile |6 +-
 hurd/dtable.c |8 +-
 hurd/fd-close.c   |   13 ++-
 hurd/fd-ioctl-call.c  |  327 +
 hurd/fd-ioctl-cleanup.c   |   31 +
 hurd/hurd/fd.h|   45 ++-
 hurd/hurd/ioctl.h |   10 +-
 hurd/hurdioctl.c  |  143 +++-
 hurd/new-fd.c |7 +-
 hurd/port2fd.c|   11 ++-
 sysdeps/mach/hurd/ioctl.c |   51 +++-
 11 files changed, 603 insertions(+), 49 deletions(-)
 create mode 100644 hurd/fd-ioctl-call.c
 create mode 100644 hurd/fd-ioctl-cleanup.c





[PATCH 1/3] Reload fd ioctl handler on each call to ioctl

2009-08-26 Thread Carl Fredrik Hammar
* hurd/hurdioctl.c (_hurd_dummy_ioctl_handler): New function.
* hurd/hurd/ioctl.h (_hurd_dummy_ioctl_handler): Likewise.
* hurd/fd-ioctl-call.c: New file.
* hurd/hurd/fd.h: Update copyright years.
(_hurd_fd_call_ioctl_handler): New function declaration.
* hurd/Makefile: Update copyright years.
(user-interfaces): Add `ioctl_handler'.
(dtable): Add `fd-ioctl-call'.
* sysdeps/mach/hurd/ioctl.c: Update copyright years.
(__ioctl): Call fd ioctl handler.
---
 hurd/Makefile |7 ++-
 hurd/fd-ioctl-call.c  |   90 +
 hurd/hurd/fd.h|6 ++-
 hurd/hurd/ioctl.h |5 ++
 hurd/hurdioctl.c  |9 -
 sysdeps/mach/hurd/ioctl.c |   12 +-
 6 files changed, 123 insertions(+), 6 deletions(-)
 create mode 100644 hurd/fd-ioctl-call.c

diff --git a/hurd/Makefile b/hurd/Makefile
index ff6b7cb..4ad5128 100644
--- a/hurd/Makefile
+++ b/hurd/Makefile
@@ -1,4 +1,4 @@
-# Copyright (C) 1991,92,93,94,95,96,97,98,99,2001,2002,2004,2006
+# Copyright (C) 1991,92,93,94,95,96,97,98,99,2001,2002,2004,2006,2009
 #  Free Software Foundation, Inc.
 # This file is part of the GNU C Library.
 
@@ -40,7 +40,7 @@ user-interfaces   := $(addprefix hurd/,\
   msg msg_reply msg_request \
   exec exec_startup crash interrupt \
   fs fsys io term tioctl socket ifsock \
-  login password pfinet \
+  login password pfinet ioctl_handler \
   )
 server-interfaces  := hurd/msg faultexc
 
@@ -67,7 +67,8 @@ sig   = hurdsig hurdfault siginfo hurd-raise preempt-sig \
  thread-self thread-cancel intr-msg catch-signal
 dtable = dtable port2fd new-fd alloc-fd intern-fd \
  getdport openport \
- fd-close fd-read fd-write hurdioctl ctty-input ctty-output
+ fd-close fd-read fd-write hurdioctl ctty-input ctty-output \
+ fd-ioctl-call
 inlines = $(inline-headers:%.h=%-inlines)
 distribute = hurdstartup.h hurdfault.h hurdhost.h sysvshm.h \
 faultexc.defs intr-rpc.defs intr-rpc.h intr-msg.h Notes
diff --git a/hurd/fd-ioctl-call.c b/hurd/fd-ioctl-call.c
new file mode 100644
index 000..c5a41e8
--- /dev/null
+++ b/hurd/fd-ioctl-call.c
@@ -0,0 +1,90 @@
+/* Call descriptors ioctl handler.
+   Copyright (C) 2009 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, write to the Free
+   Software Foundation, Inc., 51 Franklin Street, Fifth Floor,
+   Boston, MA 02110-1301, USA.  */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+
+/* Get PORT's ioctl handler module and load it, returning the linker map
+   in MAP as returned by `dlopen'.  */
+
+static error_t
+load_ioctl_handler (io_t port, void **map)
+{
+  io_t handler;
+  error_t err;
+
+  err = __ioctl_handler_get (port, &handler);
+  if (!err)
+{
+  int fd = _hurd_intern_fd (handler, 0, 1);  /* Consumes HANDLER.  */
+  if (fd == -1)
+   err = errno;
+  else
+   {
+ char *name;
+ err = __asprintf (&name, "/dev/fd/%d", fd);
+ if (err == -1)
+   err = errno;
+ else
+   {
+ *map = __libc_dlopen (name);
+ free (name);
+ err = 0;
+   }
+ __close (fd);
+   }
+}
+
+  return err;
+}
+
+
+/* Call D's ioctl handler, loading it from the underlying port if
+   necessary.  Arguments are the same as ioctl handlers.  */
+
+int
+_hurd_fd_call_ioctl_handler (int fd, int request, void *arg)
+{
+  ioctl_handler_t ioctl_handler;
+  void *ioctl_handler_map;
+  int result;
+  error_t err;
+
+  /* Avoid spurious "may be used uninitialized" warning.  */
+  ioctl_handler_map = NULL;
+
+  err = HURD_DPORT_USE (fd, load_ioctl_handler (port, &ioctl_handler_map));
+  if (!err && ioctl_handler_map)
+ioctl_handler = __libc_dlsym (ioctl_handler_map, "hurd_ioctl_handler");
+  if (err || !ioctl_handler_map || !ioctl_handler)
+ioctl_handler = _hurd_dummy_ioctl_handler;
+
+  result = (*ioctl_handler) (fd, request, arg);
+
+  if (ioctl_handler_map)
+__libc_dlclose (ioctl_handler_map);
+
+  return result;
+}
diff --git a/hurd/hurd/fd.h b/

[PATCH 2/3] Save handlers between calls to ioctl

2009-08-26 Thread Carl Fredrik Hammar
* hurd/hurd/ioctl.h (ioctl_handler_t): Move from here...
* hurd/hurd/fd.h (ioctl_handler_t): ...to here.
Change return type.
Add `d', `crit', and `result' arguments.
Change all callers and ioctl handlers.
(hurd_fd): Add ioctl handler members.
* hurd/fd-close.c: Update copyright years.
(_hurd_fd_close): Clear ioctl handler and deallocate linker map.
* hurd/fd-ioctl-call.c (_hurd_fd_call_ioctl_handler):
Change return type.
Add `d', `crit', and `result' arguments.
Handle descriptor locking.
Save ioctl handler between calls.
Change all callers.
* hurd/new-fd.c: Update copyright years.
(_hurd_new_fd): Initialize ioctl handler.
* hurd/port2fd.c: Update copyright years.
(_hurd_port2fd): Initialize ioctl handler.
* hurd/dtable.c: Update copyright years.
(init_dtable): Initialize ioctl handler.
* hurd/fd-ioctl-cleanup.c: New file.
* hurd/Makefile (dtable): Add `fd-ioctl-cleanup'.
* sysdeps/mach/hurd/ioctl.c (__ioctl): Lock descriptor.
---
 hurd/Makefile |2 +-
 hurd/dtable.c |8 ++-
 hurd/fd-close.c   |   13 -
 hurd/fd-ioctl-call.c  |  112 +++-
 hurd/fd-ioctl-cleanup.c   |   31 ++
 hurd/hurd/fd.h|   43 +-
 hurd/hurd/ioctl.h |7 +-
 hurd/hurdioctl.c  |  140 +
 hurd/new-fd.c |7 ++-
 hurd/port2fd.c|   11 +++-
 sysdeps/mach/hurd/ioctl.c |   43 --
 11 files changed, 349 insertions(+), 68 deletions(-)
 create mode 100644 hurd/fd-ioctl-cleanup.c

diff --git a/hurd/Makefile b/hurd/Makefile
index 4ad5128..768f93a 100644
--- a/hurd/Makefile
+++ b/hurd/Makefile
@@ -68,7 +68,7 @@ sig   = hurdsig hurdfault siginfo hurd-raise preempt-sig \
 dtable = dtable port2fd new-fd alloc-fd intern-fd \
  getdport openport \
  fd-close fd-read fd-write hurdioctl ctty-input ctty-output \
- fd-ioctl-call
+ fd-ioctl-call fd-ioctl-cleanup
 inlines = $(inline-headers:%.h=%-inlines)
 distribute = hurdstartup.h hurdfault.h hurdhost.h sysvshm.h \
 faultexc.defs intr-rpc.defs intr-rpc.h intr-msg.h Notes
diff --git a/hurd/dtable.c b/hurd/dtable.c
index 125345e..bf5a9c1 100644
--- a/hurd/dtable.c
+++ b/hurd/dtable.c
@@ -1,4 +1,5 @@
-/* Copyright (C) 1991,92,93,94,95,96,97,99 Free Software Foundation, Inc.
+/* Copyright (C) 1991,92,93,94,95,96,97,99,2009
+   Free Software Foundation, Inc.
This file is part of the GNU C Library.
 
The GNU C Library is free software; you can redistribute it and/or
@@ -70,6 +71,11 @@ init_dtable (void)
  _hurd_port_init (&new->port, MACH_PORT_NULL);
  _hurd_port_init (&new->ctty, MACH_PORT_NULL);
 
+ /* Initialize the ioctl handler.  */
+ new->ioctl_handler = NULL;
+ new->ioctl_handler_map = NULL;
+ new->ioctl_handler_users = NULL;
+
  /* Install the port in the descriptor.
 This sets up all the ctty magic.  */
  _hurd_port2fd (new, _hurd_init_dtable[i], 0);
diff --git a/hurd/fd-close.c b/hurd/fd-close.c
index f497d75..f3d0aa5 100644
--- a/hurd/fd-close.c
+++ b/hurd/fd-close.c
@@ -1,4 +1,4 @@
-/* Copyright (C) 1994, 1995, 1997 Free Software Foundation, Inc.
+/* Copyright (C) 1994, 1995, 1997, 2009  Free Software Foundation, Inc.
This file is part of the GNU C Library.
 
The GNU C Library is free software; you can redistribute it and/or
@@ -17,6 +17,7 @@
02111-1307 USA.  */
 
 #include 
+#include 
 
 error_t
 _hurd_fd_close (struct hurd_fd *fd)
@@ -33,10 +34,20 @@ _hurd_fd_close (struct hurd_fd *fd)
 }
   else
 {
+  /* Clear the descriptor's ioctl handler, and close its linker map.  */
+  if (fd->ioctl_handler_map != NULL
+ && _hurd_userlink_clear (&fd->ioctl_handler_users))
+   __libc_dlclose (fd->ioctl_handler_map);
+
+  fd->ioctl_handler = NULL;
+  fd->ioctl_handler_map = NULL;
+  fd->ioctl_handler_users = NULL;
+
   /* Clear the descriptor's port cells.
 This deallocates the ports if noone else is still using them.  */
   _hurd_port_set (&fd->ctty, MACH_PORT_NULL);
   _hurd_port_locked_set (&fd->port, MACH_PORT_NULL);
+
   err = 0;
 }
 
diff --git a/hurd/fd-ioctl-call.c b/hurd/fd-ioctl-call.c
index c5a41e8..873a5ca 100644
--- a/hurd/fd-ioctl-call.c
+++ b/hurd/fd-ioctl-call.c
@@ -61,30 +61,112 @@ load_ioctl_handler (io_t port, void **map)
 }
 
 
-/* Call D's ioctl handler, loading it from the underlying port if
-   necessary.  Arguments are the same as ioctl handlers.  */
+/* Load and install D's ioctl handler.  D should be locked and CRIT should
+   point to a critical section lock.  CRIT is unlocked whenever D is
+   unlocked and a new lock is returned in CRIT if D needs to be relocked.
+   D is unlocked while the handler is loaded.  If the underlying port
+   of D changes while it's unlocked the operation is retried with the
+   new port.  This is repeated until the port remains unchanged, or
+   if i

[PATCH 3/3] Use reverse authenticating ioctl-handler protocol

2009-08-26 Thread Carl Fredrik Hammar
* hurd/Makefile (interfaces): Add `ioctl_handler_reply'.
* hurd/fd-ioctl-call.c: Check that handlers are provided by the same user.
---
 hurd/Makefile|3 +-
 hurd/fd-ioctl-call.c |  157 +-
 2 files changed, 158 insertions(+), 2 deletions(-)

diff --git a/hurd/Makefile b/hurd/Makefile
index 768f93a..1286142 100644
--- a/hurd/Makefile
+++ b/hurd/Makefile
@@ -40,7 +40,8 @@ user-interfaces   := $(addprefix hurd/,\
   msg msg_reply msg_request \
   exec exec_startup crash interrupt \
   fs fsys io term tioctl socket ifsock \
-  login password pfinet ioctl_handler \
+  login password pfinet \
+  ioctl_handler ioctl_handler_reply \
   )
 server-interfaces  := hurd/msg faultexc
 
diff --git a/hurd/fd-ioctl-call.c b/hurd/fd-ioctl-call.c
index 873a5ca..9b727a6 100644
--- a/hurd/fd-ioctl-call.c
+++ b/hurd/fd-ioctl-call.c
@@ -24,6 +24,158 @@
 #include 
 #include 
 #include 
+#include 
+#include 
+
+
+/* Implement the client-side of the protocol described in
+   .  The resulting ioctl-handler module is
+   returned in FILE, and the ID block is returned in EUIDS, AUIDS, EGIDS,
+   and AGIDS.  */
+static error_t
+ioctl_handler_get (io_t io,
+  auth_t auth,
+  mach_port_t rendezvous,
+  mach_msg_type_name_t rendezvous_type,
+  file_t *file,
+  uid_t **euids, size_t *euids_len,
+  uid_t **auids, size_t *auids_len,
+  uid_t **egids, size_t *egids_len,
+  uid_t **agids, size_t *agids_len)
+{
+  struct {
+mach_msg_header_t head;
+mach_msg_type_t error_type;
+kern_return_t error;
+mach_msg_type_t file_type;
+file_t file;
+  } reply;
+  mach_port_t reply_port;
+  error_t err, msg_err;
+
+  reply_port = __mach_reply_port ();
+  if (reply_port == MACH_PORT_NULL)
+return KERN_RESOURCE_SHORTAGE;
+
+  err = __ioctl_handler_request (io, rendezvous, MACH_MSG_TYPE_MAKE_SEND);
+  if (!err)
+do
+  err = __auth_server_authenticate (auth,
+   rendezvous, rendezvous_type,
+   reply_port, MACH_MSG_TYPE_MAKE_SEND,
+   euids, euids_len,
+   auids, auids_len,
+   egids, egids_len,
+   agids, agids_len);
+while (err == EINTR);
+  if (err)
+{
+  __mach_port_destroy (__mach_task_self (), reply_port);
+  return err;
+}
+
+  if (!err)
+msg_err = __mach_msg (&reply.head, MACH_RCV_MSG | MACH_RCV_INTERRUPT,
+ 0, sizeof (reply), reply_port,
+ MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);
+
+  __mach_port_destroy (__mach_task_self (), reply_port);
+  if (err)
+return err;
+  else
+err = msg_err;
+
+  if (!err && reply.head.msgh_id == MACH_NOTIFY_SEND_ONCE)
+err = MIG_SERVER_DIED;
+
+  if (!err && reply.head.msgh_id != 39101)
+{
+  err = MIG_REPLY_MISMATCH;
+  __mach_msg_destroy (&reply.head);
+}
+
+  if (!err)
+{
+  if (reply.head.msgh_size != sizeof (reply)
+ || !(reply.head.msgh_bits & MACH_MSGH_BITS_COMPLEX)
+
+ || reply.error_type.msgt_name != MACH_MSG_TYPE_INTEGER_32
+ || reply.error_type.msgt_size != 32
+ || reply.error_type.msgt_number != 1
+ || reply.error_type.msgt_inline != TRUE
+ || reply.error_type.msgt_longform != FALSE
+ || reply.error_type.msgt_deallocate != FALSE
+
+ || reply.file_type.msgt_name != MACH_MSG_TYPE_PORT_SEND
+ || reply.file_type.msgt_size != 32
+ || reply.file_type.msgt_number != 1
+ || reply.file_type.msgt_inline != TRUE
+ || reply.file_type.msgt_longform != FALSE
+ || reply.file_type.msgt_deallocate != FALSE)
+   {
+ err = MIG_TYPE_ERROR;
+ __mach_msg_destroy (&reply.head);
+   }
+  else
+   {
+ err = reply.error;
+ *file = reply.file;
+   }
+}
+
+  if (err)
+{
+  __munmap (*euids, *euids_len * sizeof (uid_t));
+  __munmap (*auids, *auids_len * sizeof (uid_t));
+  __munmap (*egids, *egids_len * sizeof (uid_t));
+  __munmap (*agids, *agids_len * sizeof (uid_t));
+}
+
+  return err;
+}
+
+
+/* Get the ioctl-handler module from IO, and check that the provider
+   is the same user or root, otherwise return EACCES.  */
+static error_t
+ioctl_handler_checked_get (io_t io, file_t *file)
+{
+  auth_t auth;
+  mach_port_t rendezvous;
+  uid_t euid, *euids, *auids, *egids, *agids;
+  size_t euids_len, auids_len, egids_len, agids_len;
+  error_t err;
+  int i;
+
+

[PATCH 0/3] Test server provided ioctl-handler

2009-08-26 Thread Carl Fredrik Hammar
Hi,

Here come patches that provide tests for server-provided
ioctl-handlers.  These should be applied in parallel with the
glibc patches, to which they correspond one-to-one.

Since there aren't many test cases in the Hurd, I didn't have much
to go by in implementing the test.  Instead I just focused on getting
something working, but this should at least serve as a kernel for future
tests of the ioctl functionality.

Regards,
  Fredrik

Carl Fredrik Hammar (3):
  Test server provided ioctl-handler
  Update to reflect ioctl_handler_t change
  Test reverse authenticating ioctl-handler protocol





[PATCH 3/3] Test reverse authenticating ioctl-handler protocol

2009-08-26 Thread Carl Fredrik Hammar
* Makefile (ioctl_handler_MIGSFLAGS): New variable.
* ioctl-tests/qioctl.c (S_ioctl_handler_get): Remove deprecated routine.
(S_ioctl_handler_request): New function.
---
 ioctl-tests/Makefile |1 +
 ioctl-tests/qioctl.c |   29 -
 2 files changed, 25 insertions(+), 5 deletions(-)

diff --git a/ioctl-tests/Makefile b/ioctl-tests/Makefile
index a8333a7..a5df1d5 100644
--- a/ioctl-tests/Makefile
+++ b/ioctl-tests/Makefile
@@ -28,6 +28,7 @@ SRCS = qioctl.c qioctl-handler.c #test.c
 OBJS = qioctl.o qioctl-handler.o qioctlServer.o ioctl_handlerServer.o #test.o
 target = qioctl #test
 HURDLIBS = trivfs fshelp
+ioctl_handler-MIGSFLAGS = -DREPLY_PORTS
 
 include ../Makeconf
 
diff --git a/ioctl-tests/qioctl.c b/ioctl-tests/qioctl.c
index cd6389c..5967bf9 100644
--- a/ioctl-tests/qioctl.c
+++ b/ioctl-tests/qioctl.c
@@ -29,6 +29,7 @@
 
 #include "qioctl_S.h"
 #include "ioctl_handler_S.h"
+#include "ioctl_handler_reply_U.h"
 
 
 const char *argp_program_version = STANDARD_HURD_VERSION (qioctl);
@@ -93,23 +94,41 @@ S_qnormal (io_t io)
 /* Open and return HANDLER_FILE_NAME as described in
.  */
 error_t
-S_ioctl_handler_get (io_t io, io_t *handler)
+S_ioctl_handler_request (io_t io,
+mach_port_t request_reply_port,
+mach_msg_type_name_t request_reply_port_type,
+mach_port_t rendezvous)
 {
-  file_t handler_authed;
+  auth_t auth;
+  mach_port_t reply_port;
+  file_t handler_authed, handler;
   error_t err;
 
-  err = 0;
+  err = ioctl_handler_acknowledge (request_reply_port,
+  request_reply_port_type, 0);
+  if (err)
+return MIG_NO_REPLY;
+
+  auth = getauth ();
+  err = auth_user_authenticate (auth, rendezvous, MACH_MSG_TYPE_COPY_SEND,
+&reply_port);
+  mach_port_deallocate (mach_task_self (), auth);
+  mach_port_deallocate (mach_task_self (), rendezvous);
+  if (err)
+return MIG_NO_REPLY;
+
   handler_authed = file_name_lookup (handler_file_name, 0, 0);
   if (handler_authed == MACH_PORT_NULL)
 err = errno;
 
   if (!err)
 {
-  err = io_restrict_auth (handler_authed, handler, 0, 0, 0, 0);
+  err = io_restrict_auth (handler_authed, &handler, 0, 0, 0, 0);
   mach_port_deallocate (mach_task_self (), handler_authed);
 }
 
-  return err;
+  ioctl_handler_reply (reply_port, err, handler, MACH_MSG_TYPE_MOVE_SEND);
+  return MIG_NO_REPLY;
 }
 
 static int
-- 
1.6.3.3





[PATCH 1/3] Test server provided ioctl-handler

2009-08-26 Thread Carl Fredrik Hammar
* (ioctl-tests): New subdirectory.
* (ioctl-tests/Makefile)
(ioctl-tests/qio.h)
(ioctl-tests/qioctl.c)
(ioctl-tests/qioctl.defs)
(ioctl-tests/qioctl-handler.c)
(ioctl-tests/test.c): New files.
---
 Makefile |2 +-
 ioctl-tests/Makefile |   45 +
 ioctl-tests/qio.h|   30 
 ioctl-tests/qioctl-handler.c |   15 
 ioctl-tests/qioctl.c |  150 ++
 ioctl-tests/qioctl.defs  |   36 ++
 ioctl-tests/test.c   |   92 ++
 7 files changed, 369 insertions(+), 1 deletions(-)
 create mode 100644 ioctl-tests/Makefile
 create mode 100644 ioctl-tests/qio.h
 create mode 100644 ioctl-tests/qioctl-handler.c
 create mode 100644 ioctl-tests/qioctl.c
 create mode 100644 ioctl-tests/qioctl.defs
 create mode 100644 ioctl-tests/test.c

diff --git a/Makefile b/Makefile
index 6d7e688..9a9b265 100644
--- a/Makefile
+++ b/Makefile
@@ -41,7 +41,7 @@ prog-subdirs = auth proc exec init term \
   login daemons nfsd boot console \
   hostmux usermux ftpfs trans \
   console-client utils sutils ufs-fsck ufs-utils \
-  benchmarks fstests
+  benchmarks fstests ioctl-tests
 
 # Other directories
 other-subdirs = hurd doc config release include
diff --git a/ioctl-tests/Makefile b/ioctl-tests/Makefile
new file mode 100644
index 000..a8333a7
--- /dev/null
+++ b/ioctl-tests/Makefile
@@ -0,0 +1,45 @@
+# Makefile for ioctl tests.
+#
+# Copyright (C) 2009 Free Software Foundation, Inc.
+#
+# Written by Carl Fredrik Hammar .
+#
+# This file is part of the GNU Hurd.
+#
+# The GNU Hurd is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# The GNU Hurd is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License along
+# with the GNU Hurd; see the file COPYING.  If not, write to the Free
+# Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
+# MA 02110-1301 USA.
+
+dir := ioctl-tests
+makemode := server
+
+SRCS = qioctl.c qioctl-handler.c #test.c
+OBJS = qioctl.o qioctl-handler.o qioctlServer.o ioctl_handlerServer.o #test.o
+target = qioctl #test
+HURDLIBS = trivfs fshelp
+
+include ../Makeconf
+
+%.so.$(hurd-version): %_pic.o
+   $(CC) -shared -Wl,-soname=$@ -o $@ \
+ $(rpath) $(CFLAGS) $(LDFLAGS) $($*.so-LDFLAGS) $^
+
+cleantarg += qioctl.server qioctl-handler.so.$(hurd-version)
+
+qioctl.server: qioctl qioctl-handler.so.$(hurd-version)
+   settrans -acg $@ $^
+
+check: test qioctl.server
+#  Prefix with `./' if not absolute path.
+   $(if $(filter /%,$^),$^,./$^)
diff --git a/ioctl-tests/qio.h b/ioctl-tests/qio.h
new file mode 100644
index 000..2deb213
--- /dev/null
+++ b/ioctl-tests/qio.h
@@ -0,0 +1,30 @@
+/* Dummy ioctls.
+
+   Copyright (C) 2009 Free Software Foundation, Inc.
+
+   Written by Carl Fredrik Hammar .
+
+   This file is part of the GNU Hurd.
+
+   The GNU Hurd is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 2 of the License, or
+   (at your option) any later version.
+
+   The GNU Hurd is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License along
+   with the GNU Hurd; see the file COPYING.  If not, write to the Free
+   Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
+   MA 02110-1301 USA.  */
+
+#include 
+
+/* An ioctl to be handled by an RPC.  */
+#define QNORMAL _IO('q', 0)
+
+/* An ioctl to be handled by a server provided ioctl-handler.  */
+#define QOVERRIDE _IO('q', 1)
diff --git a/ioctl-tests/qioctl-handler.c b/ioctl-tests/qioctl-handler.c
new file mode 100644
index 000..a5b2503
--- /dev/null
+++ b/ioctl-tests/qioctl-handler.c
@@ -0,0 +1,15 @@
+#include 
+#include "qio.h"
+
+/* Handle the QOVERRIDE ioctl.  */
+int
+hurd_ioctl_handler (int fd, int request)
+{
+  if (request == QOVERRIDE)
+return 0;
+  else
+{
+  errno = ENOTTY;
+  return -1;
+}
+}
diff --git a/ioctl-tests/qioctl.c b/ioctl-tests/qioctl.c
new file mode 100644
index 000..cd6389c
--- /dev/null
+++ b/ioctl-tests/qioctl.c
@@ -0,0 +1,150 @@
+/* Dummy ioctl server.
+
+   Copyright (C) 2009 Free Software Foundation, Inc.
+
+   Written 

[PATCH 2/3] Update to reflect ioctl_handler_t change

2009-08-26 Thread Carl Fredrik Hammar
* ioctl-tests/qioctl-handler.c (hurd_ioctl_handler):
Update to reflect `ioctl_handler_t' change.
---
 ioctl-tests/qioctl-handler.c |   20 +++-
 1 files changed, 11 insertions(+), 9 deletions(-)

diff --git a/ioctl-tests/qioctl-handler.c b/ioctl-tests/qioctl-handler.c
index a5b2503..c56d0a6 100644
--- a/ioctl-tests/qioctl-handler.c
+++ b/ioctl-tests/qioctl-handler.c
@@ -1,15 +1,17 @@
+#include 
 #include 
 #include "qio.h"
 
 /* Handle the QOVERRIDE ioctl.  */
-int
-hurd_ioctl_handler (int fd, int request)
+error_t
+hurd_ioctl_handler (int fd, struct hurd_fd *d, void *crit,
+   int request, void *arg, int *result)
 {
-  if (request == QOVERRIDE)
-return 0;
-  else
-{
-  errno = ENOTTY;
-  return -1;
-}
+  if (request != QOVERRIDE)
+return ENOTTY;
+
+  __spin_unlock (&d->port.lock);
+  _hurd_critical_section_unlock (crit);
+  *result = 0;
+  return 0;
 }
-- 
1.6.3.3





Re: [PATCH] Apply pattern-matching immediately beneath the stow directory.

2009-09-08 Thread Carl Fredrik Hammar
Hi,

On Mon, Sep 07, 2009 at 11:24:48PM +0300, Sergiu Ivanov wrote:
> unionfs needs to explicitly enumerate the contents of a directory to
> do pattern matching in it.  I cannot envision a way to do
> multi-component pattern matching without iterating all subdirectories
> of stow/ .  Well, okay, there could be done some optimization (like
> counting the number of components in the pattern and not going deeper
> than that), but it does not change the concept considerably.

> BTW, the implied asterisk in the original implementation is achieved
> by explicitly going down to the second level under stow/ , i.e. it is
> hard-coded in the algorithm, not some patterns, filenames or anything
> else.  Since this implementation didn't care about multi-component
> pattern, I chose not to think of them either.
> 
> Do you have some general idea of how multi-component pattern matching
> could be implemented more efficiently than that?  Some vague pointer
> should suffice for me to adapt the idea to unionfs.

How about matching one component at a time?

For instance, given `*/*/*' you iterate through all files in the
current directory and find matches to `*/' (the slash filters out
non-directories).  For each match you recurse, making the match the
current directory and with the pattern with the tested component removed,
e.g. `*/*'.  The recursion continues until the pattern is static or
there are no matches.

(Clarification: by current directory I don't mean the process' CWD,
just the directory currently being iterated.)
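
In rough C, and purely as an illustration (the helper name is made up,
and error handling and corner cases such as trailing slashes are left
out), the recursion could look something like this:

  #define _GNU_SOURCE
  #include <dirent.h>
  #include <fnmatch.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  /* Match the first component of PATTERN against the entries of DIR and
     recurse into matching directories with the remaining components.
     Prints complete matches; a real implementation would collect them.  */
  static void
  match_components (const char *dir, const char *pattern)
  {
    const char *slash = strchr (pattern, '/');
    char *component = slash ? strndup (pattern, slash - pattern)
                            : strdup (pattern);
    DIR *d = opendir (dir);
    struct dirent *e;

    if (d != NULL && component != NULL)
      while ((e = readdir (d)) != NULL)
        {
          char *path;
          if (!strcmp (e->d_name, ".") || !strcmp (e->d_name, "..")
              || fnmatch (component, e->d_name, 0) != 0)
            continue;
          if (asprintf (&path, "%s/%s", dir, e->d_name) < 0)
            continue;
          if (slash == NULL)
            printf ("%s\n", path);              /* Pattern exhausted: a match.  */
          else
            match_components (path, slash + 1); /* Recurse; opendir failing on
                                                   non-directories weeds them out.  */
          free (path);
        }
    free (component);
    if (d != NULL)
      closedir (d);
  }

Calling match_components ("stow", "*/*/*") would then print every path
under stow/ that matches the pattern.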

Regards,
  Fredrik




Re: [PATCH] Apply pattern-matching immediately beneath the stow directory.

2009-09-09 Thread Carl Fredrik Hammar
Hi,

On Wed, Sep 09, 2009 at 12:28:28AM +0300, Sergiu Ivanov wrote:
> On Tue, Sep 08, 2009 at 01:55:41PM +0200, Carl Fredrik Hammar wrote:
> > On Mon, Sep 07, 2009 at 11:24:48PM +0300, Sergiu Ivanov wrote:
> >
> > > Do you have some general idea of how multi-component pattern matching
> > > could be implemented more efficiently than that?  Some vague pointer
> > > should suffice for me to adapt the idea to unionfs.
> > 
> > How about matching one component at a time?
> > 
> > For instance, given `*/*/*' you iterate through all files in the
> > current directory and find matches to `*/' (the slash filters out
> > non-directories).  For each match you recurse, making the match the
> > current directory and with the pattern with the tested component removed,
> > e.g. `*/*'.  The recursion continues until the pattern is static or
> > there are no matches.
> > 
> > (Clarification: by current directory I don't mean the process' CWD,
> > just the directory currently being iterated.)
> 
> That's exactly what I was talking about.  (Though I forgot to mention
> some details you have mentioned.)  I just wanted to say that it is not
> really efficient.
> 
> But, as I've said in another mail, I guess I should think about
> efficiency differently in this case.  My initial goal was to just
> modify the target of pattern-matching a bit.  I guess the extending
> the functionality would be okay.

This is a bit moot since using glob() seems like the way to go, but for
the record.

Your mail gave me the impression that you wanted to iterate through all
files in the directory tree, stopping no deeper than the number of
components in the pattern.

However, both this and my proposal (which is what you really intended) are
just as efficient as the original implementation.  Matching PATTERN
under all directories in stow is equivalent to matching `*/PATTERN'
in these implementations.  All proposed implementations would iterate
over the same number of directories and filter the same number of paths.

Regards,
  Fredrik




Re: nsmux Documentation

2009-10-02 Thread Carl Fredrik Hammar
Hi,

On Thu, Oct 01, 2009 at 07:52:57PM +0300, Sergiu Ivanov wrote:
> On Wed, Sep 30, 2009 at 09:45:32PM +0200, Arne Babenhauserheide wrote:
> > Am Mittwoch, 30. September 2009 18:36:34 schrieb Sergiu Ivanov:
> > > > It reads nice, but I miss an info: How can I activate nsmux, so I
> > > > can use the magic filenames?
> > > 
> > > Thank you for pointing out! :-) 
> > 
> > Can I also put it "on /"? 
> > That way I could activate it systemwide :) 
> 
> Yes, this is the long-term goal, though I definitely won't advise you
> trying this out ATM -- one of the most important issues is security,
> about which nsmux does nothing but standard procedures, but it is
> possible that something more is required.

A secure way to use it on the entire filesystem would be to make use of
the settrans -C flag to start a shell chrooted to nsmux without actually
setting it on /.  This way only programs started from the shell would be
affected.

That is something like:

  settrans -C bash -- / nsmux ...

(I didn't test it, it might be the other way around.)

Regards,
  Fredrik




Re: My absence at the tomorrow's (Oct 7) meeting

2009-10-06 Thread Carl Fredrik Hammar
Hi,

On Tue, Oct 06, 2009 at 08:17:02PM +0300, Sergiu Ivanov wrote:
> I'm afraid I won't be able to arrive at the Hurd meeting, because
> there will be a meeting regarding scholarships in leading European
> universities tomorrow, and I have a dream to manage to get into one of
> such universities.
> 
> Things may turn out differently and I may still arrive at the Hurd
> meeting (being late), but I won't say this is much probable.

Oh, that reminds me.  I will probably also be an hour or so late for
the meeting.

> The Conference at which I delivered my presentation about GNU/Hurd and
> unionmount was awful, there were hardy ten people in the room, each of
> them only interested in his own topic :-( Still, some preliminary
> measures took quite a time, so I haven't yet got to coding :-( I hope,
> though, I'll have some time later this week.

Sorry to hear that.  :-(

Hopefully you'll get more chances like this, and it'll go better next
time.

Regards,
  Fredrik




Re: My absence at the tomorrow's (Oct 7) meeting

2009-10-06 Thread Carl Fredrik Hammar
On Tue, Oct 06, 2009 at 11:00:41PM +0100, Davi Leal wrote:
> Carl Fredrik Hammar wrote:
> > > Things may turn out differently and I may still arrive at the Hurd
> > > meeting (being late), but I won't say this is much probable.
> >
> > Oh, that reminds me.  I will probably also be an hour or so late for
> > the meeting.
> 
> Why not just delay the Hurd meeting 1 hour, or better 2 hours?

I don't think there's any point in making it official this close to meeting
time.  If there aren't enough people to have a meeting, they'll do something
else and check back every once in a while on their own.




Re: [PATCH 2/3] Implement mountee startup.

2009-11-09 Thread Carl Fredrik Hammar
Hi,

On Thu, Nov 05, 2009 at 12:29:54PM +0100, olafbuddenha...@gmx.net wrote:
> 
> > > > > Why are you passing O_READ, anyways?...
> > > > 
> > > > The flags which I pass to start_mountee are used in opening the
> > > > port to the root node of the mountee.  (I'm sure you've noticed
> > > > this; I'm just re-stating it to avoid ambiguities).  Inside
> > > > unionfs, this port is used for lookups *only*, so O_READ should be
> > > > sufficient for any internal unionfs needs.  Ports to files
> > > > themselves are not proxied by unionfs (as the comment reads), so
> > > > the flags passed here don't influence that case.
> > > 
> > > Hm, but wouldn't unionfs still need write permissions to the
> > > directories for adding new entries, when not in readonly mode?...
> > 
> > Well, obviously, O_READ permission on a directory is sufficient to
> > create files in it.
> 
> Ah, interesting...
> 
> > I'm not sure whether this is a feature or a misbehaviour
> 
> I don't think it's a bug -- doesn't seem very likely that nobody would
> have noticed such a fundamental bug all this time...

I was about to say it's definitely a bug, but a quick look at open(2)
states that open() should fail with EISDIR if the open mode is write...
This suggests that adding entries depends on the permission bits
of the directory and the users and groups of the client.

How to properly verify whether a client has this access in
a proxy such as unionfs is an interesting question.
If run by root it could recreate whatever auth object
the client is using, but it's harder for a normal user.

Regards,
  Fredrik




Re: grub vs st_dev (aka fsid) / st_rdev

2009-11-10 Thread Carl Fredrik Hammar
Hi,

On Mon, Nov 09, 2009 at 10:47:08PM +0100, Samuel Thibault wrote:
> I can see two solutions:
> 
> - Either we align more on POSIX to manage to get the st_dev (aka
>   fsid) of filesystems equal to the the st_rdev of their underlying /dev
>   entries.  An easy way is to have storeios expose their own pid as
>   st_rdev, and have filesystems use the underlying storeio st_rdev for
>   their st_dev (aka fsid).  One issue is for the / ext2fs, since it
>   doesn't use a storeio, and a storeio could be started later.

This solution also won't work if storeio is passive, times out, and
is later restarted.  It would be nice if the fsid were actually stable
across translator restarts...

The ideal would be to derive it from the underlying Mach device somehow.
The question is how to derive it if there's some sort of transformation
involved, e.g. gzip stores, concatenated stores, unionfs, etc.  We'd need
to find some stable algorithm that can produce fairly unique numbers.
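
Purely as an illustration of what such an algorithm could look like (this
is not existing code, and FNV-1a is just an arbitrary, well-known choice):
folding the device name plus the type and offset of every store layer
through a fixed hash, in a fixed order, gives a number that survives
translator restarts.

  #include <stddef.h>
  #include <stdint.h>

  /* Fold LEN bytes of DATA into a running 32-bit FNV-1a hash.  */
  static uint32_t
  fsid_hash (uint32_t hash, const void *data, size_t len)
  {
    const unsigned char *p = data;
    size_t i;

    for (i = 0; i < len; i++)
      {
        hash ^= p[i];
        hash *= 16777619u;       /* 32-bit FNV prime.  */
      }
    return hash;
  }

  /* E.g., starting from the FNV offset basis:
       fsid = fsid_hash (2166136261u, "hd2", 3);
     and then folding in each transformation layer the same way.  */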

> - Or we make grub use a more hurdish interface, i.e.
>   file_get_storage_info, e.g. storeinfo -n / .
>   I have however observed a disturbing behavior:
> 
>   $ dd < /dev/zero > foo bs=1M count=1
>   $ /sbin/mke2fs -o hurd foo
>   $ settrans -c bar /hurd/ext2fs $PWD/foo
>   $ storeinfo foo/
>   device (0x200): hd2: 512: 8: 4096: 11848072+8
> 
>   It is indeed true that the file is actually stored in hd2, but before
>   that it's stored in the foo file and wouldn't be available by just
>   mounting hd2.

This is a feature, not a bug.  The store returned by file_get_storage_info
on a file in ext2fs is its underlying store with a range that specifies
which blocks the file is stored in.  This way, clients that load the
store can read directly from the underlying device, assuming that they
can actually open it.  As long as grub acknowledges the range it should
be fine.

I think it would be better if the range were encoded in a separate store
with the device as a child store.  This would more accurately depict the
situation, wouldn't noticeably affect performance, and we wouldn't
have this confusion.

Regards,
  Fredrik




Re: grub vs st_dev (aka fsid) / st_rdev

2009-11-10 Thread Carl Fredrik Hammar
On Tue, Nov 10, 2009 at 11:24:34AM +0100, Samuel Thibault wrote:
> Carl Fredrik Hammar, le Tue 10 Nov 2009 09:27:14 +0100, a écrit :
> > > - An easy way is to have storeios expose their own pid as
> > >   st_rdev, and have filesystems use the underlying storeio st_rdev for
> > >   their st_dev (aka fsid).  One issue is for the / ext2fs, since it
> > >   doesn't use a storeio, and a storeio could be started later.
> > 
> > This solution also won't work if storeio is passive, times out, and
> > is later restarted.
> 
> That's not a problem since in that case the FS above it will have to be
> restarted too. Note that a storeio can't time out while an FS is still
> running.

The FS can use file_get_storage_info and use the store directly,
after which it doesn't need storeio.  This is what ext2fs does.

> > The ideal would be to derive it from the underlying Mach device somehow.
> 
> Not all storage are Mach devices.

I assumed that was the case for the ones that are interesting for grub.

> > > - Or we make grub use a more hurdish interface, i.e.
> > >   file_get_storage_info, e.g. storeinfo -n / .
> > >   I have however observed a disturbing behavior:
> > > 
> > >   $ dd < /dev/zero > foo bs=1M count=1
> > >   $ /sbin/mke2fs -o hurd foo
> > >   $ settrans -c bar /hurd/ext2fs $PWD/foo
> > >   $ storeinfo foo/
> > >   device (0x200): hd2: 512: 8: 4096: 11848072+8
> > > 
> > >   It is indeed true that the file is actually stored in hd2, but before
> > >   that it's stored in the foo file and wouldn't be available by just
> > >   mounting hd2.
> > 
> > This is a feature, not a bug.
> 
> It is a bug to me: it should rather return a _file_ storage type, with
> the offsets etc. within the file. And then the caller can call storeinfo
> on the file itself (foo), etc.

You could get this behaviour by doing:

$ settrans -c bar /hurd/ext2fs file:$PWD/foo

But then the new ext2fs instance will also use normal file IO,
instead of using the Mach device directly.

> > The store returned by file_get_storage_info on a file in ext2fs is its
> > underlying store with a range that specifies which blocks the file is
> > stored in.  This way, clients that load the store can read directly
> > from the underlying device, assuming that they can actually open it.
> > As long as grub acknowledges the range it should be fine.
> 
> The problem is that Grub doesn't work that way: it just wants to know
> which device the volume comes from, it doesn't want the precise block,
> since it has its own ext2fs module, which allows to modify the files
> etc. without having to care about re-installing grub (i.e. not like e.g.
> lilo).

To see if a store is a real device and not a regular file, it can check
if the range covers the entire store.  I assume that Grub doesn't support
filesystems stored in files of other filesystems.

> > I think it would be better if the range was encoded in a seperate store
> > with the device as a child store.
> 
> As a parent store you mean?

This is perhaps more accurate, but in store terminology all dependencies
on other stores are called children.

Regards,
  Fredrik




Re: grub vs st_dev (aka fsid) / st_rdev

2009-11-10 Thread Carl Fredrik Hammar
On Tue, Nov 10, 2009 at 02:01:19PM +0100, Samuel Thibault wrote:
> > I assumed that was the case for the ones that are interesting for grub.
> 
> Yes, but then to realize unicity it's more difficult, as you have
> several sources of IDs. You can of course use a prefix to identify where
> it comes from etc, but it becomes clumsy.

Agreed.  With this in mind, perhaps using file_get_storage_info in Grub
(with checks that it's a proper device) is the way to go.

Regards,
  Fredrik




Re: My absence from the yesterday's (Nov 11) meeting

2009-11-11 Thread Carl Fredrik Hammar
Hi,

On Thu, Nov 12, 2009 at 08:35:45AM +0200, Sergiu Ivanov wrote:
> Hello,
> 
> I'm very sorry that I didn't arrive at the yesterday's (Nov 11)
> meeting.  Unfourtunately, I fell sick and, although I made several
> attempts, I didn't manage to put my hands on the computer :-(

Sorry to hear that, I hope you get well soon!  :-)

Regards,
  Fredrik




Re: website: background color in css

2009-11-12 Thread Carl Fredrik Hammar
Hi,

On Thu, Nov 12, 2009 at 04:03:10PM +0100, Arne Babenhauserheide wrote:
> 
> I'm currently browsing in "dark mode" (dark colors for my KDE 4), and 
> realized 
> that the website wasn't very readable with dark background color. 
> 
> I just fixed that by putting "background-color: white;" into the body tag in 
> local.css
> 
> So now our background is always white except if all CSS is ignored. 

Why wasn't it readable?  If it was because the font color was still black
(my first guess), then shouldn't we change that instead, so that the
font color is also the browser's default?

Regards,
  Fredrik




Re: website: background color in css

2009-11-13 Thread Carl Fredrik Hammar
On Thu, Nov 12, 2009 at 08:28:54PM +0100, Arne Babenhauserheide wrote:
> Am Donnerstag, 12. November 2009 19:42:11 schrieb Carl Fredrik Hammar:
> > Why wasn't it readable?  If it was because the font color was still black
> > (my first guess), then shouldn't we change that instead, so that the
> > font color is also the browser's default?
> 
> That's the other option :) 
> 
> It's simply a binary choice, but since I spottet not too few places in the 
> CSS 
> where the site uses fixed dark colors to mark special content, I thought that 
> forcing a white background would be cleaner. 

I see.  I guess in that case a white background is a reasonable fix.
Hmmm... this got me kind of curious, perhaps I'll take a whack at a more
proper solution later.

> PS: Am I right that this list is now the right one for website discussions?

I think that's safe to assume.

Regards,
  Fredrik




Re: Adding entries to a directory

2009-11-17 Thread Carl Fredrik Hammar
Hi,

On Tue, Nov 17, 2009 at 11:57:46AM +0200, Sergiu Ivanov wrote:
> On Mon, Nov 09, 2009 at 02:58:12PM +0100, Carl Fredrik Hammar wrote:
> > On Thu, Nov 05, 2009 at 12:29:54PM +0100, olafbuddenha...@gmx.net wrote:
> > > 
> > > > Well, obviously, O_READ permission on a directory is sufficient to
> > > > create files in it.
> > > 
> > > Ah, interesting...
> > > 
> > > > I'm not sure whether this is a feature or a misbehaviour
> > > 
> > > I don't think it's a bug -- doesn't seem very likely that nobody would
> > > have noticed such a fundamental bug all this time...
> > 
> > I was about to say it's definitaly a bug, but a quick look in open(2)
> > states that open() should fail with EISDIR if open mode is write...
> > This suggests that adding entries depend on the permission bits
> > of the directory and the users and grougs of the client.
> 
> Thank you for the investigation! :-) It didn't occur to me to look
> into manpages first :-(

No biggie, it was buried deep inside the man-page.  It was pure chance
that I noticed it at all.

> > How to properly verify whether a client has this access in
> > a proxy such as unionfs is an interesting question.
> > If run by root it could recreate whatever auth object
> > the client is using, but its harder for a normal user.
> 
> Generally, unionfs checks permissions whenever it is asked to carry
> out some operation.  Similarly, when it is asked to create a new entry
> under a directory, it first checks the user's permissions.

Ah, yes.  That's what I thought.  Perhaps I should've explained what I
meant by ``properly'', which I left out in this little side note.

The problem with relying on file permissions is that it is only one
of several possible ways to specify permissions.  For instance, ACLs
(Access Control List) can offer a more fine grained control, where
permissions can be specified for individual users.

Currently only regular file permissions are implemented by filesystems
in the Hurd, but it would be nice to keep open the possibility of
implementing such alternatives in the future.  To ensure this we shouldn't
rely on file permissions being correct.

(But I might be missing something, perhaps POSIX says that regular file
permissions should always be correct or something.)

> Although I fail to realize how unionfs would help root to recreate any
> auth object used by a client, I'd believe that root could recreate any
> auth object without the aid of unionfs, too :-)

My idea was that a unionfs *run* by root can recreate any auth object
that the client has and then authenticate with it against the unioned
directories.

If run by any other user then it can only recreate the intersection of
credentials between unionfs and the client.  This isn't ideal, but it
does ensure that unionfs doesn't accidentally grant the client any new
permissions by mistake.

But just to clarify, I don't really propose that you implement this in
unionfs.  I think this would affect other filesystems as well; switching
over the Hurd to such a policy should be treated as a separate project.

Regards,
  Fredrik




Re: Adding entries to a directory

2009-11-17 Thread Carl Fredrik Hammar
Hi,

On Tue, Nov 17, 2009 at 06:49:24PM +0200, Sergiu Ivanov wrote:
> On Tue, Nov 17, 2009 at 01:15:59PM +0100, Carl Fredrik Hammar wrote:
> > On Tue, Nov 17, 2009 at 11:57:46AM +0200, Sergiu Ivanov wrote:
> > > On Mon, Nov 09, 2009 at 02:58:12PM +0100, Carl Fredrik Hammar wrote:
> > > > On Thu, Nov 05, 2009 at 12:29:54PM +0100, olafbuddenha...@gmx.net wrote:
> > > > > 
> > > > > > Well, obviously, O_READ permission on a directory is sufficient to
> > > > > > create files in it.
> > 
> > > > How to properly verify whether a client has this access in
> > > > a proxy such as unionfs is an interesting question.
> > > > If run by root it could recreate whatever auth object
> > > > the client is using, but its harder for a normal user.
> > > 
> > > Generally, unionfs checks permissions whenever it is asked to carry
> > > out some operation.  Similarly, when it is asked to create a new entry
> > > under a directory, it first checks the user's permissions.
> > 
> > Ah, yes.  That's what I thought.  Perhaps I should've explained what I
> > meant by ``properly'', which I left out in this little side note.
> > 
> > The problem with relying on file permissions is that it is only one
> > of several possible ways to specify permissions.  For instance, ACLs
> > (Access Control List) can offer a more fine grained control, where
> > permissions can be specified for individual users.
> > 
> > Now only regular file permissions are currently implemented by filesystems
> > in the Hurd, but it would be nice to have the possibility to implement
> > such alternatives in the future.  To ensure this we shouldn't rely on
> > file permissions being correct.
> 
> I see.  However, if I understand you correctly, you are talking about
> a totally different implementation of filesystem authentication
> mechanism.  In case such mechanism is ever implemented, I believe that
> the permissions check in unionfs can be pretty easily adapted to the
> new way: unionfs relies heavily on libfshelp, and the corresponding
> permission check functions could be modified to work differently.

No, I think you're confusing authentication and access control.
Authentication is the method used to establish the identity of a client,
i.e. which user(s) and groups it is run by.  Access control is deciding
which permissions the client has, typically based on which identity
it has.

Authentication must be the same throughout the system to be useful.
Access control is all up to the individual servers, and can be different
throughout the system.  It is easy to imagine new types of access control,
e.g. owner permissions for several users, or even a time lock that denies
access after a certain date.  There are many possibilities which I think
we should leave open.
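
As a made-up example (the function and the time-lock parameter are
hypothetical, and the headers are from memory; fshelp_access is the usual
libfshelp permission-bit check), a server could combine the regular
permission bits with its own extra policy like this:

  #include <errno.h>
  #include <time.h>
  #include <sys/stat.h>
  #include <hurd/iohelp.h>   /* struct iouser.  */
  #include <hurd/fshelp.h>   /* fshelp_access.  */

  /* Allow USER the operations in OP on a node with status ST, but deny
     everything after CUTOFF.  The time lock is purely illustrative;
     the point is only that such a policy is local to the server.  */
  static error_t
  check_access_with_time_lock (struct stat *st, int op,
                               struct iouser *user, time_t cutoff)
  {
    if (time (NULL) > cutoff)
      return EACCES;                      /* Time lock has expired.  */
    return fshelp_access (st, op, user);  /* Ordinary permission bits.  */
  }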

> > (But I might be missing something, perhaps POSIX says that regular file
> > permissions should always be correct or something.)
> 
> Hm, why could POSIX say that regular file permissions may *not* be
> correct? :-) I may be missing something, but it's hard for me to
> imagine that POSIX file permissions were introduced with the thought
> in mind that they may be wrong.

``Correct'' wasn't really the right word; ``incomplete'' is more
appropriate.  It seems that file permission bits must always be present
in some form, but that additional file access control mechanisms may
further restrict permissions:

http://www.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap04.html#tag_04_04

If this is the case then unionfs may inadvertently grant a permission
that the unioned directory would've denied due to such an additional mechanism.

> > > Although I fail to realize how unionfs would help root to recreate any
> > > auth object used by a client, I'd believe that root could recreate any
> > > auth object without the aid of unionfs, too :-)
> > 
> > My idea was that a unionfs *run* by root can recreate any auth object
> > that the client has and then authenticate with it against the unioned
> > directories.
> > 
> > If run by any other user then it can only recreate the intersection of
> > credentials between unionfs and the client.  This isn't ideal, but it
> > does ensure that unionfs doesn't accidentally grant the client any new
> > permissions by mistake.
> 
> From the theoretical point of view, there isn't really a problem,
> since unionfs should always grant permissions which are intersection
> between its permissions and the permissions of the calling client.
> Even if unionfs is running as root, it shouldn't give the calling

Re: Adding entries to a directory

2009-11-17 Thread Carl Fredrik Hammar
Hi,

On Tue, Nov 17, 2009 at 09:21:18PM +0200, Sergiu Ivanov wrote:
> > > > My idea was that a unionfs *run* by root can recreate any auth object
> > > > that the client has and then authenticate with it against the unioned
> > > > directories.
> > > > 
> > > > If run by any other user then it can only recreate the intersection of
> > > > credentials between unionfs and the client.  This isn't ideal, but it
> > > > does ensure that unionfs doesn't accidentally grant the client any new
> > > > permissions by mistake.
> > > 
> > > From the theoretical point of view, there isn't really a problem,
> > > since unionfs should always grant permissions which are intersection
> > > between its permissions and the permissions of the calling client.
> > > Even if unionfs is running as root, it shouldn't give the calling
> > > client more permissions than they already have.  OTOH, bugs can bring
> > > about security problems in any case :-)
> > 
> > As explained above this assumes that the file permissions tell the
> > whole story.  The main problem with my suggestion is that it might be
> > too restrictive.  For instance, if user Alice wants to add an entry
> > to Bob's union directory.  Alice has permission to add to the unioned
> > directory because she's its owner but is not a member of the owning
> > group, Bob also has permission because he is a member of the group,
> > and others are not permitted.  The problem is that the intersection of
> > their credentials will contain neither the user nor the group required to
> > write to the directory, even thought both Alice and Bob has the necessary
> > permissions on their own.
> 
> Hm, interesting situation, it didn't occur to my mind.  However, I'd
> think that this problem is specific to any filesystem based on
> standard POSIX permission bits.  Your idea was about creating an
> alternate file access control mechanism, right?

Well, this situation isn't a problem in the current implementation, so it
isn't specific to regular permission bits.  This is because Bob would use
his group membership to add the entry on Alice's behalf, which he allows
because the permission bits state that she's the owner of the directory.

I don't so much want to create a new file access mechanism as I
want to rely on the unioned directories' own access mechanism, and let
them decide whether to allow Alice to add an entry.  As it is now,
unionfs implements an access policy which it *assumes* is the same as
the unioned directories'.

> > I just remembered that io_restrict_auth is described to do the exactly
> > what we want.  However, it seems that in practice translators just make
> > an intersection of the credentials, so it has the same problem.  :-(
> 
> Could you please give an example of how would you suggest to use
> io_restrict_auth?  The fact is that unionfs, for instance (but I
> believe other translator do similarly) does use io_restrict_auth, but
> it indeed uses it to do the intersection.  (This is most probably what
> you are talking about; I'm just restating it in more detail to avoid
> ambiguity.)

1. Alice opens unionfs directory
2. unionfs opens unioned directories using Bob's credentials
3. unionfs restricts auth of directories to Alice's credentials
4. Alice adds entry
5. unionfs adds entry to whichever directory gets new entries

Notice how unionfs doesn't need to check whether Alice is permitted to
add the entry.  It simply relies on that the unioned directory does it.
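
To make step 3 concrete, here is a rough, untested sketch; the
io_restrict_auth call has the same shape as the existing calls, but the
exact headers and the iouser/idvec field names are written from memory,
so treat them as assumptions:

  #include <hurd.h>          /* file_t, error_t.  */
  #include <hurd/iohelp.h>   /* struct iouser.  */
  #include <idvec.h>         /* struct idvec.  */
  /* io_restrict_auth itself comes from the MIG-generated io user stubs.  */

  /* Return in RESTRICTED a port to DIR -- which unionfs opened with its
     own (Bob's) credentials -- restricted to the identity of the client
     USER, so that the unioned directory itself decides what the client
     may do (step 3 above).  */
  static error_t
  restrict_to_user (file_t dir, struct iouser *user, file_t *restricted)
  {
    return io_restrict_auth (dir, restricted,
                             user->uids->ids, user->uids->num,
                             user->gids->ids, user->gids->num);
  }

Steps 4 and 5 would then do the usual lookup with O_CREAT on *RESTRICTED,
and let the unioned directory return EACCES if Alice isn't allowed.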

Note that this has the problem I described.  But it wouldn't if
io_restrict_auth was defined to intersect the operations allowed by two
credentials instead of the credentials themselves.  Such a definition
would be more useful IMHO, but a separate project of course.

Also I looked up how unionfs uses io_restrict_auth, and I'm a bit
confused.  It seems it only restricts normal files with the client's
credentials.  I can't tell whether it then proxies the file or returns
it, but if it is returned then it should be reauthenticated by the client,
in which case it is best to return a completely unauthenticated port, either
by not authenticating it at all, or by restricting auth with empty
credentials.

Regards,
  Fredrik




Re: Adding entries to a directory

2009-11-18 Thread Carl Fredrik Hammar
Hi,

On Wed, Nov 18, 2009 at 12:15:16AM +0200, Sergiu Ivanov wrote:
> On Tue, Nov 17, 2009 at 10:29:40PM +0100, Carl Fredrik Hammar wrote:
> >
> > I don't so much want to create a new file access mechanism, as I
> > want to rely on the unioned directories own access mechanism, and let
> > them decide whether to allow Alice to add an entry.  As it is now,
> > unionfs implements an access policy which it *assumes* is the same as
> > the unioned directories.
> 
> Aha, so you are talking about removing access policy implementation
> from unionfs and making unionfs check with the directory whether a
> certain user is allowed to add entries to it?

Yes.

> > > > I just remembered that io_restrict_auth is described to do the exactly
> > > > what we want.  However, it seems that in practice translators just make
> > > > an intersection of the credentials, so it has the same problem.  :-(
> > > 
> > > Could you please give an example of how would you suggest to use
> > > io_restrict_auth?  The fact is that unionfs, for instance (but I
> > > believe other translator do similarly) does use io_restrict_auth, but
> > > it indeed uses it to do the intersection.  (This is most probably what
> > > you are talking about; I'm just restating it in more detail to avoid
> > > ambiguity.)
> > 
> > 1. Alice opens unionfs directory
> > 2. unionfs opens unioned directories using Bob's credentials
> > 3. unionfs restricts auth of directories to Alice's credentials
> > 4. Alice adds entry
> > 5, unionfs adds entry to whichever directory gets new entries
> > 
> > Notice how unionfs doesn't need to check whether Alice is permitted to
> > add the entry.  It simply relies on that the unioned directory does it.
> 
> I see.  The check is ``done'' by the directory, and unionfs simply
> tries adding the entry and stops whenever a directory accepts the
> entry or when it finished traversing the list of directories.

I imagined that you'd only try to add an entry to one of the unioned
directories, otherwise it is hard to predict where the entry will
eventually be placed.

> > Also I looked up how unionfs uses io_restrict_auth, and I'm a bit
> > confused.  It seems it only restricts normal files with the client's
> > credentials.  I can't tell whether it then proxies the file or returns
> > it, but if it is returned then it should be reauthenticated by the client,
> > and then it is best to return a completely unauthenticated port, either by
> > not authenticating it at all, or restricting auth with empty credentials.
> 
> unionfs does not proxy ports to normal files.  The necessity of
> reauthentication arises from the fact that the credentials associated
> with the port unionfs returns may not be the same as those of the
> client, but only a subset of them, right?

Yes, but I also think that it should be possible to forward a not yet
authenticated port without risking privilege escalation.  That is, if you
return an authenticated port, a proxy might think it is safe to return the
port to its own client, which would leak the proxy's access to its client.

I'm not entirely sure if this isn't a rule I just made up myself, but
it seems natural to assume that a port returned with FS_RETRY_REAUTH
should be unauthenticated.

Regards,
  Fredrik




Re: Adding entries to a directory

2009-11-19 Thread Carl Fredrik Hammar
Hi,

On Wed, Nov 18, 2009 at 08:03:30PM +0200, Sergiu Ivanov wrote:
> On Wed, Nov 18, 2009 at 10:21:13AM +0100, Carl Fredrik Hammar wrote:
> > On Wed, Nov 18, 2009 at 12:15:16AM +0200, Sergiu Ivanov wrote:
> > > On Tue, Nov 17, 2009 at 10:29:40PM +0100, Carl Fredrik Hammar wrote:
> > > >
> > > > 1. Alice opens unionfs directory
> > > > 2. unionfs opens unioned directories using Bob's credentials
> > > > 3. unionfs restricts auth of directories to Alice's credentials
> > > > 4. Alice adds entry
> > > > 5, unionfs adds entry to whichever directory gets new entries
> > > > 
> > > > Notice how unionfs doesn't need to check whether Alice is permitted to
> > > > add the entry.  It simply relies on that the unioned directory does it.
> > > 
> > > I see.  The check is ``done'' by the directory, and unionfs simply
> > > tries adding the entry and stops whenever a directory accepts the
> > > entry or when it finished traversing the list of directories.
> > 
> > I imagined that you'd only try to add an entry to one of the unioned
> > directories, otherwise it is hard to predict where the entry will
> > eventually be placed.
> 
> This is how unionfs does the things now: it tries to look up the
> filename with O_CREAT under every unioned directory and stops at the
> first directory which returns no error or an error different from
> ENOENT.

Oh, ok. It still doesn't seem right to me though.

> > > unionfs does not proxy ports to normal files.  The necessity of
> > > reauthentication arises from the fact that the credentials associated
> > > with the port unionfs returns may not be the same as those of the
> > > client, but only a subset of them, right?
> > 
> > Yes, but I also think that it should be possible to forward a not yet
> > authenticated port without risking privilege escalation.  That is, if you
> > return an authenticated port, a proxy might think it is safe to return the
> > port to its own client, which would leak the proxies access to its client.
> 
> Hm, interesting.  Are you talking about that type of proxies which
> have broader permissions than their clients?  In this case I'd say it
> is the proxy's responsibility to think of security and give out to the
> clients unauthenticated ports.

Well, that applies to any proxy really.  What I'm talking about is
unauthenticated ports vs. ports restricted with the client's credentials.

> > I'm not entirely sure if this isn't a rule I just made up myself, but
> > it seems natural to assume that a port returned with FS_RETRY_REAUTH
> > should be unauthenticated.
> 
> The comment to FS_RETRY_REAUTH in hurd/hurd_types.h says ``Retry after
> reauthenticating retry port''.  However, the only moment when unionfs
> (and libnetfs, IIRC) returns FS_RETRY_REAUTH is when the ``..''
> filename is requested.  In this case the shadow_root_parent from the
> peropen structure is returned as the retry port, but I cannot tell
> whether it is unauthenticated.

It should return FS_RETRY_REAUTH when it returns a port to non-directory
nodes as well, or at least that is how translator transitions are
currently handled in the Hurd.  (See my ``Solving the firmlink problem
with io_restrict_auth'' mail for an alternative inspired by this
discussion.)

> So, I'd rather say that it is okay to
> assume that the port returned with FS_RETRY_REAUTH is unauthenticated,
> but it might not be true.  Actually, it doesn't really matter, since
> you are anyway bound to do reauthentication.

Yes, but the client isn't forced to do reauthentication if you return
a port that is already authenticated.  That's the problem.

Regards,
  Fredrik




Solving the firmlink problem with io_restrict_auth (was Re: Adding entries to a directory)

2009-11-19 Thread Carl Fredrik Hammar
Hi,

On Tue, Nov 17, 2009 at 08:55:38PM +0100, olafbuddenha...@gmx.net wrote:
> On Tue, Nov 17, 2009 at 01:15:59PM +0100, Carl Fredrik Hammar wrote:
> 
> > If run by any other user then it can only recreate the intersection of
> > credentials between unionfs and the client.  This isn't ideal, but it
> > does ensure that unionfs doesn't accidentally grant the client any new
> > permissions by mistake.
> 
> Actually I think this is just right... Whenever a client accesses a
> resource through a translator, it should be restricted not only by its
> own access, but also the translator's access.

Well, this naturally happens as the translator cannot possibly provide
more access than it already has.

> It is actually a problem that this policy is not followed whenever an
> intermediate translator hands out a "real" port to another translator,
> and the client reauthenticates it. (The so-called "firmlink problem".)

Having a ``proxy'' do an io_restrict_auth before passing on a port
actually has far-reaching consequences.  Remember that firmlink is only
an odd use of the regular hand-off protocol when going from one
translator to another, so using this policy throughout the Hurd would
mean we go from a peer-to-peer authority scheme to a very hierarchical
one, where each step from one translator to another can only mean less
authority for the client.
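
For concreteness, the restriction step on the proxy side would be roughly
the following.  This is only a sketch: I'm assuming the usual
io_restrict_auth stub generated from io.defs, and that the proxy already
has the client's IDs at hand in client_uids/client_gids (made-up names):

  error_t err;
  mach_port_t restricted;

  /* Hand the client a port carrying only its own credentials, rather
     than the proxy's fully authenticated port.  */
  err = io_restrict_auth (real_port, &restricted,
                          client_uids, client_nuids,
                          client_gids, client_ngids);
  if (!err)
    /* ... return `restricted' to the client instead of `real_port' ... */;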

Note also that it isn't the fact that servers return an already
authenticated but restricted port that would solve the firmlink problem;
rather, the client must refuse to reauthenticate ports on the server's
request, otherwise a malicious server could still trick the client.

Does this mean that the auth server isn't needed any more?  No, there is
still one case this doesn't cover: extending the client's authority with
setauth or the password server.  To do this the client must reauthenticate
all open ports in order for its new credentials to take effect.

Also, to do this properly we need to improve io_restrict_auth so that it
restricts the allowed operations to the intersection of the allowed
operations of the two sets of credentials, and not just to the operations
allowed by the intersection of the two sets of credentials.  For example,
if a file has mode 0460, is owned by the server's uid, and has the
client's gid as its group, then the intersection of the credentials is
empty and allows nothing, whereas the intersection of the allowed
operations still permits reading, which both parties can do on their own.

I'm not sure if switching to such an authority scheme is a good idea
overall, but I do think it would indeed solve the firmlink problem.

Regards,
  Fredrik




Re: [PATCH 2/3] Implement mountee startup.

2009-11-25 Thread Carl Fredrik Hammar
Hi,

On Sun, Nov 22, 2009 at 09:05:16PM +0100, olafbuddenha...@gmx.net wrote:
> On Thu, Nov 19, 2009 at 10:28:37AM +0200, Sergiu Ivanov wrote:
> 
> > +  /* Fetch the effective UIDs of the unionfs process.  */
> > +  nuids = geteuids (0, 0);
> > +  if (nuids < 0)
> > +return EPERM;
> > +  uids = alloca (nuids * sizeof (uid_t));
> > +
> > +  nuids = geteuids (nuids, uids);
> > +  assert (nuids > 0);
> 
> Hrmph, I didn't spot this before: I don't think the assert() is right --
> "nuids" (or "ngids") being exactly 0, is probably a perfectly valid
> case... And even if it is not, the test in the assert should be
> equivalent to the EPERM test above, to avoid confusion.

geteuids()'s actual error (in errno) should be returned instead of EPERM.

Also, credentials can be changed at any moment by other processes through
the msg_set_init_port() RPC (very much like a signal), which becomes
a problem if the number of UIDs grows between the calls to geteuids().
So a loop would be more proper, e.g.:

  nuids = geteuids (0, 0);
  do {
old_nuids = nuids;
uids = alloca (nuids * sizeof (uid_t));
nuids = geteuids (nuids, uids);
  } while (old_nuids < nuids);

But alloca() should be replaced with realloc() and cleanup code for the
memory should be added.
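
For illustration, a realloc()-based variant might look something like the
following.  This is just a sketch with a made-up helper name, and it
assumes geteuids() keeps its current interface of returning the total
count (it needs <hurd.h>, <errno.h> and <stdlib.h>):

  /* Fetch the effective UIDs into malloced memory, retrying if the
     set grows under us.  */
  static error_t
  get_effective_uids (uid_t **uids, int *nuids)
  {
    uid_t *buf = NULL, *new_buf;
    int n, old_n;

    n = geteuids (0, 0);
    if (n < 0)
      return errno;

    do
      {
        old_n = n;
        new_buf = realloc (buf, (n ? n : 1) * sizeof (uid_t));
        if (new_buf == NULL)
          {
            free (buf);
            return ENOMEM;
          }
        buf = new_buf;

        n = geteuids (old_n, buf);
        if (n < 0)
          {
            free (buf);
            return errno;
          }
      }
    while (n > old_n);  /* The set grew in the meantime; retry.  */

    *uids = buf;
    *nuids = n;
    return 0;
  }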

Actually, I don't see how any use of geteuids() won't run into this
problem.  It would be much easier if it just returned malloced memory
to begin with...

Regards,
  Fredrik




Re: Solving the firmlink problem with io_restrict_auth

2009-11-29 Thread Carl Fredrik Hammar
Hi,

On Sun, Nov 22, 2009 at 08:19:01PM +0100, olafbuddenha...@gmx.net wrote:
> On Thu, Nov 19, 2009 at 02:58:07PM +0100, Carl Fredrik Hammar wrote:
> > On Tue, Nov 17, 2009 at 08:55:38PM +0100, olafbuddenha...@gmx.net
> > wrote:
> 
> > > It is actually a problem that this policy is not followed whenever
> > > an intermediate translator hands out a "real" port to another
> > > translator, and the client reauthenticates it. (The so-called
> > > "firmlink problem".)
> > 
> > Having a ``proxy'' do an io_restrict_auth before passing on a port has
> > actually far reaching consequences.  Remember that firmlink is only an
> > odd use of the regular hand-off protocol when going from one
> > translator to another, so using this policy throughout the Hurd would
> > mean we go from a peer-to-peer authority scheme to a very hierachical
> > one, where each step from one translator to another can only mean less
> > authority for the client.
> 
> Yeah, so the question is: is this a bad thing? With the current scheme,
> the only way to make translators safe is to never follow translators set
> up by untrusted users. (Which BTW is also the policy used by FUSE by
> default.) Would changing the authentication scheme preclude any
> desirable (safe) use cases that are possible presently?

I can't think of any use cases.  But I do wonder if the problem is
inherent to all links.  OK, so you can detect and avoid symlinks if you
want to be safe, but how often is this actually done?  And while hard
links only allow linking to non-directories, they still have the same
problems, e.g. ``ln /etc/shadow /var/mail/cfhammar'' could make an MRA
clobber the system's passwords.  It seems that the firmlink problem is
really just a generalization of the ``link problem''.

Is it really worth the effort to limit the problem or should we perhaps
just stipulate that a user's files and directories should not be trusted?
I'm not convinced either way...

> > Note also that it isn't the fact that servers return an already
> > authenticated but restricted port that would solve the firmlink
> > problem; rather, the client must refuse to reauthenticate ports on the
> > server's request, otherwise a malicious server could still trick the
> > client.
> 
> Yes, that was exactly my point. IMHO this should be changed -- though
> I'm not sure how exactly...

I have been thinking about this, and it seems that *any*
reauthentication -- not just server-requested -- can be used to trick
the client.  When a client reauthenticates a port, the server can simply
forward the request to a server precious to the client, and thus trick it.

It is clear that restrictions must be remembered across
reauthentications.  However, a client's own credentials cannot be
treated as a restriction by a server, otherwise clients cannot increase
their access by reauthentication, and so a malicious server still has
an opening to trick the client.

It does work if the client remembers the restrictions, which makes
sense since it is the one being tricked.  In this case, it can simply
re-restrict ports after reauthentication.  Of course, this means the
client must be aware of the server's credentials, so it can later
re-restrict any ports returned by it.

The change required for this is that authentication also returns the
server's credentials to the client, which could possibly be done through
reverse authentication, but should probably be added directly to the
normal authentication protocol.  It also requires that the client keeps a
``paper-trail'' for each port, so that it knows the credentials of the
server that returned the port, and the credentials of the server that
returned *that* port, and so on...
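
Just to make the idea a bit more concrete, the ``paper-trail'' could
amount to something like the following per-port record.  All names here
are made up for illustration (struct idvec is from <idvec.h>):

  /* Hypothetical per-port record kept by the client.  */
  struct port_trail
  {
    mach_port_t port;           /* The port as currently held.  */
    struct idvec *server_uids;  /* Credentials of the server that returned it.  */
    struct idvec *server_gids;
    struct port_trail *from;    /* Record for the port through which this
                                   one was obtained, or NULL for a trusted
                                   root.  */
  };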

> > Also to do this properly we need to improve io_restrict_auth so it
> > restricts the allowed operations to the intersection of the allowed
> > operations of two sets of credentials, and not just to the operations
> > allowed of the intersection of two sets of credentials.
> 
> I'm not yet convinced that this is really much of a problem in practice
> :-)

If we indeed want to do restrictions on every translator transition,
I think it will become a problem in practice.

Regards,
  Fredrik




Re: [PATCH 2/3] Implement mountee startup.

2009-12-03 Thread Carl Fredrik Hammar
Hi,

On Sat, Nov 28, 2009 at 12:36:07AM +0100, olafbuddenha...@gmx.net wrote:
> On Wed, Nov 25, 2009 at 07:59:33PM +0100, Carl Fredrik Hammar wrote:
> > On Sun, Nov 22, 2009 at 09:05:16PM +0100, olafbuddenha...@gmx.net wrote:
> > > On Thu, Nov 19, 2009 at 10:28:37AM +0200, Sergiu Ivanov wrote:
> 
> > > > +  /* Fetch the effective UIDs of the unionfs process.  */
> > > > +  nuids = geteuids (0, 0);
> > > > +  if (nuids < 0)
> > > > +return EPERM;
> > > > +  uids = alloca (nuids * sizeof (uid_t));
> > > > +
> > > > +  nuids = geteuids (nuids, uids);
> > > > +  assert (nuids > 0);
> > > 
> > > Hrmph, I didn't spot this before: I don't think the assert() is right --
> > > "nuids" (or "ngids") being exactly 0, is probably a perfectly valid
> > > case... And even if it is not, the test in the assert should be
> > > equivalent to the EPERM test above, to avoid confusion.
> > 
> > geteuids() actual error (in errno) should be returned instead of EPERM.
> 
> Does geteuids() actually set errno?

Yes, it calls __hurd_fail() which sets it.

> > which becomes a problem if the number of UIDs grows between the calls
> > to geteuid().
> 
> Not sure this is really a problem. If the credentials change in the
> middle of things, we can't rely on the set being current anyways; so
> it's probably fine if it's truncated to the old size...

But then you are using credentials that are neither the old ones nor the
new ones.  This could only cause confusion.  Aborting with ``setauth not
supported'' (or some such) when (new_len > old_len) is better than this.

This seems appropriate since setauth is probably not handled correctly
after this setup anyway.  The only way to handle it currently is by using
file descriptors instead of ports for directories.  I don't know the code
well enough to tell whether this is appropriate in this case...

There is also a _hurd_reauth_hook, but it, and the macros used to
manipulate it, are private to glibc.  It could probably be used but it'd
be really ugly.

Regards,
  Fredrik




Re: [RFC] git fs translator

2009-12-20 Thread Carl Fredrik Hammar
Hi,

On Sun, Dec 20, 2009 at 02:49:19PM +0530, Shakthi Kannan wrote:
> 
> This is in regard to a prototype implementation of gitfs translator
> for a student project. The idea is to write a simple translator that
> can query results from a remote git repository.
> 
> * Which lib*fs translator can be used for this? cvsfs has been written
> earlier using libnetfs.

Yes, libnetfs would be the one.

> * Using gitweb, one can obtain the repo details from the URL. So, is
> the following flow acceptable?
> 
>   gitfs translator->libcurl (HTTP request)->gitweb->HTTP response
> 
> Appreciate any inputs in this regard,

Perhaps I'm missing something as I don't know the details well enough,
but why not maintain a temporary repository where you fetch (only) the
needed repo objects on demand with the usual git commands, or perhaps
the more low-level commands?

Seems much more straightforward to me.

Regards,
  Fredrik




Re: [RFC] git fs translator

2009-12-20 Thread Carl Fredrik Hammar
Hi,

On Sun, Dec 20, 2009 at 07:47:27PM +0530, Shakthi Kannan wrote:
> 
> --- On Sun, Dec 20, 2009 at 7:43 PM, Carl Fredrik Hammar
>  wrote:
> | but why not maintain a temporary repository where you fetch (only) the
> | needed repo objects on demand with the usual git commands, or perhaps
> | the more low-level commands?
> \--
> 
> Sorry, which git commands or low-level commands are you referring to
> here? Are you referring to using git for-each-ref or git cat-file?

I'm not really familiar with the commands for fetching and accessing
individual objects so I can't be specific.  I'm mostly just speculating.
:-)

But git cat-file seems like a good candidate once objects have been fetched
to the local repository.  I'd suggest git fetch to actually transfer the
objects from the remote repository, but I can't tell from the man-page
whether it can be used for all types of objects or just refs...

Regards,
  Fredrik




Re: Should trivfs.h include fcntl.h?

2009-12-20 Thread Carl Fredrik Hammar
Hi,

On Sun, Dec 20, 2009 at 06:40:10PM +0100, olafbuddenha...@gmx.net wrote:
> 
> While trivfs.h doesn't use any definitions from fcntl.h itself,
> trivfs_allow_open takes values like O_READ, which are defined in fcntl.h
> -- so a program including trivfs.h will generally need these definitions
> as well. Thus I wonder whether trivfs.h shouldn't just include fcntl.h,
> so they are always available?
> 
> (Admittedly, this logic is not generally applied to libc headers
> either...)

I generally agree that all headers needed to use a library should be
included in its header.  But by that logic we should push it down to
fshelp.h, then we'll get it in diskfs.h and netfs.h as well.  I'd say
put it in iohelp.h as well, but surprisingly it seems that none of its
functions deal with open modes.
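
For instance, a typical trivfs translator does something like the
following, which won't build with trivfs.h alone (a minimal sketch;
O_READ and O_WRITE are the Hurd's open-mode bits from fcntl.h):

  #include <fcntl.h>        /* For O_READ and O_WRITE.  */
  #include <hurd/trivfs.h>

  int trivfs_allow_open = O_READ | O_WRITE;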

Regards,
  Fredrik




[bug #28408] unionmount doesn't reauthenticate handle to mountee on setauth()

2009-12-26 Thread Carl Fredrik Hammar

URL:
  

 Summary: unionmount doesn't reauthenticate handle to mountee
on setauth()
 Project: The GNU Hurd
Submitted by: hammy
Submitted on: Sat 26 Dec 2009 04:59:33 PM CET
Category: Hurd
Severity: 2 - Minor
Priority: 3 - Low
  Item Group: None
  Status: None
 Privacy: Public
 Assigned to: None
 Originator Name: 
Originator Email: 
 Open/Closed: Open
 Discussion Lock: Any
 Reproducibility: None
  Size (loc): None
 Planned Release: None
  Effort: 0.00
Wiki-like text discussion box: 

___

Details:

When changing its auth port, which could happen at the request of another
process in a signal-like manner, unionmount does not reauthenticate
its handle to the mounted filesystem.  One way to fix this would be
to use a file descriptor instead of a port and let glibc handle the
reauthentication.

I can't help but think this issue has been overlooked in other translators
as well, so it might be a good idea to investigate this further before
closing this bug.





___

Reply to this item at:

  

___
  Message sent via/by Savannah
  http://savannah.gnu.org/





Re: [PATCH 2/3] Implement mountee startup.

2009-12-26 Thread Carl Fredrik Hammar
Hi,

On Wed, Dec 09, 2009 at 03:07:59PM +0100, olafbuddenha...@gmx.net wrote:
> On Tue, Dec 08, 2009 at 08:53:46PM +0200, Sergiu Ivanov wrote:
> > On Sun, Nov 22, 2009 at 09:05:16PM +0100, olafbuddenha...@gmx.net wrote:
> > > On Thu, Nov 19, 2009 at 10:28:37AM +0200, Sergiu Ivanov wrote:
> 
> > > > +  /* Fetch the effective UIDs of the unionfs process.  */
> > > > +  nuids = geteuids (0, 0);
> > > > +  if (nuids < 0)
> > > > +return EPERM;
> > > > +  uids = alloca (nuids * sizeof (uid_t));
> > > > +
> > > > +  nuids = geteuids (nuids, uids);
> > > > +  assert (nuids > 0);
> > > 
> > > Hrmph, I didn't spot this before: I don't think the assert() is right --
> > > "nuids" (or "ngids") being exactly 0, is probably a perfectly valid
> > > case... And even if it is not, the test in the assert should be
> > > equivalent to the EPERM test above, to avoid confusion.
> > 
> > OK, changed.
> 
> For the record: We agreed on IRC that rather than changing the assert,
> it's better to go back to the original code, i.e. do the check/EPERM
> thing again. It is actually possible that the number of UIDs changes in
> the middle of things...
> 
> (Yes Frederik, I agree that this is not ideal either :-) But fixing this
> properly is non-trivial, and out of scope here... Might be useful to
> file a bug on Savannah though so it won't get lost.)

Ok, I filed a report.

Regards,
  Fredrik




Re: Reauthentication implementation flaw due to EINTR

2009-12-26 Thread Carl Fredrik Hammar
Hi,

On Mon, Dec 21, 2009 at 08:43:12PM +0100, Samuel Thibault wrote:
> 
> I had been noticing odd issues with sudo when it calls setresuid &
> such, it took me some time to understand that there was a flaw in the
> reauthentication implementation:
> 
> sudo calls setresuid(), which calls setauth(), which (for each FD &
> such) allocates a rendez-vous port, calls io_reauthenticate() (RPC
> in the underlying FS which calls the auth_server_authenticate()
> RPC) and calls the auth_user_authenticate() RPC. These last two
> RPCs end up in auth, which uses the rendez-vous port passed along
> to make the match. Whichever arrives first leaves information and a
> condition variable for the other ; when the latter arrives, it fills its
> information and wakes the former.
> 
> The issue is that currently, once the user part gets its passthrough
> port from the server part, it returns immediately, and setauth() drops
> the rendez-vous port, which actually interrupts the server RPCs because
> the rendez-vous sender right becomes dead. Quite often scheduling makes
> it so that the user is not so fast and the server has time to finish its
> duties, but due to the high usage of setresuid in sudo, one every few
> tens of sudo calls fail.

Is the code below from S_auth_server_authenticate the problem?

  /* Store the new port and wait for the user RPC to wake us up.  */
  s.passthrough = newport;
  condition_init (&s.wakeup);
  ports_interrupt_self_on_port_death (serverauth, rendezvous);
  if (hurd_condition_wait (&s.wakeup, &pending_lock))
    /* We were interrupted; remove our record.  */
    {
      hurd_ihash_locp_remove (&pending_servers, s.locp);
      err = EINTR;
    }

That is, does hurd_condition_wait get canceled even though the condition
was signaled before it was canceled?
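
For reference, the user-side sequence that races with this code is
roughly the following.  It is a simplified sketch of what glibc does per
port during setauth, written with the public RPC stubs; I'm recalling
the exact calls from memory, so treat the details with some suspicion:

  /* Reauthenticate PORT against the new auth port NEWAUTH, storing the
     result in *NEWPORT.  */
  static error_t
  reauth_port (auth_t newauth, mach_port_t port, mach_port_t *newport)
  {
    mach_port_t rendezvous = mach_reply_port ();
    error_t err;

    err = io_reauthenticate (port, rendezvous, MACH_MSG_TYPE_MAKE_SEND);
    if (!err)
      err = auth_user_authenticate (newauth, rendezvous,
                                    MACH_MSG_TYPE_MAKE_SEND, newport);

    /* Destroying the rendezvous port here is what interrupts the server
       side if auth_server_authenticate() hasn't finished yet.  */
    mach_port_destroy (mach_task_self (), rendezvous);
    return err;
  }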

> The fix I'll use at least on the Debian buildds for now is to make the
> auth_user_authenticate() RPC always wait for auth_server_authenticate()
> to have called auth_server_authenticate_reply() before returning. I've
> been running that in a tight loop the whole afternoon with no issue, so
> at least it seems to work much better. However, I'd prefer to make sure
> that it works _always_ :)
>
> So my question is: is it sufficient to make the user part wait for
> auth_server_authenticate_reply() call completion before freeing the
> rendez-vous port, to make sure that auth_server_authenticate() will
> never return EINTR because of the death of the rendez-vous port?  Of
> course, the rendez-vous port can become dead in the io_reauthenticate()
> RPC, but that shouldn't be a problem.

If what I said above is correct then this would indeed fix it.
The question is if there are better ways to fix it.

Perhaps trying to determine if the condition was signaled before the
interrupt?  Though I don't know if it can be interrupted for other reasons
and should always return EINTR in those cases.  Perhaps the mistake is
using interrupts as a signal for port deaths in the first place?

> And bonus question: are there other places where we have such
> rendez-vous port which might become dead too early and that would need
> the same fix?

Don't know.

Regards,
  Fredrik



