Re: Thoughts about the Lisp bindings project

2008-06-02 Thread Flávio Cruz
> A little side note: You are regularly using "pretend" in a manner that
> doesn't quite fit the context in English, and in fact is quite amusing
> :-)

Haha, I looked that up in a dictionary; indeed, it's quite amusing ;-)

> I don't think it's a good idea to define the interfaces in a completely
> new way. This will create redundancy, with all the associated
> disadvantages redundancy creates. We better avoid the disadvantages of
> redundancy. I don't want to see the disadvantages of redundancy creep
> in. It would be quite cumbersome to deal with the disadvantages of
> redundancy. The disadvan... well, I guess you get the point ;-)

You are right, it may create redundancy, but wouldn't it be nice to define
new interfaces without leaving Lisp, and then use them to create new
servers written purely in Lisp? I'm not even talking about re-defining the
already existing interfaces in Lisp, but about creating new ones.

Or you could even (possibly with more work) replace some core servers
with ones written in Lisp... well, I must stop before someone
calls me a Smug Lisp Weenie :-P

Those are the advantages I saw when I thought about this possibility ;-)

> Rather, reuse the existing .defs, only creating Lisp interfaces from
> them instead of C interfaces.

That's a nice approach, I think.

> BTW, what about the option of invoking the MiG-generated C stubs rather
> than creating native stubs in Lisp; have you considered that? I'm not
> saying that I think it's a better option; but I'd like to see a
> discussion of advantages and disadvantages of redun^W^W^W^W^W err I mean
> the advantages and disadvantages of both approaches...

Using MIG-generated stubs means less work, but keeps you dependent on
stubs generated in C. Using native stubs, well, you don't have to deal
with MIG, which, frankly, is not the best thing in the world :)
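
Just to illustrate the difference: the stubs MIG emits are plain C
functions that marshal their arguments into a Mach message, so binding
to them via FFI means calling something of roughly this shape (a sketch
only; the routine name is made up, not taken from any real .defs):

    /* Rough shape of a MIG-generated client stub for a hypothetical
       routine "hello".  The generated body builds the request message,
       sends it with mach_msg (), and unpacks the reply.  */
    #include <mach.h>

    kern_return_t my_server_hello (mach_port_t server, int value);

Native Lisp stubs would instead build and send that Mach message from
Lisp itself, needing only mach_msg () at the bottom.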

>
> (And in fact also for other approaches like binding to existing
> libraries -- you now say that you want to do it this way as if was the
> most normal thing in the world, but never explain your motivation for
> that change... Don't leave us groping in the dark! :-) )

Well, when I sent my proposal, the initial goal was to develop two library
bindings: one for libtrivfs and another for libnetfs. But Neal Walfield
expressed some disappointment that the Lisp bindings would not bind at a
deeper level (namely, at the interface level), and then Pierre Thierry
and I discussed investigating these different approaches.

Why will I bind libraries like libports? Well, libports is currently used
by both libtrivfs and libnetfs, and is needed to manage ports and listen
for incoming messages. Also, I think it will be generally useful to have
a Lisp binding for libports.
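
For reference, the core of a libports-based server in C looks roughly
like this (a minimal sketch from memory -- check <hurd/ports.h> for the
exact signatures); a Lisp binding would have to cover about this much
to be useful:

    #include <mach.h>
    #include <hurd/ports.h>

    /* Dispatch incoming RPCs to the server stubs here.  */
    static int
    demuxer (mach_msg_header_t *inp, mach_msg_header_t *outp)
    {
      return 0;
    }

    int
    main (void)
    {
      struct port_bucket *bucket = ports_create_bucket ();
      struct port_class *class = ports_create_class (NULL, NULL);

      /* ... create ports in CLASS and BUCKET and hand them out ... */

      /* Block, listening for incoming messages on all ports.  */
      ports_manage_port_operations_one_thread (bucket, demuxer, 0);
      return 0;
    }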

I'd like to hear more opinions on this :)




Re: Revision control

2008-06-02 Thread olafBuddenhagen
Hi,

On Thu, May 29, 2008 at 05:40:29PM -0400, Andrei Barbu wrote:

> There's also the fact that moving to something like git is easy [...]
> Since it's a rather simple change, aside from the fact that people
> would have to learn a new set of commands, which is a few minutes of
> work, [...]

Unfortunately, that's not quite true...

Sure, you can learn the most important git commands in a couple of
minutes. But that will result in a lot of frustration (I have seen it
happen to some Xorg developers): Most cvs commands don't map to git 1:1.
To understand what the git commands do, and how to use them properly,
you really need to take the trouble to learn the basic concepts of git.
Luckily, it's only a few simple principles; but still, using git
properly without at least a few hours of learning is unrealistic.

Nevertheless, it's definitely worth it :-)

-antrik-




Re: dtrace/systemtap options

2008-06-02 Thread olafBuddenhagen
Hi,

On Fri, May 30, 2008 at 01:14:34AM -0400, Andrei Barbu wrote:

> SystemTap itself doesn't get to sit in the Linux kernel. Linux exposes
> an in-kernel API called kprobes. That allows you to do something like
> (conceptually, not real code) set_entry_probe("some_function", myfun),
> which will call myfun when some_function executes and give myfun its
> arguments; afterwards it restores the old function, including the
> stack. What SystemTap does is take code in its internal language
> (or C if one wishes), compile it, and wrap it in module-loading code.
> So what you get is a module that implements the probe that you want.
> SystemTap will often just load this for you, and the module
> automatically calls kprobes to hook into the appropriate bits of the
> kernel. So if we wanted to support SystemTap without any changes,
> we'd definitely need module loading support.

I see.
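
For the record, the in-kernel side that such a generated module ends up
using looks roughly like this (a minimal sketch against the Linux
kprobes API, mirroring the conceptual set_entry_probe() above):

    #include <linux/module.h>
    #include <linux/kprobes.h>

    /* Runs just before some_function executes; the arguments are
       available through the saved registers in REGS.  */
    static int
    my_pre_handler (struct kprobe *p, struct pt_regs *regs)
    {
      return 0;
    }

    static struct kprobe kp = {
      .symbol_name = "some_function",
      .pre_handler = my_pre_handler,
    };

    static int __init probe_init (void)
    {
      return register_kprobe (&kp);
    }

    static void __exit probe_exit (void)
    {
      unregister_kprobe (&kp);
    }

    module_init (probe_init);
    module_exit (probe_exit);
    MODULE_LICENSE ("GPL");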

> > It may still be the preferable solution though, if the other option
> > of running it in user space involves much more effort and/or risk...
> 
> I think adding a module loading API just for this is overkill and GNU
> Mach probably ought not to have one. By risky, do you mean, will the
> project be completed if we choose the 3rd option?

Yes. Or more precisely: As it seems to be a more experimental path,
could there be considerable danger that unexpected problems show up,
making it impossible to finish it in time?...

> I don't foresee this being an issue; there's actually not all that
> much extra work that must be done.  Adding module support would be far
> more work.

OK, I take your word for it :-)

> Without modules, what would be needed is a port exposed to privileged
> enough userland processes, so that they can request a probe in the
> kernel and receive information when the probe fires. Then the
> wrapper code for SystemTap would be changed to create processes that
> use this port, instead of kernel modules that set kprobes directly.

Doesn't sound too hard indeed -- at least on paper ;-)
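
To make this concrete: the userland-facing interface could be something
of this shape -- purely hypothetical, every name here is made up:

    /* Hypothetical sketch; no such interface exists in GNU Mach today.
       A sufficiently privileged task requests a probe on SYMBOL, and
       receives a message on REPLY each time the probe fires.  */
    #include <mach.h>

    kern_return_t probe_set (mach_port_t probe_port,
                             const char *symbol,
                             mach_port_t reply);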

-antrik-




Re: Revision control

2008-06-02 Thread olafBuddenhagen
Hi,

On Fri, May 30, 2008 at 10:44:08AM +0200, Arne Babenhauserheide wrote:

> Due to the optimizations / repacking, git repositories after gc can be
> smaller than Mercurial ones (especially for binary data, it seems to me
> from my own tests), but optimizing takes time (and I'm bound to forget
> it all the time :) ).

Eh? If you notice things getting too slow and/or taking up too much
disk space, that should tell you it's time for a GC...

And if you do not notice, there is no need.

It's not like you need to GC every other day. Unless the repository is
very active, you can probably go without it for months...

-antrik-




Re: Namespace-based translator selection; project details

2008-06-02 Thread olafBuddenhagen
Hi,

On Fri, May 30, 2008 at 11:40:09PM +0300, Sergiu Ivanov wrote:
> On Fri, May 30, 2008 at 6:32 AM, <[EMAIL PROTECTED]> wrote:

> > > However, the matter of ignoring some translators ('-u', for
> > > instance) is a bit foggy for me. Do you mean that these should
> > > also be symbolic links?

Looking at it again, I might have misread your question.

"-u" is more tricky that "-gunzip", as it doesn't skip one specific
translator, but rather filters a whole class. While it is certainly
possible to implement this also by means of a generic filter translator
with some more complex rules, it might be easier to use a hardcoded
implementation, making "-u" a real translator on its own...

"-gunzip" however is trivial, and can be easily implemented as a symlink
to a generic filter translator, or as a simple launcher script invoking
the filter translator.

> > That would be one of the implementation methods I suggested: Links
> > to a common filter translator. The script variant might be more
> > elegant, though...
> 
> 
> I'm very sorry, but I feel completely lost :-( As far as I understand,
> the '-u' filter should show the file/directory on which it is applied
> without any archive translators.

Right.

> Does it mean that, when the node 'file,,-u' is accessed, the proxy
> filesystem should generate a virtual node and stack upon it all
> translators which precede the 'u' translator?

Well, that's more or less the effect from the client's point of view.

Technically, it's a bit different: It should actually create a virtual
node that mirrors the original "file" node, and attach the "-u"
translator to this proxy node. "-u" will in turn present the client with
another virtual node, which corresponds to the original "file",
translated by all static translators present on the original node,
except for those that need to be skipped according to the filter rules.

As "-u" must be able to filter possibly present "gunzip" or "bunzip2" or
similar static translators attached to "file", "-u" probably needs to be
attached to a mirror of the *untranslated* "file" node. "-u" then will,
when handling client requests, have to explicitely query for any static
translators attached to "file", and follow them, unless it finds some
that needs to be skipped.

> I'm trying to say the following: suppose there are the following
> translators on 'file': x,u,y,z. When I ask for 'file,,-u', should I get
> the file 'file' with only 'x' set upon it?

Yes, probably. That makes the most sense, I think. (Keeping "y" and "z"
while skipping "u" is in fact technically impossible, I believe, as "y"
is attached to the root node of "u". Skipping "u" actually means
skipping "u" and anything on top of it...)

Note that what you in fact get is a proxy node mirroring "file", with
only "-u" attached to it; when you open that, however, "-u" will see
that the original node has static translators attached to it, and will
give you "file" translated by "x" (while skipping "u" etc.). But I
already wrote that above... :-)

> By the way, could you please suggest where I could find out how
> to list the entries of a stack of translators?

Check the implementation of open() in glibc. This function normally
follows all translators, or none (when using O_NOTRANS); but it should
give you an idea of how translator traversal works, and allow you to
come up with a more selective implementation...
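
The Hurd-specific O_NOTRANS flag gives you the "none" extreme; combined
with file_get_translator(), you can then inspect what is attached to
the untranslated node, which is the basic building block a selective
implementation would need. A small sketch (signatures from memory):

    #include <fcntl.h>
    #include <stdio.h>
    #include <hurd.h>
    #include <hurd/fs.h>

    int
    main (void)
    {
      /* Open the node *without* following any translators...  */
      int fd = open ("file", O_RDONLY | O_NOTRANS);
      if (fd >= 0)
        {
          char *trans = NULL;
          mach_msg_type_number_t len = 0;

          /* ...and ask for its passive translator record, much as
             showtrans does.  The record is a NUL-separated argv.  */
          if (! file_get_translator (getdport (fd), &trans, &len))
            printf ("passive translator: %.*s\n", (int) len, trans);
        }
      return 0;
    }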

> Is showtrans with its single file_get_translator call relevant in this
> case?..

"showtrans" only lists passive translators. "fsysopts" is used to query
active ones. Checking its implementation should also be helpful.

> Actually, I should start with the question: should the proxy create
> its own stacks of translators or just stack active translators?..

The dynamic translators specified through the special file name suffixes
should be attached (as active translators) to the proxy node.
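
In C terms, attaching an active translator boils down to a
file_set_translator() call on the proxy node, passing the translator's
control port; roughly like this (a sketch from memory, with error
handling and the translator startup itself -- normally done through
libfshelp -- omitted):

    #include <hurd.h>
    #include <hurd/fs.h>

    /* PROXY is a port to the proxy node; CONTROL is the fsys control
       port of the already-running dynamic translator.  */
    error_t
    attach_active (file_t proxy, fsys_t control)
    {
      return file_set_translator (proxy,
                                  0,             /* no passive change */
                                  FS_TRANS_SET,  /* set the active one */
                                  0,             /* no kill flags */
                                  "", 0,         /* empty passive record */
                                  control);
    }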

The proxy node itself, as explained in the previous mail, probably
sometimes needs to mirror the original node ignoring any (static)
translators attached to it, so that things like "-gunzip" can explicitly
check the static translators, and follow them selectively; while in
other cases (like "gunzip"), it needs to mirror the node already
translated by any static translators present. ("gunzip" knows nothing
about the proxying, and thus, unlike "-gunzip", won't follow the
static translators explicitly.)

> I think fetching the list of translators will be needed when the proxy
> is told to ignore some of the static translators.

Yes, that's my understanding as well.

> When you say scripts, do you mean bash scripts or other scripting
> languages?

I was actually thinking of shell scripts here. Other languages could be
used as well, but probably would be overkill for a simple launcher :-)

Of course, instead of a simple launcher, you could implement some more
complicated scripts, if it seems useful...

> What really leaves me confused is how the interaction between the
> proxy filesystem 

Re: Revision control

2008-06-02 Thread olafBuddenhagen
Hi,

On Sun, Jun 01, 2008 at 11:24:16PM +0200, Arne Babenhauserheide wrote:

> [...] but I'd prefer to see the Hurd development more accessible, and
> even though there are many good candidates, Mercurial is best suited
> for that, at least in my opinion. 

How accessible it is depends first and foremost on what most people
know. That probably leaves Mercurial and git as the only serious
contenders...

Among those two, git is the one I know myself. My decision to learn
git rather than Mercurial was influenced mostly by Xorg's (or in fact
Keith Packard's) decision, but also by other things, like the fact that
Savannah offers git hosting but no Mercurial hosting.

Once I actually started learning git, I quickly became convinced that I
had made the right choice. The thing I really like about git is that, unlike
almost all other software available today, it really follows the UNIX
philosophy, both in concept and in implementation.

git, like UNIX, is based on a couple of very simple yet powerful ideas,
and a set of basic tools doing the work. On top of that, you get a set
of high-level scripts to easily perform all typical operations; but the
internals are not hidden behind a limiting interface -- once you
understand how things work, you can use the low-level tools to do just
about anything you can imagine.

Not knowing Mercurial, I can't really judge. But I have a very hard time
believing that any other system comes even *close* to the power and
flexibility of git... git is not a shiny toy with idiot-proof UI; it's a
powerful tool for serious users.

-antrik-




Re: GSoC: the plan for the project network virtualization

2008-06-02 Thread olafBuddenhagen
Hi,

On Sat, May 31, 2008 at 04:17:37PM +0200, Zheng Da wrote:

> step 1. A mechanism for different pfinet servers to communicate with each
> other:
> There are at least two possible solutions to reach the goal: the BPF
> translator and the hypervisor.

Right. You never followed up on the discussion of which variant would be
best to pursue... This is actually a bit disappointing -- I was counting
on these questions being resolved before the beginning of the summer
session :-(

> I wonder how the pfinet server tells the BPF translator what to
> filter? Do we modify the code of pfinet to send the filter rule to the
> BPF translator?

Yes.

> step 2. A mechanism for the user to run several pfinets together. Do I
> need to use a filesystem proxy to do it?

No.

> As I see it, /servers/socket/1 is used for local, and /servers/socket/2
> is used for inet. Does it mean I can create /servers/socket/3... for
> other pfinets?

No. These are for different protocol families.

/servers is for the default system servers. What you want to do is to
use private servers -- they are not supposed to be in /servers. You can
basically put them anywhere -- most likely in the home directory, or
perhaps somewhere in /var or in /tmp...

There is really nothing special you need to do to be able to run
multiple pfinets. It's already possible right now -- there is just no
mechanism to tell a process to use an alternate server instead of the
default one... (You could in fact do it using chroot, but that's a bit
of overkill :-) )

> step 3. Override the default server in the system, using some approach
> mentioned in the Server Overriding Mechanism project.
> Maybe I can try the first approach first.

That's fine for the start.

> I wonder how much work it could be? And how do I start?

Well, I must remind you that you were supposed to provide a schedule in
your application... I accepted your not doing so, because the task
description was very unspecific, and you were not really in a position
to provide a schedule without discussing things first; but now that you
have an idea of what you will actually be working on, you should be able
to come up with a rough schedule...

I suggest you start with the server overriding mechanism, as it's the
easier part, and there is less to discuss...

-antrik-