Hi,

On Fri, Aug 24, 2007 at 02:14:38PM +0200, Carl Fredrik Hammar wrote:
> I'm not familiar with `kitten', and it didn't turn up in any of the
> searches I made. But I like its name :-), could you give me some
> pointers to where it's described?

I don't think it is described anywhere. Maybe it was mentioned on the
gnu-system-discuss list at some point two years ago or so; but otherwise,
it has only been discussed on IRC.

The idea stems from the fact that in the "official" GNU system, software
installation would be handled by stowfs (a variant of unionfs). Similar to
the existing stow program, this allows installing each package in its own
directory tree; the files are then linked into common directories like
/bin, /lib etc., so it looks like a traditional UNIX layout. However,
unlike traditional stow, stowfs doesn't use hook scripts to update static
links, but instead creates the virtual directories on the fly, making use
of the Hurd's extensible filesystems. (Someone pointed out that nowadays
this would probably be possible with Linux as well, using inotify and
unionfs+unionctl -- though less elegant of course...)

One problem is: how to handle files in /etc? While many belong exclusively
to specific packages, there are some config files that need to be
customized by various installed programs. Alfred M. Szmidt suggested that
in many cases it suffices to merge snippets provided by the various
packages into a single file. For these cases a generic translator could be
used, which presents a file that is a concatenation of all input files. As
this is related to what the "cat" utility does, he named it "kitten". (I
sketch the core of the idea further down in this mail.)

Soeren Schulze at some point actually implemented it, along with rollover.
(Though I'm not sure how complete his implementations are.) However, as
there was no progress with the GNU system, and thus his translators never
actually got used, he lost interest. No idea whether the implementations
are still available somewhere.

Anyways, while implementing these, we realized that the current mechanisms
are not really suitable for translators that remap the content of
underlying files: they are very hard to implement; they have semantic
problems; and they are pretty inefficient, especially when stacking
several of them.

Quite recently it occurred to me that all these problems could be
addressed by implementing them using stores (or some similar concept):
stores can provide a generic framework for managing the mapping, where
only the actual functionality of the specific translator has to be filled
in; they have a more powerful interface, which allows expressing more
semantics than the standard file interface when accessed by programs aware
of it, or between the layers of a stack; and they allow avoiding a lot of
unnecessary overhead when stacking.

> Perhaps instead of packages, we should provide a translator for every
> package, e.g. `audioio', `storeio' and `netio'.

Seems a sane approach for now.

> One translator per type would definitely be over-kill. ;-)

I'm actually not so sure about that. It probably would be overkill for the
standard packages with their more or less fixed libraries of types. What I
did not mention yet is that I see channels in a much broader context.

For quite a while now, I have been thinking about mechanisms for
transparently (to the user) optimising translator stacks, to avoid
excessive overhead in modular applications based on combining translators.
These considerations have been fueled by the observations made with the
implementation of kitten and rollover.
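To make it concrete what kind of translator we are talking about, here is
a minimal sketch of the read path of a kitten-like concatenation. This is
only my own illustration, not Soeren's code; the function name and
parameters are made up, and a real translator would of course serve this
through trivfs/netfs instead of opening the underlying files on every
request:

#include <fcntl.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

/* Read up to AMOUNT bytes at OFFSET of the virtual file formed by
   concatenating the NFILES files named in FILES.  Return the number of
   bytes stored into BUF, or -1 on error.  */
ssize_t
kitten_read (char *const *files, size_t nfiles,
             off_t offset, void *buf, size_t amount)
{
  size_t done = 0;

  for (size_t i = 0; i < nfiles && done < amount; i++)
    {
      struct stat st;
      int fd = open (files[i], O_RDONLY);

      if (fd < 0 || fstat (fd, &st) < 0)
        {
          if (fd >= 0)
            close (fd);
          return -1;
        }

      if (offset >= st.st_size)
        /* This piece lies entirely before the requested range.  */
        offset -= st.st_size;
      else
        {
          ssize_t n = pread (fd, (char *) buf + done, amount - done, offset);

          if (n < 0)
            {
              close (fd);
              return -1;
            }
          done += n;
          offset = 0;   /* Later pieces are read from their beginning.  */
        }
      close (fd);
    }

  return done;
}

Even in this read-only form you can see where the inefficiency comes from:
every layer of a stack has to redo this kind of scanning and copying
against the file interface of the layer below.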
I was aware of stores as a mechanism for optimising certain translator
stacks; but these only handle one pretty specific case. (And what's more,
the one where it's probably least important -- stores usually operate on
extremely slow backends, so the performance of the translators doesn't
really make any difference...) I'd prefer a generic solution.

I initially didn't like the idea of channels at all: on the one hand it
seemed like just another specific solution, and on the other hand,
throwing together such diverse things as audio streams and network stacks
seemed pretty messy to me. Only when I realized that the channel concept
could be generic enough to handle *all* kinds of translators did I start
liking it: this could actually turn into the generic solution I was
looking for!

However, for that it's necessary to handle things a bit differently. To
transparently optimise all kinds of translators, possibly coming from
different sources (a user or application can bring its own translators!),
a central library of types won't do.

This is where my own ideas come in. What I envision is a mechanism where
each and every translator, instead of handling all client requests, could
be asked to provide a module implementing its functionality that can be
promoted to run inside the client. I think with the channel mechanisms in
place, this shouldn't be too hard to implement... Basically it just means
that instead of loading the modules from a library, libchannel would get
them directly from the translators. (See the PS below for a sketch of what
I have in mind.)

(Once the translator code runs inside the client, one could even imagine
using something like LLVM to do inter-module optimisations -- I don't know
how well this would work out in practice, but at least in theory the
overhead could be reduced so far that we achieve similar efficiency as if
the code had been monolithic in the first place...)

This promoting would happen almost totally transparently to the user.
Thus, most of the time the user wouldn't even need to care that channels
are involved -- he would just stack arbitrary translators, and the channel
mechanisms would do the magic in the background.

In such a scenario, it actually wouldn't make sense anymore to have a
generic channelio translator (or one per package) that can invoke all the
types from some central library. Rather, one would have each type embedded
in its own translator. The user then would just stack these translators,
typically each one creating a single layer.

-antrik-
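PS: To make the "promotion" idea a bit more tangible, here is a purely
hypothetical sketch of what a promotable module could look like. None of
these names come from your actual libchannel code; I'm only trying to show
the shape of it: each layer is a small set of hooks that calls down to the
layer below, and once a translator hands such a module to the client-side
library, the whole stack runs inside the client's address space, with no
RPC per layer.

#define _GNU_SOURCE             /* for error_t in <errno.h> on GNU */
#include <errno.h>
#include <stddef.h>
#include <sys/types.h>

struct channel;

/* The hooks one layer provides (made-up names, for illustration).  */
struct channel_module
{
  error_t (*read) (struct channel *c, off_t offset, void *buf, size_t *len);
  error_t (*write) (struct channel *c, off_t offset,
                    const void *buf, size_t *len);
};

/* One layer of a stack, as instantiated inside the client.  */
struct channel
{
  const struct channel_module *ops;   /* This layer's hooks.  */
  struct channel *below;              /* Next layer down; the bottom layer
                                         would wrap the actual backend.  */
  void *hook;                         /* Per-layer private state.  */
};

/* Reading from the top of the stack is just nested function calls, all
   within the client.  */
static error_t
channel_read (struct channel *c, off_t offset, void *buf, size_t *len)
{
  return c->ops->read (c, offset, buf, len);
}

/* Example module: a trivial pass-through layer.  A real module (say, a
   kitten-style concatenation) would remap offsets or transform the data
   on its way through.  */
static error_t
passthrough_read (struct channel *c, off_t offset, void *buf, size_t *len)
{
  return channel_read (c->below, offset, buf, len);
}

static const struct channel_module passthrough_module =
  { .read = passthrough_read };

Promoting would then just mean that libchannel obtains such a module (for
instance as a loadable object) from the translator itself, rather than
picking it from a central library.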