Hi, I am afraid I have to come back to guardians once more.
I stumbled over our peculiar guardians while trying to improve the weak hash table marking algorithm so that it will properly deal with references from the non-weak part to the weak part. (For example, referencing a weak key from a non-weak value should not prevent that entry from being dropped once there are no other references to the key or value. A small sketch of this case is at the end of this message.)

There are some significant differences between our guardian semantics and the ones described in the Guardian paper. (Not "The Guardian", this one: ftp://ftp.cs.indiana.edu/pub/scheme-repository/doc/pubs/guardians.ps.gz) I think we should go back to the original guardian semantics since they are everything you need, much simpler, and better known.

The differences, afaik, are these:

- With our guardians and by default, you can only put a single object into a single guardian. With the original guardians, you can put a single object into as many guardians, as many times, as you want, and you will get it back from each guardian that many times. We have the option of 'sharing' guardians that will accept an object that is already guarded (but they are not the default). When an object is in a default 'greedy' guardian and a sharing one, it will be returned from the greedy one first.

- Our guardians make guarantees about the order in which objects are returned from them, while the original guardians do not. Our guardians only return objects that are not referenced by other objects that are still in a guardian. As a consequence, cycles cannot be handled: they will be dropped from a guardian and not returned at all. An object will be dropped as soon as it is part of any cycle, not just a cycle formed only by guarded objects.

I have the impression that these differences are mostly motivated by seeing guardians as being only good for finalization (as in C++ destructors). You normally want to finalize objects in a certain order, and it only makes sense to finalize them once. I think guardians should not try to solve these problems of finalization and lose their original generality for that.

If a program puts an object into more than one guardian, we should assume that there is a good reason for that. Maybe one guardian is for finalizing and the other is for updating some statistics. Therefore, and to be compatible with the original semantics by default, guardians should not be made greedy by default.

Of course, failing to return objects with cyclic dependencies from a guardian is very suboptimal, even more so since we already fail on any cycle, even if only one object of that cycle is guarded.

In fact, I don't see a reason to provide greedy guardians at all. Why should you care whether someone else is also interested in learning about the death of some object? In my view, it should really be a rare occasion when putting an object into two guardians is an error.

When you want to make sure that object FOO is returned from a guardian before object BAR is, you can put BAR into a global data structure to keep it alive. When FOO has died, you can remove BAR from that structure and let it die as well (see the last sketch at the end of this message). This explicit approach allows much more control over the order of actions than letting the guardians do it implicitly by following _all_ references. Unwanted cycles can be avoided easily, for example.

I therefore propose to go back to the simple guardian semantics as described in the paper by Dybvig et al. That would be an incompatible change, and so I will try to find people who are relying on the current semantics.
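To make the weak hash table case from the beginning concrete, here is an untested sketch; the table size and the key/value names are made up, and when exactly the entry disappears of course depends on when the collector actually runs:

    (define table (make-weak-key-hash-table 31))

    (let ((key (list 'some-key)))
      ;; The value refers back to the key.  That reference comes from
      ;; the non-weak part of the entry and should not, by itself,
      ;; keep the entry alive.
      (hash-set! table key (cons 'value-that-references key)))

    ;; Once KEY is otherwise unreachable and a GC has run, the entry
    ;; should eventually be gone.
    (gc)
    (hash-fold (lambda (k v n) (+ n 1)) 0 table)  ; expected to drop to 0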
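To illustrate the first difference in the list above, here is a rough sketch of the original semantics using make-guardian; the two guardian names are invented for the example:

    (define finalizer-guardian (make-guardian))
    (define stats-guardian     (make-guardian))

    (let ((obj (list 'resource)))
      (finalizer-guardian obj)   ; registered once here ...
      (stats-guardian obj))      ; ... and once more, in a second guardian

    (gc)

    ;; Under the original semantics each registration is independent:
    ;; once OBJ is unreachable, both guardians eventually hand it back.
    ;; Called with no argument, a guardian returns a dead object or #f.
    (finalizer-guardian)   ; eventually => (resource)
    (stats-guardian)       ; eventually => (resource), too

With our current greedy default, this double registration is exactly what is not accepted, and that is the restriction I would like to drop.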
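And finally a rough sketch of the explicit ordering idiom from the FOO/BAR paragraph; the names guard-before, pending and object-died are made up for the example. The point is only that BAR stays reachable through an ordinary data structure until FOO has come back from the guardian:

    (define guardian (make-guardian))
    (define pending '())            ; alist: name -> object kept alive

    (define (guard-before name foo bar)
      ;; FOO should be dealt with before BAR, so BAR is parked in
      ;; PENDING under NAME instead of relying on guardian ordering.
      (guardian foo)
      (guardian bar)
      (set! pending (acons name bar pending)))

    (define (object-died name)
      ;; Call this after the guardian has returned the object known
      ;; under NAME; dropping the entry lets BAR become unreachable
      ;; and be returned from the guardian in its turn.
      (set! pending (assq-remove! pending name)))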