Tim Bunce wrote:

> On Thu, Aug 28, 2003 at 07:26:25PM -0400, Dan Sugalski wrote:
>
> > How does it work? Simple. When a watched resource does what we're
> > watching for (it changes, an entry is deleted, an entry is
> > added [...]
>
> Only after the action being watched is performed I presume.
It's also useful to have notifications before an operation--or even during the operation, if the notification is an end unto itself. Typically, notifications are named with future tense (DogWillPoop) and past tense (DogDidPoop or DogPooped) to indicate whether they precede or follow the action. Sometimes, an object will actually support both.

Consider: Your neighbors are complaining of poor response time leading to residual odor. To help improve the situation, you add a DogWillPoop notification handler so that you can cache a pooper scooper handle while the dog poops asynchronously. But you still need to wait for the DogDidPoop notification before employing the pooper scooper, or else you run the risk of a race condition with the dog. If you win the race, the residual odor problem will only worsen and the next homeowners association meeting will not end favorably for you. So you need both notifications.

Speaking generally, the only safe statement here is that a notification occurs when the subject issues it.

> > we post an event to the event queue. When that
> > event is processed, whatever notification routines were registered
> > are run. Very simple.
>
> The async nature of this approach needs to be kept in mind. It will
> often be important that the 'thread' handling the event queue runs
> at a high priority. (Perhaps it would help to have a simple flag
> on each watch to say if a yield() should be performed after posting
> an event for that watch.)

Parrot uses high-priority event dispatch for signal handling. Imagine that notifications were high-priority events as well, and that an event dequeue were forced just after the notification event was enqueued. Now optimize that: don't bother with the enqueue-dequeue round trip; simply feed the event straight to the handler. So the event dispatch mechanics could be leveraged, but the notification could in fact be dispatched synchronously.

Common notification vocabulary:

- Subject: An object which is a source of notifications.

- Notification: The combination of a subject and a notification identifier (name, number--implementation detail).

- Observer: An object which asks to be informed when a notification fires. (Note: Subjects are generally *much* more common than observers.)

- Notification center: An object which maps {subject, notification name} [that is, notifications] onto zero or more {observer, handler} tuples, and handles dispatching notifications. The NC needs to know both how to remove all notifications for a given subject, and how to remove all handlers for a given observer.

Since observers are generally uncommon, it's often cheaper at a systems level to have one huge master notification center than it is to even reserve a single list head in every subject for chaining notifications off of it. Of course, such a global notification center has to be threadsafe.
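To pin down the shape being described, here's a rough sketch in C--a toy, not actual or proposed Parrot API, and all of the names (nc_observe, nc_post, nc_remove_subject, and so on) are invented for illustration. It shows one global notification center mapping {subject, notification name} onto {observer, handler} tuples, dispatching synchronously as discussed above, and supporting removal by subject or by observer. A real implementation would hash instead of scanning a linked list, and would need locking to be threadsafe:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    typedef void (*nc_handler_t)(void *subject, const char *name,
                                 void *observer);

    typedef struct nc_entry {
        void            *subject;
        const char      *name;      /* notification identifier */
        void            *observer;
        nc_handler_t     handler;
        struct nc_entry *next;
    } nc_entry;

    /* One big master notification center, rather than a list head
     * reserved in every subject. */
    static nc_entry *nc_entries = NULL;

    /* Observer asks to be told whenever (subject, name) fires. */
    void nc_observe(void *subject, const char *name,
                    void *observer, nc_handler_t handler)
    {
        nc_entry *e = malloc(sizeof *e);
        e->subject  = subject;
        e->name     = name;
        e->observer = observer;
        e->handler  = handler;
        e->next     = nc_entries;
        nc_entries  = e;
    }

    /* Subject issues a notification: call matching handlers directly,
     * skipping the enqueue-dequeue round trip. Handlers must not
     * modify the table during dispatch in this naive version. */
    void nc_post(void *subject, const char *name)
    {
        nc_entry *e;
        for (e = nc_entries; e; e = e->next)
            if (e->subject == subject && strcmp(e->name, name) == 0)
                e->handler(subject, name, e->observer);
    }

    /* Cull entries when a subject or an observer dies, so the table
     * doesn't grow without bound in a long-running process. */
    static void nc_remove_matching(void *subject, void *observer)
    {
        nc_entry **p = &nc_entries;
        while (*p) {
            nc_entry *e = *p;
            if ((subject && e->subject == subject) ||
                (observer && e->observer == observer)) {
                *p = e->next;
                free(e);
            }
            else {
                p = &e->next;
            }
        }
    }
    void nc_remove_subject(void *subject)   { nc_remove_matching(subject, NULL); }
    void nc_remove_observer(void *observer) { nc_remove_matching(NULL, observer); }

    /* Usage, following the example above. */
    static void scoop(void *subject, const char *name, void *observer)
    {
        (void)subject;
        printf("%s: handling %s\n", (const char *)observer, name);
    }

    int main(void)
    {
        static char dog[] = "dog", neighbor[] = "neighbor";
        nc_observe(dog, "DogDidPoop", neighbor, scoop);
        nc_post(dog, "DogWillPoop");  /* no observers registered; nothing runs */
        nc_post(dog, "DogDidPoop");   /* scoop() is called synchronously */
        nc_remove_observer(neighbor);
        return 0;
    }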
Now, what the handlers for notifications issued during DoD would usefully be able to do is another question--probably more what Tim was getting at than what I responded to. It strikes me that the handlers would be unable to do much more than nullify their weak pointer, and would have to be written in C:

- If the notification handler tried to allocate an object, it could invoke recursive DoD. That's bad, right? Parrot code pretty much can't run without allocating memory, so if the DoD run was triggered by memory exhaustion, then parrot just screwed itself with full generality.

- Dereferencing another weak reference from that code would be dangerous, too: Weak ref A and weak ref B are found to be invalid during the same DoD run. Weak ref A's "subject died" notification fires first, and one of its handlers happens to dereference weak ref B. Is that reference guaranteed valid until DoD completes? What about during recursive DoD? Or in the case of memory exhaustion?

Seems to me like weak references need to be nulled out in one big atomic sweep, primarily for the first reason. Afterwards, once GC has run, death notifications (now an entirely separate feature) could fire. (But how to identify an object which has been GC'd? Java always encapsulates weak references within a WeakRef instance, so that instance can serve as a surrogate identity for the collected object...)

I also have to wonder at adding the expense of any of the following to DoD:

1. Enqueuing an event for each dying object.
2. Adding space for a list head to every object so that a notification observer list can be built.
3. Checking a notification center for every dying object.

Tangentially: Notification centers are themselves an example of why dying-object notifications are useful. Both subject and observer entries need to be culled, or the NC will consume all memory in a long-running process. Death notifications allow the stale entries to be removed in O(1) time rather than found by an O(N) sweep.

--

Gordon Henriksen
IT Manager
ICLUBcentral Inc.
[EMAIL PROTECTED]