Prakash Sangappa wrote:
To activate monitoring (watching) of a file, the file needs to be registered.
Upon delivering an event, the file monitor is disabled. It needs to be
re-registered to reactivate the monitor and receive further events.

Why is this?  Usually apps that monitor a file want to do it on their
own terms, because they have the state they need to determine whether
or not to watch a file.  This constraint seems like it will just be a
burden on programmers.

This behavior aids proper multi-threaded programming. The overhead
is just to re-register, which re-enables the file monitor. Note that
with the current approach events don't get queued up. The queuing
issues have been discussed before. The main aim is to keep the kernel
implementation simple without exposing the system to scalability
problems that have the potential to become denial-of-service (DoS)
attacks.
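To make the re-registration overhead concrete, here is a minimal C
sketch of the one-shot model. It assumes the PORT_SOURCE_FILE event
source, struct file_obj, and FILE_MODIFIED flag being discussed in
this thread; the exact names may differ in the final interface.

#include <port.h>
#include <sys/stat.h>
#include <sys/types.h>

/* Record the file's current timestamps in the file_obj. */
static int
snapshot_file(struct file_obj *fobj, const char *path)
{
        struct stat sb;

        if (stat(path, &sb) < 0)
                return (-1);
        fobj->fo_name = (char *)path;
        fobj->fo_atime = sb.st_atim;
        fobj->fo_mtime = sb.st_mtim;
        fobj->fo_ctime = sb.st_ctim;
        return (0);
}

/*
 * Arm (or re-arm) the monitor.  Delivering a single event disables it
 * again, so "re-registration" is just one more call to this function.
 */
static int
arm_watch(int port, struct file_obj *fobj, const char *path)
{
        return (port_associate(port, PORT_SOURCE_FILE, (uintptr_t)fobj,
            FILE_MODIFIED, (void *)path));
}

The port itself comes from port_create(), just as for the other event
port sources.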

To further expand on Prakash's comments, there are two principal
reasons why we do not wish to deliver a stream of notifications once
a process has registered interest in a file:

1) As Prakash points out above, imagine an application doing successive
updates of a file.  If another multi-threaded process is waiting for
notification events on that file, each application write will result
in another event being sent to the event port. If there are multiple
threads waiting for notification events on that port, all those threads
will be dispatched to handle the successive application writes to the
file. This is not likely to be very useful behavior; the developer of
the MT monitoring process would need to design around it carefully.

2) The rate at which applications can write to files is likely
much higher than the rate at which a monitoring process can do
something useful with that information.  If a single registration
were to result in a stream of multiple events, either the kernel
would need to throw away events (and deliver notification of missed
events) or the kernel would need to throttle writers to slow event
generation.  The latter is unacceptable, I think; as a result
the monitoring app would need to cope with missed events.  This would
result in two logic paths in the monitoring app, one to handle
the "normal" case and one to handle missed events; this seems
undesirable.  To appreciate the difference, try pseudo-coding
the example Prakash sent out yesterday for a multi-threaded process
watching multiple files, assuming that multiple events are generated
from a single registration.  A key idea here is that the monitoring app
always goes through the same code paths, regardless of how quickly
the files are modified or how slowly or quickly the monitoring app
processes the files.

As a mental model, I prefer to think of file events as
one-shots; passing in the stat data collected by the monitoring
app before processing a file ensures that 1) no modifications are
lost and 2) as many modifications as possible are combined into
a single event.
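
As a rough illustration of that model, the watching loop below builds
on the sketch earlier in this message (same assumed names), with
process_file() as a hypothetical stand-in for the application's real
work.  The loop follows the same code path whether the file was
written once or a thousand times while it was being processed:

#include <stddef.h>

extern void process_file(const char *path);     /* hypothetical */

/* Per-file monitoring loop built on snapshot_file()/arm_watch() above. */
static void
monitor_file(int port, const char *path)
{
        struct file_obj fobj = { 0 };
        port_event_t pe;

        if (snapshot_file(&fobj, path) < 0 ||
            arm_watch(port, &fobj, path) < 0)
                return;

        for (;;) {
                /*
                 * Block for the single pending event; delivery disables
                 * the monitor, so further writes cannot flood the port.
                 */
                if (port_get(port, &pe, NULL) < 0)
                        break;

                /*
                 * Snapshot the timestamps before processing, so writes
                 * that land while we work still produce one new event.
                 */
                if (snapshot_file(&fobj, path) < 0)
                        break;

                process_file(path);

                /*
                 * Re-arm; all modifications since the snapshot are
                 * reported as a single combined event.
                 */
                if (arm_watch(port, &fobj, path) < 0)
                        break;
        }
}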

- Bart

--
Bart Smaalders                  Solaris Kernel Performance
[EMAIL PROTECTED]               http://blogs.sun.com/barts