I placed the interfaces for the Event Queues in the
org.apache.avalon.excalibur.event package.  The more I look at it, the
separation of the types of queues makes too many different derivations.
I think it might be better to merge the Source and Sink types.  At first,
I was thinking that blocking concerns could be applied to the Queue as a
whole (i.e. the Queue is Blocking or NonBlocking by implementation).  A
Blocking Queue would have a standard timeout set for _all_ dequeue
operations.  In the end, it still may be better to do it that way (thus
limiting the derivations of Queues to only three instead of six).  The
idea I had was that blocking is a system concern, not a stage concern.
I added the BlockingSink interface after I thought about it some more,
thinking that maybe the system could identify the blocking time period
it was willing to wait for new events.
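
To make the merge concrete, here is a rough sketch of what the combined
contract could look like.  The method names are only my illustration of
the idea, not settled API:

interface Source
{
    Object dequeue();             // next event, or null if the queue is empty
    Object[] dequeue( int num );  // pull up to num events in one call
    int size();                   // events currently waiting
}

interface Sink
{
    boolean enqueue( Object element );  // false if the queue refused the event
}

interface Queue extends Source, Sink
{
}

Blocking would then be a property of whichever implementation backs the
Queue, not something expressed in the interfaces themselves.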

In the end, I have come around full circle to my original way of
thinking.  Blocking should be an implementation concern with one set
timeout for the entire queue.  One reason is its intended use.  If you
recall my original posting where I described what a Stage was, you have
a background thread that dequeues events from one queue and gives them
to an event handler.  The event handler in turn works with the Stage,
which decides what type of events it will pass to the next queue, or
which modifications it will perform on the event.  In this use case,
blocking queues can actually be a detriment: they artificially lock a
thread for a period of time, keeping it from processing events from
another queue in the meantime.
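
To illustrate the problem, a single worker thread that services several
stages looks roughly like the sketch below (the helper names are mine,
not part of the proposal).  A blocking dequeue on the first queue would
starve all the others:

// Illustrative only: one thread draining several stages in turn.
while ( running )
{
    for ( int i = 0; i < stages.length; i++ )
    {
        // A non-blocking dequeue returns immediately when nothing is
        // waiting, so the thread can move on to the next stage's queue.
        Object[] events = stages[ i ].getEventQueue().dequeue( MAX_EVENTS );

        if ( events != null && events.length > 0 )
        {
            stages[ i ].getEventHandler().handleEvents( events );
        }
    }
}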

In that spirit, I will remove the BlockingQueue and BlockingSink
interfaces, since blocking is an implementation issue and not part of
the preferred usage anyway.

As to Regular Queues vs. Lossy Queues, I had the same dichotomy of
thought.  Just because the SEDA architecture provides it does not mean
we have to implement it in the same manner.  In fact, the concept of a
Lossy Queue should be an implementation detail.  To this end, I added
the boolean return value to the enqueue() commands so that the Stage
would still have to handle events that were rejected because the queue
was full.  Again, queue length can be finite or unbounded.  I am still
open on this issue: should the lossiness of a Queue be implementation
dependent or specifically mandated by interface?
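
For illustration, the Stage side of that contract might read like the
following, where handleRejectedEvent() stands in for whatever policy the
Stage chooses:

// If the queue is bounded and full, enqueue() returns false rather than
// silently dropping the event, and the Stage decides what to do with it.
if ( !nextQueue.enqueue( event ) )
{
    handleRejectedEvent( event );  // retry later, reroute, or drop deliberately
}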

Lastly, the TransactionalSource interface was an attempt to separate
those concerns, as not all stages will require transactional queues.
However, using the JDBC Connection object as inspiration, I believe the
Transactional queue interface should be merged into the base queue.  I
implemented transactional queueing more in line with the JDBC Connection
in that the prepareEnqueue() method returns a PreparedEnqueue object
with the commit() and abort() methods.  Since transactional queueing is
required to have both the commit and abort methods succeed once a
PreparedEnqueue object is returned, they do not throw exceptions (thus
making transactions easier for the programmer to write).
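
Sketched out, the contract I have in mind looks roughly like this (the
signatures are illustrative, not final):

// prepareEnqueue() reserves room for the events and hands back a
// transaction handle; once that handle exists, both outcomes must succeed.
PreparedEnqueue prepareEnqueue( Object[] events );

interface PreparedEnqueue
{
    void commit();  // publish the prepared events to the queue; never fails
    void abort();   // discard the prepared events; never fails
}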

I find the Connection approach to transactional queues superior to the
SEDA implementation, as it allows for a cleaner API.  For example,
compare the two approaches:


SEDA:

Object key = sink.enqueue_prepare( events ); // SEDA queues start with a sink ?!?
if ( should_commit )
{
    sink.enqueue_commit( key ); // Throws BadKeyException
}
else
{
    sink.enqueue_abort( key ); // Throws BadKeyException
}


Avalon:

PreparedEnqueue transaction = source.prepareEnqueue( events ); // Avalon queues start with a source.
if ( should_commit )
{
    transaction.commit(); // Never have a bad key!
}
else
{
    transaction.abort(); // Never have a bad key!
}


I would like to have reactions on the Blocking and EventDropping issues, though.

"They that give up essential liberty to obtain a little temporary safety
 deserve neither liberty nor safety."
                - Benjamin Franklin

