yep - I was able to reproduce it - this is now fixed in trunk, but too late for the 5.1 release

thanks,

Rob

On 16 Apr 2008, at 13:54, yaussy wrote:


Rob,

While I don't yet have a solid / consistent test case for you, our
application, using a particular test scenario the app guys have, can
reproduce the problem every time. If you have anything you want me to try
out, let me know.

I'm still working with my test code to get something more reproducible,
while adding some debug code to IndexManager.

Kevin


yaussy wrote:

Still working on a consistent test case. But I've run into something along the way, and I'm not sure if it's related to the original exception I've logged here. I was able to get the problem to happen when I expanded my test case to a publisher of 6 topics, at 100+ messages/sec/topic, and a consumer that consumes all those topics. I have this configured to use a connection for each topic, for both the publisher and the consumer. I've only been able to get the original exception to happen once, but I've more consistently hit a problem wherein no exception is thrown, but the broker just stops forwarding events for one of the topics. After this point, the publisher continues, but all events for this one topic are queued to disk.
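
For reference, the publisher side of that topology looks roughly like the sketch below. This is not the actual test code - the broker URL, topic names, and send rate are stand-ins:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;

public class TopicPublisherSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        // One connection, session and producer per topic, mirroring the failing setup.
        for (int i = 0; i < 6; i++) {
            Connection connection = factory.createConnection();
            connection.start();
            final Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            final MessageProducer producer =
                session.createProducer(session.createTopic("TEST.TOPIC." + i)); // stand-in name
            producer.setDeliveryMode(DeliveryMode.PERSISTENT);
            new Thread(new Runnable() {
                public void run() {
                    try {
                        while (true) {
                            producer.send(session.createTextMessage("payload"));
                            Thread.sleep(10); // roughly 100+ messages/sec per topic
                        }
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }).start();
        }
    }
}

The consumer is set up the same way - one connection per topic, with a durable subscriber on each.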

I'm trying to see if I can reduce this test case down right now. But, maybe you could give this a go with the supplied consumer / supplier test
code?

Kevin


rajdavies wrote:

ok - thx for checking! - let me know when you have a test case

On 15 Apr 2008, at 15:06, yaussy wrote:


Bad news - the problem still happens with the "5.2" snapshot dated 4/13. I did not
see any more 5.1 snapshots, but saw the 5.2 directory. The archive still has
5.1 in the path names, so I figured this is what I should take.

I'm still working with it and am trying to see if I can get any more debug
information. I have not been able to reproduce it with a test program yet.



yaussy wrote:

Rob,

This is good news - I will try using last night's snapshot.

I started out using AMQMessageStore, but I've never had good performance
luck with AMQ's journal - it always seems to incur a frequent, high CPU
cost as it checkpoints. Our durable message rate requirements are fairly
significant, such as a few hundred events per second to a handful of
consumers on one of our clusters. It has not performed well, whereas
Kaha by itself is excellent.
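
For context, the broker is pointed at Kaha rather than the default AMQ store along these lines - a minimal sketch using the BrokerService API, with the data directory and connector URI as stand-ins for our actual config:

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.store.kahadaptor.KahaPersistenceAdapter;

public class KahaBrokerSketch {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setDataDirectory("activemq-data");                  // stand-in data directory
        // Use the plain Kaha store instead of the default AMQMessageStore / journal.
        broker.setPersistenceAdapter(new KahaPersistenceAdapter());
        broker.addConnector("tcp://localhost:61616");
        broker.start();
    }
}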

Kevin


rajdavies wrote:


On 14 Apr 2008, at 14:07, yaussy wrote:


I have not been able to reproduce this problem outside of our application,
but I'm still trying.

Anyway, I'm using regular Kaha persistence (not AMQMessageStore), and during
a durable topic test I'm eventually getting the following exception in the
AMQBroker, after which the broker stops forwarding events to the consumer.
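
For anyone trying to reproduce this, the consumer side creates a durable subscriber per topic, roughly like the sketch below. The client ID, subscription name, and topic name are stand-ins, not the real test code:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.Session;
import javax.jms.Topic;
import javax.jms.TopicSubscriber;

import org.apache.activemq.ActiveMQConnectionFactory;

public class DurableSubscriberSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.setClientID("durable-test-client");    // client ID is required for durable subs
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("TEST.TOPIC.0"); // stand-in topic name
        // Messages for this subscription are held in the Kaha topic store while the
        // subscriber is slow or offline.
        TopicSubscriber subscriber = session.createDurableSubscriber(topic, "durable-test-sub");
        subscriber.setMessageListener(new MessageListener() {
            public void onMessage(Message message) {
                // consume; AUTO_ACKNOWLEDGE acks on return
            }
        });
    }
}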

Anyone seen this?


<org.apache.activemq.broker.region.cursors.AbstractStoreCursor.fillBatch>
<Thread[ActiveMQ Transport: tcp:///127.0.0.1:43595,4,main]> Failed to fill batch
Stack Trace follows:
java.lang.RuntimeException: Failed to get next index from IndexManager:(index-topic-subs) for offset=7681977, key=(2, 20080006, 47), value=(2, 20080058, 165), previousItem=7681212, nextItem=7682895
    at org.apache.activemq.kaha.impl.index.DiskIndexLinkedList.getNextEntry(DiskIndexLinkedList.java:267)
    at org.apache.activemq.kaha.impl.container.MapContainerImpl.getNext(MapContainerImpl.java:449)
    at org.apache.activemq.store.kahadaptor.TopicSubContainer.getNextEntry(TopicSubContainer.java:95)
    at org.apache.activemq.store.kahadaptor.KahaTopicMessageStore.recoverNextMessages(KahaTopicMessageStore.java:165)
    at org.apache.activemq.store.ProxyTopicMessageStore.recoverNextMessages(ProxyTopicMessageStore.java:97)
    at org.apache.activemq.broker.region.cursors.TopicStorePrefetch.doFillBatch(TopicStorePrefetch.java:107)
    at org.apache.activemq.broker.region.cursors.AbstractStoreCursor.fillBatch(AbstractStoreCursor.java:188)
    at org.apache.activemq.broker.region.cursors.AbstractStoreCursor.hasNext(AbstractStoreCursor.java:104)
    at org.apache.activemq.broker.region.cursors.StoreDurableSubscriberCursor.hasNext(StoreDurableSubscriberCursor.java:210)
    at org.apache.activemq.broker.region.PrefetchSubscription.dispatchPending(PrefetchSubscription.java:479)
    at org.apache.activemq.broker.region.PrefetchSubscription.acknowledge(PrefetchSubscription.java:357)
    at org.apache.activemq.broker.region.AbstractRegion.acknowledge(AbstractRegion.java:349)
    at org.apache.activemq.broker.region.RegionBroker.acknowledge(RegionBroker.java:474)
    at org.apache.activemq.broker.TransactionBroker.acknowledge(TransactionBroker.java:194)
    at org.apache.activemq.broker.BrokerFilter.acknowledge(BrokerFilter.java:73)
    at org.apache.activemq.broker.BrokerFilter.acknowledge(BrokerFilter.java:73)
    at org.apache.activemq.broker.MutableBrokerFilter.acknowledge(MutableBrokerFilter.java:84)
    at org.apache.activemq.broker.TransportConnection.processMessageAck(TransportConnection.java:444)
    at org.apache.activemq.command.MessageAck.visit(MessageAck.java:196)
    at org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:293)
    at org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:181)
    at org.apache.activemq.transport.TransportFilter.onCommand(TransportFilter.java:68)
    at org.apache.activemq.transport.WireFormatNegotiator.onCommand(WireFormatNegotiator.java:143)
    at org.apache.activemq.transport.InactivityMonitor.onCommand(InactivityMonitor.java:206)
    at org.apache.activemq.transport.TransportSupport.doConsume(TransportSupport.java:84)
    at org.apache.activemq.transport.tcp.TcpTransport.doRun(TcpTransport.java:196)
    at org.apache.activemq.transport.tcp.TcpTransport.run(TcpTransport.java:183)
    at java.lang.Thread.run(Thread.java:619)
Caused by: java.io.EOFException
    at java.io.RandomAccessFile.readFully(RandomAccessFile.java:383)
    at java.io.RandomAccessFile.readFully(RandomAccessFile.java:361)
    at org.apache.activemq.kaha.impl.index.StoreIndexReader.readItem(StoreIndexReader.java:46)
    at org.apache.activemq.kaha.impl.index.IndexManager.getIndex(IndexManager.java:66)
    at org.apache.activemq.kaha.impl.index.DiskIndexLinkedList.getNextEntry(DiskIndexLinkedList.java:265)
    ... 27 more



There was a bug fixed in this area last Friday - I would be interested
to know if this is still a problem for you with a later version.
If it is, I'll dig a little deeper.

btw - why are you using Kaha instead of AMQStore?




cheers,

Rob

http://open.iona.com/ - Enterprise Open Integration
http://rajdavies.blogspot.com/














