Yes,
I saw the Splitter problem too.
While looping over the split parts, Camel executes the next processor
definition inside the loop, so the SendProcessor runs once for each
message while the split is still in progress.
We solved it by sending the message to be split to an external splitter
queue, with the final destination queue set in a header.
The transaction then commits when the big message is sent to the splitter
queue, so when the individual messages later arrive at the destination
queue they can find their data in the database.
SplitterQueue:
from("non-xa://SplitterQueue?maxConcurrentConsumers=10")
.split(body())
.recipientList(header(IConstants.HEADER_DEFAULT_SPLITTER_DESTINATION));
An example of sending to the SplitterQueue:
from("non-xa://DoInsertsAndALotOfMessagesQueue?maxConcurrentConsumers=1")
.policy(getPropagationPolicy())
.process(new DoInsertAndPopulateMessage())
.setHeader(HEADER_DEFAULT_SPLITTER_DESTINATION,
           constant("non-xa://MyAfterSplitterQueue"))
.to("non-xa://" + QueueNames.QUEUE_SPLITTER_DEFAULT);
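getPropagationPolicy() is not shown in this thread; it typically returns a SpringTransactionPolicy bean. A hypothetical Spring XML definition, assuming Spring transaction management (the bean ids and the transaction manager name are illustrative, not from the original post):

<bean id="PROPAGATION_REQUIRED"
      class="org.apache.camel.spring.spi.SpringTransactionPolicy">
  <property name="transactionManager" ref="jmsTransactionManager"/>
  <property name="propagationBehaviorName" value="PROPAGATION_REQUIRED"/>
</bean>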
That is two steps instead of the single route:
from("non-xa://DoInsertsAndALotOfMessagesQueue?maxConcurrentConsumers=1")
.policy(getPropagationPolicy())
.process(new DoInsertAndPopulateMessage())
.split(body())
.to("non-xa://MyAfterSplitterQueue");
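The difference in commit timing can be sketched outside Camel. This is a hypothetical plain-Java stand-in for the routes above (SplitterSketch, the events list, and the in-memory queue are all illustrative, not Camel API): the inline split sends each part while the producing transaction is still open, whereas the intermediate queue lets the producer commit once before any part is consumed.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class SplitterSketch {
    static final List<String> events = new ArrayList<>();

    // Inline split: each part is sent while the producing "transaction"
    // is still open, so downstream consumers may not see the DB inserts yet.
    static void inlineSplit(List<String> body) {
        events.add("tx-begin");
        for (String part : body) {
            events.add("send:" + part); // SendProcessor runs per part, pre-commit
        }
        events.add("tx-commit");
    }

    // Two-step: hand the whole body to a splitter queue and commit once;
    // the split happens in a separate consumer, after the commit.
    static void viaSplitterQueue(List<String> body) {
        Queue<List<String>> splitterQueue = new ArrayDeque<>();
        events.add("tx-begin");
        splitterQueue.add(body);
        events.add("tx-commit"); // DB inserts are visible from here on
        List<String> msg = splitterQueue.poll();
        for (String part : msg) {
            events.add("send:" + part); // parts consumed post-commit
        }
    }

    public static void main(String[] args) {
        inlineSplit(List.of("a", "b"));
        viaSplitterQueue(List.of("a", "b"));
        System.out.println(events);
    }
}
```

Either way the parts end up at the destination queue; only the position of the commit relative to the per-part sends changes.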
Regards.
--
View this message in context:
http://camel.465427.n5.nabble.com/Transaction-behaviour-after-ToDefinition-tp4909694p4910841.html
Sent from the Camel - Users mailing list archive at Nabble.com.