into small groups.
However, I'll definitely profile it and compare it to the other workaround of
introducing a thread pool.
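For reference, the thread-pool workaround amounts to handing each completed aggregation's downstream work to a pool, rather than running it on the thread that completed the aggregation. A minimal sketch with plain java.util.concurrent — `handleDownstream` and the group contents are hypothetical illustrations, not the actual route:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of the workaround: downstream work for each completed
// aggregation is submitted to a pool, so it no longer runs on the
// (potentially lock-holding) aggregator thread.
public class ThreadPoolWorkaround {
    static final ExecutorService pool = Executors.newFixedThreadPool(2);

    // Simulated downstream processing for one completed aggregation.
    static int handleDownstream(List<Integer> group) {
        return group.stream().mapToInt(Integer::intValue).sum();
    }

    public static void main(String[] args) throws Exception {
        // Two completed groups can now be processed in parallel.
        Future<Integer> a = pool.submit(() -> handleDownstream(List.of(1, 2, 3)));
        Future<Integer> b = pool.submit(() -> handleDownstream(List.of(4, 5)));
        System.out.println(a.get() + b.get());
        pool.shutdown();
    }
}
```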
--
View this message in context:
http://camel.465427.n5.nabble.com/Aggregator-lock-tp5739692p5740053.html
Sent from the Camel - Users mailing list archive at Nabble.com.
>> sendDownstream() would use the existing code to submit to an
>> executorService, which would be synchronous by default still. That way,
>> behaviour is unchanged except that *downstream processing no longer happens
>> inside the lock*.
>>
>> I'm not, of course, suggesting it's this trivial; there are several
>> complications in getting to this - this is just an outline.
I'm not, of course, suggesting it's this trivial; there are several
complications in getting to this - this is just an outline.

Thanks
Baris.
--
View this message in context:
http://camel.465427.n5.nabble.com/Aggregator-lock-tp5739692p5739771.html
Sent from the Camel - Users mailing list archive at Nabble.com.
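The proposal above (sendDownstream() always going through an executorService that is synchronous by default) can be sketched with plain java.util.concurrent. `sendDownstream`, the `executor` field, and `lastThread` below are hypothetical illustrations of the idea, not Camel code:

```java
import java.util.concurrent.Executor;

// Sketch of the proposal: all downstream dispatch goes through an
// Executor. The default is a direct executor (runs on the calling
// thread), so behaviour is unchanged until a pool is configured,
// at which point downstream work moves off the lock-holding thread.
public class SendDownstreamSketch {
    // Default: synchronous, runs the task on the caller's thread.
    static Executor executor = Runnable::run;

    static String lastThread;

    static void sendDownstream(Runnable downstream) {
        executor.execute(downstream); // default keeps current behaviour
    }

    public static void main(String[] args) {
        sendDownstream(() -> lastThread = Thread.currentThread().getName());
        // With the direct executor, the work ran on the calling thread.
        System.out.println(lastThread.equals(Thread.currentThread().getName()));
    }
}
```

Swapping the field for a thread pool is then the only change needed to take the downstream work out of the lock.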
Thanks Claus. It is still verbose/error prone - I have to repeat it every time
I use such a processor (verbose), and I have to remember to do it (error prone).
And that assumes I *know* that I have to do it in the first place - I don't
know how I can tell whether a given processor holds a lock.
message that completes an
aggregation, that downstream processing will continue on that consumer
thread, whilst other such downstream processing for another 'completed
aggregation' message may be happening in parallel on the other SEDA
consumer thread.

What I'm finding instead is that whilst all of the work downstream of
aggregate() does occur across the two consumer threads, it is serialised;
no two threads execute the processors at the same time. This becomes quite
noticeable if this downstream work is lengthy. I've uploaded a sample to
https://github.com/bacar/aggregator-lock, which you can run with mvn test
-Dtest=AggregateLock.
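The serialisation itself is easy to reproduce outside Camel. A minimal sketch (not taken from the linked repo) of two threads whose "downstream work" runs under one shared lock, so their sleeps add up instead of overlapping:

```java
// Demonstrates why downstream work serialises when it runs inside a
// shared lock: two threads each do ~200ms of "work" under the same
// lock, so the total elapsed time is ~400ms rather than ~200ms.
public class LockSerialisation {
    static final Object lock = new Object();

    static long runBoth() throws InterruptedException {
        Runnable work = () -> {
            synchronized (lock) { // downstream work happens inside the lock
                try {
                    Thread.sleep(200);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        long start = System.nanoTime();
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        return (System.nanoTime() - start) / 1_000_000; // elapsed ms
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("elapsed ms: " + runBoth()); // ~400, not ~200
    }
}
```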