Hi all,

As part of a university project, we developed a small architecture:
- N clients (JMS producers) send query requests
- ActiveMQ queues (P = number of queues, L = queue size)
- M processors (JMS consumers) execute the queries and send back the results
- PostgreSQL

where we vary these variables and observe the throughput. We use the standard
TPC-H benchmark schema.
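
For reference, the producer side looks roughly like this (a simplified sketch,
not the exact code from the tarball; the broker URL, queue names and query text
are just placeholders):

// Each of the N clients publishes a TPC-H query string to one of the P queues.
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class QueryProducer {
    public static void main(String[] args) throws JMSException {
        // placeholder broker URL
        ConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // "queries.0" .. "queries.<P-1>" is an illustrative naming scheme
        Queue queue = session.createQueue("queries.0");
        MessageProducer producer = session.createProducer(queue);

        // placeholder for a TPC-H query; processors execute it against PostgreSQL
        TextMessage msg = session.createTextMessage("SELECT ... FROM lineitem ...");
        producer.send(msg);

        connection.close();
    }
}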

Although our benchmarks were not comprehensive, we noticed that with N and M
fixed at relatively large values, increasing P (the number of queues) degrades
the throughput. We used two different strategies to publish messages: random
and round robin. In both cases the optimal number of queues P turned out to be
one. Any ideas why that is?
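
The two publish strategies only differ in how a producer picks a queue index
out of the P queues, roughly like this (simplified sketch):

import java.util.Random;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch of the two dispatch strategies over P queues.
public class QueueSelector {
    private final int p;                         // number of queues
    private final AtomicLong counter = new AtomicLong();
    private final Random random = new Random();

    public QueueSelector(int p) { this.p = p; }

    // round robin: cycle through queues 0..P-1
    public int nextRoundRobin() {
        return (int) (counter.getAndIncrement() % p);
    }

    // random: pick a queue uniformly at random
    public int nextRandom() {
        return random.nextInt(p);
    }
}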

Here you can find all the code, slides, and (gnuplot) plots from our experiments:
http://dl.getdropbox.com/u/2675033/milestone02_3806.tar.gz

Thanks in advance,
best regards,
Giovanni