Clebert, Thanks for the quick response!
Let me make sure I understand (all of the code I'm talking about uses the JMS client API). The client thread pool consists of worker threads which are enlisted to call my JMS MessageListener.onMessage(). So the maximum number of requests that can be simultaneously serviced by all MessageListeners in the entire JVM is set by the thread pool limit, which in turn can be changed by the URI parameter scheduledThreadPoolMaxSize. But, if I understand correctly, the parallelism of a given service will also be constrained by the number of MessageListener instances I have created and activated. So a given service's parallelism will be limited by the smaller of those two numbers.

This has become a concern for us given the following scenario:

* We have about 25 logical RPC services combined into a single JVM process.
* Each RPC service runs between 1 and 10 MessageListener instances to reflect desired scalability.
* However, because we are following a blocking RPC-style pattern, some of these RPC services can call out to other RPC services and block their handler threads while awaiting a response.
* Under high load, all available threads can be blocked waiting for responses that cannot be serviced due to lack of available threads, resulting in deadlock and eventual request timeouts.

Initially our response was to increase scheduledThreadPoolMaxSize to a larger number, but this just pushes the problem off to a higher load threshold.

My current proposal, having done a static caller/callee analysis of our services, is to split the services into groups, each using a separate thread pool, so that no group can call itself. Fortunately our call graph is acyclic. I believe this can be implemented, according to https://activemq.apache.org/components/artemis/documentation/latest/thread-pooling.html, by:

* Setting useGlobalPools=false in the URI
* Using a different ActiveMQConnectionFactory for each service group
* Setting scheduledThreadPoolMaxSize in the URI to a value sufficient to handle the maximum desired parallelism across the services in that group

(A rough sketch of this per-group configuration follows the quoted reply below.)

Does this seem like a workable strategy? Is there a better approach? Yes, I know that async programming models solve this nicely, but we are not yet ready to go there for reasons of existing code base, skills, and experience.

Thanks,
John

John Lilley
Data Management Chief Architect, Redpoint Global Inc.
john.lil...@redpointglobal.com

From: Clebert Suconic <clebert.suco...@gmail.com>
Sent: Friday, November 11, 2022 8:19 AM
To: users@activemq.apache.org
Subject: Re: Multi-threaded consumers in AMQ classic vs Artemis

The thread pool on the client is just for executors and other shared threads. For example, if you have a MessageListener, the client will call executor.execute(...) when a listener is invoked. So if you have multiple connections on your client (from different connection factories) we wouldn't be creating threads like crazy. Unless you are doing something crazy (many, many threads) this shouldn't be an issue. It was meant to share the thread pool between multiple clients, including clients talking to different servers.
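A minimal sketch of the per-group connection-factory setup described above, assuming the useGlobalPools, threadPoolMaxSize, and scheduledThreadPoolMaxSize URI parameters from the linked thread-pooling documentation; the broker URL, group names, pool sizes, and the javax.jms (vs. jakarta.jms) package are placeholder assumptions:

import javax.jms.Connection;
import javax.jms.JMSException;

import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class ServiceGroupFactories {

    // Builds a connection factory whose client thread pools are private to one
    // service group rather than shared JVM-wide (useGlobalPools=false).
    // threadPoolMaxSize bounds the general executor pool and
    // scheduledThreadPoolMaxSize bounds the scheduled pool, per the
    // thread-pooling documentation linked above.
    static ActiveMQConnectionFactory factoryForGroup(String brokerUrl, int maxParallelism) {
        String uri = brokerUrl
                + "?useGlobalPools=false"
                + "&threadPoolMaxSize=" + maxParallelism
                + "&scheduledThreadPoolMaxSize=" + maxParallelism;
        return new ActiveMQConnectionFactory(uri);
    }

    public static void main(String[] args) throws JMSException {
        // One factory (and therefore one private pool set) per service group,
        // sized for that group's maximum desired parallelism. Group names and
        // sizes here are illustrative only.
        ActiveMQConnectionFactory groupA = factoryForGroup("tcp://localhost:61616", 20);
        ActiveMQConnectionFactory groupB = factoryForGroup("tcp://localhost:61616", 40);

        try (Connection connA = groupA.createConnection();
             Connection connB = groupB.createConnection()) {
            connA.start();
            connB.start();
            // ... create the sessions/consumers for group A's services on connA
            //     and group B's services on connB, as in the existing pattern ...
        }
    }
}

Because the groups share no pool, a group blocked waiting on downstream RPCs cannot starve the pool that would service those downstream requests, given the acyclic call graph described above.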
On Fri, Nov 11, 2022 at 10:01 AM John Lilley <john.lil...@redpointglobal.com.invalid> wrote:

Greetings,

We are using AMQ/Artemis to build a set of RPC-style services. We discovered that Artemis (by default) uses a global thread pool for all consumers, whereas AMQ classic created a new thread every time we made a consumer and called setMessageListener(). At least I think that's what happened in AMQ classic. Generally I like the thread-pool model better, but it makes me question whether our current approach is still correct.

Our approach is as follows. For a service processing a request queue that wants to use N threads, for each thread we:

* Get the singleton Connection
* Create a new Session
* Create a new MessageConsumer on the queue
* Create a MessageProducer to return RPC responses on the reply-to queues
* Call setMessageListener() on the consumer

(A sketch of this setup follows below.)

Is this the best pattern for Artemis? In AMQ classic we were able to share the Session across consumers, but this does not seem to be allowed in Artemis. If the consumers are stateless, is there a way to get N-way multi-threading without creating N consumers? Or must I have a consumer per thread?

Thanks,
John

John Lilley
Data Management Chief Architect, Redpoint Global Inc.
john.lil...@redpointglobal.com

--
Clebert Suconic
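For reference, a minimal sketch of the per-thread listener setup listed in the original message above, assuming the javax.jms API; the queue name, reply payload, and error handling are placeholders:

import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

public class RpcListenerSetup {

    // Starts N listeners on the request queue. Each listener gets its own
    // Session and MessageConsumer, plus an anonymous MessageProducer used to
    // send replies to whatever reply-to queue each request carries.
    static void startListeners(Connection sharedConnection, String requestQueueName, int n)
            throws JMSException {
        for (int i = 0; i < n; i++) {
            Session session = sharedConnection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue requestQueue = session.createQueue(requestQueueName);
            MessageConsumer consumer = session.createConsumer(requestQueue);
            MessageProducer replyProducer = session.createProducer(null); // destination given per send

            consumer.setMessageListener(request -> {
                try {
                    // ... actual service work goes here (and may block on downstream RPCs) ...
                    TextMessage reply = session.createTextMessage("result placeholder");
                    reply.setJMSCorrelationID(request.getJMSCorrelationID());
                    replyProducer.send(request.getJMSReplyTo(), reply);
                } catch (JMSException e) {
                    e.printStackTrace(); // placeholder error handling
                }
            });
        }
        sharedConnection.start();
    }
}

Whether one Session can safely be shared across consumers in Artemis is exactly the open question above, so this sketch keeps the one-session-per-consumer arrangement.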