Hello Norbert,

Thank you very much for your fast reply. As you suggested, I turned off the ProducerFlowControl flag for my topic and also increased the systemUsage limits. Adding the code below made my broker run stably, with no more freezes. :)

I really appreciate your help. Thanks a lot, Norbert. Have a nice day!

import java.util.ArrayList;
import java.util.List;
import org.apache.activemq.broker.region.policy.PolicyEntry;
import org.apache.activemq.broker.region.policy.PolicyMap;
import org.apache.activemq.usage.MemoryUsage;
import org.apache.activemq.usage.StoreUsage;
import org.apache.activemq.usage.SystemUsage;

// Per-destination policy: disable producer flow control on the topic
PolicyEntry entry = new PolicyEntry();
entry.setTopic("myTopic");
entry.setProducerFlowControl(false);
// 512MB memory limit for this destination
entry.setMemoryLimit(536870912);

List<PolicyEntry> entries = new ArrayList<PolicyEntry>();
entries.add(entry);
PolicyMap map = new PolicyMap();
map.setPolicyEntries(entries);
broker.setDestinationPolicy(map);

// Broker-wide usage limits
SystemUsage systemUsage = new SystemUsage();
MemoryUsage memoryUsage = new MemoryUsage();
StoreUsage storeUsage = new StoreUsage();
// 128MB broker memory limit
memoryUsage.setLimit(134217728);
// 1GB store limit
storeUsage.setLimit(1073741824);
systemUsage.setMemoryUsage(memoryUsage);
systemUsage.setStoreUsage(storeUsage);
broker.setSystemUsage(systemUsage);
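In case it helps someone else who finds this thread: as far as I understand, the same settings can also be applied as a broker-wide default instead of naming each topic, so new destinations are covered automatically. This is only a rough sketch, and the 64MB value is just an example, not what I actually use:

// Sketch only: one default policy entry that applies to every destination
// for which no explicit per-destination entry exists.
PolicyEntry defaultEntry = new PolicyEntry();
defaultEntry.setProducerFlowControl(false);
// 64MB per destination; purely an example value
defaultEntry.setMemoryLimit(64 * 1024 * 1024);

PolicyMap defaultMap = new PolicyMap();
defaultMap.setDefaultEntry(defaultEntry);
broker.setDestinationPolicy(defaultMap);

With flow control off, the sendFailIfNoSpace(true) setting from the original broker setup should, as far as I can tell, make producers get an exception instead of blocking silently if the limits are ever reached.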
Norbert Pfistner-2 wrote:
>
> Maybe this is a ProducerFlowControl issue. Have you tried to turn it off?
> (see also
> http://kovyrin.net/2009/01/23/activemq-tips-flow-control-and-stalled-producers-problem/)
>
> Greetings,
> Norbert
>
> kley wrote:
>>
>> sinus wrote:
>>> Hello everybody. I need help. For 3 days I have been trying to solve a
>>> problem that appears with my embedded pub/sub (with listener) model after
>>> processing around 80000 messages.
>>> Everything goes well (publishing and consuming), but after ~80000 messages
>>> the threads of my multi-threaded application block in the
>>> connection.createSession(false, Session.AUTO_ACKNOWLEDGE) method. I
>>> noticed that at that moment I cannot even ping google.com.
>>> Some of the code that manages JMS:
>>>
>>> BrokerService broker = new BrokerService();
>>> broker.setUseJmx(false);
>>> broker.setPersistent(false);
>>> broker.getSystemUsage().setSendFailIfNoSpace(true);
>>> broker.addConnector("vm://jms");
>>> broker.start();
>>>
>>> The send process:
>>> String user = ActiveMQConnection.DEFAULT_USER;
>>> String password = ActiveMQConnection.DEFAULT_PASSWORD;
>>> ActiveMQConnectionFactory connectionFactory =
>>>         new ActiveMQConnectionFactory(user, password, "vm://jms");
>>> connectionFactory.setUseAsyncSend(true);
>>> connectionFactory.setCopyMessageOnSend(false);
>>> Connection connection = connectionFactory.createConnection();
>>> connection.start();
>>> Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
>>>
>>> Topic topic = session.createTopic("myTopic");
>>> MessageProducer producer = session.createProducer(topic);
>>> producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
>>> ObjectMessage message = session.createObjectMessage(ticket);
>>> producer.send(message);
>>>
>>> ActiveMQ 5.2
>>> So, I'm using one connection for all the threads of my application. Each
>>> thread creates its own session and the objects it needs to send the
>>> message. All the cleanup is done properly (close session, publisher, etc.).
>>> What can the problem be? Can you help me please? It's really blocking
>>> for me. Please. Thanks.
>>> Serge.
>>>
>>
>> I have a somewhat similar architecture: a multithreaded server that puts
>> data into a JMS queue, using it as a giant buffer for slow data processing.
>> After a while (~N messages) all my workers (pooled threads) are
>> blocked on the call that creates the JMS session on the way to putting
>> data into the queue:
>> connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
>> As a result the server no longer responds to new requests and throws
>> RejectedExecutionException.
>> Please help: why does it stop creating new JMS sessions?
>>
>> Did I use the wrong architecture by sharing the same connection object and
>> creating/closing a session after each task?
>> As far as I can tell I'm closing every used resource (connections, sockets).
>> Any help will be appreciated.
>>
>

--
View this message in context: http://www.nabble.com/createSession%28%29-makes-all-the-threads-to-wait-...-tp24916480p24931285.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.
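P.S. For anyone who lands on this thread later: here is a minimal sketch of the send pattern described in the quoted posts (one Connection shared by all threads, a short-lived Session per task that is always closed). The sendTicket name and the "myTopic" destination are just placeholders, not part of the actual code above:

import java.io.Serializable;

import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.Topic;

// One Connection is shared by the whole application; each task creates and
// closes its own Session, so nothing is left open between sends.
public void sendTicket(Connection connection, Serializable ticket) throws JMSException {
    Session session = null;
    try {
        session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("myTopic");
        MessageProducer producer = session.createProducer(topic);
        producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
        producer.send(session.createObjectMessage(ticket));
    } finally {
        if (session != null) {
            // closing the session also closes the producer created from it
            session.close();
        }
    }
}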