Hi,

Yes, I will certainly provide those.

Note that I am not talking about just a spike. I am talking about 100% CPU
all the time, on both the broker (with KahaDB) and the worker (only 10
consumers). There must be something trivially wrong.
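For reference, thread dumps like the ones requested below can be captured
with the JDK's jstack tool; a minimal sketch (the function name and file
naming are just illustrative):

```shell
# Sketch: capture COUNT thread dumps of Java process PID, INTERVAL seconds apart.
# Assumes the JDK's jstack tool is on the PATH.
dump_threads() {
    pid=$1; count=$2; interval=$3
    i=1
    while [ "$i" -le "$count" ]; do
        jstack "$pid" > "threaddump-$pid-$i.txt"
        if [ "$i" -lt "$count" ]; then
            sleep "$interval"
        fi
        i=$((i + 1))
    done
}

# Example: 5 dumps of PID 405, 5 seconds apart
# dump_threads 405 5 5
```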

Thanks again,
Niels

>
>
> Hi,
>
> The only thing that can shed light in the right direction is a series of
> thread dumps taken during the 100% CPU spike, spaced 3-8 seconds apart.
>
> Could you please attach those?
>
> Thanks,
>
> *Raúl Kripalani*
> Enterprise Architect, Open Source Integration specialist, Program
> Manager | Apache
> Camel Committer
> http://about.me/raulkripalani | http://www.linkedin.com/in/raulkripalani
> http://blog.raulkr.net | twitter: @raulvk
>
> On Wed, Apr 3, 2013 at 6:42 PM, nielsbaloe <ni...@geoxplore.nl> wrote:
>
>> Hi,
>>
>> Thanks for your thoughts.
>>
>> What do you mean by an ActiveMQ pool? Can you point me to a reference?
>> And why do you think that, with only a few events an hour, the process
>> would go to 100% immediately after startup?
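If the pool in question refers to ActiveMQ's PooledConnectionFactory from
the activemq-pool module, a minimal sketch would look like the following
(an illustration only; assumes ActiveMQ 5.x jars on the classpath and a
broker at the example URL):

```java
// Sketch only: assumes activemq-core and activemq-pool 5.x on the classpath.
import javax.jms.Connection;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.pool.PooledConnectionFactory;

public class PooledSessionExample {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory amqFactory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");

        // The pool reuses connections and sessions instead of creating new ones.
        PooledConnectionFactory pooled = new PooledConnectionFactory(amqFactory);
        pooled.setMaxConnections(8);

        Connection connection = pooled.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // ... create producers/consumers from the session as usual ...
        session.close();
        connection.close(); // returns the connection to the pool rather than closing it
        pooled.stop();      // shuts the pool down for real
    }
}
```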
>>
>> We'll check out the load without KahaDB, but note that the worker/consumer
>> machine does not even have a KahaDB instance, so I can't see why it should
>> matter for the consumer/worker.
>>
>> For the other issues:
>> - Java 1.7.0_03 was simply the most recent when we installed everything,
>> same with the ActiveMQ 5.6 version itself. I'll switch to the latest
>> ActiveMQ version; hopefully there is a known bug in our version that
>> causes this.
>> - You're right about the multiples for Xmx and Xms, but we have had
>> multiples before as well and still saw 100% load. The broker only uses a
>> few megabytes anyway, yet still shows 100% load.
>>
>> Here is the top on the worker/consumer with 12GB memory:
>>
>>
>> top - 19:45:21 up 30 days,  3:26,  1 user,  load average: 1.02, 1.04, 1.05
>> Tasks:  83 total,   1 running,  82 sleeping,   0 stopped,   0 zombie
>> Cpu(s): 17.0%us,  8.2%sy,  0.0%ni, 74.8%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
>> Mem:  11899728k total,   725212k used, 11174516k free,   148112k buffers
>> Swap: 12056572k total,        0k used, 12056572k free,   320760k cached
>>
>>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>>   405 user      20   0 1736m 200m  10m S  100  1.7 214:55.31 java
>>   829 zabbix    20   0  8680 1160  752 S    0  0.0  35:27.80 zabbix_agentd
>>   830 zabbix    20   0  8680 1192  740 S    0  0.0  28:28.26 zabbix_agentd
>>   833 zabbix    20   0  8680 1192  740 S    0  0.0  29:18.04 zabbix_agentd
>>     1 root      20   0  3520 1836 1248 S    0  0.0   0:15.46 init
>>     2 root      20   0     0    0    0 S    0  0.0   0:00.38 kthreadd
>>     3 root      20   0     0    0    0 S    0  0.0   0:33.71 ksoftirqd/0
>>     5 root      20   0     0    0    0 S    0  0.0   0:00.52 kworker/u:0
>>     6 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/0
>>     7 root      RT   0     0    0    0 S    0  0.0   0:07.38 watchdog/0
>>     8 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/1
>>     9 root      20   0     0    0    0 S    0  0.0   0:10.29 kworker/1:0
>>    10 root      20   0     0    0    0 S    0  0.0   0:02.46 ksoftirqd/1
>>    12 root      RT   0     0    0    0 S    0  0.0   0:05.08 watchdog/1
>>    13 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/2
>>    14 root      20   0     0    0    0 S    0  0.0   0:00.00 kworker/2:0
>>    15 root      20   0     0    0    0 S    0  0.0   0:05.08 ksoftirqd/2
>>
>>
>> Thanks!
>> Niels
>>
>> >
>> >
>> > So, here are a few of my observations and suggestions based on the
>> > provided info:
>> > 1. use a newer Java than 1.7.0_03
>> > 2. switch to AMQ 5.7 or 5.8
>> > 3. keep Xmx and Xms the same, and preferably a multiple of 512 megs
>> > 4. where are you getting server-grade machines with 2 and 12 gigs of RAM?
>> > 5. when you notice 100% utilization, take a few spaced-apart thread
>> > dumps and post them here
>> > 6. on a 4-core machine, load should not just be 100% but higher than 1;
>> > can you post 'top' output as well?
>> > 7. use the AMQ pool for producers, sessions, and connections where
>> > possible
>> > 8. test your existing config and load without KahaDB (non-persistent
>> > mode) and compare results
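For what it's worth, point 3 above could look something like this in the
broker's startup environment (an illustrative fragment only; the exact
variable depends on how ActiveMQ is launched):

```shell
# Identical Xms/Xmx, a multiple of 512 MB (here 1024 MB)
ACTIVEMQ_OPTS="-Xms1024m -Xmx1024m"
```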
>> >
>> >
>> > On Apr 3, 2013, at 7:09, nielsbaloe <ni...@geoxplore.nl> wrote:
>> >
>> > Hi,
>> >
>> > -Xms512m -Xmx1500m
>> >
>> > It went wrong before we set anything as well, though, and setting it
>> > higher doesn't help either. Note that there are only a few messages
>> > going through every hour; this is not yet fully in production, so it
>> > should not even continuously hit 1%...
>> >
>> > Thanks!
>> > Niels
>> >
>> >>
>> >> Niels,
>> >>
>> >> I am not one of the experts here, but I am a new user of ActiveMQ via
>> >> TomEE, and I like to listen in on the mail topics/questions/responses
>> >> here.
>> >>
>> >> Since I listen in on the Tomcat user list as well, I would say that
>> >> this sounds like a GC (garbage collection) issue, but I might be
>> >> mistaken.
>> >>
>> >> Can you reply with the Java options of your app/container? What are
>> >> your Java options for the Windows 7 laptops and the Ubuntu Linux
>> >> server/machine that has 12GB and 2GB?
>> >>
>> >> I assume you have smaller memory settings on your Windows 7 (developer)
>> >> laptops and probably larger memory settings on the Ubuntu Linux
>> >> server/machine. Right? If yes, then GC may be the reason why you are
>> >> always experiencing 100% CPU.
>> >>
>> >> This is just a guess/hunch, but since you provided such a detailed
>> >> question, please do not leave out the Java (memory) options on your
>> >> Windows 7 laptops as well as your Ubuntu Linux server(s).
>> >>
>> >> Howard
>> >>
>> >>
>> >> On Wed, Apr 3, 2013 at 8:51 AM, nielsbaloe <ni...@geoxplore.nl>
>> wrote:
>> >>
>> >>> Hi all,
>> >>>
>> >>> We are using ActiveMQ successfully in two projects now, but we
>> >>> accidentally discovered that both the broker and the worker/consumer
>> >>> machines are continuously hitting 100% CPU. We do not have this issue
>> >>> on our developer machines (all Windows 7 laptops). This occurs even
>> >>> when no events are being processed.
>> >>>
>> >>> I couldn't find any clues for this issue, except setting the prefetch
>> >>> size. I've set the prefetch size to 10, as we have 10 consumers on the
>> >>> worker/consumer machine. We have a broker machine and a worker/consumer
>> >>> machine, which are both configured as below. In the near future we will
>> >>> add more worker/consumer machines.
>> >>>
>> >>> OS: Ubuntu Linux 12.04 (headless)
>> >>> Memory: 12GB and 2GB
>> >>> CPU: Intel Xeon 3.06GHz 4core
>> >>> Java: "1.7.0_03", OpenJDK Runtime Environment (IcedTea7 2.1.1pre)
>> >>> (7~u3-2.1.1~pre1-1ubuntu3)
>> >>> Webcontainer: none, java-standalone
>> >>> ActiveMQ: 5.6.0
>> >>>
>> >>> The broker uses the internal KahaDB database.
>> >>>
>> >>> We are using one queue, to which the worker/consumer machine is
>> >>> listening and posting, with about 100 messages a day. We also use
>> >>> about 4 scheduled messages for every 'modem' (our internal subject),
>> >>> which results in about 40 scheduled messages, each generating an event
>> >>> once every 30 minutes. Nothing spectacular, so to say.
>> >>>
>> >>> Thanks for any clues in advance. For completeness, I will post our
>> >>> Broker, Consumer and Producer code (without comments); this might show
>> >>> any wrong assumptions on our side.
>> >>>
>> >>> Best,
>> >>> Niels Baloe
>> >>>
>> >>>
>> >>>> -----------------------
>> >>>
>> >>> public class Consumer implements ExceptionListener {
>> >>>
>> >>>         private Session session;
>> >>>         private MessageConsumer messageConsumer;
>> >>>         private static Logger LOG = Logger.getLogger(Consumer.class.getName());
>> >>>
>> >>>         public Consumer(String brokerServer, String queueName,
>> >>>                         MessageListener messageListener) throws JMSException,
>> >>>                         FileNotFoundException, IOException {
>> >>>                 this(Broker.getSession(brokerServer), queueName, messageListener);
>> >>>         }
>> >>>
>> >>>         public Consumer(Session session, String queueName,
>> >>>                         MessageListener messageListener) throws JMSException {
>> >>>                 this.session = session;
>> >>>
>> >>>                 Queue queue = session.createQueue(queueName);
>> >>>                 messageConsumer = session.createConsumer(queue);
>> >>>                 messageConsumer.setMessageListener(messageListener);
>> >>>         }
>> >>>
>> >>>         public void close() {
>> >>>                 try {
>> >>>                         messageConsumer.close();
>> >>>                 } catch (JMSException e) {
>> >>>                 }
>> >>>                 try {
>> >>>                         session.close();
>> >>>                 } catch (JMSException e) {
>> >>>                 }
>> >>>         }
>> >>>
>> >>>         @Override
>> >>>         public void onException(JMSException je) {
>> >>>                 LOG.log(Level.SEVERE, je.getMessage(), je);
>> >>>         }
>> >>>
>> >>> }
>> >>>
>> >>> public class Producer {
>> >>>
>> >>>         private Session session;
>> >>>         private MessageProducer producer;
>> >>>
>> >>>         public Producer(String brokerUrl, String queue) throws JMSException,
>> >>>                         FileNotFoundException, IOException {
>> >>>                 this(Broker.getSession(brokerUrl), queue);
>> >>>         }
>> >>>
>> >>>         public Producer(Session session, String queue) throws JMSException {
>> >>>                 this.session = session;
>> >>>                 Destination destination = session.createQueue(queue);
>> >>>                 producer = session.createProducer(destination);
>> >>>                 producer.setDeliveryMode(DeliveryMode.PERSISTENT);
>> >>>         }
>> >>>
>> >>>         public void close() {
>> >>>                 try {
>> >>>                         producer.close();
>> >>>                 } catch (JMSException e) {
>> >>>                 }
>> >>>                 try {
>> >>>                         session.close();
>> >>>                 } catch (JMSException je) {
>> >>>                 }
>> >>>         }
>> >>>
>> >>>         public Message getMessageText(String text) throws JMSException {
>> >>>                 return session.createTextMessage(text);
>> >>>         }
>> >>>
>> >>>         public Message getMessageObject() throws JMSException {
>> >>>                 return session.createObjectMessage();
>> >>>         }
>> >>>
>> >>>         public void send(Message message) throws JMSException {
>> >>>                 producer.send(message);
>> >>>         }
>> >>>
>> >>>         public void sendScheduled(Message message, String cron) throws JMSException {
>> >>>                 message.setStringProperty(ScheduledMessage.AMQ_SCHEDULED_CRON, cron);
>> >>>                 producer.send(message);
>> >>>         }
>> >>>
>> >>> }
>> >>>
>> >>> public class Broker {
>> >>>
>> >>>         private BrokerService broker;
>> >>>
>> >>>         public Broker(String host, int port, String brokerName) throws Exception {
>> >>>                 broker = new BrokerService();
>> >>>                 broker.setUseJmx(true);
>> >>>                 broker.setBrokerName(brokerName);
>> >>>                 broker.addConnector("tcp://" + host + ":" + port);
>> >>>                 broker.setSchedulerSupport(true);
>> >>>                 broker.start();
>> >>>         }
>> >>>
>> >>>         public URI getNameTCP() {
>> >>>                 return broker.getVmConnectorURI();
>> >>>         }
>> >>>
>> >>>         public void close() {
>> >>>                 try {
>> >>>                         broker.stop();
>> >>>                         broker.waitUntilStopped();
>> >>>                 } catch (Exception e) {
>> >>>                 }
>> >>>         }
>> >>>
>> >>>         public static void closeConnection() {
>> >>>                 if (connection != null) {
>> >>>                         try {
>> >>>                                 connection.close();
>> >>>                         } catch (JMSException e) {
>> >>>                         }
>> >>>                 }
>> >>>         }
>> >>>
>> >>>         private static Connection connection;
>> >>>
>> >>>         private static Session getSessionWithoutRetry(String brokerServer)
>> >>>                         throws JMSException, FileNotFoundException, IOException {
>> >>>                 if (connection == null) { // does not work when broker is local
>> >>>                         ActiveMQConnectionFactory connectionFactory =
>> >>>                                         new ActiveMQConnectionFactory(brokerServer);
>> >>>                         connectionFactory.setAlwaysSessionAsync(true);
>> >>>
>> >>>                         // Prefetch size
>> >>>                         String prefetch = NoImportUtils.getSettings().getProperty(
>> >>>                                         "broker.prefetchSize");
>> >>>                         connectionFactory.getPrefetchPolicy().setAll(
>> >>>                                         Integer.parseInt(prefetch));
>> >>>                         connection = connectionFactory.createConnection();
>> >>>                         connection.start();
>> >>>                 }
>> >>>                 return connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
>> >>>         }
>> >>>
>> >>>         public static Session getSession(String brokerServer) throws JMSException,
>> >>>                         FileNotFoundException, IOException {
>> >>>                 try {
>> >>>                         return getSessionWithoutRetry(brokerServer);
>> >>>                 } catch (ConnectionFailedException e) {
>> >>>                         // Retry once when connection failed
>> >>>                         closeConnection();
>> >>>                         return getSessionWithoutRetry(brokerServer);
>> >>>                 }
>> >>>         }
>> >>>
>> >>> }
>> >>>
>> >>> --
>> >>> View this message in context:
>> >>> http://activemq.2283324.n4.nabble.com/100-CPU-tp4665414.html
>> >>> Sent from the ActiveMQ - User mailing list archive at Nabble.com.
>> >>
>> >>
>> >>
>> >>
>> >
>> >
>> >
>> >
>> >
>> >
>>
>>
>>
>>
>>
>>
>>
>
>
>
>






