Hi all,

Based on my investigation [1], I believe this is a combined effect of using eventlet and condition variables on Python 2.x. When heartbeats are enabled in oslo.messaging, you'll see polling with very small timeout values. This probably doesn't waste much CPU time, but it is still kind of annoying.
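To illustrate the mechanism I mean (a minimal sketch, not actual oslo.messaging or eventlet code): on CPython 2.x, Condition.wait(timeout) was implemented as a polling loop of non-blocking acquire attempts separated by short sleeps, starting well under a millisecond and doubling up to a 50 ms cap. Under eventlet, each of those sleeps is served by the hub as an epoll() call with a tiny timeout, which would match the epoll() pattern Gyorgy describes.

```python
# Sketch of the CPython 2.x Condition.wait(timeout) polling loop.
# The function names here are mine, for illustration only; the point is
# the shape of the loop: non-blocking acquire attempts separated by
# short, doubling sleeps.  Under eventlet, every time.sleep() below is
# turned into an epoll() call with a very small timeout.
import time


def wait_with_timeout(try_acquire, timeout):
    """Emulate the Py2 polling wait; returns (acquired, poll_count)."""
    endtime = time.time() + timeout
    delay = 0.0005   # initial sleep, as in the Py2 implementation
    polls = 0
    while True:
        if try_acquire():            # non-blocking acquire attempt
            return True, polls
        remaining = endtime - time.time()
        if remaining <= 0:
            return False, polls      # timed out
        # double the delay, but never sleep past the deadline or 50 ms
        delay = min(delay * 2, remaining, 0.05)
        time.sleep(delay)            # under eventlet: epoll(tiny timeout)
        polls += 1


# A waiter that is never notified times out after a series of short sleeps:
ok, polls = wait_with_timeout(lambda: False, 0.2)
```

With a heartbeat waking the loop up on a short interval, this polling pattern repeats constantly instead of the process idling in one long blocking wait.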
Thanks,
Roman

[1] https://bugs.launchpad.net/mos/+bug/1380220

On Wed, Feb 17, 2016 at 3:06 PM, gordon chung <g...@live.ca> wrote:
> hi,
>
> this seems to be similar to a bug we were tracking earlier [1].
> basically, any service with a listener never seemed to idle properly.
>
> based on earlier investigation, we found it relates to the heartbeat
> functionality in oslo.messaging. i'm not entirely sure if it's because
> of it or some combination of things including it. the short answer is
> to disable the heartbeat by setting heartbeat_timeout_threshold = 0 and
> see if that fixes your cpu usage. you can track the comments in the bug.
>
> [1] https://bugs.launchpad.net/oslo.messaging/+bug/1478135
>
> On 17/02/2016 4:14 AM, Gyorgy Szombathelyi wrote:
>> Hi!
>>
>> Excuse me if the following question/problem is a basic one, an
>> already-known problem, or even a bad setup on my side.
>>
>> I just noticed that the most CPU-consuming process in an idle
>> OpenStack cluster is ceilometer-collector. When there are only
>> 10-15 samples/minute, it constantly eats about 15-20% CPU.
>>
>> I started to debug and noticed that it epoll()s constantly with a zero
>> timeout, so it seems it just polls for events in a tight loop.
>> I found that _maybe_ the Python side of the problem is
>> oslo_messaging.get_notification_listener() with the eventlet executor.
>> A quick search showed that this function is only used in the aodh
>> listener and the ceilometer collector, and both use relatively high
>> CPU even when they're just 'listening'.
>>
>> My skills for further debugging are limited, but I'm just curious why
>> this listener uses so much CPU, while other executors that use
>> eventlet are not that bad.
>>
>> Br,
>> György
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> --
> gord
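[Editor's note: for reference, the workaround gord suggests maps to a config change along these lines. The section name below is my assumption based on the Mitaka-era rabbit driver options; verify the option location against your oslo.messaging release.]

```ini
# e.g. /etc/ceilometer/ceilometer.conf (or the affected service's config)
[oslo_messaging_rabbit]
# 0 disables the AMQP heartbeat thread entirely (non-zero values, 60 by
# default at the time, enable it and cause the short-timeout polling).
heartbeat_timeout_threshold = 0
```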