Re: [Openstack] [grizzly]Problems of qpid as rpcbackend

2013-05-30 Thread minmin ren
Hi Ray, Thanks for your reply. The try/except change at line 386 only fixes cinder-scheduler and nova-compute, which have a similar implementation, so they stop raising an exception on shutdown. However, all cinder-volume queues are removed when one of multiple cinder-volume services stops; that is a separate problem. I us
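
A minimal sketch of the kind of guard being discussed, assuming the call at line 386 tears down a consumer connection; the close_connection name and the broad except clause are illustrative, not the actual impl_qpid.py code:

    # Hypothetical guard: swallow errors when closing a connection whose
    # session was already torn down by the time the service stopped.
    def close_connection(connection):
        try:
            connection.close()
        except Exception:
            # A stopped service may have already lost the underlying
            # session, so a failed close should not raise further.
            pass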

Re: [Openstack] [grizzly]Problems of qpid as rpcbackend

2013-05-30 Thread Ray Pekowski
I am not familiar with impl_qpid.py, but am familiar with amqp.py and have had problems around rpc_amqp.cleanup() and the Pool.empty() method it calls. It was a totally different problem, but I decided to take a look at yours. I noticed that in impl_qpid.py the only other place a connection.clo
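
A rough sketch, assuming Pool.empty() drains the connection pool and closes each pooled connection; this Pool class is illustrative, not the actual openstack.common.rpc code:

    # Illustrative pool: empty() pops every pooled connection and closes
    # it, which is where a failing close() would surface during a
    # cleanup such as rpc_amqp.cleanup().
    class Pool(object):
        def __init__(self):
            self._items = []

        def put(self, connection):
            self._items.append(connection)

        def empty(self):
            while self._items:
                self._items.pop().close()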

Re: [Openstack] [grizzly]Problems of qpid as rpcbackend

2013-05-30 Thread minmin ren
Hi all, I think this is a bug in qpid as the RPC backend. Other services (nova-compute, cinder-scheduler, etc.) run in eventlet threads and are stopped with the thread's kill() method. The final rpc.cleanup() step then does nothing, because the corresponding consumer connection ran in the thread that was just killed. I
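
A minimal runnable sketch (not OpenStack code) of the failure mode described above, assuming eventlet is installed: the consumer loop runs in a greenthread, the service stops it with kill(), and any cleanup owned by that thread never runs as a normal return:

    import eventlet

    def consumer_loop():
        try:
            while True:
                eventlet.sleep(0.1)  # stand-in for waiting on a qpid queue
        finally:
            # kill() raises GreenletExit here; in the real services this
            # is roughly where the consumer connection would be closed.
            print("consumer loop exiting")

    gt = eventlet.spawn(consumer_loop)
    eventlet.sleep(0.5)   # let the consumer run for a while
    gt.kill()             # how the service thread is stopped
    # A later rpc.cleanup() finds nothing left to do: the consumer
    # connection belonged to the killed thread.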