Is Murano Python 3.x compatible? From what I understand, oslo.messaging isn't 
(yet). If Murano supports Python 3.x, then bringing in oslo.messaging might 
make it hard for Murano to stay 3.x compatible. Maybe not a problem (I'm not 
sure of Murano's Python version support).

From: Serg Melikyan <smelik...@mirantis.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Tuesday, February 11, 2014 at 5:05 AM
To: OpenStack Development Mailing List <OpenStack-dev@lists.openstack.org>
Subject: [openstack-dev] [Murano] Can we migrate to oslo.messaging?

oslo.messaging<http://github.com/openstack/oslo.messaging> is a library that 
provides RPC and Notification APIs; they are part of the same library mostly 
for historical reasons. One of the major goals of oslo.messaging is to provide 
clean RPC and Notification APIs without any trace of message queue concepts 
(although the two most advanced drivers used by oslo.messaging are actually 
based on AMQP: RabbitMQ and Qpid).
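For readers unfamiliar with the distinction, here is a minimal stdlib sketch 
(a toy model, not the real oslo.messaging API; all names are invented) of the 
two RPC interaction styles the library exposes:

```python
import queue


class ToyRpc:
    """Toy model of the two RPC styles: synchronous 'call' vs
    fire-and-forget 'cast'."""

    def __init__(self, endpoint):
        self.endpoint = endpoint   # object whose methods we invoke
        self.inbox = queue.Queue() # pending casts

    def call(self, method, **kwargs):
        # 'call': the caller blocks and gets a result back
        return getattr(self.endpoint, method)(**kwargs)

    def cast(self, method, **kwargs):
        # 'cast': returns immediately, no result comes back
        self.inbox.put((method, kwargs))

    def process_one(self):
        # the server side drains the inbox in its own time
        method, kwargs = self.inbox.get()
        getattr(self.endpoint, method)(**kwargs)


class Endpoint:
    def __init__(self):
        self.deployed = []

    def ping(self, value):
        return value * 2

    def deploy(self, task):
        self.deployed.append(task)


rpc = ToyRpc(Endpoint())
print(rpc.call('ping', value=21))   # call returns a result: 42
rpc.cast('deploy', task='env-1')    # cast returns immediately
rpc.process_one()                   # ...and is handled later
print(rpc.endpoint.deployed)        # ['env-1']
```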

We designed Murano around message queue concepts, using some AMQP/RabbitMQ 
specific features such as queue TTL. Since we never thought of communication 
between our components in terms of RPC or Notifications, and always thought of 
it as message exchange through a broker, this has influenced our component 
architecture. In Murano we use a simple 
wrapper<https://github.com/stackforge/murano-common/tree/master/muranocommon/messaging>
 around Puka<https://github.com/majek/puka> (a RabbitMQ client with a simple 
and well-thought-out async model) in all our components. We forked 
Puka<https://github.com/istalker2/puka> because we had specific SSL 
requirements and could not yet merge our work<https://github.com/majek/puka/pull/43> 
back to master.
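As an illustration of the queue-TTL feature mentioned above, here is a stdlib 
sketch of a broker that reclaims queues once they have gone unused for longer 
than their TTL (loosely modelling RabbitMQ's per-queue 'x-expires' behavior; 
the class and method names below are invented for illustration):

```python
import time


class ExpiringQueue:
    """A queue that expires after `ttl` seconds without use."""

    def __init__(self, ttl):
        self.ttl = ttl
        self.items = []
        self.last_used = time.monotonic()

    def _touch(self):
        self.last_used = time.monotonic()

    @property
    def expired(self):
        return time.monotonic() - self.last_used > self.ttl

    def put(self, msg):
        self._touch()
        self.items.append(msg)

    def get(self):
        self._touch()
        return self.items.pop(0)


class Broker:
    """Drops queues whose TTL has elapsed, so unused murano-agent
    queues do not pile up on the MQ server forever."""

    def __init__(self):
        self.queues = {}

    def declare(self, name, ttl):
        self.queues.setdefault(name, ExpiringQueue(ttl))

    def sweep(self):
        self.queues = {n: q for n, q in self.queues.items()
                       if not q.expired}


broker = Broker()
broker.declare('murano-agent-1', ttl=0.05)
broker.queues['murano-agent-1'].put('execution plan')
time.sleep(0.1)   # nobody touches the queue for longer than its TTL
broker.sweep()
print('murano-agent-1' in broker.queues)   # False: queue was reclaimed
```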

Can we abandon our own 
wrapper<https://github.com/stackforge/murano-common/tree/master/muranocommon/messaging>
 around our own fork of Puka<https://github.com/istalker2/puka> in favor of 
oslo.messaging? Yes, but this migration may be tricky. I believe we can migrate 
to oslo.messaging in a week or so.

I have played with oslo.messaging, emulating our current communication 
patterns, and I am certain that the current implementation can be migrated to 
oslo.messaging. But I am not sure that oslo.messaging can easily cover all the 
future use-cases we plan to address in the next few releases without major 
contributions. Please respond with any questions about the oslo.messaging 
implementation and how it can fit a particular use-case.

Below, I have tried to describe our current use-cases, which specific MQ 
features we use, how they may be implemented with oslo.messaging, and which 
limitations we will face.

Use-Case
Murano has several components that communicate with each other through a 
message queue:
murano-api -> murano-conductor:

  1.  murano-api sends deployment tasks to murano-conductor

murano-conductor -> murano-api:

  1.  murano-conductor reports to murano-api task progress during processing
  2.  after processing, murano-conductor sends results to murano-api

murano-conductor -> murano-agent:

  1.  during task processing murano-conductor sends execution plans with 
commands to murano-agent.

Note: each of the components mentioned above may have more than one instance.

One message-queue-specific feature we rely on heavily is the idea of the queue 
itself: messages sent to a component will be handled as soon as at least one 
instance of it is started. For example, in the case of murano-agent, the 
message is sent even before murano-agent has started. Another is queue 
life-time: we control the life-time of murano-agent queues to keep the MQ 
server from overflowing with queues that are no longer used.
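The "send before the consumer exists" property can be sketched with a plain 
in-process queue standing in for the broker-side queue (in production the 
guarantee comes from RabbitMQ holding the queue, not from Python):

```python
import queue
import threading

# stands in for the murano-agent queue held on the broker
broker_queue = queue.Queue()

# murano-conductor sends the execution plan before any agent is running
broker_queue.put({'plan': 'install nginx'})

received = []


def agent():
    # the agent starts later and still receives the earlier message
    received.append(broker_queue.get(timeout=1))


t = threading.Thread(target=agent)
t.start()
t.join()
print(received)   # [{'plan': 'install nginx'}]
```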

One more thing is worth mentioning: murano-conductor communicates with several 
components at the same time. It processes several tasks concurrently, and 
during task processing it sends progress notifications to murano-api and 
execution plans to murano-agent.

Implementation
Please refer to the 
Concepts<https://wiki.openstack.org/wiki/Oslo/Messaging#Concepts> section of 
the oslo.messaging wiki before reading further, to grasp the key concepts of 
the oslo.messaging library. In short, using the RPC API we can 'call' a server 
synchronously and receive a result, or 'cast' to it asynchronously (no result 
is returned). Using the Notification API we can send a Notification to a 
specified Target about an event, with a specified event_type, priority and 
payload.
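A notification in this model is essentially a structured record. Here is a 
stdlib sketch of what such a record might contain (the field names follow the 
oslo.messaging concepts above; the function and exact layout are invented, not 
the real wire format):

```python
import datetime
import uuid


def build_notification(publisher_id, event_type, priority, payload):
    """Assemble a notification record with the fields the
    Notification API exposes (illustrative sketch only)."""
    return {
        'message_id': str(uuid.uuid4()),
        'publisher_id': publisher_id,   # e.g. 'murano-conductor'
        'event_type': event_type,       # e.g. 'task.progress'
        'priority': priority,           # e.g. 'INFO'
        'payload': payload,
        'timestamp': datetime.datetime.now(
            datetime.timezone.utc).isoformat(),
    }


note = build_notification('murano-conductor', 'task.progress',
                          'INFO', {'task_id': '42', 'progress': 50})
print(note['event_type'], note['payload']['progress'])   # task.progress 50
```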

If we move to oslo.messaging we can primarily rely only on the features 
provided by the RPC/Notifications model:

  1.  We should not rely on messages being delivered while the other side is 
not properly up and running. It is not message delivery, it is a Remote 
Procedure Call;
  2.  To control queue life-time as we do now, we may need to 'hack' 
oslo.messaging by writing our own driver.

murano-api -> murano-conductor:

  1.  murano-api sends deployment tasks to murano-conductor: May be replaced 
with RPC Cast

murano-conductor -> murano-api:

  1.  murano-conductor reports to murano-api task progress during processing: 
May be replaced with Notification or RPC Cast
  2.  after processing, murano-conductor sends results to murano-api: May be 
replaced with RPC Cast

murano-conductor -> murano-agent:

  1.  during task processing murano-conductor sends execution plans with 
commands to murano-agent: May be replaced with a two-way RPC Cast (murano-agent 
Casts to murano-conductor with a message like 'I am running', then 
murano-conductor Calls murano-agent with the execution plan)
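The two-way handshake described in the list item above could be sketched like 
this (a toy in-process version with plain queues; the real thing would be two 
oslo.messaging servers):

```python
import queue

conductor_inbox = queue.Queue()   # casts addressed to murano-conductor
agent_inbox = queue.Queue()       # casts addressed to murano-agent

# step 1: the agent comes up and casts 'I am running' to the conductor
conductor_inbox.put({'method': 'agent_ready', 'agent': 'murano-agent-1'})

# step 2: the conductor now knows the agent is alive and sends the plan
msg = conductor_inbox.get()
if msg['method'] == 'agent_ready':
    agent_inbox.put({'method': 'run_plan',
                     'plan': ['install', 'configure']})

# step 3: the agent executes the plan it receives
plan_msg = agent_inbox.get()
print(plan_msg['plan'])   # ['install', 'configure']
```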

Our code is going to become less clean and readable after moving to 
oslo.messaging, since code that receives or sends messages will need to be 
replaced with many servers/clients and so on. Communication with murano-agent 
would become less failure-tolerant. On the other hand, oslo.messaging has a 
very simple Base 
API<https://github.com/openstack/oslo.messaging/blob/master/oslo/messaging/_drivers/base.py>,
 so we can always implement our own driver with all the required functionality 
(and the underlying tricky implementation), but I think this should be a last 
resort.
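If a custom driver ever becomes necessary, the rough shape of the work is 
something like the skeleton below (invented class and method names, not the 
actual oslo.messaging base classes; a driver essentially has to provide a 
"send to target" half and a "listen on target" half):

```python
import abc


class BaseDriver(abc.ABC):
    """Skeleton of a transport driver interface (illustrative)."""

    @abc.abstractmethod
    def send(self, target, message):
        """Deliver a message to the named target."""

    @abc.abstractmethod
    def listen(self, target):
        """Return the messages waiting for the named target."""


class InMemoryDriver(BaseDriver):
    """Trivial in-process driver; a real one would be the place to
    implement queue-TTL handling and other broker-specific tricks."""

    def __init__(self):
        self.queues = {}

    def send(self, target, message):
        self.queues.setdefault(target, []).append(message)

    def listen(self, target):
        return self.queues.setdefault(target, [])


driver = InMemoryDriver()
driver.send('murano-conductor', {'task': 'deploy'})
print(driver.listen('murano-conductor'))   # [{'task': 'deploy'}]
```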

--
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com

+7 (495) 640-4904, 0261
+7 (903) 156-0836
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
