On 16/06/15 19:51 +0000, Alec Hothan (ahothan) wrote:
Gordon,

These are all great points for RPC messages (also called "CALL" in oslo
messaging). There are similar ambiguous contracts for the other message
types (CAST and FANOUT); for reference, a short sketch of all three
patterns appears after this paragraph.
I am worried about the general lack of interest from the community in
fixing this, as it looks like most people assume that oslo messaging is
good enough (with RabbitMQ) and hence there is no need to invest any
time in an alternative transport (not to mention that people generally
prefer to work on newer, trending areas of OpenStack than to contribute
to a lower-level messaging layer).
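
Here is a minimal sketch of what the three patterns look like through
oslo messaging's public RPC client API; the transport configuration,
topic and method names below are placeholders, not taken from any real
service:

    from oslo_config import cfg
    import oslo_messaging as messaging

    # Placeholder transport and target; topic/method names are made up.
    transport = messaging.get_transport(cfg.CONF)
    target = messaging.Target(topic='demo_topic')
    client = messaging.RPCClient(transport, target)
    ctxt = {}  # request context

    # CALL: blocks until the server returns a result (or a timeout fires).
    result = client.call(ctxt, 'do_work', arg=1)

    # CAST: fire-and-forget; no result and no delivery confirmation.
    client.cast(ctxt, 'do_work', arg=1)

    # FANOUT: a cast delivered to every server listening on the topic.
    client.prepare(fanout=True).cast(ctxt, 'invalidate_cache')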

I won't deny the above is a common feeling among many folks - RabbitMQ
is good enough - but saying that there's a lack of interest from the
community is probably not true. My reasons for saying this are that
there are efforts - like the zmq and amqp1 drivers - to improve this
reality, and there are also members of the community looking forward to
having alternative solutions.

These alternative solutions don't necessarily mean RabbitMQ is bad as a
messaging technology. The motivation may be related to deployment
architecture, available resources, etc.

I saw Sean Dague mention in another email that RabbitMQ is used by 95%
of OpenStack users - and therefore, does it make sense to invest in ZMQ
(a legit question)? RabbitMQ has had a lot of issues, but there have
been several commits fixing some of them, so IMHO it would make sense
to do another status update to reevaluate the situation.

For OpenStack to be really production grade at scale, there is a need
for a very strong messaging layer, and this cannot be achieved with
such loose API definitions (regardless of what transport is used). This
will be what distinguishes a great cloud OS platform from a so-so one.
There is also a need to define the roadmap for oslo messaging more
clearly, because the work is far from over. I see a need to clarify the
following areas:
- validation at scale and HA
- security and encryption on the control plane

And this is exactly why I'm always a bit scared when I read things
like "after all, it's used in 95% of the deployments". That's a huge
number, agreed. That number should also set some priorities in the
community, sure. But I don't believe it should determine whether other
technologies may be good or not.

If we were ever to make oslo.messaging a fully opinionated library -
which we just decided not to do[0] - I believe folks interested in
promoting other solutions would end up working on forks of
oslo.messaging for those solutions. I know Sean disagrees with me on
this, though.

If you asked me whether it makes sense to spend time on a zmq driver,
I'd reply that you should think about what issues you're trying to
solve with it and what deployments or use cases you're targeting, and
decide based on that. We need to focus on the users, operators and
"making the cloud scale(TM)".

95% of deployments use rabbit not because rabbit is the best solution
for all OpenStack problems but because it is the one that works best
right now. The lack of support for other drivers caused this, and as
long as that lack of support persists, it won't change.

Do not read the above as "something against RabbitMQ". Rabbit is a
fantastic broker and I'm happy that we've dedicated all these resources
to improving our support for it, but I do believe there are other
scenarios that would work better with other drivers.

I'm happy to help improve the documentation around what the
expectations and requirements are.

Flavio


 Alec



On 6/16/15, 11:25 AM, "Gordon Sim" <g...@redhat.com> wrote:

On 06/12/2015 09:41 PM, Alec Hothan (ahothan) wrote:
One long-standing issue I can see is that the oslo messaging API
documentation is sorely lacking details in critical areas such as API
behavior under fault conditions, load conditions and scale conditions.

I very much agree, particularly on the contract/expectations in the face
of different failure conditions. Even for those who are critical of the
pluggability of oslo.messaging, greater clarity here would be of benefit.

As I understand it, the intention is that RPC calls are invoked on a
server at-most-once, meaning that in the event of any failure, the call
will only be retried by the oslo.messaging layer if it believes it can
ensure the invocation is not made twice.

If that is correct, stating so explicitly and prominently would be
worthwhile. The expectation for services using the API would then be to
decide on any retry themselves. An idempotent call could retry for a
configured number of attempts perhaps. A non-idempotent call might be
able to check the result via some other call and decide based on that
whether to retry. Giving up would then be a last resort. This would help
increase robustness of the system overall.
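
To make that concrete, here is a rough caller-side sketch of those two
retry strategies, assuming only the public client.call() API and the
MessagingTimeout exception; all method names here are hypothetical:

    import oslo_messaging as messaging

    def call_with_retry(client, ctxt, method, max_attempts=3, **kwargs):
        # Only safe for idempotent methods: with at-most-once delivery the
        # caller cannot tell, on timeout, whether the server ran the method.
        for attempt in range(1, max_attempts + 1):
            try:
                return client.call(ctxt, method, **kwargs)
            except messaging.MessagingTimeout:
                if attempt == max_attempts:
                    raise  # give up only as a last resort

    def call_create(client, ctxt, request_id):
        # Non-idempotent variant: on timeout, check the outcome through a
        # separate (hypothetical) query method before deciding to retry.
        try:
            return client.call(ctxt, 'create_resource',
                               request_id=request_id)
        except messaging.MessagingTimeout:
            status = client.call(ctxt, 'get_status', request_id=request_id)
            if status is None:  # the server never saw the request
                return client.call(ctxt, 'create_resource',
                                   request_id=request_id)
            return status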

Again if the assumption of at-most-once is correct, and explicitly
stated, the design of the code can be reviewed to ensure it logically
meets that guarantee and of course it can also be explicitly tested for
in stress tests at the oslo.messaging level, ensuring there are no
unintended duplicate invocations. An explicit contract also allows
different approaches to be assessed and compared.
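
As a rough illustration of the kind of check such a stress test could
make (the endpoint and method names are purely hypothetical), one could
count server-side invocations per request and assert that none of them
ran more than once:

    import collections

    class CountingEndpoint(object):
        # Hypothetical server-side endpoint used only by the stress test.
        def __init__(self):
            self.invocations = collections.Counter()

        def do_work(self, ctxt, request_id):
            self.invocations[request_id] += 1
            return 'ok'

    def assert_at_most_once(endpoint):
        dupes = dict((rid, n) for rid, n in endpoint.invocations.items()
                     if n > 1)
        assert not dupes, 'duplicate invocations detected: %s' % dupes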


--
@flaper87
Flavio Percoco
