On Feb 7, 2014, at 1:51 AM, Chris Behrens <cbehr...@codestud.com> wrote:
> On Feb 6, 2014, at 11:07 PM, Joshua Harlow <harlo...@yahoo-inc.com> wrote:
>
>> +1
>>
>> To give an example of why eventlet's implicit monkey-patch-the-world approach isn't especially great (although it's what we are currently using throughout openstack):
>>
>> The way I think about how it works is to consider which libraries a single piece of code calls, and how hard it is to predict whether that code will trigger an implicit switch (conceptually similar to a context switch).
>
> Conversely, switching to asyncio means that every single module call that would have blocked before monkey patching… will now block. What is worse? :)

Are we perhaps thinking about this in the wrong way? Looking at the services that make heavy use of eventlet and friends, many of them (to me) would benefit more from the typical task queue pattern most SOA systems use. At that point your producers and consumers would share a common abstracted back end - a simple abstraction along the lines of:

    class Reader(object):
        def put(self, item):
            ...

        def get(self):
            ...

means that the Reader class could extend from - or be extended to encompass - the various models out there: local threads + queue.Queue, asyncio, eventlet, etc. This means that you force everyone into a message passing / "shared nothing" architecture where, even at the deployment level, a given individual could swap in say, twisted, or tornado, or…

It seems that baking concurrency models into the individual clients / services adds opinionated choices that may not scale, or fit the needs of a large-scale deployment. This is one of the things I've noticed looking at the client tools - don't dictate a concurrency backend; treat it as producer/consumer/message passing and you end up with something that can potentially scale out a lot more.

jesse
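
As a concrete illustration of the backend-agnostic abstraction Jesse sketches above, here is a minimal sketch assuming Python 3 and a plain threads + queue.Queue backend. The QueueBackedReader class and its maxsize parameter are illustrative only, not an existing OpenStack API; other backends (eventlet, asyncio, a message broker, ...) would implement the same two methods.

    import queue  # Python 3 stdlib; the 2014-era Python 2 equivalent was "Queue"


    class Reader(object):
        """Backend-agnostic producer/consumer interface (hypothetical)."""

        def put(self, item):
            raise NotImplementedError

        def get(self):
            raise NotImplementedError


    class QueueBackedReader(Reader):
        """One possible backend: local threads + queue.Queue.

        Callers only see put()/get(); how blocking is handled (OS thread,
        greenthread, coroutine, remote broker) is the backend's concern.
        """

        def __init__(self, maxsize=0):
            self._queue = queue.Queue(maxsize=maxsize)

        def put(self, item):
            # Blocks if the queue is full (when maxsize > 0).
            self._queue.put(item)

        def get(self):
            # Blocks until an item is available.
            return self._queue.get()


    if __name__ == "__main__":
        r = QueueBackedReader()
        r.put("some task")
        print(r.get())  # -> "some task"

The point of the sketch is only that producers and consumers code against Reader, so a deployment could swap the queue.Queue backend for an eventlet-, asyncio-, or broker-backed one without touching the calling code.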