> On Nov 23, 2014, at 7:21 PM, Mike Bayer <mba...@redhat.com> wrote:
> 
> Given that, I’ve yet to understand why a system that implicitly defers CPU 
> use when a routine encounters IO, deferring to other routines, is relegated 
> to the realm of “magic”.   Is Python reference counting and garbage 
> collection “magic”?    How can I be sure that my program is only declaring 
> memory, only as much as I expect, and then freeing it only when I absolutely 
> say so, the way async advocates seem to be about IO?   Why would a high level 
> scripting language enforce this level of low-level bookkeeping of IO calls as 
> explicit, when it is 100% predictable and automatable ?

The difference is that in my many years of Python programming I've had to 
think about garbage collection all of once. I have yet to write a non-trivial 
implicit-IO application where an implicit context switch didn't break 
something, forcing me to think about adding explicit locks around things.
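
To make that concrete, here is a rough sketch of the kind of bug I mean. The 
names and the shared counter are made up purely for illustration, and eventlet 
stands in for whichever implicitly-switching runtime you like:

    # Under an implicitly-switching runtime, any IO call can yield to another
    # green thread, so a read-modify-write that looks atomic isn't.
    import eventlet

    balance = 100  # hypothetical shared state

    def charge(amount):
        global balance
        current = balance            # read
        eventlet.sleep(0)            # stands in for any monkey-patched IO call;
                                     # an implicit context switch happens here
        balance = current - amount   # write back a now-stale value

    threads = [eventlet.spawn(charge, 10) for _ in range(5)]
    for t in threads:
        t.wait()

    # You'd expect 50, but every green thread read the same starting value
    # before switching, so this prints 90 -- hence the explicit lock.
    print(balance)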

Really that’s what it comes down to. Either you need to enable explicit context 
switches (via callbacks or yielding, or whatever) or you need to add explicit 
locks. Neither solution allows you to pretend that context switching isn’t 
going to happen nor prevents you from having to deal with it. The reason I 
prefer explicit async is because the failure mode is better (if I forget to 
yield I don’t get the actual value so my thing blows up in development) and it 
ironically works more like blocking programming because I won’t get an implicit 
context switch in the middle of a function. Compare that to the implicit async 
where the failure mode is that at runtime something weird happens.
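
For illustration, that failure mode looks roughly like this. The sketch uses 
present-day asyncio async/await syntax rather than yield from, and the 
function names are invented:

    import asyncio

    async def fetch_count():
        await asyncio.sleep(0)    # stands in for a real IO call
        return 42

    async def broken():
        count = fetch_count()     # forgot the explicit switch point ("await")
        return count + 1          # TypeError right here, in development

    async def correct():
        count = await fetch_count()
        return count + 1

    print(asyncio.run(correct()))     # 43
    try:
        asyncio.run(broken())
    except TypeError as exc:
        # The mistake is loud and local, instead of a subtle runtime race.
        print("fails fast:", exc)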

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

