Pablo Galindo Salgado <pablog...@gmail.com> added the comment:

> - Rewriting code objects in place is wrong, IMO: you always need to have a
> way to deoptimize the entire thing, so you need to keep the original one. It
> might be that you have well-defined and static types for the first 10000
> invocations and something entirely different on invocation 10001. So IMO we
> need a SpecializedCode object with the necessary bailout guards.

Imagine that we have a secondary copy of the bytecode in a cache inside the 
code object and we mutate that one instead. The key difference from the 
current cache infrastructure is that we don't accumulate all the 
optimizations on the same opcode, which can make the handler very verbose. 
Instead, we change the generic opcode to a more specialised one to optimize, 
and we change it back to deoptimize. The advantage is that BINARY_SUBSCR, for 
example, won't be one gigantic block of code that does different things 
depending on whether it is specialising for dicts, lists or tuples; we will 
have a different opcode for each of them, which I think is much easier to 
manage.
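
To make the idea concrete, here is a rough sketch of the mechanism in
Python rather than the C the interpreter is actually written in. The Code
class, the specialise/deoptimise names and the opcode strings are all
invented for illustration; they are not CPython's API, just the shape of
"keep the original, mutate a private copy, rewrite single opcodes in
place":

    # Hypothetical generic opcode and its type-specific variants.
    GENERIC = "BINARY_SUBSCR"
    SPECIALISED = {dict: "BINARY_SUBSCR_DICT", list: "BINARY_SUBSCR_LIST"}

    class Code:
        def __init__(self, opcodes):
            self.co_code = tuple(opcodes)   # immutable original bytecode
            self.cache = list(opcodes)      # mutable specialised copy

        def specialise(self, offset, container_type):
            # Swap the generic opcode for a type-specific one.
            new = SPECIALISED.get(container_type)
            if new is not None and self.cache[offset] == GENERIC:
                self.cache[offset] = new

        def deoptimise(self, offset):
            # Bail out: restore the generic opcode from the original.
            self.cache[offset] = self.co_code[offset]

    code = Code(["LOAD_FAST", "LOAD_CONST", "BINARY_SUBSCR", "RETURN_VALUE"])
    code.specialise(2, dict)   # hot path only ever saw dicts
    assert code.cache[2] == "BINARY_SUBSCR_DICT"
    code.deoptimise(2)         # guard failed on invocation 10001
    assert code.cache[2] == GENERIC

Because the original co_code is never touched, deoptimizing is just copying
the generic opcode back, which is the bailout Mark is asking for.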

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue42115>
_______________________________________