New submission from Mark Shannon <m...@hotpy.org>:

Every time we get a cache hit in, e.g., LOAD_ATTR_CACHED, we increment the 
saturating counter. That takes a dependent load and a store, as well as a 
shift. For fast instructions like BINARY_ADD_FLOAT, this represents a 
significant portion of the work done in the instruction.

If we don't bother to record the hit, we reduce the overhead of fast, 
specialized instructions.

The cost is that we may have to re-optimize more often.
For instructions with high hit-to-miss ratios, which is most of them, this 
should be barely measurable.
The cost for type-unstable and un-optimizable instructions shouldn't change 
much.

Initial experiments show ~1% speedup.

----------
assignee: Mark.Shannon
components: Interpreter Core
messages: 404320
nosy: Mark.Shannon
priority: normal
severity: normal
status: open
title: Reduce overhead for cache hits in specialized opcodes.
type: performance
versions: Python 3.11

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue45527>
_______________________________________