On 25/05/16 04:13, Emilio G. Cota wrote:
> For some workloads such as arm bootup, tb_phys_hash is performance-critical.
> This is due to the high frequency of accesses to the hash table, caused
> by (frequent) TLB flushes that wipe out the cpu-private tb_jmp_cache's.
> More info:
>   https://lists.nongnu.org/archive/html/qemu-devel/2016-03/msg05098.html
>
> To dig further into this I modified an arm image booting debian jessie to
> immediately shut down after boot. Analysis revealed that quite a bit of time
> is unnecessarily spent in tb_phys_hash: the cause is poor hashing that
> results in very uneven loading of chains in the hash table's buckets;
> the longest observed chain had ~550 elements.
>
> The appended patch addresses this with two changes:
>
> 1) Use xxhash as the hash table's hash function. xxhash is a fast,
>    high-quality hashing function.
>
> 2) Feed the hashing function with not just tb_phys, but also pc and flags.
>
> This improves performance over using just tb_phys for hashing, since that
> resulted in some hash buckets having many TBs, while others got very few;
> with these changes, the longest observed chain on a single hash bucket is
> brought down from ~550 to ~40.
>
> Tests show that the other element checked for in tb_find_physical,
> cs_base, is always a match when tb_phys+pc+flags are a match,
> so hashing cs_base is wasteful. It could be that this is an ARM-only
> thing, though. UPDATE:
> On Tue, Apr 05, 2016 at 08:41:43 -0700, Richard Henderson wrote:
>> The cs_base field is only used by i386 (in 16-bit modes), and sparc (for a TB
>> consisting of only a delay slot).
>>
>> It may well still turn out to be reasonable to ignore cs_base for hashing.
>
> BTW, after this change the hash table should not be called "tb_phys_hash"
> anymore; this is addressed later in this series.
>
> This change gives consistent bootup time improvements. I tested two
> host machines:
> - Intel Xeon E5-2690: 11.6% less time
> - Intel i7-4790K: 19.2% less time
>
> Increasing the number of hash buckets yields further improvements. However,
> using a larger, fixed number of buckets can degrade performance for other
> workloads that do not translate as many blocks (600K+ for debian-jessie arm
> bootup). This is dealt with later in this series.
>
> Reviewed-by: Richard Henderson <r...@twiddle.net>
> Reviewed-by: Alex Bennée <alex.ben...@linaro.org>
> Signed-off-by: Emilio G. Cota <c...@braap.org>
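As a side note for readers following the thread: the combining function this
patch pulls in from "exec/tb-hash-xx.h" is a fixed-size variant of xxh32
that consumes exactly two 64-bit words plus one 32-bit word. Below is a
minimal, self-contained sketch of that idea. It is an illustration only, not
the series' actual tb_hash_func5(): the name hash5_sketch, the helpers, and
the fixed seed are made up for the example; the constants are the standard
xxHash32 primes.

#include <stdint.h>

#define PRIME32_1 2654435761u
#define PRIME32_2 2246822519u
#define PRIME32_3 3266489917u
#define PRIME32_4  668265263u

static inline uint32_t rol32(uint32_t x, unsigned r)
{
    return (x << r) | (x >> (32 - r));
}

/* One xxh32 round: mix a 32-bit input word into an accumulator lane. */
static inline uint32_t xxh32_round(uint32_t acc, uint32_t input)
{
    acc += input * PRIME32_2;
    acc = rol32(acc, 13);
    return acc * PRIME32_1;
}

/* Sketch of hashing (phys_pc, pc, flags): 20 bytes of fixed-size input. */
static uint32_t hash5_sketch(uint64_t phys_pc, uint64_t pc, uint32_t flags)
{
    uint32_t seed = 1; /* arbitrary fixed seed, assumed for this sketch */
    uint32_t v1 = seed + PRIME32_1 + PRIME32_2;
    uint32_t v2 = seed + PRIME32_2;
    uint32_t v3 = seed;
    uint32_t v4 = seed - PRIME32_1;
    uint32_t h;

    /* Mix the four 32-bit halves of the two 64-bit inputs, one per lane. */
    v1 = xxh32_round(v1, (uint32_t)(phys_pc >> 32));
    v2 = xxh32_round(v2, (uint32_t)phys_pc);
    v3 = xxh32_round(v3, (uint32_t)(pc >> 32));
    v4 = xxh32_round(v4, (uint32_t)pc);

    h = rol32(v1, 1) + rol32(v2, 7) + rol32(v3, 12) + rol32(v4, 18);
    h += 20; /* total input length in bytes: 2 * 8 + 4 */

    /* Fold in the trailing 32-bit word (flags). */
    h += flags * PRIME32_3;
    h = rol32(h, 17) * PRIME32_4;

    /* Final avalanche. */
    h ^= h >> 15;
    h *= PRIME32_2;
    h ^= h >> 13;
    h *= PRIME32_3;
    h ^= h >> 16;
    return h;
}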
Reviewed-by: Sergey Fedorov <sergey.fedo...@linaro.org>

> ---
>  cpu-exec.c             |  4 ++--
>  include/exec/tb-hash.h |  8 ++++++--
>  translate-all.c        | 10 +++++-----
>  3 files changed, 13 insertions(+), 9 deletions(-)
>
> diff --git a/cpu-exec.c b/cpu-exec.c
> index 14df1aa..1735032 100644
> --- a/cpu-exec.c
> +++ b/cpu-exec.c
> @@ -231,13 +231,13 @@ static TranslationBlock *tb_find_physical(CPUState *cpu,
>  {
>      CPUArchState *env = (CPUArchState *)cpu->env_ptr;
>      TranslationBlock *tb, **tb_hash_head, **ptb1;
> -    unsigned int h;
> +    uint32_t h;
>      tb_page_addr_t phys_pc, phys_page1;
>
>      /* find translated block using physical mappings */
>      phys_pc = get_page_addr_code(env, pc);
>      phys_page1 = phys_pc & TARGET_PAGE_MASK;
> -    h = tb_phys_hash_func(phys_pc);
> +    h = tb_hash_func(phys_pc, pc, flags);
>
>      /* Start at head of the hash entry */
>      ptb1 = tb_hash_head = &tcg_ctx.tb_ctx.tb_phys_hash[h];
> diff --git a/include/exec/tb-hash.h b/include/exec/tb-hash.h
> index 0f4e8a0..88ccfd1 100644
> --- a/include/exec/tb-hash.h
> +++ b/include/exec/tb-hash.h
> @@ -20,6 +20,9 @@
>  #ifndef EXEC_TB_HASH
>  #define EXEC_TB_HASH
>
> +#include "exec/exec-all.h"
> +#include "exec/tb-hash-xx.h"
> +
>  /* Only the bottom TB_JMP_PAGE_BITS of the jump cache hash bits vary for
>     addresses on the same page. The top bits are the same. This allows
>     TLB invalidation to quickly clear a subset of the hash table. */
> @@ -43,9 +46,10 @@ static inline unsigned int tb_jmp_cache_hash_func(target_ulong pc)
>             | (tmp & TB_JMP_ADDR_MASK));
>  }
>
> -static inline unsigned int tb_phys_hash_func(tb_page_addr_t pc)
> +static inline
> +uint32_t tb_hash_func(tb_page_addr_t phys_pc, target_ulong pc, uint32_t flags)
>  {
> -    return (pc >> 2) & (CODE_GEN_PHYS_HASH_SIZE - 1);
> +    return tb_hash_func5(phys_pc, pc, flags) & (CODE_GEN_PHYS_HASH_SIZE - 1);
>  }
>
>  #endif
> diff --git a/translate-all.c b/translate-all.c
> index b54f472..c48fccb 100644
> --- a/translate-all.c
> +++ b/translate-all.c
> @@ -991,12 +991,12 @@ void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
>  {
>      CPUState *cpu;
>      PageDesc *p;
> -    unsigned int h;
> +    uint32_t h;
>      tb_page_addr_t phys_pc;
>
>      /* remove the TB from the hash list */
>      phys_pc = tb->page_addr[0] + (tb->pc & ~TARGET_PAGE_MASK);
> -    h = tb_phys_hash_func(phys_pc);
> +    h = tb_hash_func(phys_pc, tb->pc, tb->flags);
>      tb_hash_remove(&tcg_ctx.tb_ctx.tb_phys_hash[h], tb);
>
>      /* remove the TB from the page list */
> @@ -1126,11 +1126,11 @@ static inline void tb_alloc_page(TranslationBlock *tb,
>  static void tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
>                           tb_page_addr_t phys_page2)
>  {
> -    unsigned int h;
> +    uint32_t h;
>      TranslationBlock **ptb;
>
> -    /* add in the physical hash table */
> -    h = tb_phys_hash_func(phys_pc);
> +    /* add in the hash table */
> +    h = tb_hash_func(phys_pc, tb->pc, tb->flags);
>      ptb = &tcg_ctx.tb_ctx.tb_phys_hash[h];
>      tb->phys_hash_next = *ptb;
>      *ptb = tb;
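To make the motivation concrete: the old tb_phys_hash_func() looks only at
phys_pc, so TBs that differ only in pc or flags (e.g. the same code
retranslated under different CPU flags after a TLB flush) always chain in
the same bucket. A toy demo of that effect, reusing hash5_sketch() from the
sketch above; the addresses, flag values, and table size below are
assumptions for illustration (in this tree, CODE_GEN_PHYS_HASH_SIZE plays
the role of HASH_SIZE):

#include <stdint.h>
#include <stdio.h>

/* Illustrative bucket count; any power of two works for the demo. */
#define HASH_BITS 15
#define HASH_SIZE (1u << HASH_BITS)

/* The old scheme: only phys_pc contributes to the bucket index. */
static unsigned old_hash(uint64_t phys_pc)
{
    return (phys_pc >> 2) & (HASH_SIZE - 1);
}

int main(void)
{
    uint64_t phys_pc = 0x40001000; /* two TBs from the same physical address */

    /* Old: flags are ignored, so both TBs always land in the same bucket. */
    printf("old: %u vs %u\n", old_hash(phys_pc), old_hash(phys_pc));

    /* New: mixing in pc and flags makes the two almost certainly diverge. */
    printf("new: %u vs %u\n",
           hash5_sketch(phys_pc, 0x10000, 1) & (HASH_SIZE - 1),
           hash5_sketch(phys_pc, 0x10000, 3) & (HASH_SIZE - 1));
    return 0;
}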