Emilio G. Cota <c...@braap.org> writes:
> On Thu, Sep 20, 2018 at 01:19:51 +0100, Alex Bennée wrote:
>> If we are going to have an indirection then we can also drop the
>> requirement to scale the TLB according to the number of MMU indexes
>> we have to support. It's fairly wasteful when a bunch of them are
>> almost never used unless you are running stuff that uses them.
>
> So with dynamic TLB sizing, what you're suggesting here is to resize
> each MMU array independently (depending on their use rate) instead
> of using a single "TLB size" for all MMU indexes. Am I understanding
> your point correctly?

Not quite - I think it would overly complicate the lookup to have a
differently sized TLB for each mmu index, even if their usage patterns
differ. I just meant that if we are already paying the cost of an
indirection, we no longer have to restrict the sizes of:

  CPUTLBEntry tlb_table[NB_MMU_MODES][CPU_TLB_SIZE];
  CPUIOTLBEntry iotlb[NB_MMU_MODES][CPU_TLB_SIZE];

so that any entry in the 2D array can be indexed directly from env.
Currently CPU_TLB_SIZE/CPU_TLB_BITS is restricted by the number of
NB_MMU_MODES we have to support. But if each index can be flushed and
managed separately we can have:

  CPUTLBEntry *tlb_table[NB_MMU_MODES];

and size CPU_TLB_SIZE for the maximum offset we can manage in the
lookup code. This is mainly driven by the varying
TCG_TARGET_TLB_DISPLACEMENT_BITS each backend has available to it
(rough sketch at the end of this mail).

> Thanks,
> E.

--
Alex Bennée
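
To illustrate what I mean - this is only a sketch: tlb_mmu_init() is a
made-up helper, and I haven't checked the CPU_TLB_BITS macro against
the real cpu-defs.h definition:

  /* Per-index tables behind a pointer, so one index's size no longer
   * constrains the others. CPUTLBEntry/CPUIOTLBEntry are the existing
   * QEMU structs. */
  CPUTLBEntry *tlb_table[NB_MMU_MODES];
  CPUIOTLBEntry *iotlb[NB_MMU_MODES];

  /* Hypothetical helper: allocate one index's tables on their own,
   * rather than as slices of a single 2D array. */
  static void tlb_mmu_init(CPUArchState *env, int mmu_idx, size_t n)
  {
      env->tlb_table[mmu_idx] = g_new0(CPUTLBEntry, n);
      env->iotlb[mmu_idx] = g_new0(CPUIOTLBEntry, n);
  }

  /* With the NB_MMU_MODES term gone, CPU_TLB_BITS is bounded only by
   * what the backend's displacement bits can address: */
  #define CPU_TLB_BITS \
      MIN(8, TCG_TARGET_TLB_DISPLACEMENT_BITS - CPU_TLB_ENTRY_BITS)
  #define CPU_TLB_SIZE (1 << CPU_TLB_BITS)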