Avi Kivity <a...@redhat.com> wrote:

> > > Slot searching is quite fast since there's a small number of slots, and
> > > we sort the larger ones to be in the front, so positive lookups are fast.
> > > We cache negative lookups in the shadow page tables (an spte can be
> > > either "not mapped", "mapped to RAM", or "not mapped and known to be
> > > mmio") so we rarely need to walk the entire list.
> >
> > Well, we don't always have shadow page tables.  Having hints for unmapped
> > guest memory like this is pretty tricky.
> > We're currently running into issues with device assignment though, where we
> > get a lot of small slots mapped to real hardware.  I'm sure that will hit us
> > on x86 sooner or later too.
>
> For x86 that's not a problem, since once you map a page, it stays mapped
> (on modern hardware).
>
I was once thinking about how to search a slot reasonably fast in every case, even when we do not have the mmio-spte cache.

One possible way I thought up was to sort the slots according to their base_gfn. The problem then becomes: "find the first slot whose base_gfn + npages is greater than this gfn." Since we can do a binary search, the search cost is O(log(# of slots)).

But I guess that most of the time would be wasted on reading many memslots just to learn their base_gfn and npages. So the most practically effective thing is to keep a separate array which holds just their base_gfn values. This makes the task a simple, cache friendly search on an integer array: probably faster than using a *-tree data structure. A rough sketch is below.

If needed, should we make cmp_memslot() architecture specific in the end?

	Takuya
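Just to illustrate the idea, here is a rough sketch with simplified, made-up structures (not the real kvm_memslots code): the slots are kept sorted by base_gfn, a parallel integer array mirrors the base_gfn values, and a binary search on that array picks the only candidate slot. For sorted, non-overlapping slots, the last slot whose base_gfn <= gfn is the same candidate as the first slot whose base_gfn + npages is greater than gfn, so the search loop itself only touches the small base_gfn array.

/*
 * Sketch only -- simplified structures, not the real kvm ones.
 * Slots are sorted by base_gfn (ascending, non-overlapping), and
 * base_gfns[i] mirrors slots[i].base_gfn so the binary search stays
 * on a small, cache friendly integer array.
 */
#include <stddef.h>

typedef unsigned long long gfn_t;

struct memslot {
	gfn_t base_gfn;
	unsigned long npages;
	/* ... other per-slot data ... */
};

struct slot_array {
	int nslots;
	gfn_t base_gfns[32];        /* base_gfns[i] == slots[i].base_gfn */
	struct memslot slots[32];   /* sorted by base_gfn, ascending */
};

/* Return the slot containing gfn, or NULL if it is unmapped. */
static struct memslot *search_memslot(struct slot_array *sa, gfn_t gfn)
{
	struct memslot *slot;
	int lo = 0, hi = sa->nslots;

	/* binary search on the integer array only */
	while (lo < hi) {
		int mid = lo + (hi - lo) / 2;

		if (sa->base_gfns[mid] <= gfn)
			lo = mid + 1;
		else
			hi = mid;
	}

	/* lo is now the first index whose base_gfn is greater than gfn */
	if (lo == 0)
		return NULL;

	slot = &sa->slots[lo - 1];
	return gfn < slot->base_gfn + slot->npages ? slot : NULL;
}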