On Tue, Jun 19, 2012 at 3:52 PM, 陳韋任 (Wei-Ren Chen) <che...@iis.sinica.edu.tw> wrote:
>> But if QEMU/TCG is doing a GVA->GPA translation as Wei-Ren said, I don't see
>> how KVM can help.
>
> Just want to clarify. QEMU maintains a TLB (env->tlb_table) which stores the
> GVA -> HVA mapping; it is used to speed up address translation. On a TLB miss,
> QEMU calls cpu_arm_handle_mmu_fault (taking ARM as an example) to do the
> GVA -> GPA translation.
>
>> I could understand having multiple 32bit regions in QEMU's virtual space (no
>> need for KVM), one per guest page table, and then simply adding an offset to
>> every memory access to redirect it to the appropriate 32-bit region (1 region
>> per guest page table).
>>
>> This could translate a single guest ld/st into a host ld+add+ld/st (the first
>> load is to get the "region" offset for the currently executing guest
>> context).
>
> How does that differ from what QEMU is doing? Each time we fill the TLB, we
> add an offset to the GPA to get the HVA, then store the GVA -> HVA mapping in
> the TLB (IIUC). I don't see much difference here.

I think what QEMU is doing is mapping GPA to HVA. Lluís means we can map GVA to
HVA directly, so we would not even need to look up the TLB: a single host memory
access instruction could simulate one guest memory access instruction.
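To make the contrast concrete, here is a rough sketch in C (not actual QEMU
code; the tlb_table layout is simplified, and region_base/cur_ctx are
hypothetical names for the per-page-table region scheme Lluís describes):

  #include <stdint.h>

  #define TLB_BITS  8
  #define TLB_SIZE  (1 << TLB_BITS)
  #define PAGE_BITS 12
  #define PAGE_MASK (~((uintptr_t)((1 << PAGE_BITS) - 1)))

  /* Softmmu-style entry: guest virtual page tag plus a GVA->HVA addend. */
  typedef struct {
      uintptr_t tag;     /* guest virtual page this entry covers */
      uintptr_t addend;  /* on a hit, HVA = GVA + addend */
  } tlb_entry;

  static tlb_entry tlb_table[TLB_SIZE];

  /* Scheme 1: what the software TLB fast path boils down to today.
   * On a miss, the slow path (e.g. cpu_arm_handle_mmu_fault) walks the
   * guest page table, refills the entry, and the access is retried. */
  static uint32_t softmmu_ldl(uintptr_t gva)
  {
      unsigned idx = (gva >> PAGE_BITS) & (TLB_SIZE - 1);
      if (tlb_table[idx].tag == (gva & PAGE_MASK)) {
          return *(uint32_t *)(gva + tlb_table[idx].addend);  /* TLB hit */
      }
      /* TLB miss: refill via the MMU fault handler, then retry (omitted) */
      return 0;
  }

  /* Scheme 2 (the proposal): one 32-bit host region per guest page table,
   * so a guest load becomes host ld (region base) + add + ld, no tag check. */
  static uintptr_t region_base[16];  /* hypothetical: one per guest page table */
  static unsigned  cur_ctx;          /* hypothetical: currently running context */

  static uint32_t direct_ldl(uint32_t gva)
  {
      return *(uint32_t *)(region_base[cur_ctx] + gva);
  }

The point is that the second form drops the per-access tag compare and the miss
path entirely, at the cost of reserving one host virtual region per guest page
table and keeping those regions in sync with the guest mappings.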
Thanks,
MK

> Regards,
> chenwj
>
> --
> Wei-Ren Chen (陳韋任)
> Computer Systems Lab, Institute of Information Science,
> Academia Sinica, Taiwan (R.O.C.)
> Tel: 886-2-2788-3799 #1667
> Homepage: http://people.cs.nctu.edu.tw/~chenwj

--
www.skyeye.org