Hi,

On Tue, Oct 18, 2022 at 1:34 AM Sean Christopherson <sea...@google.com> wrote:
>
> On Fri, Sep 30, 2022, Fuad Tabba wrote:
> > > > > > pKVM would also need a way to make an fd accessible again
> > > > > > when shared back, which I think isn't possible with this patch.
> > > > >
> > > > > But does pKVM really want to mmap/munmap a new region at the page-level,
> > > > > that can cause VMA fragmentation if the conversion is frequent as I see.
> > > > > Even with a KVM ioctl for mapping as mentioned below, I think there will
> > > > > be the same issue.
> > > >
> > > > pKVM doesn't really need to unmap the memory. What is really important
> > > > is that the memory is not GUP'able.
> > >
> > > Well, not entirely unguppable, just unguppable without a magic FOLL_* flag,
> > > otherwise KVM wouldn't be able to get the PFN to map into guest memory.
> > >
> > > The problem is that gup() and "mapped" are tied together. So yes, pKVM doesn't
> > > strictly need to unmap memory _in the untrusted host_, but since mapped==guppable,
> > > the end result is the same.
> > >
> > > Emphasis above because pKVM still needs to unmap the memory _somewhere_. IIUC,
> > > the current approach is to do that only in the stage-2 page tables, i.e. only
> > > in the context of the hypervisor. Which is also the source of the gup()
> > > problems; the untrusted kernel is blissfully unaware that the memory is
> > > inaccessible.
> > >
> > > Any approach that moves some of that information into the untrusted kernel so
> > > that the kernel can protect itself will incur fragmentation in the VMAs. Well,
> > > unless all of guest memory becomes unguppable, but that's likely not a viable
> > > option.
> >
> > Actually, for pKVM, there is no need for the guest memory to be GUP'able at
> > all if we use the new inaccessible_get_pfn().
>
> Ya, I was referring to pKVM without UPM / inaccessible memory.
>
> Jumping back to blocking gup(), what about using the same tricks as secretmem to
> block gup()? E.g. compare vm_ops to block regular gup() and a_ops to block fast
> gup() on struct page? With a Kconfig that's selected by pKVM (which would also
> need its own Kconfig), e.g. CONFIG_INACCESSIBLE_MAPPABLE_MEM, there would be zero
> performance overhead for non-pKVM kernels, i.e. hooking gup() shouldn't be
> controversial.
>
> I suspect the fast gup() path could even be optimized to avoid the page_mapping()
> lookup by adding a PG_inaccessible flag that's defined iff the TBD Kconfig is
> selected. I'm guessing pKVM isn't expected to be deployed on massive NUMA systems
> anytime soon, so there should be plenty of page flags to go around.
>
> Blocking gup() instead of trying to play refcount games when converting back to
> private would eliminate the need to put heavy restrictions on mapping, as the goal
> of those were purely to simplify the KVM implementation, e.g. the "one mapping per
> memslot" thing would go away entirely.
My implementation of mmap for inaccessible_fops was setting VM_PFNMAP. That said,
I realized that that might be adding an unnecessary restriction, and I have now
changed it to do it the secretmem way. That's straightforward and works well.

> > This of course goes back to what I'd mentioned before in v7; it seems that
> > representing the memslot memory as a file descriptor should be orthogonal to
> > whether the memory is shared or private, rather than a private_fd for private
> > memory and the userspace_addr for shared memory.
>
> I also explored the idea of backing any guest memory with an fd, but came to
> the conclusion that private memory needs a separate handle[1], at least on x86.
>
> For SNP and TDX, even though the GPA is the same (ignoring the fact that SNP and
> TDX steal GPA bits to differentiate private vs. shared), the two types need to be
> treated as separate mappings[2]. Post-boot, converting is lossy in both
> directions, so even conceptually they are two distinct pages that just happen to
> share (some) GPA bits.
>
> To allow conversions, i.e. changing which mapping to use, without memslot
> updates, KVM needs to let userspace provide both mappings in a single memslot.
> So while fd-based memory is an orthogonal concept, e.g. we could add fd-based
> shared memory, KVM would still need a dedicated private handle.
>
> For pKVM, the fd doesn't strictly need to be mutually exclusive with the
> existing userspace_addr, but since the private_fd is going to be added for x86,
> I think it makes sense to use that instead of adding generic fd-based memory
> for pKVM's use case (which is arguably still "private" memory but with special
> semantics).
>
> [1] https://lore.kernel.org/all/yulth7bl4mwt5...@google.com
> [2] https://lore.kernel.org/all/869622df-5bf6-0fbb-cac4-34c6ae7df...@kernel.org

As long as the API does not impose this limit, which would imply pKVM is misusing
it, then I agree.
I think that's why renaming it to something like "restricted" might be clearer.

Thanks,
/fuad