* Zhang, Yanmin <[email protected]> wrote:
> 2) We couldn't get guest os kernel/user stack data in an easy way, so we
> might not support callchain feature of tool perf. A work around is KVM
> copies kernel stack data out, so we could at least support guest os kernel
> callchain.
If the guest is Linux, KVM can get all the info we need.
While the PMU event itself might trigger in an NMI (where we cannot access
most of KVM's data structures safely), for this specific case of KVM
instrumentation we can delay the processing to a more appropriate time - in
fact we can do it in the KVM thread itself.
We can do that because we just triggered a VM exit, so the VM state is for all
practical purposes frozen (as far as this virtual CPU goes).
Which gives us plenty of time and opportunity to piggyback on the KVM
thread, look up the guest stack, process/fill the MMU cache as we walk the
guest page tables, etc. etc.
It would need some minimal callback facility towards KVM, triggered by a perf
event PMI.
One additional step needed is to get symbol information from the guest, and to
integrate it into the symbol cache on the host side in ~/.debug. We already
support cross-arch symbols and 'perf archive', so the basic facilities are
there for that. So you can profile on 32-bit PA-RISC and type 'perf report' on
64-bit x86 and get all the right info.
For this to work across a guest, a gateway is needed towards the guest.
There are several ways to achieve this. The most practical would be two steps:
- a user-space facility to access guest images/libraries. (say via ssh, or
just a plain TCP port) This would be useful for general 'remote profiling'
sessions as well, so it's not KVM specific - it would be useful for remote
debugging.
- The guest /proc/kallsyms (and vmlinux) could be accessed via that channel
as well.
(Note that this is purely for guest symbol space access - all the profiling
data itself comes via the host kernel.)
In theory we could build some sort of 'symbol server' facility into the
kernel, which could be enabled in guest kernels too - but i suspect existing,
user-space transports go most of the way already. (the only disadvantage of
existing transports is that they all have to be configured, enabled and made
user-accessible, which is one of the few weak points of KVM in general.)
Thanks,
Ingo