On Sat, Sep 3, 2011 at 9:26 AM, Dhaval Giani <dhaval.gi...@gmail.com> wrote:
> On Sat, Sep 3, 2011 at 1:53 AM, Blue Swirl <blauwir...@gmail.com> wrote:
>> On Wed, Aug 31, 2011 at 6:00 PM, Dhaval Giani <dhaval.gi...@gmail.com> wrote:
>>> On Wed, Aug 31, 2011 at 10:58 AM, Blue Swirl <blauwir...@gmail.com> wrote:
>>>> On Wed, Aug 31, 2011 at 8:38 AM, Avi Kivity <a...@redhat.com> wrote:
>>>>> On 08/26/2011 10:06 PM, Blue Swirl wrote:
>>>>>>
>>>>>> Let guests inject tracepoint data via fw_cfg device.
>>>>>>
>>>>>
>>>>> At least on x86, fw_cfg is pretty slow, involving multiple exits. IMO,
>>>>> for kvm, even one exit per tracepoint is too high. We need to use a
>>>>> shared memory transport with a way to order guest/host events later on
>>>>> (by using a clock).
>>>>
>>>> This could be an easy way, if the guest always had access to an
>>>> accurate clock, but that may not be the case.
>>>>
>>>
>>> From what I understand, kvmclock should be good enough for this
>>> purpose. That is what I am using.
>>
>> It's only available for KVM on x86, that is most certainly not enough.
>>
>
> From what I understood it is available on other architectures as well,
> but let us just confirm with glommer.
This line in Makefile.target limits the device to KVM on x86:

obj-i386-$(CONFIG_KVM) += kvmclock.o

Moreover, the code assumes that the CPUState structure contains a field
cpuid_kvm_features, but that is only available on x86. Perhaps the device
could be made generic, but I don't see how it would work with TCG. Actually
I don't understand that at all; I think the key functionality must live
inside the KVM module.
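
For reference, the only discovery path the guest has for kvmclock is the
x86 CPUID paravirtual leaves, which is why those feature bits only exist
in the x86 part of CPUState. A rough standalone probe from the guest side
(not QEMU code, just a sketch; constants taken from the kernel's KVM CPUID
documentation, worth double-checking) would look something like:

/* Sketch: ask CPUID whether the hypervisor is KVM and whether it
 * advertises kvmclock. This only makes sense on an x86 guest, since
 * the whole mechanism is the x86 cpuid instruction. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define KVM_CPUID_SIGNATURE       0x40000000
#define KVM_CPUID_FEATURES        0x40000001
#define KVM_FEATURE_CLOCKSOURCE   (1u << 0)
#define KVM_FEATURE_CLOCKSOURCE2  (1u << 3)

static void cpuid(uint32_t leaf, uint32_t *a, uint32_t *b,
                  uint32_t *c, uint32_t *d)
{
    __asm__ volatile("cpuid"
                     : "=a"(*a), "=b"(*b), "=c"(*c), "=d"(*d)
                     : "a"(leaf), "c"(0));
}

int main(void)
{
    uint32_t a, b, c, d;
    char sig[13];

    /* Leaf 0x40000000 returns the hypervisor signature in EBX/ECX/EDX. */
    cpuid(KVM_CPUID_SIGNATURE, &a, &b, &c, &d);
    memcpy(sig, &b, 4);
    memcpy(sig + 4, &c, 4);
    memcpy(sig + 8, &d, 4);
    sig[12] = '\0';
    if (strcmp(sig, "KVMKVMKVM") != 0) {
        printf("not running on KVM\n");
        return 1;
    }

    /* Leaf 0x40000001 returns the KVM paravirt feature bits in EAX. */
    cpuid(KVM_CPUID_FEATURES, &a, &b, &c, &d);
    printf("kvmclock advertised: %s\n",
           (a & (KVM_FEATURE_CLOCKSOURCE | KVM_FEATURE_CLOCKSOURCE2))
               ? "yes" : "no");
    return 0;
}

On an x86 KVM guest this reports whether kvmclock is advertised; on any
other target the cpuid instruction doesn't even exist, which is the point.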