Hi All,

The RHEL-6 version of qemu-kvm makes its tracepoints available to SystemTap.
I have been working on useful examples for the SystemTap tracepoints in
qemu. There don't seem to be many examples showing the utility of the
tracepoints in diagnosing problems. However, I came across the following
blog entry that has several examples:

http://blog.vmsplice.net/2011/03/how-to-write-trace-analysis-scripts-for.html

I reimplemented the VirtqueueRequestTracker example from the blog in
SystemTap (the attached virtqueueleaks.stp). I can run it on RHEL-6's
qemu-kvm-0.12.1.2-2.160.el6_1.8.x86_64. When the script is stopped, it
prints the pid and the address of each elem that leaked, like the
following:

$ stap virtqueueleaks.stp 
^C
     pid     elem
   19503  1c4af28
   19503  1c56f88
   19503  1c62fe8
   19503  1c6f048
   19503  1c7b0a8
   19503  1c87108
   19503  1c93168
...

I am not that familiar with the internals of qemu. The script seems to
indicate that qemu is leaking elems, but is that really the case? If there
are resource leaks, what output would help debug them? What enhancements
could be made to this script to provide more useful information?
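
One enhancement I have been considering (just a sketch, assuming nothing
beyond the same two qemu.kvm tapset probes that virtqueueleaks.stp already
uses) is to record a timestamp when each elem is popped, so the end-of-run
report can show how long each leaked elem has been outstanding:

global elems, popped_at

probe qemu.kvm.virtqueue_pop
{
  elems[pid(), elem] = elem
  popped_at[pid(), elem] = gettimeofday_ms()  # when the elem was popped
}

probe qemu.kvm.virtqueue_fill
{
  # the elem was returned to the guest; it is no longer outstanding
  delete elems[pid(), elem]
  delete popped_at[pid(), elem]
}

probe end
{
  printf("\n%8s %8s %12s\n", "pid", "elem", "age_ms")
  foreach ([p+, e] in elems) {
    printf("%8d %8x %12d\n", p, e, gettimeofday_ms() - popped_at[p, e])
  }
}

An elem whose age keeps growing would point at a genuinely lost request
rather than one that is merely long-lived.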

Are there other examples of qemu probing that people would like to see?
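
For instance, the same two probes could drive a simple per-pid activity
counter (again only a sketch, assuming just the probes used above):

global pops, fills

probe qemu.kvm.virtqueue_pop  { pops[pid()]++ }
probe qemu.kvm.virtqueue_fill { fills[pid()]++ }

# print and reset the counts every 5 seconds
probe timer.s(5)
{
  printf("\n%8s %8s %8s\n", "pid", "pops", "fills")
  foreach (p in pops) {
    printf("%8d %8d %8d\n", p, pops[p], fills[p])
  }
  delete pops
  delete fills
}

A sustained gap between pops and fills would be another hint that requests
are going missing.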

-Will



# virtqueueleaks.stp
#
# virtqueueleaks.stp is based on the VirtqueueRequestTracker from:
# http://blog.vmsplice.net/2011/03/how-to-write-trace-analysis-scripts-for.html

global elems

# remember each elem qemu pops off a virtqueue
probe qemu.kvm.virtqueue_pop { elems[pid(),elem] = elem }
# forget the elem once qemu fills it back in (the request completed)
probe qemu.kvm.virtqueue_fill { delete elems[pid(),elem] }

# on exit, report elems that were popped but never filled back in
probe end
{
  printf("\n%8s %8s\n", "pid", "elem")
  foreach([p+, elem] in elems) {
    printf("%8d %8x\n", p, elem)
  }
}
