On Tue, Nov 9, 2010 at 3:11 PM, Lluís <xscr...@gmx.net> wrote:
> Stefan Hajnoczi writes:
>
>> On Mon, Nov 8, 2010 at 2:35 PM, Lluís <xscr...@gmx.net> wrote:
>>> Another important change would be to provide a generic interface inside
>>> QEMU that provides programmatic control of the tracing state of all
>>> events (both per-cpu and global), and let each backend provide a small
>>> .c file implementing that functionality. This would provide a
>>> centralized point through the QEMU monitor for the control of tracing
>>> events, plus any backend-specific control interface that might also be
>>> available.
>
>> Programmatic control can't be provided for all backends.
>
>> If you're using a platform-specific trace backend then you probably
>> want to use those tools. For example, DTrace has a powerful language
>> for aggregating trace data and reporting. DTrace isn't just about
>> enabling/disabling probes; it does much more, and that cannot usefully
>> be worked into a common subset interface. UST has its own control tool
>> to query and configure tracepoints in a running process.
>
>> I think it makes sense to use each backend's native tools. If we try
>> to come up with a common interface it will cost effort to maintain, and
>> users will be better off using the native tools. Imagine a Fedora
>> user wanting to instrument QEMU: why learn a QEMU-specific subset
>> interface rather than using SystemTap in the same way you can
>> instrument the kernel or other applications?
>
>> The simple backend is built into QEMU and therefore does need an
>> HMP/QMP interface for control.
>
>> Am I missing the cases where a common interface is useful?
>
> I see. My current concern is how to integrate the per-vCPU state with
> the backend-specific per-event state.
>
> Right now I have a programmatic interface for controlling the per-vCPU
> state, but it is oblivious to the global backend-specific per-event
> state (which must be enabled beforehand), and I don't know how to
> provide per-backend _and_ per-vCPU control of tracing state.
>
> Maybe it's just not necessary, given that the trace events I'm inserting
> are disabled by default, and then it's just a matter of flipping two
> bits instead of one (although from separate tools).
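If I've understood the "flipping two bits" idea, the check at each
event site would be roughly the following (just a sketch; all names,
including trace_dstate and trace_event_global_state, are invented for
illustration):

  #include <stdbool.h>

  /* Global per-event state, flipped with the backend's own tool. */
  extern bool trace_event_global_state[];

  typedef struct CPUState {
      unsigned long trace_dstate;   /* hypothetical per-vCPU bitmap,
                                       one bit per trace event */
  } CPUState;

  static inline bool trace_event_enabled(const CPUState *cpu, unsigned id)
  {
      /* Record the event only when both bits are set: the backend's
       * global per-event switch and this vCPU's private switch. */
      return trace_event_global_state[id] &&
             (cpu->trace_dstate & (1UL << id));
  }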
Yes, it's not ideal, but it sounds like the simplest option to start
with.

Some trace mechanisms support filter expressions so that only a subset
of the trace events is recorded. For example, record
sun4m_cpu_reset_interrupt only when cpu == 1 && level > 2 (a rough
SystemTap sketch of this is below). I think trace events in the Linux
kernel carry a certain amount of context information, like the physical
CPU number, and you can filter on it.

Per-vCPU filtering could be solved this way, but neither the simple
trace backend nor the external trace backends support the vCPU as a
first-class concept.

Stefan
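P.S. With the SystemTap backend, the filter above might look roughly
like this (untested; the binary name and the assumption that the
probe's arguments are (cpu, level), in that order, are mine):

  probe process("qemu-system-sparc").mark("sun4m_cpu_reset_interrupt") {
      # Only report the event for vCPU 1 at interrupt levels above 2.
      if ($arg1 == 1 && $arg2 > 2) {
          printf("cpu=%d level=%d\n", $arg1, $arg2)
      }
  }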