Hi,

> I think the first step is to figure out what the relationships are. I
> was looking through the changes and vaguely, it appears that its:
>
> - Each UI has one or more DisplayChangeListeners
Yes.

> - Each DisplayChangeListener can be mapped to 1 or more QemuConsoles

Each DisplayChangeListener shows one QemuConsole at a time. The
relationship can either be fixed (gtk, spice), so the
DisplayChangeListener shows the very same QemuConsole all the time, or
the DisplayChangeListener shows the "active" console, so each
console_select() call changes the QemuConsole it is showing.

I'm happy with that for the moment.

> - Each QemuConsole can then be mapped to some thing that's drawing.

Yes. This is what I want to model first for sane qapi interfaces.

We have graphical QemuConsoles, which are linked directly to an
emulated graphic card (they create one via graphic_console_init).

We have text QemuConsoles, which are linked directly to a vc chardev,
then indirectly via the chardev to something else, which could be
either a guest device (serial line) or some qemu thingy (monitor).

I want a qmp command which can write out screen dumps from any graphic
card (and from vc consoles too if that happens to be easy, but that
isn't a priority). And I want some sane API for that. I think QOM is
the way to go: specify the gfx card by id or path, then go find the
QemuConsole belonging to it using QOM, then dump it (rough sketch
below).

There is also the input side of things. I want to add a QemuConsole
argument to kbd_put_keycode + kbd_mouse_event so the qemu input code
has an idea where the event came from. Then have some way to link
input devices to QemuConsoles, so we can route events to different
input devices depending on the source. Here again I think QOM is the
best answer for "some way" (also sketched below).

cheers,
  Gerd
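To make the QOM idea a bit more concrete, here is a rough sketch of the
lookup step. Only object_resolve_path() is existing API; the
qemu_console_for_device() helper and the function name are made up for
illustration, standing in for "find the QemuConsole belonging to this
device".

  /*
   * Rough sketch only: object_resolve_path() is existing QOM API;
   * qemu_console_for_device() is a made-up name standing in for
   * "find the QemuConsole belonging to this device object".
   */
  #include "qom/object.h"
  #include "qapi/error.h"
  #include "ui/console.h"

  static QemuConsole *screendump_find_console(const char *qom_path,
                                              Error **errp)
  {
      bool ambiguous = false;
      Object *obj = object_resolve_path(qom_path, &ambiguous);

      if (!obj || ambiguous) {
          error_setg(errp, "device '%s' not found or ambiguous", qom_path);
          return NULL;
      }

      /* assumed helper: device object -> its graphical QemuConsole */
      return qemu_console_for_device(obj);
  }

The qmp screendump handler would then take the resolved console and
write out its surface; that part depends on how the command ends up
looking.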
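And on the input side, the change would roughly amount to threading the
source console through the legacy event functions. These signatures are
illustrative only, not a final API.

  /* The existing declarations (roughly, from ui/console.h):
   *   void kbd_put_keycode(int keycode);
   *   void kbd_mouse_event(int dx, int dy, int dz, int buttons_state);
   *
   * Proposed direction, illustrative only: pass the QemuConsole the
   * event came from so input routing can depend on the source.
   */
  typedef struct QemuConsole QemuConsole;

  void kbd_put_keycode(QemuConsole *src, int keycode);
  void kbd_mouse_event(QemuConsole *src,
                       int dx, int dy, int dz, int buttons_state);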