On Nov 19, 2013, at 9:59 PM, Dave Airlie <airl...@gmail.com> wrote:
> On Tue, Nov 19, 2013 at 6:11 PM, Gerd Hoffmann <kra...@redhat.com> wrote:
>> Hi,
>>
>>> So I felt I had a choice here for sharing a single output surface
>>> amongst outputs:
>>>
>>> a) have multiple QemuConsole reference multiple DisplaySurface which
>>> reference a single pixman image,
>>
>> This one.
>>
>>> In either case we need to store width/height of the console and x/y
>>> offset into the output surface somewhere, as the output dimensions
>>> will not correspond to surface dimensions or the surface dimensions
>>> won't correspond to the pixman image dimensions
>>
>> Not needed (well, internal to virtio-gpu probably).
>
> I think you are only considering output here; for input we definitely
> need some idea of screen layout, and this needs to be stored
> somewhere.
>
> e.g. when SDL2 gets an input event in the right-hand window, it needs
> to translate that into an input event on the whole output surface.
>
> Have a look at the virtio-gpu branch in my repo (don't look at the
> history, it's ugly, just the final state); you'll see code in sdl2.c to
> do input translation from window coordinates to the overall screen
> space. So we need at least the x,y offset in the ui code, and I think
> we need to communicate that via the console.

One of the patches I will be submitting as part of this includes
bi-directional calls to set the orientation: a HwOp and a
DisplayChangeListenerOp. This allows you to move the display
orientation around in the guest (if your driver and backend support
it), or to move the orientation around by dragging windows. Either way
you have the data you need to get absolute coordinates right, even if
you are scaling the guest display in your windows. Whether the
orientation offsets end up stored in the QemuConsole or not becomes an
implementation detail if you get notifications. (A rough sketch of the
coordinate translation I have in mind is at the end of this mail.)

> Otherwise I think I've done things the way you've said and it seems to
> be working for me on a dual-head setup.
>
> (oh and yes this is all sw rendering only; to do 3D rendering we need
> to put a thread in to do the GL stuff, but it interacts with the
> console layer quite a bit, since SDL and the virtio-gpu need to be in
> the same thread, so things like resize can work).

I also have a patch to add dpy_lock and dpy_unlock hooks to the
DisplayChangeListener so that the UI can be in another thread. In fact,
on XenClient we run with the bulk of the UI in another process so that
multiple VMs can share the same windows and GL textures; otherwise dom0
doesn't have enough memory for lots of guests with multiple big
monitors connected.

I wasn't planning on submitting the lock patch since I figured nobody
would want our UI that uses it, but if there is interest I can.
Eventually I would like to write a GEM/KMS UI for full zero-copy
display, and that would need locking hooks anyway. (A sketch of the
hooks is also at the end of this mail.)

We used to run with a GLX UI that was a thread per display inside qemu.
If you'd like I can send you that patch, but I don't have the bandwidth
to modernize it; I believe it is qemu 1.0 vintage. (It's on the Citrix
website in some obscure location already.)

> Dave.
>
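
To make the coordinate-translation point above concrete, here is a
rough sketch of what a per-window UI backend like sdl2.c ends up doing.
This is not the code from the virtio-gpu branch and the names are made
up; it just assumes each console/window records its x,y offset into the
combined output surface and may be scaling the guest display:

/* Sketch only -- illustrative names, not the code from the branch.
 * Each console/window knows where it sits in the combined output
 * surface and how big its window currently is. */
struct output_layout {
    int surf_x, surf_y;      /* offset into the combined surface */
    int guest_w, guest_h;    /* guest-visible size of this output */
    int win_w, win_h;        /* current window size (may be scaled) */
};

/* Map a window-local event position to absolute surface coordinates. */
static void window_to_surface(const struct output_layout *lo,
                              int win_x, int win_y,
                              int *abs_x, int *abs_y)
{
    /* Undo any scaling of the guest display within the window ... */
    int gx = win_x * lo->guest_w / lo->win_w;
    int gy = win_y * lo->guest_h / lo->win_h;

    /* ... then add this console's offset into the output surface. */
    *abs_x = lo->surf_x + gx;
    *abs_y = lo->surf_y + gy;
}

Whether the offsets live in the QemuConsole or arrive via the
orientation notifications, the translation itself stays the same.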
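
And roughly what the locking hooks look like, again as a sketch rather
than the actual patch (details will differ in whatever I post): two new
optional ops on DisplayChangeListenerOps, plus wrappers in console.c
in the same style as the existing dpy_* helpers:

/* Sketch of the dpy_lock/dpy_unlock hooks, not the actual patch. */

/* include/ui/console.h: optional ops for UIs that run in another
 * thread (or process) and need to serialize surface access. */
struct DisplayChangeListenerOps {
    /* ... existing ops: dpy_name, dpy_gfx_update, dpy_gfx_switch, ... */
    void (*dpy_lock)(DisplayChangeListener *dcl);
    void (*dpy_unlock)(DisplayChangeListener *dcl);
};

/* ui/console.c: wrapper walking the listener list, so device/console
 * code can bracket its accesses to the DisplaySurface.  dpy_unlock()
 * is the obvious mirror image. */
void dpy_lock(QemuConsole *con)
{
    DisplayChangeListener *dcl;

    QLIST_FOREACH(dcl, &con->ds->listeners, next) {
        if (dcl->ops->dpy_lock) {
            dcl->ops->dpy_lock(dcl);
        }
    }
}

The wrappers are no-ops for listeners that don't implement the hook, so
existing single-threaded UIs are unaffected.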