Hi,

> I think you are only considering output here, for input we definitely
> need some idea of screen layout, and this needs to be stored
> somewhere.

Oh yeah, input.  That still needs quite a bit of work for multihead / multiseat.

I think we should *not* try to hack that into the ui.  We should extend
the input layer instead.

The functions used to notify qemu about mouse + keyboard events should
get an additional parameter to indicate the source of the event.  I
think we can use a QemuConsole here.

Then teach the input layer about seats, where a seat is a group of input
devices (kbd, mouse, tablet) and a group of QemuConsoles, with an x+y
position for each QemuConsole.  The input layer will do the event
routing: translate the coordinates and send the event to the correct
device.

I think initially we can just handle all existing QemuConsoles and input
devices implicitly as "seat 0".  Stick x+y into QemuConsole for now, and
have the input layer get it from there.  At some point in the future we
might want to move this to a QemuSeat when we actually go multiseat.

Bottom line: please do the coordinate math in input.c, not sdl2.c, so we
don't run into roadblocks in the future.

cheers,
  Gerd
