Just remember to map points like the Qt interface does:
Point screen_coord = mir_surface_coord_to_screen(surface, client_coord);
So we are covered for arbitrary 3D transformations (don't assume windows
are on screen as rectangles).
That only leaves two problems which are not really problems:
(1) Races -- Make sure you don't move a window between getting its
input coordinates and synthesising an event.
(2) Mirroring, e.g. in desktop previews where the same surface is
composited multiple times per frame. Actually that's not an issue
because the input coordinate mapping only cares about the real
surface location.
So there don't seem to be any practical problems other than us wanting
to keep private information mostly hidden.
On 24/07/14 13:52, Christopher James Halse Rogers wrote:
> On Thu, Jul 24, 2014 at 12:03 PM, Daniel van Vugt
> <daniel.van.v...@canonical.com> wrote:
>> While events are being injected into evdev at the kernel level, you
>> do at least have to know the screen coordinates of the window, so
>> that touches match up. That makes Mir's sharing the window position
>> important.
>>
>> We could take Mir out of the equation if:
>> (a) The event injection moved up the stack; or
>> (b) All windows tested reside at (0,0).
>>
>> I'm not sure either of those is as practical as adding a single
>> function to the Mir client API, although I know we don't want it to
>> be used in general.
>
> What do we do for the cases where it's not possible to return a
> correct answer from that function?
>
> Hm. On second thoughts, maybe we should include this, and our
> existing two debug symbols, in an explicit libmirclient-debug
> library that we disclaim responsibility for.
>
> That would still make autopilot need Mir-specific code, but not too
> badly. And we wouldn't need to feel bad about making the
> implementation loop over every screen pixel checking whether input
> at that point hits the requested bit. :)
--
Mir-devel mailing list
Mir-devel@lists.ubuntu.com
Modify settings or unsubscribe at:
https://lists.ubuntu.com/mailman/listinfo/mir-devel