bypass/overlays: If you look at the current logic you'll see that the DisplayBuffer holds the previous bypass/overlay buffer until _after_ the client has provided the next one. And it must, to avoid scan-out artefacts. So the server briefly holds two of them, but only one most of the time. Without the "predictive bypass" I'm working on right now, that buffer is held for almost two frames; with "predictive bypass" it's held for closer to (but still more than) one frame. On startup, you're absolutely right that only one buffer is required to get bypass/overlays going, so my wording was wrong.
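
In sketch form (illustrative names only, not the actual mir code), the
pattern is:

    #include <memory>

    // Illustrative sketch only -- the point is just that the buffer
    // currently on scan-out can't be let go of until the *next* one
    // has actually been posted.

    struct Buffer;

    class DisplayBuffer
    {
    public:
        void post(std::shared_ptr<Buffer> const& next)
        {
            flip_to(next);        // hardware now scans out 'next'
            scanned_out = next;   // the previous buffer is only released
        }                         // here, so for a moment we hold two

    private:
        void flip_to(std::shared_ptr<Buffer> const&) { /* DRM/HWC flip */ }
        std::shared_ptr<Buffer> scanned_out;   // held until replaced
    };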

client wake-up: I may have worded that poorly too. The point is that in the new world (tm), frame dropping mostly happens in the client (as opposed to entirely in the server, as it does today). But some of it still needs to happen in the server, because you don't want a compositor trying to keep up with a 1000 FPS client by scheduling all of those frames on a 60Hz display. It has to drop some.
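
For example, a client rendering at 1000 FPS into a 60Hz display can only
ever have 60 of those frames per second reach the screen, so the server
has to drop the other ~940 (roughly 94%) no matter how the client behaves.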


On 26/06/15 11:39, Christopher James Halse Rogers wrote:
On Fri, Jun 26, 2015 at 12:39 PM, Daniel van Vugt
<daniel.van.v...@canonical.com> wrote:
I'm curious (but not yet concerned) about how the new plan will deal
with the transitions we have between 2-3-4 buffers, which are neatly
self-contained in the single BufferQueue class right now. Although, as
some responsibilities clearly live on one side and not the other,
maybe things could become conceptually simpler if we manage them
carefully:

  framedropping: Always implemented in the client process as a
non-blocking acquire. The server just receives new buffers quicker
than usual and needs the smarts to deal with (skip) a high rate of
incoming buffers [1].
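
In other words, something like this on the client side (names are
illustrative, not the actual libmirclient API):

    struct Buffer;

    struct BufferPool
    {
        Buffer* try_acquire();   // returns nullptr instead of blocking
        void submit(Buffer*);    // hands the buffer to the server
    };

    void update_scene();
    void draw(Buffer&);

    void client_render_loop(BufferPool& pool)
    {
        for (;;)
        {
            update_scene();                      // never throttled by the server
            if (Buffer* b = pool.try_acquire())  // non-blocking acquire
            {
                draw(*b);
                pool.submit(b);    // so the server can see buffers arriving
            }                      // much faster than the display rate and
                                   // has to skip some of them [1]
            // else: nothing free right now -- this frame is simply
            // dropped on the client side, without blocking
        }
    }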

Clients will need to tell the server at submit_buffer time whether or
not this buffer should replace the other buffers in the queue. Different
clients will need different behaviour here - the obvious case being a
video player that wants to dump a whole bunch of time-stamped buffers on
the compositor at once and then go to sleep for a while.

But in general, yes. The client acquires a bunch of buffers and cycles
through them.
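
As a strawman only (none of these names exist today), the submit side
could carry something like:

    #include <cstdint>

    // Strawman only -- not the actual Mir protocol or client API.
    enum class SubmitHint
    {
        replace,   // interactive client: this buffer supersedes anything
                   // still queued but not yet composited for the surface
        queue      // media client: keep it and present it at present_time
    };

    struct SubmitInfo
    {
        SubmitHint hint;
        std::int64_t present_time_ns;   // only meaningful with 'queue'
    };

    // A video player could submit a batch of time-stamped buffers with
    // 'queue' and then sleep; a game would use 'replace' every frame.
    void submit_buffer(int surface_id, int buffer_id, SubmitInfo const& info);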

  bypass/overlays: Always implemented in the server process, invisible
to the client. The server just can't enable those code paths until at
least two buffers have been received for a surface.

I don't think that's the case? Why does the server need two buffers in
order to overlay? Even with a single buffer the server always has a
buffer available¹.

It won't be entirely invisible to the client; we'll probably need to ask
the client to reallocate buffers when overlay state changes, at least
sometimes.

  client wake-up: Regardless of the model/mode in place, the client
would get woken up at the physical display rate by the server if it has
had a buffer consumed (but not woken otherwise). More frequent
wake-ups for framedropping are the responsibility of libmirclient
itself and don't require the server to do anything different.
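
The server side of that would look roughly like (sketch, made-up names,
not real Mir code):

    #include <vector>

    struct Buffer;

    struct Surface
    {
        Buffer* take_newly_consumed_buffer();   // nullptr if none consumed
        void send_buffer_release(Buffer*);      // unblocks that client
    };

    // Once per physical display frame, after compositing, any buffer
    // that was consumed gets returned to its client; that return is
    // the wake-up.
    void on_display_frame(std::vector<Surface*> const& surfaces)
    {
        for (auto const surface : surfaces)
        {
            if (Buffer* consumed = surface->take_newly_consumed_buffer())
                surface->send_buffer_release(consumed);   // wakes the client
            // clients whose buffers weren't consumed aren't woken at all
        }
    }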

By and large, clients will be woken up by EGL when the relevant fence is
triggered.
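
(For illustration, that's the standard EGL fence machinery,
EGL_KHR_fence_sync; whether the wait is written out explicitly like this
or happens inside the driver's swap path is a detail. Error handling and
the usual eglGetProcAddress lookup are omitted:)

    #define EGL_EGLEXT_PROTOTYPES
    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    void wait_until_buffer_is_reusable(EGLDisplay dpy)
    {
        /* In the real design the relevant fence would arrive with the
         * buffer rather than being created here; this just shows the
         * kind of wait involved. */
        EGLSyncKHR fence = eglCreateSyncKHR(dpy, EGL_SYNC_FENCE_KHR, NULL);

        /* The client blocks in the driver here, not on a Mir RPC: */
        eglClientWaitSyncKHR(dpy, fence, EGL_SYNC_FLUSH_COMMANDS_BIT_KHR,
                             EGL_FOREVER_KHR);
        eglDestroySyncKHR(dpy, fence);
    }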

I don't think libmirclient will have any role in waking the client.
Unless maybe we want to mess around with

[1] Idea: If the server skipped/dropped _all_ but the newest buffer it
has for each surface on every composite() then that would eliminate
buffer lag and solve the problem of how to replace dynamic double
buffering. Client processes would still only be woken up at the
display rate so vsync-locked animations would not speed up
unnecessarily. Everyone wins -- minimal lag and maximal smoothness.
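
In sketch form, [1] amounts to something like this in the compositor
(illustrative names, not real Mir code; the releases happening at
composite() time is also why clients still only get woken at the
display rate):

    struct Buffer;

    struct SurfaceQueue
    {
        Buffer* next_queued_buffer();   // nullptr when the queue is empty
        void release(Buffer*);          // hands the buffer back to the client
    };

    // On every composite(), walk to the newest buffer the client has
    // submitted, releasing (skipping) everything older, and use only
    // that one. If nothing new was queued, keep showing the last frame.
    Buffer* buffer_to_composite(SurfaceQueue& queue)
    {
        Buffer* newest = nullptr;
        while (Buffer* b = queue.next_queued_buffer())
        {
            if (newest)
                queue.release(newest);   // dropped: never reaches the screen
            newest = b;
        }
        return newest;   // minimal lag -- always the most recent frame
    }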

¹: The assumption here is that a buffer can be simultaneously scanned
out from and textured from. I *think* that's a reasonable assumption,
and in the cases where I know it doesn't apply having multiple buffers
doesn't help, because it's the buffer *format* that can only be scanned
out from, not textured from.


--
Mir-devel mailing list
Mir-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/mir-devel
