Good news. I've done some visual testing and there definitely is visibly
reduced lag from double-buffering.
My previous argument was both right and wrong:
"Even the slightest pause then, and you can be certain that the "ready"
sub-queue is emptied and thus it's only one frame lag from client to
Okay, so resynthesizing the concerns to try to come up with a plan...
It seems the practical thing to do is first implement:
rpc exchange_buffer(Buffer) returns (Buffer)
which is by and large just an evolution of what we have now. The difference
that lets me proceed is that the buffer release is no
On 09/07/14 16:39, Kevin Gunn wrote:
> First
> Not sure we're necessarily still on topic wrt changing from id's to fd's;
> do we need to conflate that with the double/triple buffering topic?
> let's answer this first...
>
> Second
> while we're at it :) triple buffering isn't always a win. In the
On Thu, Jul 10, 2014 at 3:53 AM, Alberto Aguirre
wrote:
I think RAOF's point is accurate: with the current approach
(triple-buffered by default), BufferQueue has 2 buffers for clients,
1 for the compositor. rpc next_buffer(SurfaceId) returns (Buffer) will
return immediately as BufferQueue will have a free buffer to give out.
Critically I think we need to avoid any RPC call which is an exchange or
swap in a single call. Because that's the problem, and it prevents the
client from ever owning more than one buffer. So performance bug 1253868
would remain unsolved:
https://bugs.launchpad.net/mir/+bug/1253868
Triple buffering is relevant to the conversation because it appears to
be the only way to make full concurrent use of buffers without a round
trip on every swap. So if we want better performance that is directly
relevant to the protocol change.
Presently a client swap_buffers is a delay (round
I think RAOF's point is accurate: with the current approach (triple-buffered
by default), BufferQueue has 2 buffers for clients, 1 for the compositor.
rpc next_buffer(SurfaceId) returns (Buffer) will return immediately as
BufferQueue will have a free buffer to give out.
However Daniel's point I
Alberto pointed out a gap in my suggestion #2/#6, which is that the client
wouldn't have a way of getting ownership of the additional buffers. So
maybe that idea (#2/#6) should become:
rpc exchange_buffer(Buffer) returns (Buffer)
rpc allocate_additional_buffer(Void) returns (Buffer)
rpc remove_add
Attempting to steer the convo more towards the near term (kgunn's "First"),
I'm just trying to make sure that the protocol change is somewhat
forward-looking without wading into changing the buffer
distribution/swapping system too much.
Trying to distil the potential directions suggested a bit mor
First
Not sure we're necessarily still on topic wrt changing from id's to fd's;
do we need to conflate that with the double/triple buffering topic?
let's answer this first...
Second
while we're at it :) triple buffering isn't always a win. In the case of
small, frequent renders (as an example "8x8
Forgive me for rambling on but I just had an important realisation...
Our current desire to get back to double buffering is only because the
Mir protocol is synchronously waiting for a round trip every swap, and
somehow I thought that the buffer queue length affected time spent in
the ready_to
Oops. I keep forgetting that the new BufferQueue requires the compositor
to always own at least one buffer, so there would no longer be any
benefit to double-buffered clients from a more concurrent protocol :(
Maybe Kevin's suggestion is just fine then. So long as the server is
able to figure out t
On Wed, Jul 9, 2014 at 12:00 PM, Daniel van Vugt
wrote:
Sounds better to just pass buffers around, although I'm not keen on
any change that doesn't make progress on the performance bottleneck
LP: #1253868. The bottleneck is the swapping/exchanging approach,
which limits the client to holding only one buffer.
Note that we're working on making double-buffering the default again and
triple the exception. In that case fixing LP: #1253868 may seem
pointless, but it is surprisingly still relevant, because a fully
parallelized design would significantly speed up double buffering too...
client swap buffers
Sounds better to just pass buffers around, although I'm not keen on any
change that doesn't make progress on the performance bottleneck LP:
#1253868. The bottleneck is the swapping/exchanging approach, which
limits the client to holding only one buffer, so I don't think it's a
good idea for new d