Re: Improving next_buffer() rpc

2014-07-11 Thread Daniel van Vugt
Good news. I've done some visual testing and there definitely is visibly reduced lag from double-buffering. My previous argument was both right and wrong: "Even the slightest pause then, and you can be certain that the "ready" sub-queue is emptied and thus it's only one frame lag from client to

Re: Improving next_buffer() rpc

2014-07-10 Thread Kevin DuBois
Okay, so resynthesizing the concerns to try to come up with a plan... It seems the practical thing to do is first implement: rpc exchange_buffer(Buffer) returns (Buffer) which is by and large just an evolution of what we have now. The difference that lets me proceed is that the buffer release is no
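The `exchange_buffer` call proposed above can be modeled as a small simulation: the client hands back the buffer it holds and receives the next free one in a single round trip. This is a sketch only; the `BufferQueue` internals below are illustrative assumptions, not Mir's actual implementation.

```python
from collections import deque

class BufferQueue:
    """Toy model of a server-side buffer queue (hypothetical internals).

    exchange_buffer() mirrors the proposed
    `rpc exchange_buffer(Buffer) returns (Buffer)`: the client submits
    the buffer it holds and gets the next free one back in one call.
    """
    def __init__(self, nbuffers=3):
        self.free = deque(range(nbuffers))  # buffer ids owned by no one
        self.compositor = None              # buffer currently on screen

    def exchange_buffer(self, submitted):
        # The submitted buffer goes on screen; the previously displayed
        # buffer (if any) is recycled into the free pool.
        if self.compositor is not None:
            self.free.append(self.compositor)
        self.compositor = submitted
        return self.free.popleft()          # next buffer for the client

# A client renders a few frames, always holding exactly one buffer:
q = BufferQueue(3)
held = q.free.popleft()   # client's initial buffer
for _ in range(6):
    held = q.exchange_buffer(held)
```

Note the invariant this enforces: at every point the client owns exactly one buffer, which is precisely the property the rest of the thread debates.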

Re: Improving next_buffer() rpc

2014-07-10 Thread Gerry Boland
On 09/07/14 16:39, Kevin Gunn wrote: > First > Not sure we're still on topic necessarily wrt changing from id's to fd's > do we need to conflate that with the double/triple buffering topic ? > let's answer this first... > > Second > while we're at it :) triple buffering isn't always a win. In the

Re: Improving next_buffer() rpc

2014-07-09 Thread Christopher James Halse Rogers
On Thu, Jul 10, 2014 at 3:53 AM, Alberto Aguirre wrote: I think RAOF point is accurate, with the current existing approach (triple buffered by default), BufferQueue has 2 buffers for clients, 1 for compositor. rpc next_buffer(SurfaceId) returns (Buffer) will return immediately as BufferQueu

Re: Improving next_buffer() rpc

2014-07-09 Thread Daniel van Vugt
Critically I think we need to avoid any RPC call which is an exchange or swap in a single call. Because that's the problem, and it prevents the client from ever owning more than one buffer. So performance bug 1253868 would remain unsolved: https://bugs.launchpad.net/mir/+bug/1253868 Buffer
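The bottleneck described above can be made concrete with some back-of-the-envelope arithmetic. The numbers below are illustrative assumptions, not measurements from the bug report: with a single exchange-style call the client blocks on the round trip every frame, whereas owning a second buffer lets rendering overlap the in-flight submission.

```python
def frame_interval_exchange(render_ms, rtt_ms):
    """Single exchange/swap RPC: the client cannot start the next frame
    until the reply arrives, so render time and round trip serialize."""
    return render_ms + rtt_ms

def frame_interval_parallel(render_ms, rtt_ms):
    """Client owns more than one buffer: it renders the next frame while
    the previous submission is in flight, so the interval is bounded by
    the slower of the two."""
    return max(render_ms, rtt_ms)

# Hypothetical numbers: 10 ms to render a frame, 3 ms RPC round trip.
serialized = frame_interval_exchange(10, 3)  # 13 ms per frame
pipelined = frame_interval_parallel(10, 3)   # 10 ms per frame
```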

Re: Improving next_buffer() rpc

2014-07-09 Thread Daniel van Vugt
Triple buffering is relevant to the conversation because it appears to be the only way to make full concurrent use of buffers without a round trip on every swap. So if we want better performance that is directly relevant to the protocol change. Presently a client swap_buffers is a delay (round

Re: Improving next_buffer() rpc

2014-07-09 Thread Alberto Aguirre
I think RAOF point is accurate, with the current existing approach (triple buffered by default), BufferQueue has 2 buffers for clients, 1 for compositor. rpc next_buffer(SurfaceId) returns (Buffer) will return immediately as BufferQueue will have a free buffer to give out. However Daniel's point I
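The claim above — that with triple buffering `next_buffer` returns immediately because a free buffer is always available after a submission — can be checked with a simple accounting model. The buffer split (1 on screen, 1 just submitted, remainder free) is taken from the message; the function itself is an illustrative sketch.

```python
def next_buffer_blocks(nbuffers):
    """Must `next_buffer` wait after the client submits a frame?

    Model: the compositor holds 1 buffer on screen and the client has
    just submitted 1 'ready' buffer; whatever remains is free to hand
    out immediately.
    """
    on_screen, ready = 1, 1
    free = nbuffers - on_screen - ready
    return free == 0  # no free buffer -> the RPC must wait

# Triple buffering: a free buffer exists, the call returns at once.
# Double buffering: no free buffer, the call blocks until one recycles.
```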

Re: Improving next_buffer() rpc

2014-07-09 Thread Kevin DuBois
Alberto pointed out a gap in my suggestion #2/#6, which is that the client wouldn't have a way of getting ownership of the additional buffers. So maybe that idea (#2/#6) should become: rpc exchange_buffer(Buffer) returns (Buffer) rpc allocate_additional_buffer(Void) returns (Buffer) rpc remove_add
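The amended proposal — `exchange_buffer` plus `allocate_additional_buffer` (and a matching remove call) — can be sketched from the client's side. The RPC names come from the message above; the `StubServer` and the `Client` pool logic are hypothetical stand-ins for illustration only.

```python
import itertools
from collections import deque

class StubServer:
    """Trivial stand-in for the server: hands out fresh buffer ids."""
    def __init__(self):
        self._ids = itertools.count()
    def allocate_additional_buffer(self):
        return next(self._ids)
    def exchange_buffer(self, submitted):
        return next(self._ids)

class Client:
    """Client-side sketch: start with one buffer and grow ownership via
    allocate_additional_buffer(), so swaps never leave us empty-handed."""
    def __init__(self, server):
        self.server = server
        self.owned = deque([server.allocate_additional_buffer()])

    def acquire_more(self, n):
        for _ in range(n):
            self.owned.append(self.server.allocate_additional_buffer())

    def swap(self):
        # Submit the oldest owned buffer; the reply refills the pool, so
        # with more than one owned buffer we can keep rendering while
        # the exchange is in flight.
        submitted = self.owned.popleft()
        self.owned.append(self.server.exchange_buffer(submitted))
```

This addresses the gap Alberto pointed out: the allocate call is how the client comes to own more than the single buffer `exchange_buffer` alone would permit.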

Re: Improving next_buffer() rpc

2014-07-09 Thread Kevin DuBois
Attempting to steer the convo more towards the near term (kgunn's "First"), I'm just trying to make sure that the protocol change is somewhat forward-looking without wading into changing the buffer distribution/swapping system too much. Trying to distil the potential directions suggested a bit mor

Re: Improving next_buffer() rpc

2014-07-09 Thread Kevin Gunn
First: Not sure we're still on topic necessarily wrt changing from id's to fd's. Do we need to conflate that with the double/triple buffering topic? Let's answer this first... Second: while we're at it :) triple buffering isn't always a win. In the case of small, frequent renders (as an example "8x8

Re: Improving next_buffer() rpc

2014-07-09 Thread Daniel van Vugt
Forgive me for rambling on but I just had an important realisation... Our current desire to get back to double buffering is only because the Mir protocol is synchronously waiting for a round trip every swap, and somehow I thought that the buffer queue length affected time spent in the ready_to

Re: Improving next_buffer() rpc

2014-07-09 Thread Daniel van Vugt
Oops. I keep forgetting that the new BufferQueue disallows the compositor from owning fewer than one buffer, so there would no longer be any benefit to double-buffered clients from a more concurrent protocol :( Maybe Kevin's suggestion is just fine then. So long as the server is able to figure out t
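The constraint noted here puts a hard cap on client-side concurrency, which can be stated as one line of arithmetic. The model is an illustrative sketch of the stated rule, not the BufferQueue code itself.

```python
def max_client_owned(nbuffers, compositor_min=1):
    """If the compositor always owns at least one buffer, the client can
    own at most the rest. With only one owned buffer there is no room
    to render ahead, however concurrent the protocol is."""
    return nbuffers - compositor_min

# Double buffering: client caps out at 1 buffer -> no render-ahead.
# Triple buffering: client can own 2 -> rendering can overlap the RPC.
```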

Re: Improving next_buffer() rpc

2014-07-09 Thread Christopher James Halse Rogers
On Wed, Jul 9, 2014 at 12:00 PM, Daniel van Vugt wrote: Sounds better to just pass buffers around although I'm not keen on any change that doesn't make progress on the performance bottleneck LP: #1253868. The bottleneck is the swapping/exchanging approach which limits the client to holding onl

Re: Improving next_buffer() rpc

2014-07-09 Thread Daniel van Vugt
Note that we're working on making double-buffering the default again and triple the exception. In that case fixing LP: #1253868 may seem pointless, but it is surprisingly still relevant. Because a fully parallelized design would significantly speed up double buffering too... client swap buffers

Re: Improving next_buffer() rpc

2014-07-08 Thread Daniel van Vugt
Sounds better to just pass buffers around although I'm not keen on any change that doesn't make progress on the performance bottleneck LP: #1253868. The bottleneck is the swapping/exchanging approach which limits the client to holding only one buffer, so I don't think it's a good idea for new d