On Fri, 8 May 2015, Emil Velikov wrote:
> Shouldn't we authenticate with the correct gpu or master/render node?
> This implementation will auth with GPU1, and then use GPU2, which seems
> a bit odd. I might be missing something?
The original patches did this differently: when GPU1 was discovered to [...]
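As background on the authentication point, a minimal sketch (not the implementation under discussion) of why the node type matters: render nodes need no DRM authentication at all, while a primary node's magic cookie is only valid on the device that issued it, so it has to come from the node that will actually be used for rendering. Only the libdrm calls drmGetNodeTypeFromFd() and drmGetMagic() are assumed here; the helper name is hypothetical.

/* Sketch, not Mesa code: render nodes (/dev/dri/renderD*) are
 * unauthenticated by design, while a primary node (/dev/dri/card*)
 * needs a magic cookie, and that cookie is only meaningful on the
 * device that issued it. */
#include <xf86drm.h>

static int maybe_get_auth_magic(int render_fd, drm_magic_t *magic)
{
   /* Render nodes never require DRM authentication. */
   if (drmGetNodeTypeFromFd(render_fd) == DRM_NODE_RENDER) {
      *magic = 0;
      return 0;
   }

   /* Primary node: fetch a magic from the fd we will actually render
    * with; the display server then has to authenticate it on that same
    * device, not on another GPU's node. */
   return drmGetMagic(render_fd, magic);
}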
On 2 May 2015, Axel Davy wrote:
When the server gpu and requested gpu are different:
. They likely don't support the same tiling modes
. They likely do not have fast access to the same locations
Thus we do:
. render to a tiled buffer we do not share with the server
. Copy the content at every swap to a buffer with no tiling that we
  share with the server
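For what it is worth, the two-buffer scheme described above can be sketched with generic Gallium interfaces. Treat this purely as an illustration of the idea, not as the code in the patch: the struct and function names are made up, and only pipe_screen::resource_create, pipe_context::resource_copy_region, u_box_2d() and the PIPE_BIND_* flags are assumed.

/* Hedged sketch of the tiled-render + linear-copy scheme. */
#include "pipe/p_defines.h"
#include "pipe/p_state.h"
#include "pipe/p_screen.h"
#include "pipe/p_context.h"
#include "util/u_box.h"

struct prime_back_buffer {          /* hypothetical container */
   struct pipe_resource *render;    /* tiled, private to the render GPU */
   struct pipe_resource *linear;    /* untiled, shared with the server GPU */
};

/* Create the pair of buffers: a tiled one for fast rendering on the
 * requested GPU, and a linear one the server GPU can read regardless of
 * the render GPU's tiling formats. */
static bool
prime_back_buffer_init(struct pipe_screen *screen,
                       struct prime_back_buffer *bb,
                       enum pipe_format format, unsigned w, unsigned h)
{
   struct pipe_resource templ = {0};

   templ.target = PIPE_TEXTURE_2D;
   templ.format = format;
   templ.width0 = w;
   templ.height0 = h;
   templ.depth0 = 1;
   templ.array_size = 1;

   /* Private buffer: let the driver pick its fastest tiling. */
   templ.bind = PIPE_BIND_RENDER_TARGET;
   bb->render = screen->resource_create(screen, &templ);

   /* Shared buffer: force a linear layout so the server GPU can use it. */
   templ.bind = PIPE_BIND_RENDER_TARGET | PIPE_BIND_SHARED | PIPE_BIND_LINEAR;
   bb->linear = screen->resource_create(screen, &templ);

   return bb->render && bb->linear;
}

/* At every swap: copy the tiled render buffer into the linear shared
 * buffer before handing the latter to the server. */
static void
prime_back_buffer_copy(struct pipe_context *pipe,
                       struct prime_back_buffer *bb,
                       unsigned w, unsigned h)
{
   struct pipe_box box;

   u_box_2d(0, 0, w, h, &box);
   pipe->resource_copy_region(pipe, bb->linear, 0, 0, 0, 0,
                              bb->render, 0, &box);
}

The extra copy costs bandwidth, but it removes any dependency on the render GPU's tiling layouts: the server only ever touches the linear buffer.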