On 13/10/2015 00:42, Bas Nieuwenhuizen wrote:
Hi Axel,

Using DCC for scanout surfaces is problematic because, as far as I
understand, the display hardware does not support it. We could solve
that partially by decompressing when displaying.

However, the X server can also use these surfaces as a front buffer,
and in that case we cannot simply rely on decompression without
performance regressions, since we would be decompressing very often.

Furthermore, when using such a surface as a back buffer, we would still
need a single decompression before displaying it. Whether that improves
performance or regresses it really depends on the application. For
example, Xonotic regresses for me if I enable DCC for scanout surfaces.

Yours sincerely,
Bas Nieuwenhuizen


Hi Bas,

When the application is fullscreen, the backbuffer can be reused as-is for the display. This is not done in practice in every scenario: DRI2 with vsync does it, and Wayland and DRI3 do it. In the X world it also depends on whether the compositor enforces compositing for fullscreen applications or not.

The backbuffer needs to be scanout-able in case it is used for the display.

When a backbuffer is used for displaying, we use another one for rendering.
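
To make that constraint concrete, here is a minimal sketch of the allocation-time decision, written against Gallium's resource-creation interface. can_use_dcc_for() is a made-up helper name for illustration, not actual Mesa code; the bind flags are the existing Gallium ones.

/* Sketch only: decide at resource creation whether DCC is usable.
 * A surface that may be scanned out or shared (e.g. the X front buffer)
 * cannot carry DCC unless the driver is able to decompress it before
 * the display engine or another process reads it. */
#include "pipe/p_defines.h"
#include "pipe/p_state.h"

static bool
can_use_dcc_for(const struct pipe_resource *templ,
                bool can_decompress_before_present)
{
   if (templ->bind & (PIPE_BIND_SCANOUT | PIPE_BIND_SHARED))
      return can_decompress_before_present;

   return true;
}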

Given that apps usually render directly to the backbuffer, it seems good to me to keep the backbuffer compressed during rendering and then decompress it before presenting the buffer.
I guess some scenarios could be hurt, for example if the application renders everything to an offscreen framebuffer and then copies it to the backbuffer at the end, just before presenting. Perhaps that is what Xonotic does? I suggest using commercial games for testing.
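
Very roughly, and if I remember the Gallium interfaces right, that decompress could live in the driver's pipe_context::flush_resource() hook, which is called on the buffer that is about to be handed to the window system. This is only a sketch under that assumption; my_texture and rvid_decompress_dcc() below are illustrative stand-ins for whatever the driver actually uses.

/* Sketch only: hook the DCC decompress into flush_resource(), which is
 * called on the buffer about to be presented or shared.
 * my_texture and rvid_decompress_dcc() are hypothetical names. */
#include "pipe/p_context.h"
#include "pipe/p_defines.h"
#include "pipe/p_state.h"

struct my_texture {
   struct pipe_resource b;
   uint64_t dcc_offset;   /* 0 means no DCC metadata was allocated */
};

/* Hypothetical blit-based decompress pass provided elsewhere. */
void rvid_decompress_dcc(struct pipe_context *ctx, struct my_texture *tex);

static void
my_flush_resource(struct pipe_context *ctx, struct pipe_resource *res)
{
   struct my_texture *tex = (struct my_texture *)res;

   /* Only pay the decompress cost for surfaces that the display engine
    * or another process may read without understanding DCC. */
   if (tex->dcc_offset &&
       (res->bind & (PIPE_BIND_SCANOUT | PIPE_BIND_SHARED)))
      rvid_decompress_dcc(ctx, tex);
}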

Is the card able to use the DMA engine to decompress DCC? That would mitigate the performance hit of decompressing.


Yours,

Axel