Hello, everyone!

I have a question regarding the interpolation precision of llvmpipe. Feel free to redirect me elsewhere if this is not the right place to ask. Consider the following scenario: In a fragment shader we sample from a 16x16, 8-bit texture with texel values between 0 and 3 using linear interpolation, and then write white to the screen if the sampled value is > 1/255 and black otherwise. The output rendered with llvmpipe looks very different from the result produced by rendering hardware (both Intel (Mesa i965) and NVIDIA (proprietary driver)).

I've uploaded example output images here (https://imgur.com/a/D1udpez) and the corresponding fragment shader here (https://pastebin.com/pa808Req).
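
In essence, the shader does the following (a minimal sketch with illustrative names; the full version is in the pastebin link above):

    #version 330 core

    uniform sampler2D tex; // the 16x16 8-bit texture, texel values in {0, 1, 2, 3}
    in vec2 uv;
    out vec4 fragColor;

    void main() {
        // linearly filtered sample, normalized to [0, 1]
        float v = texture(tex, uv).r;
        // white if the filtered value exceeds one 8-bit step, black otherwise
        fragColor = (v > 1.0 / 255.0) ? vec4(1.0) : vec4(0.0, 0.0, 0.0, 1.0);
    }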

My hypothesis is that llvmpipe (in contrast to hardware) uses only 8 bits for the interpolation computation when reading from 8-bit textures and thus loses precision in the lower bits. Is that correct? If so, does anyone know of a workaround?
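
If that hypothesis is right, the effect should be equivalent to snapping the interpolation result to 1/255 steps. A rough model of what I suspect is happening (lerp8 is just my illustration, not llvmpipe code):

    // Full precision: mix(a, b, w), with a and b as normalized texel values.
    // Suspected 8-bit path: the intermediate result snaps to 1/255 steps.
    float lerp8(float a, float b, float w) {
        return floor(mix(a, b, w) * 255.0 + 0.5) / 255.0;
    }
    // Example with a = 0/255, b = 3/255, w = 0.4:
    //   full precision: 1.2/255 -> passes the (> 1/255) threshold
    //   quantized:      1.0/255 -> fails it

This would explain why the threshold contour lands in a different place under llvmpipe.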

A little background on the use case: We are trying to move the CI of Voreen (https://www.uni-muenster.de/Voreen/) to GitLab CI running in Docker, without any hardware dependencies. Using llvmpipe for our regression tests works in principle, but it shows significant differences in the raycasting rendering of an 8-bit-per-voxel dataset. (The effect is of course less pronounced than in the constructed example linked above, but still quite noticeable to a human.)

Any help or pointers would be appreciated!

Best,
Dominik

--
Dominik Drees

Department of Computer Science
Westfaelische Wilhelms-Universitaet Muenster

email: dominik.dr...@wwu.de
web: https://www.wwu.de/PRIA/personen/drees.shtml
phone: +49 251 83 - 38448
