On 20.02.2013 at 11:39, Aras Pranckevicius wrote:
> Why did glsl implement this really as x * (1 - a) + y * a?
> The usual way for lerp would be (y - x) * a + x, i.e. two ops for most
> gpus (sub+mad, or sub+mul+add). But I'm wondering if that sacrifices
> precision
Yes.
http://fgiesen.wordpress.com/2012/08/15/linear-interpolation-past-present-and-future/
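For what it's worth, the difference shows up at the endpoints:
x * (1 - a) + y * a gives exactly x at a = 0 and exactly y at a = 1,
while (y - x) * a + x can miss the a = 1 endpoint whenever y - x has to
round. A small standalone C sketch (illustration only, not Mesa code;
the function names are made up):

    #include <stdio.h>

    /* "Cheap" form: sub + mad, i.e. (y - x) * a + x. */
    static float lerp_2op(float x, float y, float a)
    {
        return (y - x) * a + x;
    }

    /* GLSL mix() form: x * (1 - a) + y * a. */
    static float lerp_3op(float x, float y, float a)
    {
        return x * (1.0f - a) + y * a;
    }

    int main(void)
    {
        /* y - x is not exactly representable in float here: the
           subtraction rounds -99999999 to -1e8, so the 2-op form
           returns 0 instead of y at a = 1. The 3-op form is exact
           at both endpoints because x * 0 and y * 1 are exact. */
        float x = 1.0e8f, y = 1.0f;
        printf("2-op at a=1: %g (want %g)\n", lerp_2op(x, y, 1.0f), y);
        printf("3-op at a=1: %g (want %g)\n", lerp_3op(x, y, 1.0f), y);
        return 0;
    }

With IEEE single precision and default rounding this prints 0 for the
2-op form and 1 for the 3-op form.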
Not much to say about the code (the theory sounds sane), but I was
wondering about the comment.
Why did glsl implement this really as x * (1 - a) + y * a?
The usual way for lerp would be (y - x) * a + x, i.e. two ops for most
gpus (sub+mad, or sub+mul+add). But I'm wondering if that sacrifices
precision
From: Kenneth Graunke
Many GPUs have an instruction to do linear interpolation which is more
efficient than simply performing the algebra necessary (two multiplies,
an add, and a subtract).
Pattern matching or peepholing this is more desirable, but can be
tricky. By using an opcode, we can at least make shaders which use the
mix() built-in get the more efficient behavior.
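A rough sketch of the point, with invented names throughout (nothing
here is Mesa's actual IR or any driver's API): once mix() is carried as
a single lerp opcode, each backend decides how to emit it, instead of
every backend trying to pattern-match a mul/add/sub tree back into one
instruction.

    /* Invented, minimal IR types for illustration only. */
    typedef int reg;
    typedef enum { OP_SUB, OP_MAD, OP_LRP } opcode;
    typedef struct { int has_native_lrp; int next_reg; } backend;

    /* Stand-ins that, in a real backend, would append an
       instruction and return its destination register. */
    static reg emit_op2(backend *be, opcode op, reg a, reg b)
    { (void)op; (void)a; (void)b; return be->next_reg++; }
    static reg emit_op3(backend *be, opcode op, reg a, reg b, reg c)
    { (void)op; (void)a; (void)b; (void)c; return be->next_reg++; }

    static reg emit_lerp(backend *be, reg x, reg y, reg a)
    {
        if (be->has_native_lrp) {
            /* One instruction on hardware with a lerp op. */
            return emit_op3(be, OP_LRP, x, y, a);
        }
        /* Otherwise lower to sub + mad: (y - x) * a + x. */
        reg d = emit_op2(be, OP_SUB, y, x);
        return emit_op3(be, OP_MAD, d, a, x);
    }

Going the other direction, i.e. recovering one lerp instruction from an
x * (1 - a) + y * a expression tree after other optimizations have
rewritten it, is the tricky pattern match the commit message alludes to.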