Presumably there needs to be an API-level mechanism to wait for the
background optimization to finish, so that piglit etc. can validate the
behavior of the optimized shader?
-- Chris
On Tue, Jul 10, 2012 at 5:17 AM, Eric Anholt wrote:
> Tiziano Bacocco writes:
>
>> I've done benchmarks comparing the proprietary drivers and Mesa [...]
I've done benchmarks comparing the proprietary drivers and Mesa; Mesa
seems to be up to 200x slower compiling the same shader. Since I
understand that optimizing that part of the code may take months or
even more, I have thought of solving it this way:

Upon calling glLinkProgram, an unoptimized
Tiziano Bacocco writes:
> I've done benchmarks comparing the proprietary drivers and Mesa; Mesa
> seems to be up to 200x slower compiling the same shader. Since I
> understand that optimizing that part of the code may take months or
> even more, I have thought of solving it this way:
>
> Upon calling glLinkProgram, an unoptimized