On Fri, Jun 26, 2015 at 4:20 PM, Darren Cook <dar...@dcook.org> wrote:

>
> Confirmed here:
>   http://blogs.nvidia.com/blog/2015/03/17/pascal/
>
> So, currently they use a 32-bit float, rather than a 64-bit double, but
> will reduce that to 16-bit to get a double speed-up. Assuming they've
> been listening to customers properly, that must mean 16-bit floats are
> good enough for neural nets?
>
Apparently you can use 16-bit representations in DNNs with little or no
degradation in accuracy:
http://arxiv.org/pdf/1502.02551.pdf
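
For what it's worth, the key trick in that paper is 16-bit fixed-point
arithmetic with stochastic rounding instead of round-to-nearest. A minimal
numpy sketch of the rounding step (the 12 fractional bits and the helper
name are just illustrative choices, not the paper's exact configuration):

    import numpy as np

    def stochastic_round_fixed16(x, frac_bits=12):
        """Quantize to signed 16-bit fixed point with `frac_bits` fractional
        bits, rounding up or down with probability proportional to the
        distance to each neighbouring representable value."""
        scale = 2.0 ** frac_bits
        scaled = x * scale
        floor = np.floor(scaled)
        # Probability of rounding up equals the fractional remainder.
        round_up = np.random.random_sample(x.shape) < (scaled - floor)
        quantized = (floor + round_up) / scale
        # Clip to the range representable in a signed 16-bit word.
        limit = (2 ** 15 - 1) / scale
        return np.clip(quantized, -limit, limit)

    # Quantize some weights and check the error is small and unbiased.
    weights = np.random.randn(1000).astype(np.float32) * 0.1
    q = stochastic_round_fixed16(weights)
    print("max abs error:", np.max(np.abs(q - weights)))
    print("mean error   :", np.mean(q - weights))  # ~0: rounding is unbiased

The unbiasedness is the point: individual weights lose precision, but the
rounding noise averages out over many SGD updates, which is why training
still converges.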

Nikos
_______________________________________________
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go
