I'm curious whether anyone has applied this idea in their Go software, and
what results they got. It is a way to make rotations (and, with more effort,
transpositions) go away as an issue: regardless of the orientation in which
you feed the board in, you get the same result back out. Short summary from
the paper (https://arxiv.org/pdf/1602.02660.pdf):

We have introduced a framework for building rotation
equivariant neural networks, using four new layers which
can easily be inserted into existing network architectures.
Beyond adapting the minibatch size used for training, no
further modifications are required. We demonstrated improved
performance of the resulting equivariant networks
on datasets which exhibit full rotational symmetry, while
reducing the number of parameters. A fast GPU implementation
of the rolling operation for Theano (using
CUDA kernels) is available at https://github.com/benanne/kaggle-ndsb.
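
For anyone who wants to play with the idea before wiring up the paper's
Theano/CUDA code, here is a rough numpy sketch of just the slicing and
pooling parts (the rolling layer is omitted, and the function names are
mine, not the paper's API):

    import numpy as np

    def cyclic_slice(x):
        # x: (N, C, H, W) minibatch of board planes.
        # Stack the four 90-degree rotations along the batch axis,
        # giving (4N, C, H, W); the convolutional layers that follow
        # then share their weights across all four orientations.
        return np.concatenate(
            [np.rot90(x, k, axes=(2, 3)) for k in range(4)], axis=0)

    def cyclic_pool(x, reduce=np.mean):
        # x: (4N, C, H, W) feature maps of a fully convolutional net
        # applied after cyclic_slice. Undo each rotation and average
        # the four copies: the result rotates along with the input
        # board (equivariance) and becomes fully invariant once it is
        # globally pooled down to a scalar.
        n = x.shape[0] // 4
        parts = [np.rot90(x[k * n:(k + 1) * n], -k, axes=(2, 3))
                 for k in range(4)]
        return reduce(np.stack(parts, axis=0), axis=0)

This also shows where the "adapting the minibatch size" caveat comes from:
slicing multiplies the effective batch size by four.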

It was apparently used by the winning entry in the National Data Science
Bowl plankton classification competition:

http://benanne.github.io/2015/03/17/plankton.html

And there's a related codebase here that implements the paper Group
Equivariant Convolutional Networks
(https://tacocohen.files.wordpress.com/2016/06/gcnn.pdf):

https://github.com/tscohen/gconv_experiments
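
The gist of the first layer in a group equivariant network, as I read it,
is just to apply every filter in all eight orientations of the board's
symmetry group. A rough sketch of that filter expansion (numpy; names are
mine, not the gconv_experiments API):

    import numpy as np

    def dihedral_filter_bank(w):
        # w: (C_in, k, k) single filter. Return its 8 transformed
        # copies (4 rotations of the filter and of its transpose),
        # which is what a first-layer p4m group convolution would
        # correlate the input with. The weights are shared, so the
        # parameter count stays the same while the output gains an
        # orientation axis.
        copies = []
        for flip in (False, True):
            f = np.swapaxes(w, 1, 2) if flip else w
            for k in range(4):
                copies.append(np.rot90(f, k, axes=(1, 2)))
        return np.stack(copies, axis=0)  # shape (8, C_in, k, k)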

The paper makes it sound like implementing this for rotation would be
straightforward, and implementing it for transposition more difficult but
also doable, which sounds perfect for Go AI applications.
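
For comparison, the brute-force way to get the same invariance without any
new layers is to evaluate the network on all eight symmetries of the board
and average, at eight times the cost. A quick sketch (the net() argument
here is a stand-in for whatever evaluation function you already have):

    import numpy as np

    def board_symmetries(board):
        # The eight symmetries of a square Go board: four rotations
        # of the board and four of its transpose.
        return [np.rot90(b, k) for b in (board, board.T) for k in range(4)]

    def symmetric_value(net, board):
        # Average a scalar evaluation over all orientations, so the
        # result no longer depends on how the board was fed in.
        return np.mean([net(b) for b in board_symmetries(board)])

The appeal of the equivariant layers is getting (most of) that effect
inside a single forward pass, with weight sharing instead of eight separate
evaluations.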

-Jonathan
