> I would very much appreciate an open source implementation of this - or rather, I'd rather spend my time using one to do interesting things rather than building one, I do plan to open source my implementation if I have to make one and can bring myself to build one from scratch...
I started building a convolutional network library for OpenCL at https://github.com/hughperkins/ClConvolve/ :

- tanh, relu, and linear activations
- fully-connected and convolutional layers
- OpenCL

OpenCL you might see as good or bad, depending on your point of view. It's certainly unique: Caffe uses CUDA, I believe, as does Theano, and so on. OpenCL has the advantage of being an open standard, and you can run it on many devices, eg on the integrated GPU in Intel Ivy Bridge CPUs.

I intend to implement 'pizza-slice' symmetry, or maybe 'kaleidoscope' symmetry is a better name. Either way, 4-way symmetry for the weights w: vertically, horizontally, and across both diagonals.

It's currently a work in progress. It can get 83% accuracy on MNIST using a single convolutional layer, and no other layers at all. The fully-connected layer also seems to be working. Forward prop and backward prop both run on the GPU for convolutional layers; fully-connected layers are still 100% on the CPU, but you would only have one such layer, right, so not a high priority? I'm currently building test cases to ensure that multiple, deep topologies work correctly.

Hugh
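For the curious, here is a minimal sketch of what that weight symmetry might look like. This is NumPy pseudocode of my own, not code from ClConvolve: it averages a square kernel over the vertical, horizontal, and diagonal reflections, which together with their compositions form the 8 symmetries of the square, so the result is invariant under all of them.

```python
import numpy as np

def symmetrize_kernel(w):
    """Average a square kernel over the 8 symmetries of the square.

    The 4 rotations of w plus the 4 rotations of w.T cover the
    identity, the 3 rotations, and all 4 reflections (vertical,
    horizontal, and both diagonals), so the average is unchanged
    by any of those transforms.
    """
    variants = [np.rot90(w, k) for k in range(4)]
    variants += [np.rot90(w.T, k) for k in range(4)]
    return sum(variants) / 8.0
```

Enforcing this after each weight update (or parameterizing only one 'slice' of the kernel) would be one way to get the symmetry; it cuts the number of free parameters per kernel by roughly a factor of 8.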
_______________________________________________
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go