Alvaro Begue: <CAF8dVMVMwi65m9jMTsvOa=qzortqz-dedh5494uwzeld9su...@mail.gmail.com>:

>On Tue, May 23, 2017 at 4:51 AM, Hideki Kato <hideki_ka...@ybb.ne.jp> wrote:
>
>> (3) CNN cannot learn the exclusive-or function due to the ReLU
>> activation function (used instead of the traditional sigmoid /
>> hyperbolic tangent). CNN is good at approximating continuous
>> (analog) functions but not Boolean (digital) ones.
>>
>
>Oh, not this nonsense with the XOR function again.
>
>You can see a neural network with ReLU activation function learning XOR
>right here:
>http://playground.tensorflow.org/#activation=relu&batchSize=10&dataset=xor&regDataset=reg-plane&learningRate=0.01&regularizationRate=0&noise=0&networkShape=4,4&seed=0.96791&showTestData=false&discretize=false&percTrainData=50&x=true&y=true&xTimesY=false&xSquared=false&ySquared=false&cosX=false&sinX=false&cosY=false&sinY=false&collectStats=false&problem=classification&initZero=false&hideText=false
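
In fact a ReLU network can represent XOR exactly, not just learn an approximation of it. A minimal NumPy sketch with hand-picked weights (the weights below are chosen for illustration and are not taken from the playground example or the original thread):

    import numpy as np

    def relu(z):
        return np.maximum(z, 0.0)

    # Hand-constructed 2-2-1 ReLU network:
    #   h1 = relu(x1 + x2), h2 = relu(x1 + x2 - 1), y = h1 - 2*h2
    W1 = np.array([[1.0, 1.0],
                   [1.0, 1.0]])
    b1 = np.array([0.0, -1.0])
    W2 = np.array([1.0, -2.0])

    def xor_net(x):
        # One hidden ReLU layer followed by a linear output unit.
        return relu(x @ W1 + b1) @ W2

    for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(x, xor_net(np.array(x, dtype=float)))
    # prints 0.0, 1.0, 1.0, 0.0 -- exactly XOR on the four Boolean inputs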
That NN has no "sharp" edges. With a sigmoid (hyperbolic tangent) activation function, changing the weights can change the sharpness of the edges of the approximated function. With ReLU, changing the weights only changes the slope.

Hideki
--
Hideki Kato <mailto:hideki_ka...@ybb.ne.jp>
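
To make the sharpness point concrete, here is a small numerical sketch (an addition for illustration, not part of the original exchange): scaling the incoming weight k of a tanh unit drives its transition toward a hard step, while scaling the incoming weight of a ReLU unit only steepens the ramp, leaving the output piecewise linear.

    import numpy as np

    xs = np.linspace(-1.0, 1.0, 5)   # sample points around the "edge" at 0
    for k in (1.0, 10.0, 100.0):     # k plays the role of the incoming weight
        tanh_out = np.tanh(k * xs)           # approaches a -1/+1 step as k grows
        relu_out = np.maximum(k * xs, 0.0)   # stays a ramp; only its slope is k
        print("k=%6.1f  tanh %s  relu %s"
              % (k, np.round(tanh_out, 2), np.round(relu_out, 1)))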