On 17-08-17 21:35, Darren Cook wrote:
> "I'm sure some things were learned about parallel processing... but the
> real science was known by the 1997 rematch... but AlphaGo is an entirely
> different thing. Deep Blue's chess algorithms were good for playing
> chess very well. The machine-learning methods AlphaGo uses are
> applicable to practically anything."
>
> Agree or disagree?
Deep Thought (the predecessor of Deep Blue) used a supervised-learning approach to set its initial evaluation weights. The details may be lost to time, but it's reasonable to assume some of that work carried over to Deep Blue. Deep Blue itself used hill-climbing to find evaluation features that did not seem to correlate well with strength, and to improve them. A lot of AlphaGo's strength comes from a fast, parallelized tree search.

Uh, what was the argument again? Maybe we should stop inventing artificial differences and appreciate that the tools in our toolbox have become much sharper over the years.

--
GCP
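For readers who haven't seen evaluation tuning by hill-climbing, here is a minimal Python sketch of the general idea. Everything in it (the feature names, the labelled test positions, the step size) is invented for illustration; it is not Deep Blue's actual tuning code, which was never published in this form.

    import random

    # Toy linear evaluation: a weighted sum of hand-crafted features.
    # Feature names are purely illustrative, not Deep Blue's.
    FEATURES = ["material", "mobility", "king_safety", "pawn_structure"]

    def evaluate(position, weights):
        """Score a position (here just a dict of feature values)."""
        return sum(weights[f] * position[f] for f in FEATURES)

    def benchmark(weights, test_positions):
        """Proxy for playing strength: fraction of labelled positions
        on which the evaluation's sign agrees with the reference verdict."""
        correct = 0
        for position, better_for_white in test_positions:
            if (evaluate(position, weights) > 0) == better_for_white:
                correct += 1
        return correct / len(test_positions)

    def hill_climb(weights, test_positions, steps=1000, delta=0.1):
        """Perturb one weight at a time; keep the change only if the
        benchmark score does not get worse, otherwise revert it."""
        best = benchmark(weights, test_positions)
        for _ in range(steps):
            f = random.choice(FEATURES)
            old = weights[f]
            weights[f] = old + random.choice([-delta, delta])
            score = benchmark(weights, test_positions)
            if score >= best:
                best = score      # accept the perturbation
            else:
                weights[f] = old  # revert it
        return weights, best

    if __name__ == "__main__":
        random.seed(0)
        # Fabricated test set: random feature vectors with a simple label.
        test_positions = []
        for _ in range(200):
            pos = {f: random.uniform(-1, 1) for f in FEATURES}
            label = pos["material"] + 0.5 * pos["mobility"] > 0
            test_positions.append((pos, label))
        start = {f: 1.0 for f in FEATURES}
        tuned, score = hill_climb(dict(start), test_positions)
        print("tuned weights:", tuned, "agreement:", score)

The point of the sketch is only that the optimizer never looks at how the evaluation is computed; it just nudges weights and keeps whatever scores better on the benchmark, which is why such tuning can miss features that matter little for the chosen benchmark.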