Decades ago, “artificial intelligence” meant real reasoning: proving or refuting mathematical conjectures. The field of automated theorem proving is also known as “automated reasoning”, and it holds recurring competitions where systems are evaluated: <http://www.cs.miami.edu/~tptp/CASC/>. The results of these programs serve as a measure of reasoning capability.
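As a minimal sketch of the kind of fully formal deduction such systems carry out (the theorem and all names here are illustrative, not taken from any CASC problem set), here is a classic syllogism machine-checked by the Lean proof assistant:

```lean
-- "All men are mortal; Socrates is a man; therefore Socrates is mortal."
-- Every step is verified by the kernel; no approximation is possible.
theorem syllogism (Person : Type) (Man Mortal : Person → Prop)
    (all_men_mortal : ∀ x, Man x → Mortal x)
    (socrates : Person) (h : Man socrates) :
    Mortal socrates :=
  all_men_mortal socrates h
```

A proof assistant either accepts this proof or rejects it; there is no notion of being “slightly less wrong”.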
It is a pity, but kids these days think that “computer reasoning” simply means doing statistical regressions for opaque models (“neural networks”) using the brute, dumb computing power of semi-specialized vector processors. The recent trend of human-programmed computers with (partially) specialized hardware defeating humans in Go is just entertainment: it is a comparatively easy problem, since no proofs are involved and the structures that can occur are finite and limited in diversity. To excel at Go (or any other game), it is enough to be slightly less wrong than the opponent. Let me know when these “neural networks” have any success in solving hard problems that involve actual deduction in formal logic (not just proof-less speculation), where there is no room for mistakes, approximations, or being “good enough”.

On 15/07/18 19:06, Michael Alford wrote:
> Links to this showed up in my inbox the other day, thought it might be
> of interest to some in the group:
>
> https://deepmind.com/blog/measuring-abstract-reasoning/
_______________________________________________
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go