I'm fond of the words "degenerate" and "trivial", which allow nearly useless 
models to be true without being all that meaningful. Such "limit points" close 
behavior covers and help make the argument that *breaking* a model is more 
important than validating it.

On 11/30/20 12:54 PM, Steve Smith wrote:
> Or a "model of nothing fit to everything we know: useful or merely wrong?"
> 
> On 11/30/20 1:41 PM, Jochen Fromm wrote:
>> Chris Anderson, the editor in chief of Wired, asks if a computer can find a 
>> theory of everything merely by learning from data. Unfortunately, most deep 
>> learning models are black boxes that deliver good results but are hard 
>> to understand. Would a theory of everything be a theory of nothing? It 
>> reminds me of Russell Standish's book "Theory of Nothing".
>> https://www.wired.com/2008/06/pb-theory/


-- 
↙↙↙ uǝlƃ

- .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam
un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ 