Yeah that's nice and all, but I don't see how this would steer our research in any way.
On Sat, 10 Aug 2019 at 03:09, Matt Mahoney <[email protected]> wrote:

> Suppose you have a simple learner that can predict any computable sequence
> of symbols with some probability at least as good as random guessing. Then
> I can create a simple sequence that your predictor will get wrong 100% of
> the time. My program runs a copy of your program and outputs something
> different from your guess.
>
> All the empirical evidence supports this. Good compressors have a lot of
> code to handle lots of special cases.
>
> On Fri, Aug 9, 2019, 8:15 PM Ben Goertzel <[email protected]> wrote:
>
>>> Legg proved there is no such thing as a simple, universal learner. So we
>>> can stop looking for one.
>>
>> To be clear, these algorithmic information theory results don't show
>> there is no such thing as a simple learner that is universal in our
>> physical universe...
>>
>> I'm not saying there necessarily is one, just pointing out that the math
>> is not so practically applicable as your statement implies...
>
> *Artificial General Intelligence List <https://agi.topicbox.com/latest>* / AGI /
> Permalink <https://agi.topicbox.com/groups/agi/T1ff21f8b11c8c9ae-M8269e585c24ec57005dafb93>

--
Stefan Reich
BotCompany.de // Java-based operating systems
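For what it's worth, Mahoney's diagonalization argument quoted above ("my program runs a copy of your program and outputs something different from your guess") can be sketched in a few lines of Python. The `predictor` here is a hypothetical stand-in for any deterministic learner, not anything from the thread; the point is only that the construction defeats it on every symbol:

```python
# Diagonalization sketch: for ANY deterministic predictor, an adversary
# that simulates a copy of it and emits the opposite bit produces a
# computable sequence the predictor gets wrong 100% of the time.

def predictor(history):
    # Hypothetical "simple learner": predict the majority bit seen so far.
    return 1 if sum(history) * 2 > len(history) else 0

def adversarial_sequence(predict, n):
    # Run a copy of the predictor at each step and output the other bit.
    history = []
    for _ in range(n):
        guess = predict(history)
        history.append(1 - guess)  # always differs from the guess
    return history

seq = adversarial_sequence(predictor, 10)
# By construction, every one of the predictor's guesses along seq is wrong.
```

The same construction works against any predictor you plug in, which is the force of the argument: the adversary's program is barely longer than the predictor it defeats, so the defeating sequence is itself "simple".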
