That's a good example of the weakness of the lossy compression model. We
only use it because it's better than nothing. The recipe: find a function
that approximates the data, and choose the one that minimizes the
description length of the function plus the description length of the
prediction errors.
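
In symbols, the score for a candidate function f is roughly

  score(f) = L(f) + L(errors | f)

where L(f) is the number of bits needed to encode f and L(errors | f)
the bits needed to encode the prediction errors given f. The best model
is the f that minimizes the sum.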

Another example. Suppose you have 10 sample points of the form (x, e^x).
You can search the space of polynomials for the best fit. A 9th order
polynomial will fit the points exactly, but a lower order approximation
might get a better score. Which will make better predictions for new x?
Neither, of course, because the true function is e^x and no polynomial
extrapolates it well; but we can't search the space of all possible
functions either.
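
Here is a rough sketch of that experiment in Python. The coding costs
are illustrative assumptions, not a standard MDL code: 32 bits per
polynomial coefficient for the model, and a Gaussian code for residuals
quantized to an assumed precision eps.

import numpy as np

# 10 samples of (x, e^x) on [0, 2]
x = np.linspace(0, 2, 10)
y = np.exp(x)

eps = 1e-6  # assumed precision to which residuals are encoded

def description_length(degree):
    coeffs = np.polyfit(x, y, degree)       # least-squares fit
    residuals = y - np.polyval(coeffs, x)
    model_bits = 32 * (degree + 1)          # assumed cost per coefficient
    sigma = max(residuals.std(), 1e-15)     # guard against an exact fit
    # bits to encode each residual to precision eps under a Gaussian model
    bits_each = max(0.0, 0.5 * np.log2(2 * np.pi * np.e * sigma**2 / eps**2))
    return model_bits + len(x) * bits_each

for d in range(10):
    print(d, round(description_length(d), 1))  # lowest score wins

With these assumed costs, some intermediate degree usually scores best:
the exact 9th order fit pays for ten coefficients, while very low orders
pay in residual bits. Whether the winning degree predicts e^x well
outside the sample range is a separate question, which is the point.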

On Thu, Nov 21, 2019, 11:47 AM James Bowery <[email protected]> wrote:

> I, quite deliberately, did not mention "Solomonoff Induction" as an
> information criterion for model selection, precisely because it is not
> computable. The point of my conjecture is that there is a very good
> reason to select "the smallest executable archive of the data" as your
> information criterion over the other information criteria -- and it has to
> do with the weakness of "lossy compression" as model selection.
>
