All algorithmic information entails "two parts" (as in "two-part
messages," a phrase frequently appearing in this context), just as all
computer programs entail instructions and literals in their executable
images.  So let's talk about two different algorithms that output the same
set of observations:  One has more "literal data" and fewer "instructions"
than the other.  It will generally be a larger executable image.  There is
a sense in which the failure to convert literal data into instructions can
be seen as "loss", both because it compresses the data less well (while
remaining lossless) and because, according to AIT, it is more prone to
overfitting and hence less likely to produce correct predictions.  I mean,
you can take any data set and say, "That's just a bunch of random noise."
and "lose" it in one big literal string that, when executed, outputs it
literally.  Another approach is to "lose" the portions of the program that
are literals and replace them with a random number generator -- resulting
in a shorter program.  In some circumstances, that's a much more economical
way to go, but that depends on what you value in the output of the program.
If you randomize the wrong literals, you may find >>>for some purposes<<<
the resulting output to be of less >>>value<<<.  It all depends on the
value function of your decision tree.
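As a toy sketch of this trade-off (my own illustration in Python; the
specific data and "programs" below are made up for the example, not taken
from anything above):

```python
# Toy sketch: three "programs" that output the same kind of observations,
# differing in how much of each is literal data versus instructions.

# The observations: a sequence with a hidden regularity.
observations = bytes(i % 7 for i in range(1000))

# Program A: memorize everything as one big literal string
# ("that's just a bunch of random noise").
program_a = "print(" + repr(observations) + ")"

# Program B: convert the literal into instructions -- a much smaller
# executable image that losslessly reproduces the same output.
program_b = "print(bytes(i % 7 for i in range(1000)))"

# Program C: "lose" the literals entirely and substitute a random number
# generator -- the output now matches only statistically, not exactly.
program_c = "import random; print(bytes(random.randrange(7) for _ in range(1000)))"

# B has the smallest image: the regularity moved from the literal part of
# the two-part message into the instruction part.
print(len(program_a), len(program_b), len(program_c))
```

Whether Program C's shorter-but-lossy route is acceptable depends, as
above, on the value function applied to the output.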

On Tue, Nov 9, 2021 at 9:43 AM John Rose <[email protected]> wrote:

> On Tuesday, November 09, 2021, at 9:06 AM, James Bowery wrote:
>
> First of all, in the word "value" you finally touch on the relevant
> characteristic of lossy compression that brings them into confluence with
> sensors and measurement:  the value function that drives both evolution and
> AGI's decisions (aka "utility function") is an open parameter in AGI
> formulations such as AIXI = AIT⊗SDT = Algorithmic Information Theory ⊗
> Sequential Decision Theory.  Lossy compression belongs on the Sequential
> Decision Theory side of the ⊗ just as does the fitness function of
> evolution belong on the decision side of biological evolution.  It's a
> matter of economics -- of engineering -- of technology -- of "ought" but
> NOT of "is".
>
> "is" belongs on the AIT side of the ⊗.  "is" is lossless but only with
> respect to the data provided by the evolutionary process in the form of
> sensors.  The cognitive process also has economic constraints about what
> data can be retained -- memorized -- and that _too_ involves a "lossy"
> process that limits what AIT can do -- even given infinite resources, which
> are not available to AIT in any event.  But that doesn't mean AIT throws
> data away -- it just means AIT never has certainty that it has achieved the
> smallest, lossless, memorization of the data provided by evolution.
>
>
> That is a very interesting relationship you are describing. I hadn't
> thought about it like that. Part of the reason I'm researching this is
> reading posts like the one you just made, which explain it in a way that
> is simultaneously general and succinct. That's quite amazing; I'll be
> pondering it for a while…
>
> Granted this discussion became confused (or more confused) when two topics
> got mixed, the issue of measurement loss as you describe and a previous
> discussion on the topic of lossy/lossless duality in compressors.
>
> Are the topics closely coupled?
>
> It was moving towards compressive actions, versus compressors, the 1-to-1
> mapping Matt is referring to.
>
> A simple example is a compressor categorized as lossy performing lossless
> compression depending on the data being compressed, post measurement. Put
> simply, is it possible that a lossy compressor compresses some data
> losslessly? You have to agree that it happens, no?
>
> The sort of interzone region in the lossy/lossless duality is important,
> I believe, as my research led there. One basic reason is the
> power-consumption choices made by an AGI when processing data. Another
> reason is the transmissive processing characteristics of the two… but I
> believe there are deeper things going on when you look at the topology of
> all compressors and their data mappings alongside physical thermodynamics
> and informational thermodynamics. Your description "AIXI = AIT⊗SDT =
> Algorithmic Information Theory ⊗ Sequential Decision Theory" encourages
> that.

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5ff6237e11d945fb-M645fb2a7b3fec12d36a7f483
Delivery options: https://agi.topicbox.com/groups/agi/subscription
