No.  I'm saying precisely the opposite:

First of all, consider Popper's "falsification" dogma:

Someone has a beautiful physical theory that takes only 10 kilobytes to encode
as an algorithm.  It predicts water boils at 100C.  They conduct 20 experiments
and in each one, water boils at 100C.  Popper says this is meaningless
because you can't validate theories you can only falsify them.  So our
intrepid scientist continues to run experiments since he can never be
certain that his theory is true.  After all, he wants INFORMATION on the
natural world and only FALSIFICATION can provide Popperian INFORMATION.  So
our intrepid scientist continues to perform 1022 experiments, all
reproducing the same result of a 100C boiling point for water.  Then on
experimental run #1023, he strikes scientific Gold!  Water boils _not_ at
100C but at 100.001C!!!!!!   FALSIFICATION AT LAST!!!  Our hapless
experimentalist, lobotomized by Popper, has now concluded that his theory
is FALSE and in the rapture of the damned, at his advanced age after all
those useless non-validations he dances off into the sunset having slain
The Devil Spawn of Pseudoscience!

The lossless compression scientist's treatment of this dataset:

The theory itself requires an algorithmic binary of 10k*8bits, for
80,000 bits; the run length of 1022 requires another 10 bits; and encoding
"100" requires 7 bits, for a total of 80,017 bits of algorithmic
information. BUT that's only up to experiment #1022!  Popper says, "You
IDIOT!  If your so-called 'elegant theory' were true, it would require only
10k*8bits+10bits+7bits, but now watch as I POKE A HOLE in your 'elegant'
theory:  To encode experiment #1023, you require _more_ than 80,000bits!
Your 'elegant' theory is shown to be merely a balloon inflated by your own
biases, now lying flat and deflated on the floor following my
scientifically rigorous debunk that POKED A HOLE in its thin skin.   You
must now go back to the drawing board because, as we all know, post hoc
theorization based on the same data is mere pseudoscientific
rationalization of confirmation bias inherent to your monkey brain.  I mean
_all_ machine learning experts _know_ that the only way you can do
additional model selection is to set aside data that your algorithm has not
yet seen, lest you be guilty of circular reasoning or type I errors.  Stop
looking under the lamp-post you drunk!"
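The bit accounting above can be checked with a short sketch (assuming plain
binary integer codes; the 10 kB theory size, the 1022-run length, and the
one-tick deviation are the figures from the text, and the exception code is
my own crude choice):

```python
import math

def bits(n: int) -> int:
    # Bits to write the integer n in plain binary (e.g. 1022 -> 10, 100 -> 7).
    return max(1, math.ceil(math.log2(n + 1)))

THEORY_BITS = 10_000 * 8           # the 10 kB theory, encoded as an algorithm

# Two-part code for the first 1022 runs: theory + run length + boiling point.
clean_runs = THEORY_BITS + bits(1022) + bits(100)
print(clean_runs)                  # 80017 bits

# Run #1023 deviates by 0.001C.  Appending it as a literal exception costs
# only the index of the odd run plus the deviation in thermometer ticks
# (1 tick = 0.001C here) -- a handful of bits, not a new theory.
exception = bits(1023) + bits(1)   # ~11 extra bits
print(clean_runs + exception)      # 80028 bits
```

Whatever exception code one picks, the point survives: a one-run anomaly adds
a few bits to an 80,000-bit model, so the theory is barely dented rather than
"falsified."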

A more rational observer notes merely that there was imperfect
measurement in one experimental run and sets forth to wonder what
variables might have been missed and better controlled -- but then, again
quite rationally, he considers the _degree_ of increase in the algorithmic
information represented by the error and decides to go out and fly a kite
or read Blood Meridian instead.  Perhaps if there were 1022 experiments
reporting 100.001C and only one reporting 100C, it might be interesting
enough to drop the kite and the book and come up with a theory that
"brackets" the thermometer in use, saying, "This thermometer is biased by
.001C."
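The "biased thermometer" case can be framed the same way, as a model-selection
comparison.  A sketch, under the same crude coding assumptions as before (the
per-exception cost and the one-tick bias term are my own illustrative choices):

```python
import math

def bits(n: int) -> int:
    # Bits to write the integer n in plain binary.
    return max(1, math.ceil(math.log2(n + 1)))

THEORY_BITS = 10_000 * 8

# Scenario: 1022 runs read 100.001C, one run reads exactly 100C.
# Option A: keep "boils at 100C" and list 1022 exceptions, each costing a
# run index plus a one-tick deviation.
option_a = THEORY_BITS + bits(1022) + bits(100) + 1022 * (bits(1023) + bits(1))

# Option B: add a one-line bias term ("this thermometer reads +0.001C"),
# after which only the lone 100C run remains as an exception.
BIAS_TERM_BITS = bits(1)           # the correction itself, in ticks
option_b = (THEORY_BITS + BIAS_TERM_BITS + bits(1022) + bits(100)
            + bits(1023) + bits(1))

print(option_a, option_b)          # option B wins by thousands of bits
```

Under this accounting the "bracketed" theory compresses the data by thousands
of bits over the stubborn one, which is exactly the sense in which the bias
gets factored out and labeled as bias.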

So we see how "bias", under lossless compression, not only gets factored
out but is paraded around for all to recognize AS BIAS.

On Tue, Nov 16, 2021 at 5:33 AM John Rose <[email protected]> wrote:

> On Monday, November 15, 2021, at 7:03 PM, James Bowery wrote:
>
> 1) Adopted lossless compression of a wide range of longitudinal measures
> of social relevance, including GWAS, as causal model selection in their
> macrosocial models upon which they rely for their centralization of social
> policy.
>
>
> Are you saying that in lossless compression the distortions of reality,
> untruths, would compress more, have less complexity than truths? And
> produce better models? I’m trying to intuitively understand what you’re
> saying.
>
> Permalink
> <https://agi.topicbox.com/groups/agi/Tfc4d42f7fb128a4f-M8d6efaa65f3e67acfcaec982>
>

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tfc4d42f7fb128a4f-M2076b2aea841da4eda17aa74
Delivery options: https://agi.topicbox.com/groups/agi/subscription
