On Sat, Feb 15, 2020 at 2:48 PM Robert Levy <[email protected]> wrote:

> Are the mods just going to ignore James Bowery?
>

Since algorithmically correcting "bias" is now seen as a central
responsibility of network-effect content monopolies like Google, YouTube,
Twitter, Facebook, etc., rigorously measuring a dataset's "bias" is even
more urgent than measuring "intelligence" or even "friendliness".

Exactly _how_ urgent?

Consider this:

These content monopolies are intent on avoiding "a repeat of the 2016
election", whatever that means.  One thing is for certain:  Claims that
they are attempting to provide an unbiased view of the world via their
machine learning algorithms in the run-up to the 2020 election are viewed
with a great deal of suspicion by people wielding on the order of 400
million guns in the US alone.

That's _exactly_ how urgent.

Since we're stuck with some form of "prior" (a speed prior, a space prior,
etc.), and any prior introduces bias in some sense, it seems the more
minimal the prior, the less bias it introduces into a minimum description
length of all available data.

So why aren't all these content giants striving to create the largest
database of diverse, longitudinal social measures that their hardware and
human resources can support, and losslessly compressing it, so as to have
an unbiased platform upon which to measure "bias" in new data being added
to their content stores?
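To make the idea concrete, here is a minimal sketch of the kind of
compression-based yardstick the paragraph above suggests.  The function
name `divergence_score` and the toy corpora are my own illustrative
inventions, and zlib stands in for whatever stronger compressor a content
giant would actually use; the principle is just that new data which shares
structure with the losslessly compressed baseline costs few extra bits to
encode, while data foreign to the baseline costs nearly its full
stand-alone size:

```python
import zlib

def compressed_size(data: bytes) -> int:
    # Length of the zlib-compressed representation at maximum effort.
    return len(zlib.compress(data, 9))

def divergence_score(baseline: bytes, new: bytes) -> float:
    """Rough proxy for how 'surprising' new data is relative to a
    compressed baseline corpus: the extra compressed bytes needed to
    encode the new data appended to the baseline, normalized by the
    cost of compressing the new data alone.  Scores near 0 mean the
    new data is largely redundant with the baseline; scores near 1
    mean it shares little structure with it."""
    c_base = compressed_size(baseline)
    c_new = compressed_size(new)
    c_both = compressed_size(baseline + new)
    return (c_both - c_base) / c_new

# Toy demonstration (hypothetical data, for illustration only):
baseline = b"the cat sat on the mat. " * 200
similar = b"the cat sat on the mat. " * 20     # redundant with baseline
different = bytes(range(256)) * 4              # unrelated, hard to compress

# Data already represented in the baseline scores much lower than
# data the baseline has never seen.
assert divergence_score(baseline, similar) < divergence_score(baseline, different)
```

This is the same intuition behind the normalized compression distance
literature: a minimal, losslessly compressed model of everything already
collected gives a reference point, and "bias" in incoming data shows up as
description-length cost relative to that reference.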

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9ab9fba591214e64-M0b4a22449a4845769a7a524a
Delivery options: https://agi.topicbox.com/groups/agi/subscription
