This is the script from my national network radio report yesterday on
"Artificial Stupidity", with an example of a specific stupid answer from
Google AI Overviews. As always, there may have been minor wording
variations from this script as I presented it live on air.

- - -
Yeah, so I'm a big fan of truth in labeling, and I really do feel that
calling these generative AI systems "artificial intelligence" isn't
accurate, and that "artificial stupidity" would be much more on
target.

Now I want to emphasize as always that there are many really valuable
and important applications for what we would call non-generative AI --
in all kinds of situations where masses of data need to be analyzed or
sorted through. For example, non-generative AI systems have shown
great promise in analyzing X-rays and other medical test results,
often much faster and reportedly in many cases more accurately than
humans could. That's just one example where this tech shows great
promise; there are many others.

BUT it's generative AI -- the systems Big Tech is pushing to
summarize information, write reports, and answer questions in search,
chatbots, and the like -- where the real problems are. Because that's
where you really need actual intelligence and human-level
understanding, and these systems just cannot be trusted to always
give reliable, accurate, and repeatable answers.

And these systems are now being used to write summaries of doctor
visits, to write police reports, and in other areas where people's
lives could be seriously harmed by errors. People are supposed to
proofread this stuff, but we know that will usually be cursory, and
even that will slack off over time as people become more dependent on
these systems.

Over the weekend I found a great example of just how incredibly stupid
these systems can be at very basic questions, even as the continuous
stream of hype from these firms is increasingly about achieving
human-level intelligence or even superintelligence.

Google used to give high quality answers to math questions in Google
Search. After all, most ordinary math is straightforward for computers
and has been for a very long time. But now Google will often instead
give one of their notoriously inaccurate AI Overview answers to math
questions. And when I made the Google Search query "3.18 inches as a
fraction" -- the AI Overview answer was: "3.18 inches is equal to 1/8
of an inch".

Other people were able to replicate this. Some got different wrong
answers. Slight changes in query wording also affected the results,
and where you're located may matter as well.

Now clearly this answer is nonsensical. How did Google make this kind
of wacky error? It appears that Google AI was confusing inches and
millimeters, based on conversion charts it had sucked in from the Web
-- it's just not intelligent. These generative AI large language
models just don't understand what they're doing in the way we normally
use the term intelligence -- so they make these kinds of really stupid
mistakes.
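If you want to see how well the inch/millimeter confusion theory fits
the number, here's a quick back-of-the-envelope check, again in
Python (my own illustration):

    # 1/8 of an inch converted to millimeters, at 25.4 mm per inch
    print((1 / 8) * 25.4)   # prints 3.175

In other words, 1/8 of an inch is 3.175 millimeters, which rounds to
3.18 -- strongly suggesting the AI Overview matched "3.18" against
millimeter conversion data and treated inches as millimeters.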

Some of the other wrong answers that other people got are harder to
explain, and other folks got correct answers that were not AI Overview
answers but older-style Google Search calculation answers.

A big part of the problem, even with uncomplicated fact questions, is
that you can't be sure that an answer you get at one moment will be
the same answer you get a minute later, an hour later, a day later or
whatever.

And yeah, this particular crazy wrong answer is obvious. But the real
danger is the flood of frequent errors that are NOT so obvious --
answers that are wrong in whole or in part but appear reasonable, so
people trust them to be correct. This is especially nightmarish with
medical questions, for example, and these firms generally are not
taking responsibility for inaccuracies in these answers.

They continue to pour billions and billions of dollars into these
systems that demand more and more electricity, in desperation to try
to find ways to make a profit with them.

It's deeply ironic that despite all these vast sums spent and all the
enormously ridiculous hype, the result has been massively expensive
boondoggles that are often less accurate, less dependable, and less
trustworthy than a 99-cent pocket calculator.

- - -
--Lauren--
Lauren Weinstein lau...@vortex.com (https://www.vortex.com/lauren)
Lauren's Blog: https://lauren.vortex.com
Mastodon: https://mastodon.laurenweinstein.org/@lauren
Founder: Network Neutrality Squad: https://www.nnsquad.org
        PRIVACY Forum: https://www.vortex.com/privacy-info
Co-Founder: People For Internet Responsibility