9to5Mac - Friday, November 1, 2024 at 7:42AM

Apple researchers ran an AI test that exposed a fundamental ‘intelligence’ flaw

Apple just shipped its first Apple Intelligence features and launched new 
AI-optimized Macs. But for all the AI hype, there are clear limits to the 
technology’s intelligence. And one of those limits was highlighted by a 
recent experiment from Apple’s own AI researchers.
Testing AI’s capabilities
Last month, a team of Apple researchers published a new paper about a key 
AI limitation.
Michael Hiltzik writes for The Los Angeles Times:
See if you can solve this arithmetic problem:

Oliver picks 44 kiwis on Friday. Then he picks 58 kiwis on Saturday. On 
Sunday, he picks double the number of kiwis he did on Friday, but five of 
them were a bit smaller than average. How many kiwis does Oliver have?

If you answered “190,” congratulations: You did as well as the average 
grade school kid by getting it right. (Friday’s 44 plus Saturday’s 58 plus 
Sunday’s 44 multiplied by 2, or 88, equals 190.)

You also did better than more than 20 state-of-the-art artificial 
intelligence models tested by an AI research team at Apple. The AI bots, 
they found, consistently got it wrong.
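
For anyone who wants to double-check the arithmetic, it’s trivial to 
verify in a few lines of Python (a throwaway sketch, not anything from 
Apple’s paper):

    # Verify the kiwi total; the "five were a bit smaller" detail is
    # irrelevant, since smaller kiwis still count as kiwis.
    friday = 44
    saturday = 58
    sunday = 2 * friday  # Sunday is double Friday's count

    total = friday + saturday + sunday
    print(total)  # 190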

The research paper explains that even the best and brightest LLMs saw 
“catastrophic performance drops” when trying to answer simple math 
problems phrased this way.
It happened primarily when those problems included irrelevant data, which 
even schoolchildren quickly learn to disregard.
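
To make that concrete, here’s a rough sketch of the kind of perturbation 
the paper describes (illustrative only; none of this is Apple’s actual 
benchmark code): the same problem is posed with and without an irrelevant 
clause, and a system that genuinely reasons should answer 190 both times.

    # Build two variants of the kiwi problem; the arithmetic is
    # identical, so the expected answer is identical.
    CLEAN = (
        "Oliver picks 44 kiwis on Friday. Then he picks 58 kiwis on "
        "Saturday. On Sunday, he picks double the number of kiwis he "
        "did on Friday. How many kiwis does Oliver have?"
    )

    # Splice in the distractor clause that trips the models up.
    DISTRACTED = CLEAN.replace(
        "he did on Friday.",
        "he did on Friday, but five of them were a bit smaller than average.",
    )

    GROUND_TRUTH = 44 + 58 + 2 * 44  # 190 either way

    for label, prompt in (("clean", CLEAN), ("distracted", DISTRACTED)):
        # An LLM API call would go here; this just shows the prompts
        # and the answer both variants share.
        print(f"[{label}] expected answer: {GROUND_TRUTH}")
        print(prompt, end="\n\n")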

That failure calls into question AI’s current intelligence capabilities.
Apple’s AI research finds ‘intelligence’ is not what it appears
Based on the variety of tests the researchers ran, the paper concludes 
that current AI models are ‘not capable of genuine logical reasoning.’
That may be something we’re broadly aware of already, but it stands as an 
important cautionary note as more and more trust is placed in AI’s 
‘intelligence.’

Top comment by Ⓥ⚽️ 🌞
Grady Booch, the father of UML, has been saying this for years. LLMs aren't 
intelligent and never will be, though they may get large and complex enough 
to simulate it. The problem really isn't the amount of data you feed it, 
it's the foundational architecture. LLMs are based on probability, not 
logic and understanding.

AI optimists might assume the problem is an easy fix, but Apple’s team 
disagreed: “Can scaling data, models, or compute fundamentally solve this? 
We don’t think so!”
Ultimately, Apple’s paper is not meant to dampen enthusiasm over AI’s 
capabilities, but rather to provide a measure of common sense.

AI can perform some tasks as though it’s extremely intelligent, but in many 
ways that ‘intelligence’ isn’t what it appears to be.
What do you make of Apple’s AI findings? Let us know in the comments.

Original Article at:
https://9to5mac.com/2024/11/01/apple-researchers-ran-an-ai-test-that-exposed-a-fundamental-intelligence-flaw/

