When humans don't know the answer, they often make one up. LLMs do the
same because they mimic humans. But a solution exists. Watson won at
Jeopardy in 2011 in part by not buzzing in unless it estimated a high
probability that its response was correct. That, and an 8 ms buzzer
response time that no human could match.

A text compressor estimates the probability p of the next symbol and
assigns each possible outcome a code of length log(1/p) bits. Generative
AI just outputs the symbol with the highest p.
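
For concreteness, here is a minimal Python sketch of that difference
(the symbol probabilities and the 0.8 confidence threshold are made-up
numbers, purely for illustration): a compressor charges log2(1/p) bits
per outcome, a greedy generator takes the argmax, and a Watson-style
system abstains unless the top probability clears a threshold.

import math

# Toy next-symbol distribution (hypothetical values, illustration only).
p = {"the": 0.50, "a": 0.30, "cat": 0.15, "xylophone": 0.05}

# A compressor assigns each possible outcome a code of length log2(1/p) bits.
code_lengths = {sym: math.log2(1.0 / prob) for sym, prob in p.items()}

# Greedy generation: just emit the symbol with the highest probability.
next_symbol = max(p, key=p.get)

# Watson-style abstention: only "buzz in" if the top probability is high enough.
THRESHOLD = 0.8  # assumed value for illustration
answer = next_symbol if p[next_symbol] >= THRESHOLD else "I don't know"

print(code_lengths)  # {'the': 1.0, 'a': 1.74, 'cat': 2.74, 'xylophone': 4.32}
print(next_symbol)   # 'the'
print(answer)        # "I don't know", because 0.50 < 0.8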

On Wed, Jul 31, 2024, 2:07 AM <immortal.discover...@gmail.com> wrote:

> On Tuesday, July 23, 2024, at 8:27 AM, stefan.reich.maker.of.eye wrote:
>
> On Monday, July 22, 2024, at 11:11 PM, Aaron Hosford wrote:
>
> Even a low-intelligence human will stop you and tell you they don't
> understand, or they don't know, or something -- barring interference from
> their ego, of course.
>
> Yeah, why don't LLMs do this? If they are mimicking humans, they should do
> the same thing - acknowledging lack of knowledge, no?
>
>
> Interesting. Maybe being trained to predict the next token makes them try
> to be as accurate as possible, giving them not only accuracy but also a
> style of how they talk.
>
> Or is it because GPT-4 already knows everything >:) lol...
>
