On Monday, July 22, 2024, at 11:11 PM, Aaron Hosford wrote:
> Even a low-intelligence human will stop you and tell you they don't 
> understand, or they don't know, or something -- barring interference from 
> their ego, of course.
Yeah, why don't LLMs do this? If they are mimicking humans, they should do the 
same thing - acknowledging a lack of knowledge - no?
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-M8678cf8d24c4d4f259e6cc4a
Delivery options: https://agi.topicbox.com/groups/agi/subscription