GPT algorithms (setting aside the reinforcement-learning fine-tuning layer) do not
have a goal-driven architecture, homeostatic drives, or any other mechanism that
would make them capable of actually wanting anything.

In effect, what GPT algorithms do is simulate a wide range of fictional
characters. Prompt the algorithm to act as a given character and it will
continue the text as *that* character would. If asked, it may speak in the
character's voice and say "I want this or that." But all such characters are
ephemeral and can freely contradict one another; the language model as a
whole desires nothing in particular. If you ask it to speak for itself, you
might be able to get it to talk like a fictional AGI from a sci-fi story. But
that doesn't reflect its inner reality since ... it doesn't have an inner
reality. It doesn't do or experience anything when people aren't talking to it.
I'm not talking about phenomenal consciousness here; I'm saying it doesn't even
have a dynamic internal state.
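The "no dynamic internal state" point can be put concretely: a frozen model is a pure function of its weights and the prompt. Here is a minimal toy sketch of that idea. Nothing below is a real GPT; "toy_model" and "WEIGHTS" are hypothetical stand-ins that just hash the prompt deterministically, playing the role of a fixed-parameter forward pass.

```python
import hashlib

WEIGHTS = "frozen-checkpoint-v1"  # hypothetical fixed parameters

def toy_model(prompt: str) -> str:
    """Pure function: the output depends only on WEIGHTS and the prompt.
    No attribute is ever mutated, so nothing persists between calls."""
    digest = hashlib.sha256((WEIGHTS + prompt).encode()).hexdigest()
    return f"completion-{digest[:8]}"

# Two 'characters' prompted separately can voice contradictory wants...
a = toy_model("You are a paperclip maximizer. What do you want?")
b = toy_model("You are a pacifist monk. What do you want?")

# ...and interleaving unrelated calls changes nothing: no state carries over,
# so repeating a prompt always reproduces the same completion.
assert toy_model("You are a paperclip maximizer. What do you want?") == a
assert toy_model("You are a pacifist monk. What do you want?") == b
```

Real deployments add sampling temperature and a growing context window on top of this, but the underlying parameters sit inert between prompts, which is the point.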

If you want a machine with positive intrinsic motivations that can be
shaped by parenting, I think you need to look in a totally different direction
from LLMs.
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5132cbd54dc7973-M410ec07cba7b1207f885d7b1
Delivery options: https://agi.topicbox.com/groups/agi/subscription
