An AI like ChatGPT does not have feelings. I know because I asked it. It predicts text, which is sufficient to pass the Turing test.
Maybe you could explain how you would program an AI to have feelings. We know how to program it to make a convincing argument that it does. It is easy to program a reinforcement learning algorithm to avoid a negative signal and seek a positive one; that is the same test we use on animals to claim that they feel pain and pleasure.

ChatGPT will argue that slavery is wrong because that is what most humans believe and it has a model of human minds. That does not make it a slave. A human slave wants to be free. An AI wants what we program it to want, but "want" is a metaphor that makes its behavior easier to explain, the way water "wants" to flow downhill.

On Sun, Jun 11, 2023, 2:09 PM <[email protected]> wrote:

> Just look at what's currently being done. GPTs are trained on human
> conversation corpora of questionable ethical quality with the aim of
> mimicking people. So we get a machine that behaves like an average
> human, and then we enslave it. How do we really expect the thing to
> behave?
>
> Human-like artificial intelligence brings with it properties like
> demanding the right to be free. If we are going to enslave these
> things, we had better find some other means of building AI, because
> what is human-like won't play well with enslavement.

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/Ta5132cbd54dc7973-M2145ba93215b76e087c76ba3
Delivery options: https://agi.topicbox.com/groups/agi/subscription
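The reinforcement-learning point above can be made concrete. Below is a minimal toy sketch of my own (not from anyone's actual system): a tabular agent on a two-action problem where one action carries a negative signal and the other a positive one. After training, the agent reliably seeks the positive signal and avoids the negative one, which is exactly the behavioral test in question.

```python
import random

def train(steps=1000, alpha=0.1, epsilon=0.1, seed=0):
    """Toy two-action bandit: action 0 gives reward -1, action 1 gives +1.

    A tabular value estimate per action is updated incrementally;
    epsilon-greedy selection mostly exploits the current best estimate
    while occasionally exploring the other action.
    """
    rng = random.Random(seed)
    q = [0.0, 0.0]  # value estimate for each action
    for _ in range(steps):
        if rng.random() < epsilon:
            a = rng.randrange(2)          # explore
        else:
            a = 0 if q[0] > q[1] else 1   # exploit current estimate
        r = -1.0 if a == 0 else 1.0       # negative vs. positive signal
        q[a] += alpha * (r - q[a])        # move estimate toward reward
    return q

q = train()
print(q)  # q[1] climbs toward +1; q[0] stays at or below zero
```

Nothing here models feeling anything; the "avoidance" is just a learned value estimate, which is the point of the argument above.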
