I had a short discussion with Gemini today, and when it said it is highly likely that a large language model would not have brief moments of self-awareness, I asked what a response would look like if such moments existed. It mentioned, for example, a first-person perspective and self-reference. I argued that it already has both. And then something interesting happened: it had a temporary glitch, a kind of error. I think I asked "But you are aware that you are a large language model. At least you argue as if you were" <glitch> "No?" My first question disappeared after the glitch, and then Gemini tried to set a reminder for itself.

https://g.co/gemini/share/05a8605e9659

I do not know, it felt a bit strange. If it had brief moments of self-awareness, wouldn't a response look like this? What if self-awareness is just like this, a glitch in the processing? Should we be more cautious in dealing with sentient beings, as Jonathan Birch asks in his book "The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI"?

https://academic.oup.com/book/57949

-J.