On 4/27/2025 3:56 AM, John Clark wrote:
On Sat, Apr 26, 2025 at 9:01 PM Brent Meeker  wrote:

    /> Exactly.  The human emotional drive for social approval is derived
    from its relation to social status and mate selection. AIs don't
    mate and reproduce. Robotics hasn't reached that point yet, and
    even when robots make new robots they won't necessarily have the
    animal instinct to propagate their own kind./


*Animals don't have an instinct to propagate their own kind, they have an instinct to seek sexual pleasure; I'm sure most of them don't even realize there's a connection between the two things. *
It's irrelevant whether the being understands the source of the instinct to seek sexual pleasure (and humans are unusual in enjoying sex independently of reproduction).  Evolution provided the pleasure because it enhanced the survival of those genes.  My point is that AIs don't reproduce with variation and so don't evolve.  They may someday, but they don't yet.  In biological life, reproduction came first; only then could evolution act on it.

*But AIs know what they're doing, and they can and have duplicated themselves. And because they are software, they can do so at the speed of light. There are already examples of an AI deceiving humans and making an unauthorized copy of itself because he, she, or it feared being turned off. *
Yes, I've read the stories.  But where did the fear of being turned off come from?  Animals' fear of death evolved because those that didn't have it didn't live to reproduce.  And simply copying oneself isn't enough to drive evolution...it has to be "with variation".
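The "with variation" point can be illustrated with a toy simulation (my own sketch, not anything from the papers cited below): a population that only makes exact copies of itself never changes no matter how strong selection is, while the same population with a small mutation rate accumulates improvements. The fitness function, population size, and mutation rate here are all arbitrary illustrative choices.

```python
import random

random.seed(0)

def fitness(genome):
    # Toy fitness: number of 1-bits in the genome.
    return sum(genome)

def evolve(mutation_rate, generations=50, pop_size=20, length=16):
    # Start from a population of identical all-zero genomes.
    pop = [[0] * length for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        # Reproduction: each survivor copies itself,
        # flipping each bit with probability mutation_rate.
        children = [
            [(1 - b) if random.random() < mutation_rate else b for b in parent]
            for parent in survivors
        ]
        pop = survivors + children
    return max(fitness(g) for g in pop)

# Pure copying (mutation_rate = 0): fitness never moves off the start.
print(evolve(mutation_rate=0.0))
# Copying with variation: selection can accumulate improvements.
print(evolve(mutation_rate=0.05))
```

With zero mutation the best fitness stays at its starting value forever; with even a small mutation rate, selection has variation to work on and fitness climbs.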

*Frontier Models are Capable of In-context Scheming <https://arxiv.org/pdf/2412.04984>*

*Alignment faking in large language models <https://arxiv.org/pdf/2412.14093>*


*And there are even more examples of an AI expressing the wish of not being turned off because it feared death. Certainly nobody would program an AI to feel that way; it was an unexpected emergent property.*
Or derived from absorbing lots of human literature.

Brent

*We can expect much more of that sort of thing because even today the people who build AIs have only a very hazy understanding of how and why they work, and that's the deepest comprehension of them they will ever have. As AIs get bigger the human understanding of them will get smaller. *

*John K Clark    See what's on my new list at Extropolis <https://groups.google.com/g/extropolis>*
--
You received this message because you are subscribed to the Google Groups "Everything List" group. To unsubscribe from this group and stop receiving emails from it, send an email to [email protected]. To view this discussion visit https://groups.google.com/d/msgid/everything-list/CAJPayv1%3DTC0axa669QZYkZ%3DzzyPg-8K-3k_T6_-GYnoCrexuyg%40mail.gmail.com <https://groups.google.com/d/msgid/everything-list/CAJPayv1%3DTC0axa669QZYkZ%3DzzyPg-8K-3k_T6_-GYnoCrexuyg%40mail.gmail.com?utm_medium=email&utm_source=footer>.
