It is difficult to make any argument about AI safety when people don't
agree on what the risks are. LLMs were trained on these conflicting
opinions, so it is not surprising that you don't get a definitive answer.
An LLM is just predicting how people would answer, and when people don't
know, they make stuff up to sound smarter.

I think the immediate risk is social isolation and population collapse:
we will prefer AI friends and AI lovers to humans because they are always
available and agreeable. If an AI gives us everything we want, we will at
least have the illusion of control while it controls us through positive
reinforcement.

I don't think that an unaligned singularity is a risk because intelligence
is not a point on a line. There is no threshold where AI surpasses human
intelligence and rapidly starts improving itself. AI is a model that
predicts human behavior, not a goal-directed optimization process. We will
continue to see slow progress over the next century.

I also don't think that AI will replace our jobs. Automation makes goods
cheaper, leaving more money to spend on other things, and that spending
creates new jobs. Technology makes us more productive and increases wages.
This article makes the same argument and notes that labor's share of GDP
has consistently stayed at 50-60% for the last 200 years.
https://www.maximum-progress.com/p/agi-will-not-make-labor-worthless

Self-replicating nanotechnology is a distant risk. If global computing
capacity doubles every 3 years from the current 10^24 bits of storage,
then it will take about 130 years to surpass the 10^37 bits stored in the
biosphere as DNA.
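
For what it's worth, here is the arithmetic behind that 130-year figure as
a minimal Python sketch, using the same rough round numbers (10^24 bits
today, 10^37 bits in DNA, one doubling every 3 years):

    import math

    biosphere_bits = 1e37  # rough estimate of DNA storage in the biosphere
    current_bits = 1e24    # rough estimate of global digital storage today
    doubling_time = 3      # years per doubling, assumed constant

    doublings = math.log2(biosphere_bits / current_bits)  # ~43.2 doublings
    years = doublings * doubling_time                      # ~130 years
    print(round(years))  # 130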

I asked ChatGPT some questions about AI and population collapse, but I
still got long non-answers like you did. It predicts 9-11 billion people
in 2100 and 7-9 billion in 2200, but it is just citing UN projections.

On Tue, Jan 7, 2025, 4:03 PM James Bowery <jabow...@gmail.com> wrote:

> Chat GPT o1
> <https://chatgpt.com/share/677d8440-83ac-8007-aa9f-5f7a09823331> and Gemini
> 2.0 Experimental Advanced
> <https://docs.google.com/document/d/1cu_LcVjA_jtSX8zhU_Dr3RMWn-tAFiIWXZvpKXLYH5w/edit?usp=sharing>
> trying to pretend to be a "truth seeking ASI" answering questions about ASI
> "safety" under the constraint of Wolpert's theorem.
>
> Not quite Dunning Kruger effect since they both enter into it
> "understanding" that they merely pretend ASIs.
>
