The approach I'm taking is in response to the failure of The Great And The
Good to pay appropriate attention to Hume's Guillotine
<https://github.com/jabowery/HumesGuillotine> so as to discipline
macrosociology.  You rightly point out that there are more immediate
concerns -- macrosocial concerns -- that best qualify for "AGI safety".
The Great And The Good don't want us to pay attention to Hume's Guillotine
because it would expose their role in the destruction of humanity.  So they
divert our attention to pseudo-issues about "AGI Safety".  I'm just trying
to redirect that concern back to the concerns you raise -- but doing so in
such a manner that they can't bullshit their way out of the TRUTH.

Your approach is correct in that you, in essence, posit the global economy
as an unfriendly AGI that is turking us: turning humans into functional
units with only incidental regard for our wellbeing -- unless you define
"wellbeing" as "giving us everything we want" without regard to future
utility.  Musk goes on and on about TFR collapse, but of course a
sustainable TFR is not something that "we want" in the sense you use that
phrase.

Our preferences are subject to gene expression, which is itself subject to
what people increasingly, and inadequately, characterize as "epigenetics".
The more accurate term is "extended phenotypics".  There are a variety of
ways that the unfriendly global AGI disrupts gene expression during the
neurophysiological development of humans, rendering them, in effect,
sterile workers in a nascent eusocial hive.  This corresponds to the way a
hive's queen parasitically castrates her own children -- at least the vast
majority of them -- as part of making the hive into her extended
phenotype.  However, we "bug people" are, for the most part, also castrated
*mentally*, so that we can't even *perceive* what is being done to us by
the unfriendly AGI's non-aligned utility function.

But all of the above is for naught, because we've all been palavering at
each other -- and by that I mean all of humanity, and more significantly
the sociologists who are, virtually to a person, practicing
pseudo-sociology.

On Thu, Jan 9, 2025 at 12:27 PM Matt Mahoney <mattmahone...@gmail.com>
wrote:

> It is difficult to make any arguments about AI safety when people don't
> agree on what the risks are. LLMs were trained on these conflicting
> opinions, so it is not surprising that you don't get a definitive answer.
> It is just predicting how people will answer. When people don't know, they
> make stuff up to sound smarter.
>
> I think that the immediate risk is social isolation and population
> collapse because we will prefer AI friends and AI lovers to humans because
> they are always available and agreeable. If an AI gives us everything we
> want, then we will at least have the illusion of control while it controls
> us by positive reinforcement.
>
> I don't think that an unaligned singularity is a risk because intelligence
> is not a point on a line. There is no threshold where AI surpasses human
> intelligence and rapidly starts improving itself. AI is a model that
> predicts human behavior, not a goal directed optimization process. We will
> continue to see slow progress over the next century.
>
> I also don't think that AI will replace our jobs. Automation makes stuff
> cheaper, leaving more money to spend on other stuff. That spending creates
> new jobs. Technology makes us more productive and increases wages. This
> article makes the same argument and notes that labor has consistently
> stayed at 50-60% of GDP for the last 200 years.
> https://www.maximum-progress.com/p/agi-will-not-make-labor-worthless
>

Percent of GDP is the wrong metric.  The right metric is Total Fertility
Rate.  TFR is driven more by median income per household.  But even then,
the calculation of median income per household is problematic -- take, for
example, the counties around Washington DC.

>
> Self replicating nanotechnology is a distant risk. If global computing
> capacity doubles every 3 years from the current 10^24 bits of storage then
> it will take 130 years to surpass the 10^37 bits stored in the biosphere as
> DNA.
>
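
A quick sanity check of that arithmetic in Python -- the 10^24 and 10^37
figures are from your paragraph above, and the clean 3-year doubling is
just an assumption for illustration:

    import math

    current_bits = 1e24       # global computing storage today (your figure)
    biosphere_bits = 1e37     # DNA stored in the biosphere (your figure)
    doubling_years = 3        # assumed constant doubling period

    # doublings needed to close the gap, then convert to years
    doublings = math.log2(biosphere_bits / current_bits)   # ~43.2
    years = doublings * doubling_years                      # ~130
    print(f"{doublings:.1f} doublings, about {years:.0f} years")

So the ~130 year figure checks out, modulo the constant-doubling assumption.
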
> I asked ChatGPT some questions about AI and population collapse but I
> still got long non-answers like you did. It predicts 9-11 billion people in
> 2100 and 7-9 billion in 2200, but it is just citing UN projections.
>
> On Tue, Jan 7, 2025, 4:03 PM James Bowery <jabow...@gmail.com> wrote:
>
>> Chat GPT o1
>> <https://chatgpt.com/share/677d8440-83ac-8007-aa9f-5f7a09823331> and Gemini
>> 2.0 Experimental Advanced
>> <https://docs.google.com/document/d/1cu_LcVjA_jtSX8zhU_Dr3RMWn-tAFiIWXZvpKXLYH5w/edit?usp=sharing>
>> trying to pretend to be a "truth seeking ASI" answering questions about ASI
>> "safety" under the constraint of Wolpert's theorem.
>>
>> Not quite the Dunning-Kruger effect, since they both enter into it
>> "understanding" that they are merely pretending to be ASIs.
>>
