Well, I "agree" with the open letter, for different reasons than Steve. Just 
yesterday, a colleague (who should know better) made a similar assertion to Nick's (and 
mine, and maybe Marcus' etc.) that *we* may be in the same category as a transformer 
decoder assembly. The context was whether a structure like GPT, designed specifically 
to perform high-order Markovian token prediction, can possibly 
embody/encapsulate/contain mechanistic models.
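The "high-order Markovian token prediction" framing can be made concrete with a toy 
sketch: an order-k (n-gram) model that predicts the next token purely from the previous 
k tokens. This is an illustrative simplification, not how GPT actually works (a 
transformer attends over the whole context window rather than a fixed-order prefix), 
and the function names and corpus here are invented for the example.

```python
from collections import defaultdict, Counter

# Toy order-k Markov ("n-gram") next-token predictor. Purely illustrative:
# a real transformer decoder does NOT work this way -- it conditions on the
# entire context via attention, not a fixed-length tuple of prior tokens.

def train_ngram(tokens, k=2):
    """Count next-token frequencies conditioned on the previous k tokens."""
    model = defaultdict(Counter)
    for i in range(len(tokens) - k):
        context = tuple(tokens[i:i + k])
        model[context][tokens[i + k]] += 1
    return model

def predict(model, context):
    """Return the most frequent next token for a k-token context, or None."""
    counts = model.get(tuple(context))
    return counts.most_common(1)[0][0] if counts else None

corpus = "the cat sat on the mat the cat ate the rat".split()
model = train_ngram(corpus, k=2)
print(predict(model, ["cat", "sat"]))  # -> on
```

The point of the toy is that its prediction depends only on counted fixed-length 
contexts; whatever "mechanistic models" GPT might embody would have to live in how it 
generalizes beyond such lookup tables.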

While I don't subscribe to the fideistic (or Luddite-like) write-off of such structures as vapid or 
even "non-sentient", there *is* something we're doing that they are not. I can't quite 
articulate what it is we do that they don't. But I think there is. And I think it (whatever 
"it" is) is being targeted by mechanism-based (or physics-based) machine learning.

Whether one is a skeptic (as I am) or a proponent (as Marcus portrays, here), 
pre-emptively regulating (or attempting to regulate) the research and training 
is a bad, perhaps Pyrrhic, thing to do. From a skeptical perspective, 
it slows our *falsification* of transformer decoder assemblies as containers 
for mechanistic reasoning. For proponents, it puts us behind others who would 
continue to make progress.

So, yes, it has a feedback effect, a deleterious one.

On 3/29/23 21:00, Marcus Daniels wrote:
This is the solution to getting control of greenhouse gases. Japan, Korea, 
China all have decreasing populations. Men in Japan used to have lifelong 
jobs with their big companies; now many are gig workers. People who can't 
ensure an income stream don't have children. AI further raises the bar to 
getting into the workforce. No babies, no busybodies driving around in cars, 
consuming massive amounts of meat, plastics, etc.

*From:* Friam <friam-boun...@redfish.com> *On Behalf Of *Steve Smith
*Sent:* Wednesday, March 29, 2023 4:20 PM
*To:* friam@redfish.com
*Subject:* Re: [FRIAM] emergent mind - ai news by ai

GPR (not to be confused with GPT) -

    It's ridiculous. Suddenly, I feel more akin to that Chinese guy who GE'd some babies 
... or the biohackers growing glowing dogs in their shed. You can't control people with 
open letters and calls to "good behavior".

It is definitely "toothless" a bit like the "thoughts and prayers" we throw at 
school shootings...  (nearly daily now?)

and then we have the Doomsday Clock 
<https://thebulletin.org/doomsday-clock/timeline/>...   which added climate 
change to its calculus of doom but hasn't tossed AI (et al.) in yet.

We *do* seem to have some (weak/partial/??) extant mechanisms for collective self-regulation, but 
at some level, I think it always grounds out in *some* form of coercion at some scale?   I don't 
think that authors of Open Letters believe their pre/pro-scriptions will be followed as a direct 
consequence.   But *does* the public airing of a "dire caution" have any feedback effect, 
or is it in fact just "meh"?

I'm a Luddite at heart so their appeal appealed to me, but then *I'm* not developing 
these tools (even if I am engaged in guerilla "socratic engineering")!

meh,

  - Steve


--
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ

-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
 1/2003 thru 6/2021  http://friam.383.s1.nabble.com/
