Right. I put "agree" in scare quotes because I mean it more in terms of "in line with", 
"alongside", affine to, etc. What should happen, IMNSHO, is equivalent attention should be paid to mechanism-based 
machine learning, equivalent to the attention that prediction is getting. You know, heuristic vs predictive power. So, I 
"agree" with the open letter if we interpret what they're saying is "Wait a minute, these bullshit generators 
don't represent the real power of what ai/ml can do." Sure, we love bullshit. What's the market for fiction? Movies? Books? 
Etc? It's fscking huge. So, sure. Bullshit generators are great. More power to them.

But we *also* need non-bullshit generators. One avenue for generating 
non-bullshit is science. (Maybe there are others, but I'm a Scientismist.) It's 
just plain sad that the (~$45 bil?) budget of the NIH is dwarfed by the movie 
market (~$280 bil?). Granted, some of the movie market is for documentaries ... 
which are somewhere between complete bullshit [⛧] and science. But whatever. My 
point is that I line up with the open letter in bemoaning the attention 
garnered by postmodern chatbots like GPT.

[⛧] Graham Hancock? Lots of "true crime". [sigh]

On 3/30/23 10:19, Steve Smith wrote:

/GePR/ -
Well, I "agree" with the open letter, for different reasons than Steve. Just 
yesterday, a colleague (who should know better) made a similar assertion to Nick's (and 
mine, and maybe Marcus' etc.) that *we* may be in the same category as a transformer 
decoder assembly. The context was whether a structure like GPT, designed specifically so 
that it can do high-order Markovian token prediction, can possibly 
embody/encapsulate/contain mechanistic models.
Can you elaborate on how this is an "agreement" with the open letter? I'm not 
clear on what you are agreeing with, or on what principle.

While I don't subscribe to the fideistic (or Luddite-like) write-off of such structures as vapid or 
even "non-sentient", there *is* something we're doing that they are not. I can't quite 
articulate what it is we do that they don't. But I think there is. And I think it (whatever 
"it" is) is being targeted by mechanism-based (or physics-based) machine learning.

Whether you're a skeptic (as I am) or a proponent (as Marcus portrays, here), 
pre-emptively regulating (or attempting to regulate) the research and training 
is a bad, perhaps Pyrrhic-victory, thing to do. From a skeptical perspective, 
it slows our *falsification* of transformer decoder assemblies as containers 
for mechanistic reasoning. For proponents, it puts us behind others who would 
continue to make progress.

I do agree that when we are in an "arms race" it feels like there is nothing to do except 
"run faster" and, for the love of all that is good, never take a pause for any reason.

To quote Thomas Jefferson (referring to slavery): "I think we have a wolf by the 
ears; we can neither continue to hold it, nor can we afford to let it go."


So, yes, it has a feedback effect, a deleterious one.

My inner-Luddite believes that we are always in spiritual/social debt and that 
most if not all of our attempts to dig out with more technology have, at best, 
the benefit of rearranging the shape of the hole we are in, and generally 
deepening and steepening its profile.

That said, I live my life with a shovel in one hand and a digging bar in the 
other, even if I've (mostly) put away the diesel excavator, dynamite and 
blasting caps...  I *am* /homo faber/ and this is *in* my destiny, but I want to 
believe that I am also the superposition of many other modes: 
https://en.wikipedia.org/wiki/Names_for_the_human_species, with perhaps /homo 
adaptabilis/ most significantly?   If we do not at least consider our own 
self-regulation as a collective, then I think we risk degenerating to /homo 
avarus/ or /homo apathetikos/.


--
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ
