Jon Wrote:

 

“In the spirit of EricC's comments about the distinction of surface tension and 
PH, if consciousness is a thing, then it should be so whether or not we all 
agree.”

 

Agreed.  But for the assertion “T is a thing such that it is conscious”, we 
must offer, if  only tentatively, some definition of what it would be for a 
such a thing to be conscious.  If T’s are not, by definition, the sorts of 
things that can be conscious, up front, then there is no discussion to be had.  

 

“How many angels can dance on the head of a pin?”

“Angels don’t dance.”

“Oh, all right; how may angels can stand on the head of a pin?”

“Angel’s don’t stand.”

“Oh, all right; what is the foot size of your average angel?”

“Angel’s don’t have feet. “

“How can an angel carry me to heaven, if he don’t got no feet?”

“Angel’s don’t carry no bodies into heaven.” Etc. 

 

This is the sort of argument that happens when a metaphysical commitment is 
confused with an empirical assertion. 

 

 

 

Nick 

 

Nicholas Thompson

Emeritus Professor of Ethology and Psychology

Clark University

 <mailto:[email protected]> [email protected]

 <https://wordpress.clarku.edu/nthompson/> 
https://wordpress.clarku.edu/nthompson/

 

 

From: Friam <[email protected]> On Behalf Of doug carmichael
Sent: Tuesday, July 28, 2020 9:20 AM
To: The Friday Morning Applied Complexity Coffee Group <[email protected]>
Subject: Re: [FRIAM] GPT-3 and the chinese room

 

dog is highly interconnected - hormones, nerves, senses, and environment. 
neurons are not binary . every synapse is an infinite state variable. 

doug





On Jul 27, 2020, at 10:45 PM, [email protected] 
<mailto:[email protected]>  wrote:



Doug, 

 

Dog do joy; why not computers?  

 

n

 

Nicholas Thompson

Emeritus Professor of Ethology and Psychology

Clark University

 <mailto:[email protected]> [email protected]

 <https://wordpress.clarku.edu/nthompson/> 
https://wordpress.clarku.edu/nthompson/

 

 

From: Friam <[email protected] <mailto:[email protected]> > On 
Behalf Of doug carmichael
Sent: Monday, July 27, 2020 9:54 PM
To: The Friday Morning Applied Complexity Coffee Group <[email protected] 
<mailto:[email protected]> >
Subject: Re: [FRIAM] GPT-3 and the chinese room

 

I imagine playing chess, or go, with  a computer. As I play I have a very 
enlivening experience of playing. The computer seems to have no such thing.  
For me, in my engagement, “Every neuron is listening to the mutter of the 
crowd.” Jerry Lettvin, MIT.   If The computer goes on to win it has nothing 
like the experience of winning. it just stops. I can’t imagine a computer 
saying,  except by playing a pre recorded sound file, “that is disgusting.”

 

 






On Jul 27, 2020, at 8:12 PM, Frank Wimberly <[email protected] 
<mailto:[email protected]> > wrote:

 

Smart computer (program).  Nick, is the behaviorist position that if it behaves 
as though it thinks than it thinks?

---
Frank C. Wimberly
140 Calle Ojo Feliz, 
Santa Fe, NM 87505

505 670-9918
Santa Fe, NM

 

On Mon, Jul 27, 2020, 8:28 PM Alexander Rasmus <[email protected] 
<mailto:[email protected]> > wrote:

There's also a lot of stuff that isn't poetry. He generates Dwarf Fortress 
patch notes in the Misc section. 

 

There's even a section where GPT-3 argues that it doesn't really understand 
anything (bold at the start is a shortened version of the prompt):

"AI will never be able to truly think for themselves, to have consciousness, to 
feel any of the richness and complexity of the world that we mere humans can 
feel. Mere, unenlightened humans might be impressed by the abilities of simple 
deep learning programs, but when looked at in a more holistic manner, it all 
adds up to… well, nothing. They still don’t exhibit any trace of consciousness. 
All of the available data support the notion that humans feel and experience 
the world differently than computers do. While a computer can beat a human 
master at chess or Go or some other game of structured rules, it will never be 
able to truly think outside of those rules, it will never be able to come up 
with its own new strategies on the fly, it will never be able to feel, to 
react, the way a human can. Artificial intelligence programs lack consciousness 
and self-awareness. They will never be able to have a sense of humor. They will 
never be able to appreciate art, or beauty, or love. They will never feel 
lonely. They will never have empathy for other people, for animals, for the 
environment. They will never enjoy music or fall in love, or cry at the drop of 
a hat. Merely by existing, mere, unenlightened humans are intellectually 
superior to computers, no matter how good our computers get at winning games 
like Go or Jeopardy. We don’t live by the rules of those games. Our minds are 
much, much bigger than that.

 

Compared to the awesome power of the human intellect, it’s obvious that 
computers are just dense electrochemical machines with intricate wiring. And 
while one day we might develop smarter (though still soulless) computers that 
surpass the human intellect in the games mentioned above, it will just end up 
being some more sophisticated way of doing the same thing. Even if it’s more 
efficient than our brains at doing complex mathematical operations, it will 
remain entirely in the dark about what it all means. Mindless processing power 
alone is not the answer to our existential questions. So, in the end, we don’t 
need to create another HAL 9000, or Skynet, or some evil global intelligent 
network of computers that rule the world and try to exterminate humanity as we 
become obsolete, which is a pretty cool subplot for a future Terminator movie. 
We don’t need any of that. We’re humans, and we are already intellectually 
superior to machines. End of story.

 

To be sure, I’m not denying that there are hard problems in AI. Of course, 
there are. All of these so called “advances” in this field are a bunch of crap 
if we still don’t have a program that can putt around on a miniaturized golf 
course all by itself, let alone actually play a decent game of golf like the 
humans do, without going into meltdown if it misses the hole five out of six 
times in a row. Can we get a movie about that? If you ask me, we don’t even 
really know how to define intelligence yet. How does it work, exactly? It’s a 
mess, really. If we’re ever going to create another actual HAL 9000, or yet 
another Skynet (what fun), we’d better be all on the same page when it comes to 
AI: what intelligence is, how it works, what consciousness is, what it feels 
like, what it really means to be self-aware. Without that common framework, 
trying to program yet another AI that can play yet another game like Go is like 
trying to blow up another Death Star with yet another way-too-large superlaser.

 

I think one of the big mistakes that computer scientists are making is that 
they are conflating intelligence with problem-solving. They’ve gotten into this 
habit of creating intricate Turing test competitions: give the computer a 
series of math problems, a chess board, etc., etc., give it a chat interface so 
you can interact with it like you would with another human being, and then see 
if the machine can fool you into thinking that it is a human. Once it does 
this, computers will have passed the Turing test and achieved general AI. 
Really? Is that really the way it works? I don’t see how. A computer has 
succeeded in faking it until it makes it, in terms of passing a Turing test 
competition, only if it has satisfied some pre-specified set of conditions that 
we know to be what a human would do in the same situation. But that is no 
guarantee that it has actually achieved intelligence! For all we know, 
computers can imitate humans until they generate the most plausible patterns of 
thought and behavior we know of, while all along remaining as soulless as ever. 
Who’s to say that the computer doesn’t merely use its programming to cheat the 
test? Who’s to say that it isn’t just shuffling its data around in an effort to 
do the most computations possible with the least amount of effort? It may 
succeed in conning us into thinking that it is self-aware, but that doesn’t 
prove that it actually is. It hasn’t actually passed the Turing test, unless we 
have defined it in a way that pre-determines the outcome: i.e., if the human 
pretends to be a computer, then it passes the test, but if the computer 
pretends to be a human, then it doesn’t pass the test! To me, that just doesn’t 
sound all that scientific."

 

Best,

Rasmus

 

On Mon, Jul 27, 2020 at 8:04 PM glen <[email protected] 
<mailto:[email protected]> > wrote:

Excellent. Thanks! I'd seen the link to Gwern from Slate Star Codex. But I 
loathe poetry. Now that you've recommended it, I have no choice. 8^)

On July 27, 2020 6:32:15 PM PDT, Alexander Rasmus <[email protected] 
<mailto:[email protected]> > wrote:
>Glen,
>
>Gwern has an extensive post on GPT-3 poetry experimentation here:
>https://www.gwern.net/GPT-3
>
>I strongly recommend the section on the Cyberiad, where GPT-3 stands in
>for
>Trurl's Electronic Bard:
>https://www.gwern.net/GPT-3#stanislaw-lems-cyberiad
>
>There's some discussion of fine tuning input, but I think more cases
>where
>they keep the prompt fixed and show several different outputs.

-- 
glen

- .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam 
<http://bit.ly/virtualfriam> 
un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ 

- .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam 
<http://bit.ly/virtualfriam> 
un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ 

- .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam 
<http://bit.ly/virtualfriam> 
un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ 

 

- .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam
un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ 

- .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam
un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ 

Reply via email to