If you are saying that the more AI acts like a person, the more people will 
believe they understand it, I totally agree. Whether that belief is true is a 
whole ‘nother matter.  If ever there were a cradle for manipulation, AI 
is it.  

 

N

 

 

Nicholas Thompson

Emeritus Professor of Ethology and Psychology

Clark University

 <mailto:[email protected]> [email protected]

https://wordpress.clarku.edu/nthompson/

 

 

From: Friam <[email protected]> On Behalf Of Prof David West
Sent: Tuesday, December 1, 2020 2:01 PM
To: [email protected]
Subject: Re: [FRIAM] New ways of understanding the world

 

Nick,

 

Everything I do in software is grounded in personification / 
anthropomorphization of objects - small bits of software. I would contend that 
this is the best way to understand and design such software. So I see no reason 
to avoid personification of AI software and would, in fact, argue that current 
approaches to designing an AI will fail precisely because they do not take that 
perspective.
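Dave's approach of personifying small bits of software might be sketched like this (a minimal illustration with invented names, not from any actual system of his): each object is treated as a little "person" with its own responsibility, and collaborators ask it questions rather than rummage through its data.

```python
# Hypothetical sketch of anthropomorphized object design: objects are small
# "persons" with responsibilities, and collaboration is asking, not inspecting.

class Librarian:
    """Knows what is on the shelf; you ask her, you don't rummage yourself."""
    def __init__(self):
        self._shelf = {"Moby Dick": True, "Walden": False}

    def is_available(self, title: str) -> bool:
        return self._shelf.get(title, False)


class Borrower:
    """Wants a book; delegates the knowing to the Librarian."""
    def __init__(self, librarian: Librarian):
        self.librarian = librarian

    def try_to_borrow(self, title: str) -> str:
        if self.librarian.is_available(title):
            return f"borrowed {title}"
        return f"{title} is not available"


reader = Borrower(Librarian())
print(reader.try_to_borrow("Moby Dick"))   # borrowed Moby Dick
print(reader.try_to_borrow("Walden"))      # Walden is not available
```

The design choice being illustrated: the Borrower never touches the Librarian's shelf dictionary directly, which is exactly the "treat objects as persons with responsibilities" stance.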

 

davew

 

 

On Tue, Dec 1, 2020, at 12:16 PM, [email protected] wrote:

“AI needs enough time to perform experiments to learn the consequences and 
meaning of different patterns of numbers”

 

Abused metaphor alert!!!!!  Haven’t you “personalized” AI?  It would seem to 
me that the one thing you are NOT allowed to do with AI is personalize it.

 

N

Nicholas Thompson

Emeritus Professor of Ethology and Psychology

Clark University

[email protected] <mailto:[email protected]> 

https://wordpress.clarku.edu/nthompson/

 

 

 

-----Original Message-----

From: Friam <[email protected] <mailto:[email protected]> > On 
Behalf Of Marcus Daniels

Sent: Tuesday, December 1, 2020 12:13 PM

To: The Friday Morning Applied Complexity Coffee Group <[email protected]>

Subject: Re: [FRIAM] New ways of understanding the world

 

Map Nick's list of numbers to a spatiotemporal snapshot of the physical world.  
The dog and the human have both learned how to learn about it.  Whether it took 
1 year, 8000 years, or 2.7 billion years sort of doesn’t matter in the argument 
except that the new AI needs enough time to perform experiments to learn the 
consequences and meaning of different patterns of numbers.   If the list of 
numbers describes every possible action that the AI could take and how that 
particular path would be recorded, then any given experiment could in principle 
be encapsulated in a single set of numbers; it is just a matter of which cells 
in the hyperspace the AI decides to look at.
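Marcus's picture of experiments as cells in a numeric hyperspace might be sketched with a toy example (the dimensions, payoffs, and names here are entirely hypothetical): the world is a table indexed by (state, action), an "experiment" is choosing one cell to look at, and the AI's learning is just its accumulated record of visited cells.

```python
import random

# Toy sketch: the world is a table of numbers indexed by (state, action).
# Running an experiment = picking one cell and recording the consequence.

random.seed(0)
STATES, ACTIONS = 4, 3

# Ground truth the agent does not know: the consequence of each (state, action).
world = {(s, a): (s * ACTIONS + a) % 5
         for s in range(STATES) for a in range(ACTIONS)}

learned = {}                      # the agent's accreted knowledge
for _ in range(50):               # "enough time to perform experiments"
    cell = (random.randrange(STATES), random.randrange(ACTIONS))
    learned[cell] = world[cell]   # observe the consequence of that path

coverage = len(learned) / len(world)
print(f"explored {coverage:.0%} of the hyperspace")
```

The point of the sketch: nothing about the world changes during learning; the agent simply decides which cells of the fixed hyperspace to sample, as in Marcus's framing.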

 

-----Original Message-----

From: Friam <[email protected]> On Behalf Of uǝlƃ

Sent: Tuesday, December 1, 2020 9:54 AM

To:  <mailto:[email protected]> [email protected]

Subject: Re: [FRIAM] New ways of understanding the world

 

Well, as I've tried to make clear, machines can *accrete* their machinery. I 
think this is essentially arguing for "genetic memory", the idea that there's a 
balance between scales of learning rates. What your dog learns after its birth 
is different from what it "knew" at its birth. I'm fine with tossing the word 
"theory" for this accreted build-up of inferences/outcomes/state. But it's as 
good a word as any other.

 

I suspect that there are some animals, like humans, born with FPGA-like 
learning structures so that their machinery accretes more after birth than 
other animals. And that there are some animals born with more of that machinery 
already built-in. And it's not a simple topic. Things like retractable claws 
are peculiar machinery that kindasorta *requires* one to think in terms of 
clawing, whereas our more rounded fingernails facilitate both clawing and, say, 
unscrewing flat head screws.

 

But this accreted machinery is *there*, no matter how much we want to argue 
where it came from. And it will be there for any given AI as well. Call it 
whatever you feel comfortable with.

 

On 12/1/20 9:39 AM, Marcus Daniels wrote:

> Dogs and humans share 84% of their DNA, so that almost sounds plausible on 
> the face of it.  However, humans have about 16 billion neurons in the 
> cerebral cortex, but the whole human genome is only about 3 billion base 
> pairs, and only about 30 million of those base pairs code for proteins.   
> This seems to me to say that learning is more important than inheritance of 
> "theories", if you must insist on using that word.
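A back-of-envelope calculation supports Marcus's point, using rounded public figures from the quote above plus one assumption of my own (~10,000 synapses per neuron, a commonly cited order of magnitude): the genome is far too small to specify the cortex wiring cell by cell, so most of the structure must be learned or generated by rule.

```python
# Back-of-envelope: genome information capacity vs. cortical wiring.
# Figures are rounded orders of magnitude, not measurements.

base_pairs = 3e9                 # human genome, ~3 billion base pairs
bits_per_bp = 2                  # A/C/G/T -> 2 bits each
genome_bits = base_pairs * bits_per_bp     # ~6e9 bits, under a gigabyte

neurons = 16e9                   # cerebral cortex, ~16 billion neurons
synapses_per_neuron = 1e4        # assumed order of magnitude
synapses = neurons * synapses_per_neuron   # ~1.6e14 connections

# Even at one bit per synapse, the wiring dwarfs the genome's capacity.
ratio = synapses / genome_bits
print(f"connections outnumber genome bits by roughly {ratio:,.0f}x")
```

Under these assumptions the wiring exceeds the genome's raw capacity by four orders of magnitude, which is the quantitative core of the "learning over inherited theories" argument.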

 

 

--

↙↙↙ uǝlƃ

 

- .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .

FRIAM Applied Complexity Group listserv

Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam

un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com

archives: http://friam.471366.n2.nabble.com/

FRIAM-COMIC http://friam-comic.blogspot.com/


 

 
