Eric,

 

I apologize for what may seem sophomoric smarminess, but...

 

To me, you are a model, right?  Whatever you are, it is my model of you with 
which I am dealing.  So, when you intend something by a model, it is a case of 
a model intending a model, right?  So, models intend, right?  So why not just 
say so, in the first instance.

 

Nick 

 

Nicholas Thompson

Emeritus Professor of Ethology and Psychology

Clark University

thompnicks...@gmail.com

https://wordpress.clarku.edu/nthompson/

 

 

From: Friam <friam-boun...@redfish.com> On Behalf Of Eric Charles
Sent: Wednesday, January 15, 2020 1:27 PM
To: The Friday Morning Applied Complexity Coffee Group <friam@redfish.com>
Subject: Re: [FRIAM] description - explanation - metaphor - model - and reply

 

There is an interesting issue that often comes up in these contexts, in which 
someone asserts that the models mean something all on their own. If it is 
someone who has picked up our language, they might, for example, ask "What 
does the model intend? The Model, itself?"

 

Glen does this by saying "there's good reason to believe you will *never* 
actually understand how your model works."

 

I have seen Nick oscillate in those discussions, towards and away from thinking 
he needs to rewrite everything.  

 

I insist that is not the direction we should be going in. The model doesn't 
intend anything. A person, who is offering a model, intends something by it, 
and does not intend other things. Because THAT is what we are talking 
about.... There IS a chance (though no guarantee) that the person offering a 
model (fully) understands what they do or do not intend to match between the 
model and the situation that is modeled.

 

We aren't talking about anything other than people doing things. X is "a model" 
if/when someone thinks an aspect of X matches something happening somewhere 
else, and all models contain both intended and unintended implications. This 
makes the question of whether or not someone "fully understands their model" a 
question primarily about the understanding, not primarily about "the model 
itself".

 

On Wed, Jan 15, 2020, 1:13 PM uǝlƃ ☣ <geprope...@gmail.com> wrote:

Did Epstein ever respond to your criticism?

For what little it's worth, I disagree with your lesson. Obtuse models can be 
very useful. In fact, there's good reason to believe you will *never* actually 
understand how your model works, any more than you'll ever understand how that 
model's referent(s) work. I may even be able to use Peirce to argue that to
you. 8^)

On 1/15/20 9:23 AM, thompnicks...@gmail.com wrote:
> The lesson is, if you
> don’t understand how your model works, you aren’t doing yourself any favors 
> by inventing it.  This led to my war with Epstein in the pages of JSSS about 
> the relation between explanation and prediction.  

-- 
☣ uǝlƃ


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives back to 2003: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove
