Not clear why one excludes the other. Experience helps to estimate whether a particular system will yield better to careful study or to chaotic perturbation. Witness the success of stochastic gradient descent in machine learning.
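
For instance, here's a minimal sketch (the least-squares setup, the constants, and the numpy dependency are all my own invention, nothing canonical): SGD recovers hidden structure by repeated noisy nudges, with no careful analysis of the system at all.

    # Minimal sketch: synthetic least-squares problem, fit by stochastic
    # gradient descent. All constants here are made up for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))             # random inputs
    w_true = rng.normal(size=5)                # hidden structure, never inspected
    y = X @ w_true + 0.1 * rng.normal(size=1000)

    w = np.zeros(5)                            # start with no model at all
    lr, batch = 0.05, 32
    for _ in range(2000):
        i = rng.integers(0, len(X), size=batch)          # random minibatch: the "chaos"
        grad = 2.0 * X[i].T @ (X[i] @ w - y[i]) / batch  # noisy gradient estimate
        w -= lr * grad                                   # small perturbation toward lower loss

    print(np.linalg.norm(w - w_true))          # should be small: effective, no careful study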

On 7/29/19, 11:57 PM, "Friam on behalf of glen∈ℂ" <friam-boun...@redfish.com 
on behalf of geprope...@gmail.com> wrote:

    It's in these crevices of the discussion that it becomes obvious that the 
"painted surface" analogy [†] fails completely. This is why Hoffman's 
"interface" theory is so attractive. The same core point is made, just with 
more explanatory power. It's analogous to how we play video games. Good 
examples are fighting games where you push buttons in different sequences to 
make the (physics-based) avatar do things in its environment. Our mental map of 
the controller interface has literally no similarity to the "physical" 
realities inside the game. But it *works*. You can control the avatar even 
though the control surface is nothing like the physics it's controlling.
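    A toy sketch of that decoupling (every name and button combo below is invented; no real game works exactly this way): the control surface is nothing but an arbitrary lookup from button sequences to commands, bearing no resemblance to the physics it drives.

        # Toy sketch, all names hypothetical. The mapping from buttons to
        # actions is arbitrary, yet it controls the "physics" just fine.
        class Avatar:
            """Stand-in for the game's internal, physics-based state."""
            def __init__(self):
                self.x, self.vx = 0.0, 0.0
            def apply_impulse(self, dvx):      # the physics the player never sees
                self.vx += dvx
            def step(self, dt=0.016):
                self.x += self.vx * dt

        CONTROLS = {                           # the interface: arbitrary but effective
            ("down", "right", "punch"): lambda a: a.apply_impulse(+5.0),
            ("down", "left", "punch"):  lambda a: a.apply_impulse(-5.0),
        }

        avatar = Avatar()
        CONTROLS[("down", "right", "punch")](avatar)   # push buttons...
        avatar.step()
        print(avatar.x)                                # ...the avatar moves anyway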
    
    As Hoffman points out in one of his papers, it can even be *bad* to have an accurate understanding of the controlled system. Competent players don't get hung up on, e.g., whether the rapier or broadsword their avatar wields reflects the real thing. They simply (randomly?) try lots of gameplay techniques and "git gud". What's being selected for is not a good/true/real mapping, but an effective mapping.
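    That selection loop is easy to caricature in code (the payoff function and all the numbers are made up): candidates are kept or discarded purely on how well they score, never on whether they model the hidden mechanics.

        # Sketch, assumptions mine: blind trial of techniques, keeping whatever
        # scores best. Selection sees only payoff, never the "true" mechanics.
        import random

        random.seed(1)
        HIDDEN_SWEET_SPOT = 7                  # the game's real mechanics, opaque to the player

        def payoff(technique):                 # effectiveness is all that's ever observed
            return -abs(technique - HIDDEN_SWEET_SPOT)

        best, best_score = None, float("-inf")
        for _ in range(100):                   # "git gud" by blind trial
            candidate = random.randint(0, 20)
            if payoff(candidate) > best_score:
                best, best_score = candidate, payoff(candidate)

        print(best, best_score)                # an effective mapping, no true model required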
    
    cf. Here's a video I haven't had the chance to watch yet: 
https://iai.tv/video/the-reality-illusion
    FWIW, I find Hoffman's style and attitude in both his writing and 
presentation very off-putting. But I really like the fundamental idea.
    
    [†] Which I first heard of from D. Dennett, I think, where our mind is projecting a movie onto an opaque screen. The world is projecting an image onto the other side of the screen. And there's some functionality to the screen so that when the two projections *match*, there's some feedback to both. When the projections are too different, then perhaps there's negative feedback.
    
    On 7/29/19 11:28 AM, Marcus Daniels wrote:
    > Steve writes:
    > 
    > < What I'm trying to expose is the meta-heuristic of being a facile model 
builder/adopter/fitter... and how our technological prosthetics (precut colored 
plexiglass and stain-by-number patterns or GPS/routing systems that present 
opaque-to-the-user preferences or predictive SDE programming environments).  >
    > 
    > When technology doesn’t work, take it apart and figure out what is wrong with it or how it could be improved. Human experts, or skilled practitioners, can hurt more than they help because they have no incentive to unpack their expertise into reusable automated systems. The trick is to look at skills as technology and to be facile at evolving the technology.
    
    

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives back to 2003: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove
