Frank,
I am way out of my territory here, but are "differential equations" necessarily "first principles"? It seems to me that one could derive differential equations based on any fictions. How do I misunderstand what is going on here?

Nick

Nicholas Thompson
Emeritus Professor of Ethology and Psychology
Clark University
thompnicks...@gmail.com
https://wordpress.clarku.edu/nthompson/

From: Friam <friam-boun...@redfish.com> On Behalf Of Frank Wimberly
Sent: Thursday, May 14, 2020 9:20 AM
To: The Friday Morning Applied Complexity Coffee Group <friam@redfish.com>
Subject: Re: [FRIAM] PSC Tornado Visualization (2008) [720p] - YouTube

Above I should have written "both/and is better than either/or," for clarity.

Marc Raibert founded Boston Dynamics, which was bought by Google. They're the people who develop the walking animals, etc., that appear in so many videos. Marc and I did an experiment that involved solving differential equations (first principles) offline and storing the results in very large tables. In real time the walking machine fits curves (not first principles) to the tables to determine how to move a joint to achieve balance. Is that an example of a synthesis?

---
Frank C. Wimberly
140 Calle Ojo Feliz, Santa Fe, NM 87505
505 670-9918
Santa Fe, NM

On Thu, May 14, 2020, 9:08 AM Marcus Daniels <mar...@snoutfarm.com> wrote:

Steve writes:

"I *think* this discussion (or this subthread) has devolved to suggesting that predictive power is the only use of modeling (and simulation) whilst explanatory power is not (it is just drama?)."

First-principles explanations start with some assumptions and reason forward. The explanation will be wrong if the assumptions are wrong. If the validation data is inadequate in depth or breadth, or at the wrong scale, the validation that is achieved will be wrong or illusory too. In Nick's example, the problem was that the flight evidence was at the wrong scale. If the flight continued for 120 years, I'd argue that is a distinction without a difference. There won't be a widow, because she'll be dead too.

I suspect a lot of the appeal of explanatory power does not come from the elaboration or analysis that derivations provide, but simply from a desire for control, and a desire to have something to talk about.

Some machine learning approaches give simple models, models that do not involve thousands of parameters. If one gets to the same equations from an automated process, nothing prevents derivations or deconstruction starting from them. Other machine learning approaches generalize, but give black boxes that are inscrutably complex. When the latter is far more powerful than the former, what is one to do? Ignore their utility?

Marcus
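[Editor's note: below is a minimal sketch of the offline-solve / online-curve-fit pattern Frank describes above, in Python. The inverted-pendulum model, the constants, the grid of initial angles, and the cubic fit are all illustrative assumptions, not the actual Raibert/Wimberly experiment; only the overall pattern (solve the differential equations offline, store the results in a table, fit curves to the table in real time) comes from the thread.]

import numpy as np
from scipy.integrate import solve_ivp

G, L = 9.81, 1.0  # gravity and leg length; illustrative constants only

def pendulum(t, y):
    # First-principles model: inverted pendulum, y = [angle, angular velocity]
    theta, omega = y
    return [omega, (G / L) * np.sin(theta)]

# --- Offline phase: solve the differential equations over a grid of initial
# --- angles and store (initial angle -> angle after dt) in a large table.
dt = 0.05
initial_angles = np.linspace(-0.5, 0.5, 201)
table = np.array([
    solve_ivp(pendulum, (0.0, dt), [theta0, 0.0]).y[0, -1]
    for theta0 in initial_angles
])

# --- Online phase: no integration at run time; fit a low-order curve to the
# --- stored table and evaluate it to decide how to move the joint.
coeffs = np.polyfit(initial_angles, table, deg=3)

def joint_correction(measured_angle):
    # Predicted angle after dt, read off the fitted curve; correct toward upright.
    predicted = np.polyval(coeffs, measured_angle)
    return -predicted

print(joint_correction(0.1))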
.-. .- -. -.. --- -- -..-. -.. --- - ... -..-. .- -. -.. -..-. -.. .- ... .... . ...
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam
unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/