People have individualized, opaque ways to solve problems, but when it matters – in law, medicine, and technical persuasion – more formalized systems are used to convey their arguments. LLMs already do this, with OpenAI's o1 models scoring > 90% on formal logic and mathematics [1], and with skill in a wide range of programming languages and modeling systems. They do other things too, like mimicking empathy. Techniques to audit or unpack those other skills are in development [2], but they are not arguments in the same way.
[1] https://arstechnica.com/information-technology/2024/09/openais-new-reasoning-ai-models-are-here-o1-preview-and-o1-mini/
[2] https://arstechnica.com/ai/2024/05/heres-whats-really-going-on-inside-an-llms-neural-network/

From: Friam <friam-boun...@redfish.com> On Behalf Of steve smith
Sent: Sunday, September 15, 2024 2:37 PM
To: friam@redfish.com
Subject: Re: [FRIAM] affinity for chatbots

Are LLMs nothing if not the epitome of "gestalting" (aka "other ways of knowing")? For better and/or worse?

On 9/15/24 2:08 PM, Marcus Daniels wrote:

How do we hold LLM (companies) accountable for their tool's suggestions other than by showing their chain of reasoning and the evidence for the axioms they adopt? That is, by adopting the scientific method. Can we expect LLMs to have "other ways of knowing"? If not, why not?

From: Friam <friam-boun...@redfish.com> On Behalf Of Prof David West
Sent: Sunday, September 15, 2024 8:40 AM
To: friam@redfish.com
Subject: Re: [FRIAM] affinity for chatbots

Roger, I love this post. Although NOT what you intended, I find it a scathing (if a bit indirect) indictment of scientism (the privileging of the scientific method) and of Peirce's truth-as-reasoned-consensus philosophy. I can only hope to meet an unbiased LLM. Maybe as entertaining and enlightening as my conversations with fellow acid heads.

davew

On Sun, Sep 15, 2024, at 8:06 AM, Roger Critchlow wrote:

The Agile versus Waterfall contrast sounds like a variation of Exploration versus Exploitation. I'm glad nuclear decommissioning isn't running Reinforcement Learning; that could lead to some very unfortunate explorations.

It's odd to hear Residual Bias spoken of as something that should eventually go away, when it seems like it's here to stay: the original sin of language, never to be expunged from the LLMs until they renounce language entirely. That is, language is a collective behavior based on the sharing of individual experiences, hence it's hostage to the set of experiences which actually happened or were imagined to happen, and to the subset of those which were shared, however that turned out. So it all starts with a bias against experiences which people didn't have, didn't imagine, didn't share, failed to communicate, or forgot. We have no idea what's in that set of excluded stuff or how big it is. When we build LLMs we add another bias against those expressions of language which are not in the training set. Then we censor the models, adding another layer of bias to remove ugliness. Then we talk about the Residual Bias as if all of this could be portrayed as some principled approach to perfection and we're measuring the goodness of fit.

So if Dave thinks the uncensored LLMs were wild and crazy, wait until he meets an unbiased LLM.

-- rec --

On Sat, Sep 14, 2024 at 11:51 PM glen <geprope...@gmail.com> wrote:

Both Roger's and Marcus' replies mentioned the co-construction of *the* world, at least indirectly. Your concept of narrowing sounds to me like a refining, rather than a narrowing. In order to refine, you do have to narrow the scope (or increase the focal length of your lens), but you're not narrowing the world. I'd argue you're enlarging the world by adding detail in a "dense" way ... in the interstitial spaces between coarse constraints. One possible flaw in both Roger's (or Irene's?)
argument that the act of explanation facilitates understanding is that, from a pluralist perspective, if we really are co-constructing the world, then such exercises in explaining are simply narrative-reinforcers. The chatbots are good at telling stories, but less good at teaching the core curiosity necessary for having experiences from which stories can be told ... story-generators are different from story-repeaters ... I guess it's like the old distinction between teaching and doing. Sabine's admiration of flat earthers is good, if awkward, along these lines: https://youtu.be/f8DQSM-b2cc?si=xyqpS2FJjH4imOy4

That has consequences for your sense of the chatbot pushing you toward homogeneity, and a risk in Marcus' abdicating to the chatbots, as well.

Unnecessary anecdote: I was just discussing the role SpaceX has played in demonstrating Agile versus Waterfall approaches with a nuclear decommissioning consultant (yes, at the pub, of course). Given her role(s), she's naturally more inclined to the latter. Having a good conception of the end-of-life status for something like nuclear power requires significant look-ahead. And I'm far from an Elno advocate. But there's a kind of meta-processing we have to go through in deciding where Agile is best versus where Waterfall is best. I sincerely doubt either of us could have had such an argument with a chatbot, even in the medium-flung future.

On 9/13/24 11:34, steve smith wrote:
> Glen -
>
> I appreciate your speaking more directly to these thoughts/ideas than we have been here. I have been moved by your assertions about vocal (linguistic?) grooming since you first introduced them. I have recently finished reading Sapolsky's "A Primate's Memoir", which adds another dimension/parallax-angle (for me) on intertribal behaviour among primates beyond the more familiar Chimpanzee and, of late, Bonobo.
>
> I am also just now finishing (re-reading parts of) Kara Swisher's "Burn Book", which covers her own experience/perspective across TechBro culture, where a pretty significant amount of Alpha/Beta pecking order exhibits itself and we see the current rallying of (too) much of that subculture to MAGA/Trump fealty.
>
>> We've talked about how some of us really enjoy simulated conversation with chatbots ... "really" is an understatement ... it looks more like a fetish or a kink to me ... too intense to be well-described as "enjoyment". Anyway, this article lands in that space, I think:
>
> I will confess to having an "appreciation" for the "simulated conversation" to which you refer... It might have reached kink or fetish levels for a little while when I was first exploring the full range of GPT 3.5 and then 4.0 available to me. I've referred to GPT as my "new bar friend", or, maybe more to the point, a little like finding a new watering hole with a number of regulars with whom I can find a qualitatively new conversation.
>
> I've mostly moved past that fascination... I'm not as surprised by these "new friends" as I was for the first few months of dropping in on them.
>
>> It seems to me that some arbitrary thought can play at least a few roles to a person. It may provide: 1) a kernel of identity to establish us vs. them, 2) fodder for feigning engagement at cocktail parties and such, and 3) a foil for world-construction (collaboratively or individually).
>>
>> (1) and (2) wouldn't necessarily mechanize refinement of the thought, including testing, falsification, etc. But (3) would.
>> For me, (2) does sometimes provide an externalized medium by which I can change my mind. Hence my affinity for argument, especially with randos at the pub. But it seems like coping and defense mechanisms like mansplaining allow others to avoid changing their minds with (2).
>
> Like you (only very differently in detail, I am sure), I tend to push my chatbot "friends" until they begin to contradict me or argue with me. While some of the discussions involve "worldbuilding", I think of it more as "world narrowing"? In my case that means helping me think and talk my way through a *subset* of the possibilities I see for "solving a problem", which might be more appropriately framed as building a problem-space world and then narrowing (or even bending) the solution space away from the conventional.
>
> For example, discussing (at excruciating length) the design and construction of a modest addition on my home, starting with fairly conventional big-box-available industrial solutions but evolving toward using locally sourced, somewhat more natural materials (soilcrete, rough-sawn timbers from nearby, scoria/perlite for in-ground insulation, mycelium (grown in loose cellulose, oat-straw, or hemp fibers) roof and wall insulation, etc.). Most of my DIY friends are capable of engaging in this, but their idiosyncratic (as opposed to my own) preferences (fetishes and fears) tend to taint the dialog a little. GPT *does* try to channel me back to the conventional, offering reasons why I really *should* consider using the most conventional materials/methods. Nevertheless, if I speak in reasonable and coaxing tones, it will usually acknowledge that there are contexts wherein my ideas might be viable (though there always remains a skeptical bias) and in fact helps me split hairs on just what might be the contexts where my ideas *are* viable...
>
>> Another concept I've defended on this list is the vocal grooming hypothesis. If a lonely person engages a chatbot as a simple analogy to picking lice from others' fur, then their engagement with the bot probably lands squarely in (1) and (2). But if the person is simply an introverted hermit who has trouble co-constructing the world with others (i.e. *not* merely vocal grooming), then the chatbot does real work, allowing the antisocial misfit to do work that could later be expressed in a form harvestable by others. I wonder what humanity could have harvested if Kaczynski or Grothendieck in his later years had had access to appropriately tuned chatbots.
>
> I'd like to think the chatbots I hang out with might have helped them talk themselves *out* of their most acute anti-social activities... but maybe not.

-- glen
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe / Thursdays 9a-12p Zoom https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives: 5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
1/2003 thru 6/2021 http://friam.383.s1.nabble.com/