I read that Bing won’t write a cover letter if asked.  I love the idea of a set 
of files sitting on a filesystem at Microsoft that represent human ethics.  It 
reminds me of people who complain about being characterized by their skill 
sets.  I think we are going to learn just how little we are.
> On Feb 8, 2023, at 10:14 AM, glen <geprope...@gmail.com> wrote:
> 
> I've recently developed a taste for judging people by the content of their 
> character, something I used to and kindasorta still do denigrate as hubris 
> (because our models of others' character are models, always wrong, rarely 
> useful). And one of the best measures of character is how someone responds 
> when presented with a "learning opportunity". ChatGPT is an extraordinary 
> mansplainer. And even though, when you show it facts that contradict its 
> prior opinion, it pays lip service with words like "sorry", it will continue 
> to *confidently* spout half-truths and rhetorical bullsh¡t (even if you ask 
> it to, say, write an LCG pRNG in C). Just like the tendency of apps like 
> Stable Diffusion to "pornify" women, ChatGPT encodes the culture of its 
> input. So if ChatGPT is a mansplainer or evil, in any sense, it's *because* 
> the culture from which it draws its input is that. I.e. It's a mirror. We are 
> evil. We are dreadworthy.
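> 
> For concreteness, a minimal LCG pRNG of the kind I mean might look like the 
> sketch below. This is just my own illustration, not ChatGPT's output; the 
> multiplier and increment are the familiar Numerical Recipes constants, 
> picked only because they're well known:
> 
>   #include <stdint.h>
>   #include <stdio.h>
> 
>   /* Linear congruential generator: x_{n+1} = (a*x_n + c) mod 2^32.
>      The modulus is implicit in the uint32_t wraparound. */
>   static uint32_t lcg_state = 12345u;   /* seed */
> 
>   static uint32_t lcg_next(void) {
>       lcg_state = 1664525u * lcg_state + 1013904223u;
>       return lcg_state;
>   }
> 
>   int main(void) {
>       for (int i = 0; i < 5; i++)
>           printf("%u\n", lcg_next());
>       return 0;
>   }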
> 
> BTW, I did a back-of-the-envelope calculation, and the cost of operating one 
> (small) 777 per day seems to be about the same as operating the one ChatGPT 
> per day (~$100k). Presumably, if/when OAI begins distributing the model, such 
> that there are several of them out there (like the fleets of 777s), the costs 
> will be lower. At that point, the semantic content of one 777 might exceed 
> that of one ChatGPT instance.
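> 
> Spelled out, the envelope math is roughly the following (the GPU count and 
> hourly rate below are my guesses, not published figures):
> 
>   #include <stdio.h>
> 
>   int main(void) {
>       /* Assumed, purely illustrative figures */
>       double gpus          = 1500.0;  /* guessed A100-class GPUs serving the model */
>       double usd_per_hour  = 3.0;     /* guessed cloud price per GPU-hour */
>       double hours_per_day = 24.0;
> 
>       double daily_cost = gpus * usd_per_hour * hours_per_day;
>       printf("~$%.0fk/day\n", daily_cost / 1000.0);  /* ~$108k: 777 territory */
>       return 0;
>   }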
> 
>> On 2/8/23 09:08, Santafe wrote:
>> It’s funny.  I was reading some commentary on this last week (can’t even 
>> remember where now; that was _last week_!), and I remember thinking that the 
>> description reminded me of Williams Syndrome in people.  They have a 
>> grammatical sense that is at the stronger end of the human range, but their 
>> train of meaning has come to be characterized (again, a now-tropish 
>> short-hand) as “word salad”.
>> That there should be several somewhat-autonomous processes running in 
>> parallel in people, and coupled by some kind of message-passing, as Ray 
>> Jackendoff proposes, seems quite reasonable and in keeping with brain 
>> biology, and if there is, it would be a compact way to account for the 
>> seeming independence in refinement of grammatical sense and whatever other 
>> part of sentence-coherence we have come to term “semantics”.
>> Last year, too, someone (I think my boss at the time, which would make it 
>> two years ago) told me about some nature paper saying that a comparative 
>> genome analysis of domestic dogs and wolves had shown a mutation in the dogs 
>> at the cognate locus to the one that results in Williams Syndrome in people. 
>>  That would be an easy indulgent interpretation: the greater 
>> affectionateness preserved into adulthood, and the increased verbal-or-other 
>> communicativeness.  Though Barry Lopez, I think it was, argues that wolves 
>> have higher social intelligence, which I guess would be making some claim 
>> about a “semantics”.
>> The chatbot has, however, a kind of pure authentic evil that Philip K. Dick 
>> tried to mimic (the argument with the door), and came close enough to be 
>> laughing-through-tears, but could not truly simulate it as it shines through in 
>> the Ginsparg exchange.  Or dealing with the maddening, horrifying computer 
>> interfaces that every company puts up to its customers, after they have 
>> fired all the human problem-solvers.  Few things put me in a real dread, 
>> because I am now fairly old, and getting older as fast as I can.  But the 
>> prospect of still being alive in a world where that interface is all that is 
>> left to any of us, is dreadworthy.
>> Eric
>>>> On Feb 8, 2023, at 11:51 AM, glen <geprope...@gmail.com> wrote:
>>> 
>>> I wrote and deleted a much longer response. But all I really want to say is 
>>> that these *models* are heavily engineered. TANSTAAFL. They are as 
>>> engineered, to intentional purpose, as a Boeing 777. We have this tendency 
>>> to think that because these boxes are opaque (more so to some than others), 
>>> they're magical or "semantic-less". They simulate a human language user 
>>> pretty well. So even if there's little structural analogy, there's good 
>>> behavioral analogy. Rather than posit that these models don't have 
>>> semantics, I'd posit *we* don't have semantics.
>>> 
>>> The problem with communication is the illusion that it exists.
>>> 
>>> On 2/7/23 14:16, Steve Smith wrote:
>>>> DaveW -
>>>> I really don't know much, if anything, about these modern AIs, 
>>>> beyond what pops up on the myriad popular science/tech feeds that are part 
>>>> of *my* training set/source.   I studied some AI in the 70s/80s and then 
>>>> "Learning Classifier Systems" and (other) Machine Learning techniques in 
>>>> the late 90s, and then worked with folks who did Neural Nets during the 
>>>> early 00s, including trying to help them find patterns *in* the NN 
>>>> structures to correlate with the function of their NNs and training sets, 
>>>> etc.
>>>> The one thing I would say about what I hear you saying here is that I 
>>>> don't think these modern learning models, by definition, have either 
>>>> syntax *or* semantics built into them.  They are what I colloquially 
>>>> (because I'm sure there is a very precise term of art by the same name) 
>>>> think of or call "model-less" models. At most I think the only models of 
>>>> language they have explicit in them might be the Alphabet and conventions 
>>>> about white-space and perhaps punctuation?   And very likely they span 
>>>> *many* languages, not just English or maybe even "Indo-European".
>>>> I wonder what others know about these things or if there are known good 
>>>> references?
>>>> Perhaps we should just feed these maunderings into ChatGPT and it will sort 
>>>> us out forthwith?!
>>>> - SteveS
>>>> On 2/7/23 2:57 PM, Prof David West wrote:
>>>>> I am curious, but not enough to do some hard research to confirm or deny, 
>>>>> but ...
>>>>> 
>>>>> Surface appearances suggest, to me, that the large language model AIs 
>>>>> seem to focus on syntax and statistical word usage derived from those 
>>>>> large datasets.
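>>>>> 
>>>>> By "statistical word usage" I mean something like the toy below. It is a 
>>>>> drastic simplification (a bigram counter, nothing like the real models), 
>>>>> but it shows the flavor: predict the next word purely from co-occurrence 
>>>>> counts in the training text.
>>>>> 
>>>>>   #include <stdio.h>
>>>>> 
>>>>>   #define V 5  /* toy vocabulary size */
>>>>> 
>>>>>   int main(void) {
>>>>>       const char *vocab[V] = { "the", "cat", "sat", "on", "mat" };
>>>>>       int corpus[] = { 0, 1, 2, 3, 0, 4 };   /* "the cat sat on the mat" */
>>>>>       int n = sizeof corpus / sizeof corpus[0];
>>>>>       int bigram[V][V] = { 0 };
>>>>> 
>>>>>       for (int i = 0; i + 1 < n; i++)
>>>>>           bigram[corpus[i]][corpus[i + 1]]++;   /* count word pairs */
>>>>> 
>>>>>       /* "predict" the most frequent follower of "the" */
>>>>>       int best = 1;
>>>>>       for (int j = 1; j < V; j++)
>>>>>           if (bigram[0][j] > bigram[0][best]) best = j;
>>>>>       printf("after \"the\": \"%s\"\n", vocab[best]);
>>>>>       return 0;
>>>>>   }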
>>>>> 
>>>>> I do not see any evidence in same of semantics (probably because I am but 
>>>>> a "bear of little brain.")
>>>>> 
>>>>> In contrast, the Cyc project (Douglas Lenat, 1984 - and still out there 
>>>>> as an expensive AI) was all about semantics. The last time I was, 
>>>>> briefly, at MCC, they were just switching from teaching Cyc how to read 
>>>>> newspapers and engage in meaningful conversation about the news of the 
>>>>> day, to teaching it how to read the National Enquirer, etc. and 
>>>>> differentiate between syntactically and literally 'true' news and the 
>>>>> false semantics behind same.
>>>>> 
>>>>> davew
>>>>> 
>>>>> 
>>>>> On Tue, Feb 7, 2023, at 11:35 AM, Jochen Fromm wrote:
>>>>>> I was just wondering if our prefrontal cortex areas in the brain contain 
>>>>>> a large language model too - but each of them trained on slightly 
>>>>>> different datasets. Similar enough to understand each other, but 
>>>>>> different enough so that everyone has a unique experience and point of 
>>>>>> view o_O
>>>>>> 
>>>>>> -J.
>>>>>> 
>>>>>> 
>>>>>> -------- Original message --------
>>>>>> From: Marcus Daniels <mar...@snoutfarm.com>
>>>>>> Date: 2/6/23 9:39 PM (GMT+01:00)
>>>>>> To: The Friday Morning Applied Complexity Coffee Group 
>>>>>> <friam@redfish.com>
>>>>>> Subject: Re: [FRIAM] Datasets as Experience
>>>>>> 
>>>>>> It depends on whether it is given boundaries between the datasets.   Is it 
>>>>>> learning one distribution or two?
>>>>>> 
>>>>>> 
>>>>>> *From:* Friam <friam-boun...@redfish.com> *On Behalf Of *Jochen Fromm
>>>>>> *Sent:* Sunday, February 5, 2023 4:38 AM
>>>>>> *To:* The Friday Morning Applied Complexity Coffee Group 
>>>>>> <friam@redfish.com>
>>>>>> *Subject:* [FRIAM] Datasets as Experience
>>>>>> 
>>>>>> 
>>>>>> Would a CV of a large language model contain all the datasets it has 
>>>>>> seen? As adaptive agents of our selfish genes we are all trained on 
>>>>>> slightly different datasets. A Spanish speaker is a person trained on a 
>>>>>> Spanish dataset. An Italian speaker is a person trained on an Italian dataset, 
>>>>>> etc. Speakers of different languages are trained on different datasets, 
>>>>>> therefore the same sentence is easy for a native speaker but impossible 
>>>>>> to understand for those who do not know the language.
>>>>>> 
>>>>>> 
>>>>>> Do all large language models need to be trained on the same datasets? Or 
>>>>>> could many large language models be combined into a society of mind, as 
>>>>>> Marvin Minsky describes it in his book "The Society of Mind"? Now that 
>>>>>> they are able to understand language, it seems possible that one 
>>>>>> large language model could reply to the questions of another. And we would 
>>>>>> even be able to understand the conversations.
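>>>>>> 
>>>>>> A minimal sketch of such a conversation loop, with stub functions standing 
>>>>>> in for the two models (in practice each stub would call out to a separately 
>>>>>> trained language model):
>>>>>> 
>>>>>>   #include <stdio.h>
>>>>>> 
>>>>>>   /* Stand-ins for two independently trained models */
>>>>>>   static void model_a(const char *prompt, char *reply, size_t len) {
>>>>>>       snprintf(reply, len, "A's answer to: %s", prompt);
>>>>>>   }
>>>>>> 
>>>>>>   static void model_b(const char *prompt, char *reply, size_t len) {
>>>>>>       snprintf(reply, len, "B's question about: %s", prompt);
>>>>>>   }
>>>>>> 
>>>>>>   int main(void) {
>>>>>>       char msg[256] = "What is a society of mind?";
>>>>>>       char next[256];
>>>>>>       for (int turn = 0; turn < 3; turn++) {    /* each reply becomes the */
>>>>>>           model_a(msg, next, sizeof next);      /* other model's prompt   */
>>>>>>           printf("A: %s\n", next);
>>>>>>           model_b(next, msg, sizeof msg);
>>>>>>           printf("B: %s\n", msg);
>>>>>>       }
>>>>>>       return 0;
>>>>>>   }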
>>>>>> 
> 
> 
> -- 
> ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ
> 
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/
