Yeah, well I’m buying more Microsoft stock.
*From:* Friam <friam-boun...@redfish.com> *On Behalf Of* Roger Critchlow
*Sent:* Tuesday, March 14, 2023 11:11 AM
*To:* The Friday Morning Applied Complexity Coffee Group <friam@redfish.com>
*Subject:* Re: [FRIAM] ChatGPT and William James
This was good, too:
https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html
And Microsoft laid off its responsible AI team:
https://www.platformer.news/p/microsoft-just-laid-off-one-of-its
And your Ring doorbell has just been taken hostage for ransom along with
the rest of Ring Security:
https://web.archive.org/web/20230314015249/https://twitter.com/vxunderground/status/1635427567271329792
though I haven't found confirmation of that yet.
It seems to me that the software vendors have entered a "first-to-market
frenzy" with whatever this LLM crap turns out to be, and you can expect
the PR to get more unhinged by the day.
-- rec --
On Tue, Mar 14, 2023 at 11:58 AM Barry MacKichan
<barry.mackic...@mackichan.com> wrote:
I haven’t followed the discussions here carefully, but the question
“And we humans are different?” brings up Noam Chomsky’s view in the
NYT, gifted here:
https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html?unlocked_article_code=QwaHrmmodXXtHVMT-PXM3n5X5165QgywGnhK8kHA4bQ4YwI-1ChPS_7jCUaaUoFfOEcH5StWFGnaCzyJsSW6zuHN-U410lTznWM8Mvsg-JDhGg8JVpzBMO0zuuwLEceZ7nsik2RLOi3H_hFSlBy4EF1c2Rn8ueGmhh3n6TnF58p3Vo2dkK0Y8RCrYhzpvCdlBd51fLA6DWil5Xn3vRhZ6gTYuUw8Eoy6HbR2cA4PcoI_PzQ5Ozz4Rrb5vH1C_pX5agNGvJ9CS0q9bGH7NvB0HSTIA480_k--x8f-srXvID5llZmAxRHEkS5Yf05rn820RIR3j70yIH7hddfh6GkkjQ&smid=url-share
—Barry
On 6 Mar 2023, at 16:49, Marcus Daniels wrote:
How?
*From:* Friam <friam-boun...@redfish.com> *On Behalf Of* Frank Wimberly
*Sent:* Monday, March 6, 2023 12:50 PM
*To:* The Friday Morning Applied Complexity Coffee Group
<friam@redfish.com>
*Subject:* Re: [FRIAM] ChatGPT and William James
>And we humans are different?
In a word, yes.
---
Frank C. Wimberly
140 Calle Ojo Feliz,
Santa Fe, NM 87505
505 670-9918
On Mon, Mar 6, 2023, 12:14 PM Nicholas Thompson
<thompnicks...@gmail.com> wrote:
*/However, it's important to remember that there are also
important differences between a large language model and
human consciousness. While a large language model can
generate text that may seem to flow like a stream of
consciousness, it does not have the same kind of subjective
experience that humans do, and its output is based solely on
statistical patterns in the input it has been trained on./*
And we humans are different?
On Sat, Mar 4, 2023 at 11:51 AM Steve Smith
<sasm...@swcp.com> wrote:
Also, I second EricS's appreciation for having someone
else(s) maintain a coherent conversation for the myriad
ideas it allows me to explore without being central
to the maintenance of the thread. I realize this may
be almost pure tangent to others, so I rarely expect
anyone to take my bait unless it is to correct any
egregious mis-attributions or think-utational fallacies.
Starting with Glen's assertion/suggestion/assumption
that there is not mind-stuff and body stuff, just body
stuff: I appeal to the general abstraction of Emergence
and use Russell Standish's example in his "Theory of
Nothing
<https://www.goodreads.com/book/show/967936.Theory_Of_Nothing?from_search=true&from_srp=true&qid=GgXJ0ISQei&rank=1>"
that a water molecule is not wet... wetness is a property
of aggregates of water molecules. I would jump a dozen
layers of emergent-bootstrapping from there to assert
that "mind stuff", if it ever makes sense, is an emergent
property of "body stuff". But by analogy I would not want
to say that wetness (and other properties of bulk water
molecules) is not strictly "molecular dynamics stuff".
And even if one did that, the recursion/reductio-ad-absurdum
requires that one acknowledge/notice/invoke that the
properties of any molecule are "emergent" from the
elementary particles from which it might be composed.
I think we all believe in free electrons, protons, and
neutrons but also recognize that *most* of our observed
universe is shaped not by *those properties* (much less
the properties of quarks and gluons or 10d loops of
abstract things we call strings) but rather by the
properties not of molecular dynamics or even chemical
reactions, but of biological and socio-economic-political
functions. I *am*, however, sensitive to the idea that
where and how we draw the line between mind/body stuff
can be important in any given argument, and that
sometimes dropping that line altogether may be useful?
The above riff on Mind-Stuff v Body-Stuff is really an
intro into thoughts about how syntax and semantics might
bootstrap sequentially. It feels to me that the syntax
of one level of abstraction yields an *emergent
semantics* which in turn becomes the *syntax* of the
next "level". I do acknowledge that Glen has made
some arguments (and references) that are against the
very abstraction of "levels" and that may well be the
hole in everything I'm unrolling here, but for the
moment, I feel I have a clear picture of a POSET of
syntax/semantics, if not a full Hierarchy...
This also backs me into the Platonic ideations with all
the charms and criticisms already dancing as virtual
(ideational) particles around that. I will go back to
reading Theory of Nothing
<https://www.goodreads.com/book/show/967936.Theory_Of_Nothing?from_search=true&from_srp=true&qid=GgXJ0ISQei&rank=1>...
and try to keep my offerings here under 10 pages each...
On 3/4/23 4:32 AM, Santafe wrote:
It’s helpful to have a conversation being maintained by
somebod(ies) else, to which one can be a bystander without the distraction of
coming up with contributions to it. Things can suggest themselves that get
pushed out of awareness when one is carrying the discourse and figuring out
what to do next within it.
In reading the below, about the time I got to the lines:
The mind-body problem is the philosophical question of
how the mind and body are related. One of the main issues is how mental
processes such as thoughts, emotions, and consciousness are related to physical
processes in the brain and body.
I was prompted with a term to refer to these
mental/physical things.
First, my sense of all this is one of witnessing structures
in conversation. Maybe I am more primed to that because with ChatGPT as the
topic, one fronts awareness of conversation as somewhat free-floating from its
semantic ground. As tokens in conversation, it is perfectly sensible to say
that (thoughts, emotions, consciousness) are in a category Mental, while
(weakness, hunger, itching) go into a category Physical. Not only is it okay
to say they fit tolerably into “categories” (or “classes”); the reason they do
so is that they are connected by all sorts of linguistic usage relations. The
relations probably in no small part bring about the stability of the
categorical sense of the terms.
But what word do we then use to refer to such classes in
speech? I would use the word “registers”. The Mental is a register of
conversation about events, and the Physical is another register.
Jochen’s email below has ChatGPT saying James referred to
these as “aspects” of various bodily or embodied events. Sometimes I’m okay
with a word like “aspects”, but it invites essentialist thinking: that the
event is like a computer-language object, which has properties (the aspects)
that define its interface, and not only are the properties ascribable to the
object, but their status as defined elements of the interface is also a
real-thing, and not merely a frame-dependent convenient compression. But using
“aspects” thoughtlessly does two things: it makes essentialism a habit, which I
think will often be invalid, and it neglects the communications role that
“register” highlights.
I would attach this to Glen’s fairly terse characterization
that there is no mind-stuff contrasted with body-stuff, but just one kind of
stuff, which if we have to choose a word for it we can call body-stuff. From a
perspective of aspects, the terse version could be closer to or further from a
more fine-grained description, depending on whether the essentialist view is
more or less appropriate for one or another phenomenon we want to discuss. But
I think the idea that speech provides us certain conventional registers to
bound discussions of phenomena, and that these too may be better- or worse-fit
to the organization of the phenomena themselves, is another fair fine-graining
of the terse characterization, running somewhat parallel to the aspects or
essences fine-graining (though, as always, connected to it through a kind of
message-passing and filtering for structural agreement).
Eric
On Feb 28, 2023, at 5:23 PM, Jochen Fromm <j...@cas-group.net> wrote:
I enjoy talking with ChatGPT about famous scientists,
thinkers, writers and philosophers. Because it is trained on vast amounts of
data it knows them all: Pascal, Cervantes, Dante, Hugo, Goethe, Leibniz,
Newton, Humboldt, Shakespeare,... It can summarize almost any book which is
remarkable. Today I have asked it about William James and Anil Seth
Me: You certainly have heard of William James, one of
the fathers of Psychology?
ChatGPT: Yes, I'm familiar with William James. He was an American
philosopher and psychologist who is often referred to as the "father of American
psychology." He made significant contributions to the development of psychology as a
scientific discipline, and his ideas and writings continue to influence psychology and
related fields to this day.
Some of James' most influential contributions to psychology include his
theory of functionalism, which emphasized the study of the purpose and function of behavior and
mental processes, rather than their underlying structures; his work on consciousness, which he
famously described as a "stream" of continuously changing thoughts and perceptions; and
his exploration of the relationship between the mind and the body, which he called the
"mind-body problem."
James also wrote extensively on a wide range of topics, including
religion, philosophy, and human nature. His most famous book, "The Principles of
Psychology," was published in 1890 and is still considered a classic in the field.
Me: Can you summarize his book "The Principles of
Psychology"?
ChatGPT: Certainly! "The Principles of Psychology" is a
comprehensive and influential textbook that William James published in 1890. It aimed to
provide a broad overview of the field of psychology at the time, covering topics such as
perception, emotion, thought, consciousness, and the nature of the self.
The book is organized into two main sections. The first
section focuses on the lower-level processes of sensation and perception,
including topics such as attention, memory, and learning. The second section
focuses on higher-level mental processes, such as reasoning, judgment, and
emotion.
Throughout the book, James emphasized the importance of
studying mental processes in their natural context and stressed the need for
psychology to be closely linked to biology and other natural sciences. He also
developed the theory of functionalism, which argued that the purpose and
function of mental processes are more important to understanding them than
their underlying structures.
Overall, "The Principles of Psychology" was a landmark
work that helped establish psychology as a distinct scientific discipline and influenced
generations of psychologists and other scientists.
Me: William James said: "Thoughts themselves are the
thinkers". What does he mean?
ChatGPT: When William James said "thoughts themselves are the
thinkers," he was expressing his belief in what is often called the "stream of
consciousness" view of the mind. James believed that the mind is not a fixed entity or
substance, but rather a constantly changing stream of thoughts, feelings, and sensations.
According to James, our thoughts are not the products of a
separate entity or "thinker" within us, but rather they are the result of the
ongoing activity of the mind itself. In other words, the mind is not a passive receptacle
for thoughts that are generated elsewhere, but an active process that generates thoughts
and experiences as it goes along.
This view is closely related to James' broader
philosophy of pragmatism, which emphasized the importance of understanding
things in terms of their practical implications and effects. For James, the
idea that thoughts themselves are the thinkers was not just a theoretical claim
about the nature of the mind, but a practical way of understanding how we can
change and shape our own thoughts and experiences. By understanding the
workings of our own minds, James believed, we can take an active role in
shaping our own lives and achieving our goals.
Me: He also coined the term "stream of consciousness"
which suggests that consciousness flows like a river. What does he want to say by using
this metaphor?
ChatGPT: When William James coined the term "stream of
consciousness," he was using a metaphor to describe his view of the mind as a
constantly flowing and changing stream of thoughts, feelings, and sensations.
The metaphor of a stream suggests that consciousness is
not a static or fixed entity, but rather a dynamic and fluid process that is
always in motion. The stream of consciousness is always moving forward, and we
can never step into the same stream twice. Similarly, the content of our
consciousness is always changing, as new thoughts, feelings, and sensations
arise and fall away.
By using the metaphor of a stream, James was also
highlighting the interconnectedness of our thoughts and experiences. Just as a
river is made up of many different streams and tributaries, our consciousness
is made up of many different thoughts and experiences that are constantly
interacting and influencing each other.
Overall, the metaphor of the stream of consciousness
captures the dynamic and ever-changing nature of our mental lives, and
highlights the importance of understanding the ongoing flow of thoughts and
experiences that make up our conscious experience.
Me: A constantly flowing stream of thoughts is similar
to what a large language model is perceiving and producing, isn't it?
ChatGPT: Yes, the metaphor of the stream of
consciousness can be similar to how a large language model like me perceives
and produces language. Just as consciousness is often described as a constantly
flowing stream of thoughts and experiences, a large language model like me is
constantly processing and generating language in response to the input it
receives.
Like the stream of consciousness, a large language
model is always in motion, constantly analyzing and generating language based
on the patterns and structures it has learned from the vast amounts of text it
has been trained on. And just as consciousness is made up of many different
thoughts and experiences that are interconnected and constantly interacting, a
large language model's output is made up of many different words and phrases
that are generated in response to each other.
However, it's important to note that there are also
important differences between a large language model and the human mind. While
a large language model can generate language that may seem to flow like a
stream of consciousness, it does not have the same kind of subjective
experience that humans do. A large language model does not have feelings,
emotions, or a sense of self, and its output is based solely on statistical
patterns in the input it has been trained on.
Me: What is James' solution to the mind-body problem?
ChatGPT: William James was a philosopher and
psychologist who made important contributions to the field of psychology and
philosophy of mind, including his exploration of the mind-body problem.
The mind-body problem is the philosophical question of
how the mind and body are related. One of the main issues is how mental
processes such as thoughts, emotions, and consciousness are related to physical
processes in the brain and body.
James approached the mind-body problem from a pragmatic
perspective, emphasizing the practical consequences of different ways of thinking about
the problem. He proposed what is known as a "radical empiricist" approach,
which suggested that the mind and body are not separate entities, but rather different
aspects of a single reality that is constantly in flux.
According to James, the mind and body are both part of
the same universe, and they are both subject to the same laws of nature. Mental
events are not caused by physical events in the brain, nor are physical events
caused by mental events. Instead, James argued that the mind and body are two
aspects of a larger, more complex reality that cannot be fully understood by
examining either one in isolation.
In other words, James rejected the traditional
mind-body dualism that posits a sharp distinction between mind and matter, and
instead proposed a more holistic view that recognizes the deep interdependence
of the mind and body. While James' solution to the mind-body problem remains
controversial, his ideas continue to influence contemporary debates in
philosophy and psychology.
Me: Let us return to the idea of thoughts themselves as
the thinkers. This implies that one thought triggers another and is replaced by
it, just like water in a stream is replaced by other water, which in turn is
replaced too, and so on and so forth. No thought can remain permanently, only
return like a wave to the shore. Is this what James means by a stream of
consciousness?
ChatGPT: Yes, you are correct. James' idea that "thoughts themselves
are the thinkers" is closely related to his concept of the "stream of
consciousness." According to James, the mind is not a fixed entity or substance, but rather a
constantly changing stream of thoughts, feelings, and sensations. This stream of consciousness is
always flowing and evolving, with one thought leading to another and so on.
Just as water in a stream is replaced by other water,
which is in turn replaced by more water, the content of our consciousness is
also constantly changing, with one thought or experience giving rise to the
next. And just as no water in a stream can remain in one place permanently, no
thought can remain permanently in the stream of consciousness. Thoughts and
experiences may return or be remembered, but they are always changing in
relation to the ongoing flow of the stream of consciousness.
Overall, James' metaphor of the stream of consciousness
captures the dynamic and ever-changing nature of our mental lives, highlighting
the ongoing flow of thoughts and experiences that make up our conscious
experience.
Me: Does this idea of William James feel familiar to
you as a large language model?
ChatGPT: As a large language model, I do not have the
same kind of subjective experience as humans do, so I don't experience a stream
of consciousness in the same way that humans do. However, the concept of a
stream of consciousness does have some relevance to how I operate.
Like the stream of consciousness, the input and output
of a large language model are constantly changing and evolving, with one word
or phrase leading to another. When generating text, a large language model
takes into account the context of the input it has received, along with the
patterns and structures it has learned from the massive amounts of text it has
been trained on. This can lead to output that has a flowing, interconnected
quality, similar to the way thoughts and experiences flow in a stream of
consciousness.
However, it's important to remember that there are also
important differences between a large language model and human consciousness.
While a large language model can generate text that may seem to flow like a
stream of consciousness, it does not have the same kind of subjective
experience that humans do, and its output is based solely on statistical
patterns in the input it has been trained on.
Me: That's true. It is still fascinating.
ChatGPT: Yes, it is certainly fascinating how the
concept of a stream of consciousness can be applied to different domains and
systems, including the functioning of large language models. The idea of a
constantly flowing and evolving stream of thoughts and experiences is a rich
metaphor that has many different applications and implications. It has inspired
a great deal of philosophical and psychological inquiry over the years, and
continues to be an important area of investigation in contemporary research.
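A footnote on the mechanics the transcript keeps gesturing at: the
"statistical patterns" ChatGPT invokes can be made concrete with a toy.
Below is a minimal sketch in plain Python. It is an invented
illustration, not anything from the thread or from OpenAI; a real LLM
conditions on long contexts with learned weights rather than a bigram
lookup table. But the loop has the same shape as the "stream" described
above: sample one token from a distribution conditioned on what came
before, feed it back in, repeat.

import random
from collections import defaultdict

# Tiny corpus standing in for "vast amounts of text".
corpus = (
    "the mind is a stream of thoughts . "
    "the stream is always in motion . "
    "thoughts flow like water in a stream . "
    "no thought can remain in the stream ."
).split()

# Count bigram transitions: which tokens have been observed
# following which. These counts are the "statistical patterns".
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(seed, length=12):
    """Autoregressive loop: sample the next token from the
    tokens observed after the current one, then feed the
    sample back in as the new context."""
    out = [seed]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:  # dead end: no observed successor
            break
        out.append(random.choice(candidates))
    return out

print(" ".join(generate("the")))

Each emitted token becomes part of the context for the next one, which
is why the output never holds still: about as close as a lookup table
gets to a stream one can never step into twice.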