https://inspirobot.me/mindfulnessmode
I'm unilaterally against posting URLs without any kind of summary or qualifier.
But this one simply must be experienced to be understood.
--
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
"What are you trying to do to me Glen?"
I asked *my* favorite AI assistant to generate a response to this
One of the best things about that daily affirmation schtick was the plaiting of
reflective sarcasm and authenticity. Didn't it predate Forrest Gump? IDK, I
haven't seen Gump. But when others talk about it, they combine the feeling of
making fun of him with empathizing.
This is why most of the
I was just wondering if our prefrontal cortex areas in the brain contain a
large language model too - but each of them trained on slightly different
datasets. Similar enough to understand each other, but different enough so that
everyone has a unique experience and point of view o_O-J.
I think you're too specific by calling out the prefrontal cortex areas. So my
answer would be "no" or "not quite but close enough". To the gist of your
question, I'd say "yes", because you used "contain" rather than "is/are". It's
reasonable to model our CNS as a collection of things in the same
I do appreciate your addition of mal-anthropic to mis-anthropic and
-plait- it into the LLM of my own mind/soul/self alongside dis-ease vs
disease and anti/a-social.
I also appreciate the reference to the "dose is the poison" which
jives/jibes well with the ideation that "our allergies are our
addictictions"...
before anyone else notices/comments, I suppose an "addictiction" is an
addiction with an added "tic"!
I am curious, but not enough to do the hard research to confirm or deny it,
but ...
Surface appearances suggest, to me, that the large language model AIs seem to
focus on syntax and statistical word usage derived from those large datasets.
I do not see any evidence in same of semantics (probably
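To illustrate the "syntax and statistical word usage" point, here's a toy
sketch (my own, not how production LLMs actually work): a bigram model that
predicts the next word purely from co-occurrence counts, with no
representation of meaning at all.

```python
from collections import Counter, defaultdict

# Toy corpus; a real model trains on billions of tokens, not one sentence.
corpus = ("the grass suffers when the elephants fight "
          "and the grass grows when the rain falls").split()

# Count how often each word follows each other word.
# This is pure word-usage statistics -- no semantics anywhere.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the most frequent successor of `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "grass" -- chosen only because it most often follows "the"
```

The model "knows" that "grass" tends to follow "the" in this corpus, but it
has no idea what grass is; scaling the same statistical idea up is, roughly,
the surface appearance I mean.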
DaveW -
I really don't know much of/if anything really about these modern AIs,
beyond what pops up on the myriad popular science/tech feeds that are
part of *my* training set/source. I studied some AI in the 70s/80s and
then "Learning Classifier Systems" and (other) Machine Learning
techniq