EricS -

these are good observations... I believe it will lead to more and more value in *good curation* and a finer distinction on "authority through reputation".   I also believe that formal blockchain and/or some informal analog will become critical to authenticating sources.   I already find myself questioning my friends and colleagues (gently/quietly) with "why do you believe what you believe" because it feels like they (we) oftentimes expose ourselves to "this or that" and then risk passing it on.

The metaphor of "global pandemic" is still fresh enough to us that *maybe* some folks will be more discriminating about whom they "conjugate with".   Also, many of us grew up through the AIDS and/or general STD period with the motto "when you have sex with someone, you are having sex with everyone *they* have had sex with".

<tedious anecdote> I had two roommates a few years ago (they are peppered through my anecdotes here) who were somewhat polar opposites.  One was halfway to Q at the time, actually just waiting for a Q-like figure to show up.  She had one-liners like: "I think people should be able to believe what they want to" and "*I* have an open mind".   The other was an artist-by-training but also very rational, if not always fully informed.   His response to those ideations of hers (rarely to her face, he was too polite) was: "I don't want to know what you believe, I want to know what you *think*" and "the problem with an open mind is that just about anyone can pour anything into it".

He also had his own pet "conspiracy theories" IMO, but much less wild and destructive than hers. (and of course, then there is me and all my rattling on)

In her environmentalism/anti-globalism she managed to turn it into a hate/fear of the Democrats while holding an entirely gullible, receptive posture toward the Republicans, the more absurd the better.  She held (and propagated) various extreme beliefs about the personal lives and circumstances of her anti-heroes (the Clintons and Obamas in particular) yet did not recognize the term _ad hominem_ except when applied to the likes of her heroes (think Alex Jones and Donald Trump).  She did not openly endorse either of the latter, seeming to recognize that they were at least widely perceived as the epitome of toxic public personalities, but she was known to defend them on principle...   quoting free speech and decrying the term "conspiracy theory" as if anyone labeled with it had earned it by being a true visionary and hero-of-the-people whistleblower.

I dropped nearly all conversation with her mid-COVID over her rabid anti-vax rhetoric, which I could withstand when directed toward me, but her brush was pretty broad when it came to impugning just about anyone and everyone who might actually believe in any part of modern medicine.

</tedious anecdote>

SteveS

This is fun.  Will have to watch it when I have time.

Is there a large active genre just now combining ChatGPT with deepfakes, to 
generate video of whomever-saying-whatever?

I was thinking a couple of years ago about what direction in big-AI would be 
the most destructive, in requiring extra cognitive load to check what was 
coming in through every sense channel all the time.  Certainly, as much as we 
must live by habit, because doing everything through the prefrontal cortex all 
the time is exhausting (go to a strange country, wake up in the middle of the 
night, where are the light switches in this country and how do they work?), 
there clearly are whole sensory modalities that we have just taken for granted 
as long as we could.  I have assumed that the audiovisual channel of watching a 
person say something was near the top of that list.

Clearly a few years ago, deepfakes suddenly took laziness off the table for 
that channel.   The one help was that human-generated nonsense still takes 
human time, on which there is some limit.

But if we have machine-generated nonsense, delivered through machine-generated 
rendering, we can put whole servers onto it full-time.  Sort of like bitcoin 
mining.  Burn a lot of irreplaceable carbon fuel to generate something of no 
value and some significant social cost.

So I assume there is some component of the society that is bored and already 
doing this (?)

Eric



On Feb 28, 2023, at 9:10 PM, Gillian Densmore <gil.densm...@gmail.com> wrote:

This John Oliver piece might amuse and/or mortify you.
https://www.youtube.com/watch?v=Sqa8Zo2XWc4&ab_channel=LastWeekTonight

On Tue, Feb 28, 2023 at 4:00 PM Gillian Densmore <gil.densm...@gmail.com> wrote:


On Tue, Feb 28, 2023 at 2:06 PM Jochen Fromm <j...@cas-group.net> wrote:
The "Transformer" movies are like the "Resident evil" movies based on a similar idea: we take a 
simple, almost primitive story such as "cars that can transform into alien robots" or "a bloody fight 
against a zombie apocalypse" and throw lots of money at it.

But maybe deep learning and large language models are the same: we take a 
simple idea (gradient descent learning for deep neural networks) and throw lots 
of money (and data) at it. In this sense "transformer" is a perfect name for the 
architecture, isn't it?

-J.
😁😍🖖👍🤔

-------- Original message --------
From: Gillian Densmore <gil.densm...@gmail.com>
Date: 2/28/23 1:47 AM (GMT+01:00)
To: The Friday Morning Applied Complexity Coffee Group <friam@redfish.com>
Subject: Re: [FRIAM] Magic Harry Potter mirrors or more?

Transformer architecture works because it's cybertronian technology. And is so 
advanced as to be almost magic.

On Mon, Feb 27, 2023 at 3:51 PM Jochen Fromm <j...@cas-group.net> wrote:
Terrence Sejnowski argues that the new AI super chatbots are like a magic Harry Potter mirror that 
tells the user what he wants to hear: "When people discover the mirror, it seems to provide 
truth and understanding. But it does not. It shows the deep-seated desires of anyone who stares 
into it". ChatGPT, LaMDA, LLaMA and other large language models would "take in our words 
and reflect them back to us".
https://www.nytimes.com/2023/02/26/technology/ai-chatbot-information-truth.html

It is true that large language models have absorbed an unimaginably huge amount of 
text, but what if our prefrontal cortex works in the same way?
https://direct.mit.edu/neco/article/35/3/309/114731/Large-Language-Models-and-the-Reverse-Turing-Test

I think it is possible that the "transformer" architecture is so successful 
because it is, like the cortical columns in the neocortex, a modular solution to the 
problem of what comes next in an unpredictable world.
https://en.wikipedia.org/wiki/Cortical_column

-J.

-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/
