Yep, that's the fundamental problem with the "chat" usage pattern. But it's much less of a problem 
with other usage patterns. For example, we have a project at UCSF where we're using GPT-3.5 to help us with 
embeddings for full-text biomedical articles. That opens up several other usage 
patterns that preserve the inherent uncertainty, letting the user gain new insight without the 
"mansplaining" confidence of chat mode. We're way upstream of the clinic so far, though. FDA 
approval for such a "device" might be sticky.
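
For a rough sense of what I mean by an uncertainty-preserving usage pattern, here's a minimal sketch (purely illustrative; the titles and vectors are placeholders, not our actual pipeline): rank articles by cosine similarity to a query embedding and show the scores themselves, rather than a single confident answer.

import numpy as np

# Placeholder embeddings standing in for vectors returned by an embedding model;
# the point here is the usage pattern, not the particular model.
rng = np.random.default_rng(0)
article_titles = ["paper A", "paper B", "paper C"]
article_vecs = rng.normal(size=(3, 1536))
query_vec = rng.normal(size=1536)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Surface ranked matches together with their similarity scores, so the user
# sees graded evidence instead of one authoritative-sounding reply.
ranked = sorted(((cosine(query_vec, v), t) for v, t in zip(article_vecs, article_titles)), reverse=True)
for score, title in ranked:
    print(f"{title}: similarity {score:+.3f}")

The scores stay visible on purpose; nothing here pretends to be more certain than the geometry warrants.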

On 3/1/23 08:19, Barry MacKichan wrote:
When I bought back my company about 25 years ago, the mantra for programmers 
was “Google the error message!” Now ChatGPT will write some of the code for 
you. The job of programming still requires a lot of knowledge and experience 
since using ChatGPT-generated code without quality checking is far from 
failsafe.

—Barry

On 1 Mar 2023, at 11:04, Marcus Daniels wrote:

    I have seen doctors run internet searches in front of me. If an LLM is given 
all the medical journals, biology textbooks, and hospital records for training, 
that could be a useful resource for society.

    -----Original Message-----
    From: Friam <friam-boun...@redfish.com> On Behalf Of Santafe
    Sent: Wednesday, March 1, 2023 4:45 AM
    To: The Friday Morning Applied Complexity Coffee Group <friam@redfish.com>
    Subject: Re: [FRIAM] Magic Harry Potter mirrors or more?

    This is fun. Will have to watch it when I have time.

    Is there a large active genre just now combining ChatGPT with deepfakes, to 
generate video of whomever-saying-whatever?

    I was thinking a couple of years ago about what direction in big-AI would 
be the most destructive, in requiring extra cognitive load to check what was 
coming in through every sense channel all the time. Certainly, as much as we 
must live by habit, because doing everything through the prefrontal cortex all 
the time is exhausting (go to a strange country, wake up in the middle of the 
night, where are the light switches in this country and how do they work?), 
there clearly are whole sensory modalities that we have just taken for granted 
as long as we could. I have assumed that the audiovisual channel of watching a 
person say something was near the top of that list.

    Clearly a few years ago, deepfakes suddenly took laziness off the table for 
that channel. The one help was that human-generated nonsense still takes human 
time, on which there is some limit.

    But if we have machine-generated nonsense, delivered through 
machine-generated rendering, we can put whole servers onto it full-time. Sort 
of like bitcoin mining. Burn a lot of irreplaceable carbon fuel to generate 
something of no value and some significant social cost.

    So I assume there is some component of the society that is bored and 
already doing this (?)

    Eric


        On Feb 28, 2023, at 9:10 PM, Gillian Densmore <gil.densm...@gmail.com> 
wrote:

        This John Oliver piece might amuse and/or mortify you.
        https://www.youtube.com/watch?v=Sqa8Zo2XWc4&ab_channel=LastWeekTonight

        On Tue, Feb 28, 2023 at 4:00 PM Gillian Densmore 
<gil.densm...@gmail.com> wrote:


        On Tue, Feb 28, 2023 at 2:06 PM Jochen Fromm <j...@cas-group.net> wrote:
        The "Transformer" movies are like the "Resident evil" movies based on a similar idea: we 
take a simple, almost primitive story such as "cars that can transform into alien robots" or "a bloody 
fight against a zombie apocalypse" and throw lots of money at it.

        But maybe deep learning and large language models are the same: we take 
a simple idea (gradient descent learning for deep neural networks) and throw 
lots of money (and data) at it. In this sense "transformer" is a perfect name for 
the architecture, isn't it?
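
        A toy sketch of that simple idea, assuming nothing beyond numpy (the numbers are made up for illustration): gradient descent on a small least-squares problem. Scaling this same loop up to billions of parameters and tokens is, roughly, the "lots of money and data" part.

import numpy as np

# Toy gradient descent: fit weights w to minimize the mean squared error of Xw - y.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)

w = np.zeros(3)
lr = 0.05
for step in range(200):
    grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of the mean squared error
    w -= lr * grad                         # take a step downhill

print("recovered weights:", np.round(w, 2))  # lands close to [2.0, -1.0, 0.5]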

        -J.
        😁😍🖖👍🤔

        -------- Original message --------
        From: Gillian Densmore <gil.densm...@gmail.com>
        Date: 2/28/23 1:47 AM (GMT+01:00)
        To: The Friday Morning Applied Complexity Coffee Group
        <friam@redfish.com>
        Subject: Re: [FRIAM] Magic Harry Potter mirrors or more?

        The Transformer architecture works because it's Cybertronian technology, 
and it's so advanced as to be almost magic.

        On Mon, Feb 27, 2023 at 3:51 PM Jochen Fromm <j...@cas-group.net> wrote:
        Terrence Sejnowski argues that the new AI super chatbots are like a magic Harry Potter 
mirror that tells the user what he wants to hear: "When people discover the mirror, it seems 
to provide truth and understanding. But it does not. It shows the deep-seated desires of anyone who 
stares into it". ChatGPT, LaMDA, LLaMA and other large language models would "take in our 
words and reflect them back to us".
        https://www.nytimes.com/2023/02/26/technology/ai-chatbot-information-truth.html

        It is true that large language models have absorbed an unimaginably huge 
amount of text, but what if the prefrontal cortex in our brain works in the 
same way?
        https://direct.mit.edu/neco/article/35/3/309/114731/Large-Language-Models-and-the-Reverse-Turing-Test

        I think it is possible that the "transformer" architecture is so
        successful because it is - like the cortical columns in the neocortex
        - a modular solution to the problem of what comes next in an
        unpredictable world: https://en.wikipedia.org/wiki/Cortical_column
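
        And as a cartoon of "what comes next": a toy bigram predictor (just counts, no gradients, purely illustrative) that guesses the next word from whatever most often followed the current word in its tiny corpus. Transformers replace the count table with learned attention over the whole context, but the training target is the same next-token question.

from collections import Counter, defaultdict

# Toy "what comes next" model: count, for each word, which words followed it.
corpus = "the mirror shows the viewer what the viewer wants to see".split()
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    # Return the most frequent follower, or None if the word was never seen.
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))     # 'viewer' (followed "the" twice in the corpus)
print(predict_next("mirror"))  # 'shows'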

        -J.


--
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ
