Yes. The point of quoting that part of the article is that, as Marcus noted, 
Altman exhibits two conflicting behaviors: 1) optimism about AI and 2) prepping 
for apocalypse. One *could* give him the benefit of the doubt. People are 
complex. He's extracted plenty of rent from the earth. So why not *hedge* and 
prep for the worst while working to mitigate it?

On the other hand, one might say he's acting in bad faith. There are two options 
there, too. Sartre's bad-faith actor is actually fooling himself. So Altman's 
disintegrated behaviors are evidence that he's lying to himself (as well as to 
us). The cynical take is that Altman is an apocalyptic profiteer: he knows the 
end is coming, his exploited profits will keep him and his safe, and tough luck 
to the rest of us.

https://futurism.com/the-byte/openai-ceo-survivalist-prepper

I don't really care one way or the other (or about the grays in between). 
Optimism is poison, though. This recent paper is interesting on that front:

Random Number Simulations Reveal How Random Noise Affects the Measurements and 
Graphical Portrayals of Self-Assessed Competency
https://digitalcommons.usf.edu/numeracy/vol9/iss1/art4/

Maybe we all think we're better than average. But the lucky compound that by 
being born on third base and thinking they hit a triple.
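
The paper's point can be seen with a toy simulation: draw skill and 
self-assessment as *independent* random numbers, bin people by measured skill, 
and the classic "unskilled and unaware" plot shape emerges from noise alone via 
regression to the mean. A quick sketch (my own construction to illustrate the 
artifact, not the paper's actual method):

```python
import random

random.seed(42)
n = 10_000

# Skill and self-assessment drawn independently: by construction there is
# NO real relationship between how good people are and how good they
# think they are.
skill = [random.random() for _ in range(n)]
self_assessment = [random.random() for _ in range(n)]

# Bin people into quartiles by *measured* skill, then average each
# quartile's self-assessment -- the standard Dunning-Kruger-style plot.
pairs = sorted(zip(skill, self_assessment))
quartiles = [pairs[i * n // 4:(i + 1) * n // 4] for i in range(4)]

for q, group in enumerate(quartiles, start=1):
    mean_skill = sum(s for s, _ in group) / len(group)
    mean_self = sum(a for _, a in group) / len(group)
    print(f"Q{q}: mean skill {mean_skill:.2f}, mean self-assessment {mean_self:.2f}")
```

Every quartile's self-assessment hovers near 0.5, so the bottom quartile looks 
wildly overconfident and the top looks modest, with zero psychology involved.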

On 5/9/23 00:34, Tom Johnson wrote:
It doesn't have to be either/or. I suspect a mix of the two will most likely 
evolve, as has been the case with the whole Digital Revolution.
TJ

=======================
Tom Johnson
Inst. for Analytic Journalism
Santa Fe, New Mexico
505-577-6482
=======================

On Mon, May 8, 2023, 9:43 PM Pieter Steenekamp <piet...@randcontrols.co.za> wrote:

    People have different ideas about AI. Naomi Klein thinks that the idea that 
AI will solve all our problems is a big joke. She thinks the tech people are 
trying to trick us! She thinks AI is not just a tool but also a creation of the 
people who made it. Naomi is afraid that if we keep believing in this lie, we 
won't fix the real problems we have.

    On the other hand, Sam Altman is excited about AI! He thinks AI can help us 
solve things like diseases and climate change, and even drive us around and 
cook for us! He doesn't think AI will take over the world or hurt people. Sam 
thinks humans will always be in charge of AI.

    So, who's right? I don't know! My magic ball's batteries are dead, so I 
can't tell you. But I guess we'll have to wait and see what happens!

    On Mon, 8 May 2023 at 23:42, Marcus Daniels <mar...@snoutfarm.com> wrote:

        He's not lying, he is running his softmax function at a higher 
temperature to collect more samples in the vicinity of the truth.
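
Marcus's quip maps onto a real mechanism, for what it's worth: dividing the 
logits by a temperature T before the softmax flattens the resulting 
distribution, so sampling wanders further from the single most likely token. A 
minimal sketch (toy logits of my own invention, not anything from an actual model):

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; higher temperature flattens them."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
cool = softmax(logits, temperature=0.5)  # sharper: mass piles onto the top logit
warm = softmax(logits, temperature=2.0)  # flatter: more samples "in the vicinity"
```

As T approaches 0 the sampler collapses to greedy argmax; crank T up and the 
vicinity of the truth gets a lot roomier.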

         > On May 8, 2023, at 12:50 PM, glen <geprope...@gmail.com> wrote:
         >
         > AI machines aren’t ‘hallucinating’. But their makers are.
         > 
https://www.theguardian.com/commentisfree/2023/may/08/ai-machines-hallucinating-naomi-klein
         >> Is all of this overly dramatic? A stuffy and reflexive resistance 
to exciting innovation? Why expect the worst? Altman reassures us: “Nobody wants to 
destroy the world.” Perhaps not. But as the ever-worsening climate and extinction 
crises show us every day, plenty of powerful people and institutions seem to be just 
fine knowing that they are helping to destroy the stability of the world’s 
life-support systems, so long as they can keep making record profits that they 
believe will protect them and their families from the worst effects. Altman, like 
many creatures of Silicon Valley, is himself a prepper: back in 2016, he boasted: “I 
have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the 
Israeli Defense Force and a big patch of land in Big Sur I can fly to.”
         >> I’m pretty sure those facts say a lot more about what Altman 
actually believes about the future he is helping unleash than whatever flowery 
hallucinations he is choosing to share in press interviews.
         >


--
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ

-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
 1/2003 thru 6/2021  http://friam.383.s1.nabble.com/
