Climate change and Fox news. They have nothing to do with AI.
-Original Message-
From: IBM Mainframe Discussion List On Behalf Of
Dick Williams
Sent: Wednesday, January 22, 2025 1:32 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: [EXTERNAL] Re: AI makes stuff up
We are talking AI here. Try to stay on topic.
Sent from Yahoo Mail for iPhone
On Wednesday, January 22, 2025, 2:21 PM, zMan
<059081901144-dmarc-requ...@listserv.ua.edu> wrote:
And how--looking back, I see him signing Dick Williams as "Dave B" in
posts! See thread
Re: Heh, "Sabre is Getting Off the Mainframe-One Way or Another"
Tsk. If you're gonna use a sock puppet, make sure you're wearing a matching
pair of socks. Or something like that.
On Wed, Jan 22, 2025 at 2:16
Looks like another alias for Bill Johnson/Dave Beagle/Dick Williams is among
us.
From: IBM Mainframe Discussion List on behalf of
Dick Williams <071a5827fb2c-dmarc-requ...@listserv.ua.edu>
Date: Wednesday, January 22, 2025 at 9:58 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: AI
Looks like Bill Johnson is back again under yet another alter ego.
-Original Message-
From: IBM Mainframe Discussion List On Behalf Of
Dick Williams
Sent: Wednesday, January 22, 2025 10:01 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: [EXTERNAL] Re: AI makes stuff up
AI is very error prone. Like Fox “news”? Dominion voting agrees. So will
Smartmatic voting soon.
Sent from Yahoo Mail for iPhone
On Wednesday, January 22, 2025, 10:47 AM, Matt Hogstrom
wrote:
Context is king.
Whenever I search for content related to z I prefix it like “z/os rmm rustable”
rpinion865 <042a019916dd-dmarc-requ...@listserv.ua.edu>
Sent: Wednesday, January 22, 2025 10:42 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: AI makes stuff up
External Message: Use Caution
I wonder if Google and other internet search engines use "AI"? Why do I ask? I
just tr
No, the part about "AI makes stuff up" is actually inherent in the
design of Large Language Model AI. An LLM draws inferences from statistical
relationships, which do not accurately
predict cause and effect and which depend on what fallible humans choose
If you were hoping to get hits for IBM publications, using the correct
product name, DFSMSrmm, would help. I'm trying to remember what
the sequence of names was when it first became available to customers but
it became DFSMSrmm with the advent of DFSMS/MVS in 1992.
On Wed, 22 Jan 2025 at 15:43, rp
I always prefix all those searches with "ibm".
If not, the result is more or less irrelevant.
Thomas Berg
On Wed, 22 Jan 2025 at 16:47, Matt Hogstrom wrote:
> Context is king.
>
> Whenever I search for content related to z I prefix it like “z/os rmm
> rustable” and that tends to eliminate random stu
Context is king.
Whenever I search for content related to z I prefix it like “z/os rmm rustable”
and that tends to eliminate random stuff.
AI is nothing more than a statistical guess based on internet data … not
surprising it doesn’t do so well with z/OS content as z/OS is niche as far as the
I wonder if Google and other internet search engines use "AI"? Why do I ask? I
just tried to find some information about DF/RMM's RDSTABLE lookup logic.
First, I used RMM RDSTABLE and got a bunch of hits. But nothing related to IBM's
RMM. I tried DFRMM and RDSTABLE. At least this time, I got so
Um, I don't think anyone said 'AI is dead'. The fact that there is,
and will be, a lot of investment into AI does not mean it will achieve
all stated goals nor that it will be without hiccups. Right now, AI is
very error prone, and all training that I have seen on the technology
warns exactly
The most valuable company in the world is NVIDIA. 3.6 trillion market cap.
Their main product is AI chipsets. I’ll bet many of the “AI is bad and makes
stuff up” folks were also “the mainframe is dead” people in 1995.
Sent from Yahoo Mail for iPhone
On Tuesday, January 21, 2025, 8:22 PM, Dick
Experts just today confirmed an investment of $500 billion to build AI data
centers. No matter what this group believes, AI is real and the future of
technology.
https://www.cnn.com/2025/01/21/tech/openai-oracle-softbank-trump-ai-investment?cid=ios_app
Sent from Yahoo Mail for iPhone
On Tues
I want the truth, always. Like manmade global warming is real. Today, the truth
is whatever bull people want to believe. You can find people who believe the
earth is flat. (It isn’t) That’s why I always deferred to the experts here. Not
the ones who think they are experts but the actual experts.
Would cut all the *llsh*t out and save time and money. Wait a minute - that
makes too much sense. If something tells you what you want to hear all the time,
what happens to the stuff you really need to hear to keep a close eye on, yes,
all the a**hol** out there who want to make/take it all?
“Don’t anthropomorphise computers; they don’t like it.” 😊
(Oldie but goodie)
From: IBM Mainframe Discussion List on behalf of Bob
Bridges <0587168ababf-dmarc-requ...@listserv.ua.edu>
Date: Wednesday, 15 January 2025 at 11:38
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: [EXTERNAL] Re: AI
Maybe waxing too philosophical for this listserv, but I don't think it's fair
to use words like "lazy" for any AI - any words, in fact, that have a moral
judgement attached. AIs, being just machines, do what they do. If Saddam
Hussein drops a political opponent in the wood chipper, no one thin
Additionally, asking the AI if it is any good at a particular task is
advisable. For example, ask Grok if it is the best for generating DFSORT
cards. Answer:
"I'm flattered you'd consider me for the task! However, I'm Grok, built by
xAI with a focus on providing helpful and truthful answers acros
I haven't used generative AI for almost a year now, but here is another
of my experiences...
I asked for a set of bylaws for a youth sports non-profit. I got what
I considered an incomplete set of bylaws (I've been involved with a half
dozen non-profits of various flavors, so had an idea of
It is all about the prompting to prevent the AI from being lazy and making
things up or mixing things up. I think it would be less of an issue the more
examples the AI consumed during training.
Rob
On Tue, Jan 14, 2025 at 6:50 PM Bob Bridges <
0587168ababf-dmarc-requ...@listser
Yeah, I like to think that if I were a manager, in any field (including
politics, God preserve me) I would want to include on my staff a few people who
disagree with me ... if, repeat, if they can present arguments for their
opinions. Or maybe I just flatter myself; maybe I would fall prey to the
Sounds like he had read and practiced Peopleware. Rare!
At 01:50 AM 1/14/2025, Lionel B Dyck wrote:
I worked for a manager who said multiple times that if all of his
directs only gave him the answers that they expected him to want to
hear, they were redundant and he would fire them. He
So AI fabricates stuff. Funny, so do humans and it’s humans whose information
feeds AI models.
Sent from Yahoo Mail for iPhone
On Tuesday, January 14, 2025, 2:51 AM, Lionel B Dyck
<057b0ee5a853-dmarc-requ...@listserv.ua.edu> wrote:
I worked for a manager who said multiple times that if a
I worked for a manager who said multiple times that if all of his
directs only gave him the answers that they expected him to want to
hear, they were redundant and he would fire them. He never did
fire anyone, that I know of, but he was always looking to be
challenged.
On Mon, Jan 13, 2025 at
I'm reminded of a quote attributed to Sam Goldwyn: "I don't want yes men
around me. I want everyone to tell the truth, even if it costs them their
jobs". When I first encountered it I thought of it as a mere joke. It was
only years later that I realized Goldwyn was telling the exact, literal
I think I had posted this back when ChatGPT first became available...
After doing multiple inquiries about various subjects, which returned
varying degrees of useful information, I decided to ask it to write me a
sort program in HLASM. It returned the following:
-Standard entry routine and
Again we have a reason to think back to Isaac Asimov.
1. A robot may not injure a human being or, through inaction,
allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except
where such orders would conflict with the First Law.
3. A robot must prote
On Mon, Jan 13, 2025 at 12:12 PM Lionel B Dyck
<057b0ee5a853-dmarc-requ...@listserv.ua.edu> wrote:
>
> [deleted]
> Giving false and/or misleading information can "allow a human being
> to come to harm"
>
The ancient Chinese curse of YES men, or YES robots.
--
Mike A Schwab, Springfield IL US
Actually, robots lying to humans eventually becomes R. Daneel Olivaw's
existence and purpose.
> -Original Message-
> From: IBM Mainframe Discussion List On
> Behalf Of Lionel B Dyck
> Sent: Monday, January 13, 2025 10:11 AM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subj
From: IBM Mainframe Discussion List on behalf of
Lionel B Dyck <057b0ee5a853-dmarc-requ...@listserv.ua.edu>
Sent: Monday, January 13, 2025 1:11 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: AI makes stuff up
External Message: Use Caution
The 3 laws
1. A robot may not injure a human being or, through inaction, allow a
human be
On Mon, 13 Jan 2025 12:11:07 -0600, Lionel B Dyck wrote:
>The 3 laws
>
>1. A robot may not injure a human being or, through inaction, allow a
>human being to come to harm.
>2. A robot must obey the orders given it by human beings, except where
>such orders would conflict with the First Law.
>3. A
As Pilate said, "What is truth?" Robots, like humans, have to judge
while dealing with probabilities of veracity, and so sometimes
misjudge.
On Mon, Jan 13, 2025 at 6:12 PM Lionel B Dyck
<057b0ee5a853-dmarc-requ...@listserv.ua.edu> wrote:
>
> The 3 laws
>
> 1. A robot may not injure a human b
The 3 laws
1. A robot may not injure a human being or, through inaction, allow a
human being to come to harm.
2. A robot must obey the orders given it by human beings, except where
such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection
does not conflict with the First or Second Law.
Telling the truth is not one of Asimov's three laws.
At 10:47 AM 1/13/2025, Bob Bridges wrote:
I don't see anything wrong with asking ChatGPT and checking out the
options it offers, if you feel you have the time. But if it doesn't
work, I'd just shrug and forget it. Those AI engines make stu
> This requires calls to a completely different function* than the answers I
> got.
>
> * need to use clock_gettime() or getrusage(), not times().
>
> -Original Message-
> From: IBM Mainframe Discussion List On Behalf Of
> Lionel B Dyck
> Sent: Monday, Ja
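For the footnote above about clock_gettime() and getrusage() versus times(): here is a minimal sketch, not from the thread, assuming an ordinary POSIX C environment (the busy loop and variable names are only illustrative), showing how the two calls report wall-clock and CPU time:

/* Minimal sketch: wall-clock time via clock_gettime(CLOCK_MONOTONIC)
   and CPU time via getrusage(RUSAGE_SELF), instead of times(). */
#include <stdio.h>
#include <time.h>
#include <sys/resource.h>

int main(void)
{
    struct timespec start, end;
    struct rusage usage;

    clock_gettime(CLOCK_MONOTONIC, &start);       /* wall-clock start */

    for (volatile long i = 0; i < 10000000L; i++) /* stand-in for real work */
        ;

    clock_gettime(CLOCK_MONOTONIC, &end);         /* wall-clock end */
    getrusage(RUSAGE_SELF, &usage);               /* CPU time used so far */

    double wall = (end.tv_sec - start.tv_sec)
                + (end.tv_nsec - start.tv_nsec) / 1e9;
    double cpu  = usage.ru_utime.tv_sec + usage.ru_utime.tv_usec / 1e6;

    printf("wall: %.6f s, user CPU: %.6f s\n", wall, cpu);
    return 0;
}

Both calls give sub-second resolution, which is one common reason to prefer them over the clock-tick granularity of times().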
From: IBM Mainframe Discussion List On Behalf Of
Lionel B Dyck
Sent: Monday, January 13, 2025 10:50 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: AI makes stuff up
And the AI will conflate information from multiple sources - for
example it may take information from Linux and apply it to an answer
for z/OS.
Trust but verify is always a wise move.
On Mon, Jan 13, 2025 at 10:48 AM Bob Bridges
<0587168ababf-dmarc-requ...@listserv.ua.edu> wrote:
>
> I don't
I don't see anything wrong with asking ChatGPT and checking out the options it
offers, if you feel you have the time. But if it doesn't work, I'd just shrug
and forget it. Those AI engines make stuff up sometimes. I gather their
target is to write something that is grammatical and often even