I'll cheerfully leave political partisanship aside.  But if I may attribute 
this equally to both sides (and thus avoid partisanship), I'm with Joel ~and~ 
Lionel on this.  Most folks who misuse their power start out, at least, in 
hopes of doing good.  What I'm saying is that although we value altruism, I 
don't trust even altruists in the matter of exercising power, especially when 
in pursuit of The Good of Humanity.

Doesn't mean we won't keep building robots.  Doesn't even mean we shouldn't.  
But even altruists can be villains.  Ultron and Colossus both wanted to save 
the world, after all.

---
Bob Bridges, robhbrid...@gmail.com, cell 336 382-7313

/* The historian Macaulay famously said that the Puritans opposed bearbaiting 
not because it gave pain to the bears but because it gave pleasure to the 
spectators. The Puritans were right: Some pleasures are contemptible because 
they are coarsening. They are not merely private vices, they have public 
consequences in driving the culture's downward spiral.  -George Will, "The 
challenge of thinking lower", 2001-06-22 */

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Lionel B Dyck
Sent: Monday, May 11, 2020 11:22

Joel - can we please keep politics out of this listserv? Personally, I wouldn't 
trust anyone in power to act against their own self interests and that applies 
to politicians and anyone else with power (as in money, influence, etc.).

There are altruistic individuals in the world, and when it comes to the 
development of an AI robot one prays/hopes that the software developers who 
implement the code for the three laws are among them.
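A minimal sketch can show where the real difficulty with "implementing the 
three laws" lies: not in the rule itself, but in the predicates it takes for 
granted. This is a hypothetical Python fragment, not code from any real system; 
every name in it is invented for illustration.

```python
# A naive rendering of Asimov's First Law as a guard clause.
# The rule is trivial; the classifiers carry all the moral weight.
# classify(entity) decides who counts as "human", and
# harm_estimate(action, entity) decides what counts as "harm" --
# exactly the ambiguities Joel points out below.

def first_law_permits(action, affected_entities, classify, harm_estimate):
    """Allow an action only if it harms no entity classified as human."""
    for entity in affected_entities:
        if classify(entity) and harm_estimate(action, entity) > 0.0:
            return False
    return True

# A biased classifier quietly redefines who the law protects,
# without touching the law itself.
narrow_classifier = lambda e: e == "citizen"   # excludes "outsider"
harm = lambda action, e: 1.0 if action == "strike" else 0.0

print(first_law_permits("strike", ["citizen"], narrow_classifier, harm))
print(first_law_permits("strike", ["outsider"], narrow_classifier, harm))
```

The second call returns True: the bias in the classifier, not the law, makes 
the decision.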

-----Original Message-----
From: IBM Mainframe Discussion List <IBM-MAIN@LISTSERV.UA.EDU> On Behalf Of 
Joel C. Ewing
Sent: Monday, May 11, 2020 10:12 AM

I've greatly enjoyed Asimov's vision of future possibilities, but when I step 
back to reality it occurs to me that his perfect laws of robotics would have to 
be implemented by fallible human programmers.  Even if well-intentioned, how 
would they unambiguously convey to a robot the concepts of "human", "humanity", 
"hurt", and "injure" when there have always been minorities or "others" that 
are treated by one group of humans as sub-human to justify injuring them in the 
name of "protecting" them or protecting humanity? And then there is the issue 
of who might make the decision to build sentient robots: for example, who in 
our present White 
House would you trust to pay any heed to logic or scientific recommendations or 
long-term consequences, if they were given the opportunity to construct 
less-constrained AI robots that they perceived offered some short-term 
political advantage?

Humanity was also fortunate that when the hardware of Asimov's Daneel began to 
fail, he failed gracefully rather than becoming a menace to humanity.

--- On 5/11/20 8:43 AM, scott Ford wrote:
> Well done, Joel....I agree, but I can't help being curious about the 
> future of AI.
> A bit of Isaac Asimov....
>
> --- On Mon, May 11, 2020 at 9:25 AM Joel C. Ewing <jcew...@acm.org> wrote:
>>     And of course the whole point of Colossus, Dr Strangelove, War 
>> Games, Terminator,  Forbidden Planet, Battlestar Galactica, etc. was 
>> to try to make it clear to all the non-engineers and non-programmers 
>> (all of whom greatly outnumber us) why putting lethal force in the 
>> hands of any autonomous or even semi-autonomous machine is something 
>> with incredible potential to go wrong.  We all know that even if the 
>> hardware doesn't fail, which it inevitably will, that all software 
>> above a certain level of complexity is guaranteed to have bugs with 
>> unknown consequences.
>>     There is another equally cautionary genre in sci-fi about society 
>> becoming so dependent on machines as to lose the knowledge to 
>> understand and maintain the machines, resulting in total collapse 
>> when the machines inevitably fail.  I still remember my oldest sister 
>> reading E.M.
Forster, "The Machine Stops" (1909), to me when I was very young.
>>     Various Star Trek episodes used both of these themes as plots.
>> People can also break down with lethal side effects, but the 
>> potential damage one person can create is more easily contained by 
>> other people.  The only effective way to defend against a berserk lethal
>> machine may be with another lethal machine, and Colossus-Guardian 
>> suggests why that may be an even worse idea.
>>>
>>> -----Original Message-----
>>> From: Bob Bridges
>>> Sent: Sunday, May 10, 2020 10:21 PM
>>>
>>> I've always loved "Colossus: The Forbin Project".  Not many people 
>>> have seen it, as far as I can tell.  The only problem I have with
>>> that movie - well, the main problem - is that no programmer in the
>>> world would make such a system and then throw away the Stop button.
>>> No engineer would do that with a machine he built, either.  Too many
>>> things can go wrong.  But a fun movie, if you can ignore that.
>>>
>>> -----Original Message-----
>>> From: scott Ford
>>> Sent: Sunday, May 10, 2020 11:38
>>>
>>> Like the 1970 flick ‘Colossus: The Forbin Project’, where Colossus, 
>>> an American computer, and Guardian, a Russian computer, take over, 
>>> announcing ‘Colossus and Guardian: we are one’....

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN