I, too, believe that interventions inspired by animism, science fiction,
effective altruism, long-termism, and the like

do nothing at all to help us see the deceptive advertising, the scams, and
the harms of the present.

Ethics is a decoy.
In some cases, the legislator may perhaps be needed.
More and more often, though, it seems to me that one ought to call the police,
and that even the claim that new regulation is necessary is part of the
narrative package.

Along these lines, let me point to the excellent and disenchanted

“Early Thoughts on Generative AI”
Prepared Remarks of Commissioner Alvaro M. Bedoya, Federal Trade Commission
Before the International Association of Privacy Professionals
April 5, 2023


[...] [bold emphasis partly mine]

Last, let’s turn to the idea that the creators of this technology are “a little 
bit scared,” to quote the CEO of the company behind ChatGPT. Personally – and I 
say this with respect – I do not see the existential threats to our society that 
others do. 
Yet when you combine these statements with the unpredictability and 
inexplicability of these models, the sum total is something that we as consumer 
protection authorities have never reckoned with.
Let me put it this way. When the iPhone was first released, it was many things: 
a phone, a camera, a web browser, an email client, a calendar, and more.
Imagine launching the iPhone – having 100 million people using it – but not 
knowing what it can do or why it can do those things, all while claiming to be 
frightened of it.
That is what we’re facing today. So we need to think quickly about how these 
new dynamics map onto consumer protection law.

3. The Adult (Human) in the Room
And so, I’ll offer four observations that double as notes of caution.
• First, generative AI is regulated.
• Second, much of that law is focused on impacts to regular people. Not 
experts, regular people.
• Third, some of that law demands explanations. “Unpredictability” is rarely a 
defense.
• And fourth, looking ahead, regulators and society at large will need 
companies to do much more to be transparent and accountable.
Let’s start with that first point. There is a powerful myth out there that “AI 
is unregulated.” You see it pop up in New York Times op-ed columns, in civil 
society advocacy, and in scholarship. It has a powerful intuitive appeal — it 
just sounds right. How could these mysterious new technologies be regulated 
under our dusty old laws?
If you’ve heard this, or even said it, please take a step back and ask: Who 
does this idea help? It doesn’t help consumers, who feel increasingly helpless 
and lost. It doesn’t help most companies. It certainly doesn’t help privacy 
professionals like you, who now have to deal with investors and staff who think 
they’re operating in a law-free zone.
I think that this idea that “AI is unregulated” helps that small subset of 
companies who are uninterested in compliance. And we’ve heard similar lines 
before. “We’re not a taxi company, we’re a tech company.” “We’re not a hotel 
company, we’re a tech company.” These statements were usually followed by 
claims that state or local regulations could not apply to said companies.

The reality is, AI is regulated. Just a few examples:

• Unfair and deceptive trade practices laws apply to AI. At the FTC our core 
section 5 jurisdiction extends to companies making, selling, or using AI. If a 
company makes a deceptive claim using (or about) AI, that company can be held 
accountable. If a company injures consumers in a way that satisfies our test 
for unfairness when using or releasing AI, that company can be held accountable.
• Civil rights laws apply to AI. If you're a creditor, look to the Equal Credit 
Opportunity Act. If you're an employer, look to Title VII of the Civil Rights 
Act. If you're a housing provider, look to the Fair Housing Act.
• Tort and product liability laws apply to AI. There is no AI carve-out to 
product liability statutes, nor is there an AI carve-out to common law causes 
of action.
AI is regulated. Do I support stronger statutory protections? Absolutely. But 
AI does not, today, exist in a law-free environment.

Here’s the second thing. There’s a back-and-forth that’s playing out in the 
popular press. There will be a wave of breathless coverage – and then there 
will be a very dry response from technical experts, stressing that no, these 
machines are not sentient, they’re just mimicking stories and patterns they’ve 
been trained on. No, they are not emoting, they are just echoing the vast 
quantities of human speech that they have analyzed.
I worry that this debate may obscure the point. Because the law doesn’t turn on 
how a trained expert reacts to a technology – it turns on how regular people 
understand it.
At the FTC, for example, when we evaluate whether a statement is deceptive, we 
ask what a reasonable person would think of it. When analyzing unfairness, we 
ask whether a reasonable person could avoid the harms in question. In tort law, 
we have the “eggshell” plaintiff doctrine: If your victim is particularly 
susceptible to an injury you caused, that is on you.
The American Academy of Pediatrics has declared a national emergency in child 
and adolescent mental health. The Surgeon General says that we are going 
through an epidemic of loneliness.
I urge companies to think twice before they deploy a product that is designed 
in a way that may lead people to feel they have a trusted relationship with it 
or think that it is a real person. I urge companies to think hard about how 
their technology will affect people’s mental health – particularly kids and 
teenagers.
Third, I want to note that the law sometimes demands explanation – and that the 
inexplicability or unpredictability of a product is rarely a legally cognizable 
defense.
What do I mean by that?
Looking solely at laws that the FTC enforces, both the Fair Credit Reporting 
Act and the Equal Credit Opportunity Act require explanations for certain kinds 
of adverse decisions.
Under our section 5 authority, we have frequently brought actions against 
companies for the failure to take reasonable measures to prevent reasonably 
foreseeable risks. And the Commission has historically not responded well to 
the idea that a company is not responsible for its product because that 
product is a “black box” that is unintelligible or difficult to test.
I urge companies who are creating or using AI products for important 
eligibility decisions to closely consider that the ability to explain your 
product and predict the risks that it will generate may be critical to your 
ability to comply with the law.
Fourth and last, I want to end on a call for a maximum of transparency and 
accountability.
I recently saw that the technical report accompanying GPT-4 rejected the 
need to be transparent about the building blocks of the technology. It says:
“Given both the competitive landscape and the safety implications of 
large-scale models like GPT-4, this report contains no further details about 
the architecture (including model size), hardware, training compute, dataset 
construction, training method, or similar.”
This is a mistake. External researchers, civil society, and government need to 
be involved in analyzing and stress testing these models; it is difficult to 
see how that can be done with this kind of opacity.

4. Focusing on Threats Today
I keep thinking about the now-infamous survey conducted by Oxford University 
last year that found that the median expert gave a 5% chance that the long-run 
effect of advanced AI on humanity would be, and I quote, “extremely bad (e.g. 
human extinction).” I also keep thinking about the GPT-4 white paper’s focus on 
“safety.”
I’m worried that inchoate ideas of existential threats will make us – at least 
in the short and medium term – much less “safe.” I’m worried that these ideas 
are being used as a reason to provide less and less transparency. And I worry 
that they might distract us from all the ways that AI is already being used in 
our society today.
Automated systems new and old are routinely used today to decide who to parole, 
who to fire, who to hire, who deserves housing, who deserves a loan, who to 
treat in a hospital – and who to send home. These are the decisions that 
concern me the most. And I think we should focus on them.

<https://www.ftc.gov/system/files/ftc_gov/pdf/Early-Thoughts-on-Generative-AI-FINAL-WITH-IMAGES.pdf>


Regards,
Daniela



________________________________
From: nexa <nexa-boun...@server-nexa.polito.it> on behalf of M. Fioretti 
<mfiore...@nexaima.net>
Sent: Monday, 10 April 2023 19:13
To: nexa@server-nexa.polito.it
Subject: Re: [nexa] "Pausing AI Developments Isn't Enough. We Need to Shut it 
All Down"

On Mon, Apr 10, 2023 18:38:03 +0200, David Orban wrote:
> Skepticism about the risks of artificial intelligence can have various
> origins. Ambartsoumean and Yampolskiy have conducted an analysis of a number
> of these arguments.

thanks for the interesting links and, just to clarify my position:

I do not at all rule out that artificial intelligence may result
in the "destruction" of society; QUITE THE CONTRARY. I am and remain deeply
skeptical only about the ONE specific way Yudkowsky tells the story, i.e.
physical destruction through direct physical attacks, Skynet/Terminator-style.

Coincidentally, I have just found here:
https://continuations.com/post/714045304583880704/thinking-about-ai-part-3-existential-risk

someone who answers my question "can’t we just unplug it?" by saying that:

"in case you are still wondering why we can’t just unplug it: by the
time we discover it is a superintelligence it will have spread itself
across many computers and built deep and hard defenses for these. That
could happen for example by manipulating humans into thinking they are
building defenses for a completely different reason."

but for me that remains the scenario I have already discussed: if WE are
stupid enough to scatter something to the four winds while GIVING IT
direct control of the wrong buttons, then it is we who exterminate ourselves,
not that thing. On this note, I will close by pointing out another major
source of confusion:

> to take all necessary steps to ensure the responsible and ethical
> development of AI technologies."

the problem is UNCONTROLLED ADOPTION, not development. That is, even if
tonight they stopped forever developing AI more powerful than ChatGPT 3
or 4, the mere spread of plugins to every possible sector and use case
will already do enormous damage unless there are adequate interventions
of the right kinds and at the right levels. Which, after all, is one of
the main criticisms, if not THE criticism, of the famous moratorium letter.

That is, it is true that, as with any other software, the boundary between
development and wildfire-like diffusion is quite blurry, but forgetting
that it exists only adds to the confusion.

            Marco
--
Please help me write my NEXT MILLION WORDS for digital awareness:
https://mfioretti.substack.com
_______________________________________________
nexa mailing list
nexa@server-nexa.polito.it
https://server-nexa.polito.it/cgi-bin/mailman/listinfo/nexa
