Good morning,

Daniela Tafani <daniela.taf...@unipi.it> writes:

[...]

> <https://www.law.kuleuven.be/ai-summer-school/open-brief/open-letter-manipulative-ai>

«Open Letter: We are not ready for manipulative AI – urgent need for
action»

--8<---------------cut here---------------start------------->8---

[...] But one of the main risks associated with human-imitating AI was
reflected in the recent chatbot-incited suicide in Belgium: the risk of
manipulation. While this tragedy illustrates one of the most extreme
consequences of this risk, emotional manipulation can also manifest
itself in more subtle forms. As soon as people get the feeling that they
interact with a subjective entity, they build a bond with this
"interlocutor" – even unconsciously – that exposes them to this risk and
can undermine their autonomy. This is hence not an isolated
incident. Other users of text-generating AI also described ([1], ed. note)
its manipulative effects.

[...] It is, however, in our human nature to react emotionally to
realistic interactions, even without wanting it. This also means that
merely obliging companies to clearly indicate that “this is an AI system
and not a human being” is not a sufficient solution.

[...] The European Union is currently working on new legislation that
will impose stronger rules on “high-risk” AI systems and stricter
liability on their suppliers, the so-called AI Act. However, the
original proposal does not classify chatbots and generative AI systems
as “high risk”, and their providers must only inform users that it is a
chatbot and not a human being. A prohibition on manipulation was
included, but only insofar as the manipulation leads to 'physical or
mental harm', which is by no means easy to prove.

[...] In the meantime, we ask that all necessary measures be taken –
through data protection law, consumer law, and if need be the imposition
of targeted moratoria – to prevent the tragic case of our compatriot
from repeating itself. Let this be a wake-up call to us all. The AI
playtime is over: it's time to draw lessons and take responsibility.


--8<---------------cut here---------------end--------------->8---

it is obvious that anyone who PEDDLES a system of this kind as a
"psychotherapist" or even just as a "listening helpline" should be
stopped immediately, and in cases like the one cited above of (alleged?)
incitement to suicide a judicial investigation should be opened and any
resulting trial held...

...and I would like to _underline_ that SALAMI do not land in a legal
"cosmic void", as explained with /authority/ (not authoritarianism) by
Alvaro M. Bedoya [2]:

--8<---------------cut here---------------start------------->8---

Let’s start with that first point. There is a powerful myth out there
that “AI is unregulated.” You see it pop up in New York Times op-ed
columns, in civil society advocacy, and in scholarship. It has a
powerful intuitive appeal — it just sounds right. How could these
mysterious new technologies be regulated under our dusty old laws?

If you’ve heard this, or even said it, please take a step back and ask:
Who does this idea help? It doesn’t help consumers, who feel
increasingly helpless and lost. It doesn’t help most companies. It
certainly doesn’t help privacy professionals like you, who now have to
deal with investors and staff who think they’re operating in a law-free
zone.

I think that this idea that “AI is unregulated” helps that small subset
of companies who are uninterested in compliance. And we’ve heard similar
lines before. “We’re not a taxi company, we’re a tech company.” “We’re
not a hotel company, we’re a tech company.” These statements were
usually followed by claims that state or local regulations could not
apply to said companies.

[...] I worry that this debate may obscure the point. Because the law
doesn’t turn on how a trained expert reacts to a technology – it turns
on how regular people understand it.  At the FTC, for example, when we
evaluate whether a statement is deceptive, we ask what a reasonable
person would think of it. When analyzing unfairness, we ask whether a
reasonable person could avoid the harms in question. In tort law, we
have the “eggshell” plaintiff doctrine: If your victim is particularly
susceptible to an injury you caused, that is on you.

The American Academy of Pediatrics has declared a national emergency in
child and adolescent mental health. The Surgeon General says that we are
going through an epidemic of loneliness.

I urge companies to think twice before they deploy a product that is
designed in a way that may lead people to feel they have a trusted
relationship with it or think that it is a real person. I urge companies
to think hard about how their technology will affect people’s mental
health – particularly kids and teenagers.

--8<---------------cut here---------------end--------------->8---

The dividing line _exists_: the product must not be designed in such a
way as to lead people to believe they have a relationship of _trust_
(with the product).

A **fundamental** distinction: the "product" is NOT the LLM, e.g. GPT-4,
but the INTERFACE, e.g. ChatGPT; it is the interface that /determines/
the human-machine interaction, often _masking_ it, making it opaque...
at times _precisely_ in order to deceive users, leading them to believe
that they are the ones controlling the software rather than the other
way around.

Wouldn't an interface that makes it clearly understood, to whoever uses
such a service, that this is /fiction/, a sort of interactive RPG
(perhaps complete with a /visualization/ of a dice roll to obtain the
answer :-O), be enough to /defuse/ that bond of /trust/?

Or wouldn't a "fiction" notice **included in every message** be enough
to /defuse/ that bond of /trust/?
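
Just to make the idea concrete, here is a minimal sketch (Python, with an
entirely hypothetical `generate_reply` standing in for whatever model
backend a service actually uses; none of these names are an existing API)
of an interface layer that prepends the fiction notice, plus the playful
dice roll, to every single reply; the point being that this framing lives
in the interface, not in the model:

--8<---------------cut here---------------start------------->8---
import random

# Loud assumption: generate_reply() is only a stand-in for the real
# text-generation backend; it is NOT a real API.
FICTION_NOTICE = (
    "[FICTION] This reply is generated by a statistical language model. "
    "It is role-play, not the words of a person."
)

def generate_reply(prompt: str) -> str:
    # Placeholder for the actual model call (hypothetical).
    return f"(model output for: {prompt!r})"

def fiction_framed_reply(prompt: str) -> str:
    """Wrap every model reply in an explicit fiction frame."""
    dice = random.randint(1, 20)   # the playful RPG-style "dice roll"
    reply = generate_reply(prompt)
    return f"{FICTION_NOTICE}\n[dice roll: {dice}/20]\n{reply}"

if __name__ == "__main__":
    print(fiction_framed_reply("How are you feeling today?"))
--8<---------------cut here---------------end--------------->8---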

Then we should also question parental responsibility in letting children
use certain products without any supervision... and in any case, on this
point too, the Garante per la protezione dei dati personali has
intervened on the basis of the GDPR.

In any case I agree with the article's conclusion: education; what is
needed is an /avalanche/ of computing education.

[...]

regards, 380°


P.S.: the analysis of the dangers of a carefree use of AI must not stop
at "direct" use by the "person in the street"; it must also (above all)
cover use by /alienated/ domain experts (financial advisers, doctors,
teachers...).



[1] https://www.theverge.com/2023/2/15/23599072/microsoft-ai-bing-personality-conversations-spy-employees-webcams

[2] thanks for the pointer!
(Msg-id:46c18e05fc1042be90f83bde3c241...@unipi.it); the full remarks (a
magnificent speech, worth framing) are here:
https://www.ftc.gov/system/files/ftc_gov/pdf/Early-Thoughts-on-Generative-AI-FINAL-WITH-IMAGES.pdf

-- 
380° (Giovanni Biscuolo public alter ego)

«Incompetent as we are,
 we have no standing whatsoever to suggest anything»

Disinformation flourishes because many people care deeply about injustice
but very few check the facts.  Ask me about <https://stallmansupport.org>.
