I sense enthusiasm here for these hypotheses and interpretations put forward by the FTC.

I believe instead that, beyond the fact that these are indeed hypotheses
and interpretations rather than established facts, they are considerations
largely born of cognitive biases and of a cultural background with little
understanding of how LLMs work.

To claim that OpenAI would be «directly liable for any harm caused by a
product (tellingly called a "product" and not a "service") that allows
third parties, i.e. its users, to generate false or defamatory statements
without their knowledge» is like saying that if I put online a generator
of random but coherent sentences that took their cue from the user's
input, and if the user's input were the first and last name of a real
person, and if the software happened to generate sentences that would be
defamatory of that person were the user to publish them, and if the user
then did publish them, then the one liable for any defamation would be me,
and not the user who supplied the input and then disseminated the output.

I find this to be an essentially idiotic position.

Fabio

On Sat, 15 Jul 2023 at 12:37, Maurizio Borghi via nexa
<nexa@server-nexa.polito.it> wrote:
>
> The FTC document can be read here:
>
> https://www.washingtonpost.com/documents/67a7081c-c770-4f05-a39e-9d02117e50e8.pdf?itid=lk_inline_manual_4
>
> The investigation concerns the possibility that products incorporating Large 
> Language Models constitute unfair or deceptive practices with respect to 1) 
> privacy and data security and 2) harm to consumers, including reputational 
> harm (defamation, etc.). It consists of 49 "interrogatories" and 17 requests 
> for documentation, including, for instance, a request for internal documents 
> such as guidelines or "dictionaries" (sic) explaining what OpenAI means by: 
> a) "freely and openly available data", b) "reinforcement learning from human 
> feedback" and c) "hallucination" or "hallucinate".
>
> Among the many interesting aspects of the investigation is the fact that the 
> FTC appears to consider OpenAI directly liable for any harm caused by a 
> product (tellingly called a "product" and not a "service") that allows third 
> parties, i.e. its users, to generate false or defamatory statements without 
> their knowledge. And this despite the "warnings" about so-called 
> hallucinations and the like.
>
> In its overall approach, the investigation also seems to me to sweep away the 
> distinction between "input" and "output" on which much of the AI-exculpatory 
> narrative rests, namely the argument that the use of training data is lawful 
> in any case, regardless of the type of data and how it is used, while it is 
> only at the level of the output that some liability may arise, possibly to be 
> "shared" with the user of the moment. In the FTC's framing, by contrast, the 
> commercial practice underlying the product in question is to be considered as 
> a whole, and in relation to the overall effects it produces on the consumer.
>
> Finally, some of the interrogatories (e.g. no. 22) deal in detail with the 
> collection and processing of personal data, asking OpenAI to specify the 
> methods it employs to exclude personal information from use as training data. 
> Here the FTC clearly takes it as settled that processing personal data as 
> training data without consent is unlawful. Perhaps further proof that the 
> much-maligned objections raised by the Italian Garante were not so unfounded 
> after all?
>
> Kind regards,
>
> _______________
> Maurizio Borghi
> Università di Torino
> https://www.dg.unito.it/persone/maurizio.borghi
> Co-Director Nexa Center for Internet & Society
>
>
> On Thu, 13 Jul 2023 at 17:40, Daniela Tafani 
> <daniela.taf...@unipi.it> wrote:
>>
>> The FTC is investigating whether ChatGPT harms consumers
>>
>> The agency’s demand for OpenAI’s documents about AI risks marks the company’s 
>> greatest U.S. regulatory threat to date
>>
>> By Cat Zakrzewski
>>  Updated July 13, 2023 at 10:44 a.m. EDT|Published July 13, 2023 at 6:00 
>> a.m. EDT
>> The Federal Trade Commission has opened an expansive investigation into 
>> OpenAI, probing whether the maker of the popular ChatGPT bot has run afoul 
>> of consumer protection laws by putting personal reputations and data at risk.
>>
>> The agency this week sent the San Francisco company a 20-page demand for 
>> records about how it addresses risks related to its AI models, according to 
>> a document reviewed by The Washington Post. The salvo represents the most 
>> potent regulatory threat to date to OpenAI’s business in the United States, 
>> as the company goes on a global charm offensive to shape the future of 
>> artificial intelligence policy.
>>
>> Analysts have called OpenAI’s ChatGPT the fastest-growing consumer app in 
>> history, and its early success set off an arms race among Silicon Valley 
>> companies to roll out competing chatbots. The company’s chief executive, Sam 
>> Altman, has emerged as an influential figure in the debate over AI 
>> regulation, testifying on Capitol Hill, dining with lawmakers and meeting 
>> with President Biden and Vice President Harris.
>>
>> But now the company faces a new test in Washington, where the FTC has issued 
>> multiple warnings that existing consumer protection laws apply to AI, even 
>> as the administration and Congress struggle to outline new regulations. 
>> Senate Majority Leader Charles E. Schumer (D-N.Y.) has predicted that new AI 
>> legislation is months away.
>>
>> The FTC’s demands of OpenAI are the first indication of how it intends to 
>> enforce those warnings. If the FTC finds that a company violates consumer 
>> protection laws, it can levy fines or put a business under a consent decree, 
>> which can dictate how the company handles data. The FTC has emerged as the 
>> federal government’s top Silicon Valley cop, bringing large fines against 
>> Meta, Amazon and Twitter for alleged violations of consumer protection laws.
>>
>> The FTC called on OpenAI to provide detailed descriptions of all complaints 
>> it had received of its products making “false, misleading, disparaging or 
>> harmful” statements about people. The FTC is investigating whether the 
>> company engaged in unfair or deceptive practices that resulted in 
>> “reputational harm” to consumers, according to the document.
>>
>> The FTC also asked the company to provide records related to a security 
>> incident that the company disclosed in March when a bug in its systems 
>> allowed some users to see payment-related information, as well as some data 
>> from other users’ chat history. The FTC is probing whether the company’s 
>> data security practices violate consumer protection laws. OpenAI said in a 
>> blog post that the number of users whose data was revealed to someone else 
>> was “extremely low.”
>>
>> OpenAI and the FTC did not immediately respond to requests for comment sent 
>> on Thursday morning.
>>
>> News of the probe comes as FTC Chair Lina Khan is likely to face a combative 
>> hearing Thursday before the House Judiciary Committee, where Republican 
>> lawmakers are expected to analyze her enforcement record and accuse her of 
>> mismanaging the agency. Khan’s ambitious plans to rein in Silicon Valley 
>> have suffered key losses in court. On Tuesday, a federal judge rejected the 
>> FTC’s attempt to block Microsoft’s $69 billion deal to buy the video game 
>> company Activision.
>>
>> The agency has repeatedly warned that action is coming on AI, in speeches, 
>> blog posts, op-eds and news conferences. In a speech at Harvard Law School 
>> in April, Samuel Levine, the director of the agency’s Bureau of Consumer 
>> Protection, said the agency was prepared to be “nimble” in getting ahead of 
>> emerging threats.
>>
>> “The FTC welcomes innovation, but being innovative is not a license to be 
>> reckless,” Levine said. “We are prepared to use all our tools, including 
>> enforcement, to challenge harmful practices in this area.”
>>
>> The FTC also has issued several colorful blog posts about its approach to 
>> regulating AI, at times invoking popular science fiction movies to warn the 
>> industry against running afoul of the law. The agency has warned against AI 
>> scams, using generative AI to manipulate potential customers and falsely 
>> exaggerating the capabilities of AI products. Khan also participated in a 
>> news conference with Biden administration officials in April about the risk 
>> of AI discrimination.
>>
>> “There is no AI exemption to the laws on the books,” Khan said at that event.
>>
>> The FTC’s push faced swift pushback from the tech industry. Adam Kovacevich, 
>> the founder and CEO of the industry coalition Chamber of Progress, said it’s 
>> clear that the FTC has oversight of data security and misrepresentation. But 
>> he said it’s unclear if the agency has the authority to “police defamation 
>> or the contents of ChatGPT’s results.”
>>
>> "AI is making headlines right now, and the FTC is continuing to put flashy 
>> cases over securing results,” he said.
>>
>> Among the information the FTC is seeking from Open AI is any research, 
>> testing or surveys that assess how well consumers understand “the accuracy 
>> or reliability of outputs” generated by its AI tools. The agency made 
>> extensive demands about records related to ways OpenAI’s products could 
>> generate disparaging statements, asking the company to provide records of 
>> the complaints people send about its chatbot making false statements.
>>
>>
>> The agency’s focus on such fabrications comes after numerous high-profile 
>> reports of the chatbot producing incorrect information that could damage 
>> people’s reputations. Mark Walters, a radio talk show host in Georgia, sued 
>> OpenAI for defamation, alleging the chatbot made up legal claims against him. 
>> The lawsuit alleges that ChatGPT falsely claimed that Walters, the host of 
>> “Armed American Radio,” was accused of defrauding and embezzling funds from 
>> the Second Amendment Foundation. The claim was produced in response to a 
>> question about a lawsuit involving the foundation, to which Walters is not a 
>> party, according to the complaint.
>>
>> ChatGPT also said that a lawyer had made sexually suggestive comments and 
>> attempted to touch a student on a class trip to Alaska, citing an article 
>> that it said had appeared in The Washington Post. But no such article 
>> existed, the class trip never happened and the lawyer said he was never 
>> accused of harassing a student, The Post reported previously.
>>
>> The FTC in its request also asked the company to provide extensive details 
>> about its products and the way it advertises them. It also demanded details 
>> about the policies and procedures that OpenAI takes before it releases any 
>> new product to the public, including a list of times that OpenAI held back a 
>> large language model because of safety risks.
>>
>>
>> The agency also demanded a detailed description of the data that OpenAI uses 
>> to train its products, which mimic humanlike speech by ingesting text, 
>> mostly scraped from Wikipedia, Scribd and other sites across the open web. 
>> The agency also asked OpenAI to describe how it refines its models to 
>> address their tendency to “hallucinate,” making up answers when the models 
>> don’t know the answer to a question.
>>
>> OpenAI also has to turn over details about how many people were affected by 
>> the March security incident and information about all the steps it took to 
>> respond.
>>
>> The FTC’s records request, which is called a Civil Investigative Demand, 
>> primarily focuses on potential consumer protection abuses, but it also asks 
>> OpenAI to provide some details about how it licenses its models to other 
>> companies.
>>
>> The United States has trailed other governments in drafting AI legislation 
>> and regulating the privacy risks associated with the technology. Countries 
>> within the European Union have taken steps to limit U.S. companies’ chatbots 
>> under the bloc’s privacy law, the General Data Protection Regulation. Italy 
>> temporarily blocked ChatGPT from operating there due to data privacy 
>> concerns, and Google had to postpone the launch of its chatbot Bard after 
>> receiving requests for privacy assessments from the Irish Data Protection 
>> Commission. The European Union is also expected to pass AI legislation by 
>> the end of the year.
>>
>> There is a flurry of activity in Washington to catch up. On Tuesday, Schumer 
>> hosted an all-senator briefing with officials from the Pentagon and 
>> intelligence community to discuss the national security risks of artificial 
>> intelligence, as he works with a bipartisan group of senators to craft new 
>> AI legislation. Schumer told reporters after the session that it’s going to 
>> be “very hard” to regulate AI, as lawmakers try to balance the need for 
>> innovation with ensuring there are proper safeguards on the technology.
>>
>> On Wednesday, Vice President Harris hosted a group of consumer protection 
>> advocates and civil liberties leaders at the White House for a discussion on 
>> the safety and security risks of AI.
>>
>> “It is a false choice to suggest that we either can advance innovation or we 
>> protect consumers,” Harris said. “We can do both.”
>>
>>
>> https://www.washingtonpost.com/technology/2023/07/13/ftc-openai-chatgpt-sam-altman-lina-khan/
>>
>
>
>
> --
> _______________
> Maurizio Borghi
> Università di Torino
> https://www.dg.unito.it/persone/maurizio.borghi
> Co-Director Nexa Center for Internet & Society
>
> My Webex room: https://unito.webex.com/meet/maurizio.borghi
_______________________________________________
nexa mailing list
nexa@server-nexa.polito.it
https://server-nexa.polito.it/cgi-bin/mailman/listinfo/nexa
