The FTC document can be read here:

https://www.washingtonpost.com/documents/67a7081c-c770-4f05-a39e-9d02117e50e8.pdf?itid=lk_inline_manual_4

The investigation concerns whether products incorporating Large Language
Models amount to unfair or deceptive practices with respect to 1) privacy
and data security and 2) harm to consumers, including reputational harm
(defamation, etc.). It consists of 49 "interrogatories" and 17 requests for
documents, including, for example, a request for internal documents such as
guidelines or "dictionaries" (sic) explaining what OpenAI means by: a)
"freely and openly available data", b) "reinforcement learning from human
feedback" and c) "hallucination" or "hallucinate".

Among the many interesting aspects of the investigation is the fact that
the FTC appears to hold OpenAI directly liable for any harm caused by a
product (tellingly called a "product" rather than a "service") that allows
third parties, i.e. its users, to generate false or defamatory statements
without their knowledge. And this despite the "warnings" about so-called
hallucinations and the like.

In its general approach, the investigation also seems to me to clear the
field of the distinction between "input" and "output" on which much of the
AI-exculpatory narrative rests, namely the argument that the use of
training data is lawful in any case, regardless of the type of data and how
it is used, while liability can arise only at the level of the output,
possibly to be "shared" with whichever user happens to be involved. In the
FTC's framing, by contrast, the commercial practice behind the product in
question is to be considered as a whole, and in relation to its overall
effects on the consumer.

Finally, some of the interrogatories (e.g. no. 22) deal in detail with the
collection and processing of personal data, asking OpenAI to specify the
methods it uses to exclude personal information from its training data.
Here the FTC clearly takes it for granted that processing personal data as
training data without consent is unlawful. Perhaps further proof that the
much-maligned objections of the Italian Garante were not so unfounded after
all?

Kind regards,

_______________
*Maurizio Borghi*
Università di Torino
https://www.dg.unito.it/persone/maurizio.borghi
Co-Director Nexa Center for Internet & Society <https://nexa.polito.it/>


On Thu, Jul 13, 2023 at 5:40 PM Daniela Tafani <daniela.taf...@unipi.it>
wrote:

> The FTC is investigating whether ChatGPT harms consumers
> The agency’s demand for OpenAI’s documents about AI risks marks the
> company’s greatest U.S. regulatory threat to date
> By Cat Zakrzewski
> <https://www.washingtonpost.com/people/cat-zakrzewski/?itid=ai_top_zakrzewskic>
> Updated July 13, 2023 at 10:44 a.m. EDT | Published July 13, 2023 at 6:00
> a.m. EDT
> The Federal Trade Commission has opened an expansive investigation into
> OpenAI, probing whether the maker of the popular ChatGPT bot has run afoul
> of consumer protection laws by putting personal reputations and data at
> risk.
>
> The agency this week sent the San Francisco company a 20-page demand for
> records about how it addresses risks related to its AI models, according to
> a document reviewed by The Washington Post
> <https://www.washingtonpost.com/documents/67a7081c-c770-4f05-a39e-9d02117e50e8.pdf?itid=lk_inline_manual_4>.
> The salvo represents the most potent regulatory threat to date to OpenAI’s
> business in the United States, as the company goes on a global charm
> offensive
> <https://www.washingtonpost.com/technology/2023/04/09/sam-altman-openai-chatgpt/?itid=lk_inline_manual_4>
> to shape the future of artificial intelligence policy.
>
> Analysts have called OpenAI’s ChatGPT the fastest-growing consumer app in
> history, and its early success set off an arms race among Silicon Valley
> companies
> <https://www.washingtonpost.com/podcasts/post-reports/the-ai-arms-race-is-on/?itid=lk_inline_manual_5>
> to roll out competing chatbots. The company’s chief executive, Sam Altman,
> has emerged as an influential figure in the debate over AI regulation,
> testifying on Capitol Hill
> <https://www.washingtonpost.com/technology/2023/05/16/sam-altman-open-ai-congress-hearing/?itid=ap_catzakrzewski&itid=lk_inline_manual_5>,
> dining with lawmakers and meeting with President Biden
> <https://www.washingtonpost.com/technology/2023/05/04/white-house-ai-ceos-meeting/?itid=ap_catzakrzewski&itid=lk_inline_manual_5>
> and Vice President Harris.
>
> Big Tech was moving cautiously on AI. Then came ChatGPT.
> <https://www.washingtonpost.com/technology/2023/01/27/chatgpt-google-meta/?itid=lk_interstitial_manual_6>
>
> But now the company faces a new test in Washington, where the FTC has issued
> multiple warnings
> <https://www.washingtonpost.com/technology/2023/04/25/artificial-intelligence-bias-eeoc/?itid=ap_catzakrzewski&itid=lk_inline_manual_7>
> that existing consumer protection laws apply to AI, even as the
> administration and Congress struggle to outline new regulations.
> <https://www.washingtonpost.com/technology/2023/06/17/congress-regulating-ai-schumer/?itid=ap_catzakrzewski&itid=lk_inline_manual_7>
> Senate Majority Leader Charles E. Schumer (D-N.Y.) has predicted
> <https://www.washingtonpost.com/technology/2023/06/21/ai-regulation-us-senate-chuck-schumer/?itid=lk_inline_manual_7>
> that new AI legislation is months away.
>
> The FTC’s demands of OpenAI are the first indication of how it intends to
> enforce those warnings. If the FTC finds that a company violates
> consumer protection laws, it can levy fines or put a business under a consent
> decree
> <https://www.washingtonpost.com/technology/2022/09/12/mudge-twitter-ftc-consent-decrees/?itid=lk_inline_manual_10>,
> which can dictate how the company handles data. The FTC has emerged as the
> federal government’s top Silicon Valley cop, bringing large fines against
> Meta
> <https://www.washingtonpost.com/technology/2019/07/12/ftc-votes-approve-billion-settlement-with-facebook-privacy-probe/?itid=lk_inline_manual_10>,
> Amazon
> <https://www.washingtonpost.com/technology/2023/05/31/amazon-alexa-ring-ftc-lawsuit-settlement/?itid=lk_inline_manual_10> and
> Twitter
> <https://www.washingtonpost.com/technology/2022/05/25/twitter-fine-ftc/?itid=lk_inline_manual_10> for
> alleged violations of consumer protection laws.
>
> The FTC called on OpenAI to provide detailed descriptions of all
> complaints it had received of its products making “false, misleading,
> disparaging or harmful” statements about people. The FTC is investigating
> whether the company engaged in unfair or deceptive practices that resulted
> in “reputational harm” to consumers, according to the document.
>
> The FTC also asked the company to provide records related to a security
> incident that the company disclosed in March when a bug in its systems
> allowed some users to see payment-related information, as well as some
> data from other users’ chat history. The FTC is probing whether the
> company’s data security practices violate consumer protection laws. OpenAI
> said in a blog post <https://openai.com/blog/march-20-chatgpt-outage> that
> the number of users whose data was revealed to someone else was “extremely
> low.”
>
> OpenAI and the FTC did not immediately respond to requests for comment
> sent on Thursday morning.
>
> News of the probe comes as FTC Chair Lina Khan is likely to face a
> combative hearing
> <https://www.washingtonpost.com/politics/2023/07/12/tech-giants-racking-up-wins-antitrust-battles/?itid=ap_cristianolima&itid=lk_inline_manual_16>
> Thursday before the House Judiciary Committee, where Republican lawmakers
> are expected to analyze her enforcement record and accuse her of mismanaging
> <https://www.washingtonpost.com/technology/2023/03/09/musk-ftc-house-republicans-twitter/?itid=lk_inline_manual_16> the
> agency. Khan’s ambitious plans to rein in Silicon Valley have suffered key
> losses in court. On Tuesday, a federal judge rejected the FTC’s attempt
> <https://www.washingtonpost.com/technology/2023/07/11/microsoft-activision-ftc-decision/?itid=ap_catzakrzewski&itid=lk_inline_manual_16> to
> block Microsoft’s $69 billion deal to buy the video game company Activision.
>
> The agency has repeatedly warned that action is coming on AI, in speeches,
> blog posts
> <https://www.ftc.gov/business-guidance/blog/2023/02/keep-your-ai-claims-check>,
> op-eds and news conferences. In a speech at Harvard Law School
> <https://www.ftc.gov/system/files/ftc_gov/pdf/Remarks-to-JOLT-4-1-2023.pdf>
> in April, Samuel Levine, the director of the agency’s Bureau of Consumer
> Protection, said the agency was prepared to be “nimble” in getting ahead of
> emerging threats.
>
> “The FTC welcomes innovation, but being innovative is not a license to be
> reckless,” Levine said. “We are prepared to use all our tools, including
> enforcement, to challenge harmful practices in this area.”
>
> The FTC also has issued several colorful blog posts about its approach to
> regulating AI, at times invoking popular science fiction movies to warn the
> industry against running afoul of the law. The agency has warned
> <https://www.washingtonpost.com/technology/2023/04/25/artificial-intelligence-bias-eeoc/?itid=lk_inline_manual_22>
> against AI scams, using generative AI to manipulate potential customers and
> falsely exaggerating the capabilities of AI products. Khan also
> participated in a news conference with Biden administration officials in
> April about the risk of AI discrimination.
>
> “There is no AI exemption to the laws on the books,” Khan said at that
> event.
>
> The FTC’s push faced swift pushback from the tech industry. Adam
> Kovacevich, the founder and CEO of the industry coalition Chamber of
> Progress, said it’s clear that the FTC has oversight of data security and
> misrepresentation. But he said it’s unclear if the agency has the authority
> to “police defamation or the contents of ChatGPT’s results.”
>
> "AI is making headlines right now, and the FTC is continuing to put flashy
> cases over securing results,” he said.
>
> Among the information the FTC is seeking from OpenAI is any research,
> testing or surveys that assess how well consumers understand “the accuracy
> or reliability of outputs” generated by its AI tools. The agency made
> extensive demands about records related to ways OpenAI’s products could
> generate disparaging statements, asking the company to provide records of
> the complaints people send about its chatbot making false statements.
>
> The agency’s focus on such fabrications comes after numerous high-profile
> reports of the chatbot producing incorrect information that could damage
> people’s reputations. Mark Walters, a radio talk show host in Georgia, sued
> OpenAI for defamation, alleging the chatbot made up legal claims against
> him. The lawsuit
> <https://www.courthousenews.com/wp-content/uploads/2023/06/walters-openai-complaint-gwinnett-county.pdf> alleges
> that ChatGPT falsely claimed that Walters, the host of “Armed American
> Radio,” was accused of defrauding and embezzling funds from the Second
> Amendment Foundation. The statement was given in response to a question
> about a lawsuit involving the foundation, to which Walters is not a party,
> according to the complaint.
>
> ChatGPT also said that a lawyer had made sexually suggestive comments and
> attempted to touch a student on a class trip to Alaska, citing an article
> that it said had appeared in The Washington Post. But no such article
> existed, the class trip never happened and the lawyer said he was never
> accused of harassing a student, The Post reported previously
> <https://www.washingtonpost.com/technology/2023/04/05/chatgpt-lies/?itid=lk_inline_manual_33>.
>
> The FTC in its request also asked the company to provide extensive details
> about its products and the way it advertises them. It also demanded details
> about the policies and procedures that OpenAI takes before it releases any
> new product to the public, including a list of times that OpenAI held back
> a large language model because of safety risks.
>
> The agency also demanded a detailed description of the data that OpenAI
> uses to train its products, which mimic humanlike speech by ingesting text,
> mostly scraped from Wikipedia, Scribd and other sites across the open web.
> The agency also asked OpenAI to describe how it refines its models to
> address their tendency to “hallucinate,” making up answers
> <https://www.washingtonpost.com/technology/2023/05/30/ai-chatbots-chatgpt-bard-trustworthy/?itid=lk_inline_manual_36> when
> the models don’t know the answer to a question.
>
> OpenAI also has to turn over details about how many people were affected
> by the March security incident and information about all the steps it took
> to respond.
>
> The FTC’s records request, which is called a Civil Investigative Demand,
> primarily focuses on potential consumer protection abuses, but it also asks
> OpenAI to provide some details about how it licenses its models to other
> companies.
>
> Europe moves ahead on AI regulation, challenging tech giants’ power
> <https://www.washingtonpost.com/technology/2023/06/14/eu-parliament-approves-ai-act/?itid=ap_catzakrzewski&itid=lk_interstitial_manual_41>
>
> The United States has trailed other governments in drafting AI legislation
> and regulating the privacy risks associated with the technology. Countries
> within the European Union have taken steps to limit U.S. companies’
> chatbots under the bloc’s privacy law, the General Data Protection
> Regulation. Italy temporarily blocked
> <https://www.washingtonpost.com/world/2023/03/31/italy-ban-chatgpt-artificial-intelligence-privacy/?itid=lk_inline_manual_42>
> ChatGPT from operating there due to data privacy concerns, and Google had
> to postpone the launch of its chatbot Bard after receiving requests for
> privacy assessments from the Irish Data Protection Commission. The European
> Union is also expected to pass AI legislation by the end of the year.
>
> There is a flurry of activity in Washington to catch up. On Tuesday,
> Schumer hosted an all-senator briefing with officials from the Pentagon and
> intelligence community to discuss the national security risks of artificial
> intelligence, as he works with a bipartisan group of senators to craft new
> AI legislation. Schumer told reporters after the session that it’s going to
> be “very hard” to regulate AI, as lawmakers try to balance the need for
> innovation with ensuring there are proper safeguards on the technology.
>
> On Wednesday, Vice President Harris hosted a group of consumer protection
> advocates and civil liberties leaders at the White House for a discussion
> on the safety and security risks of AI.
>
> “It is a false choice to suggest that we either can advance innovation or
> we protect consumers,” Harris said. “We can do both.”
>
>
> https://www.washingtonpost.com/technology/2023/07/13/ftc-openai-chatgpt-sam-altman-lina-khan/


_______________________________________________
nexa mailing list
nexa@server-nexa.polito.it
https://server-nexa.polito.it/cgi-bin/mailman/listinfo/nexa
