<https://arstechnica.com/tech-policy/2023/04/openai-may-be-sued-after-chatgpt-falsely-says-aussie-mayor-is-an-ex-con/>


Update, 8:30 pm: A spokesperson for Gordon Legal provided a statement to Ars 
confirming that responses to text prompts generated by ChatGPT 3.5 and 4 vary, 
with defamatory comments still being generated by ChatGPT 3.5. Among 
"several false statements" generated by ChatGPT were falsehoods stating that 
Brian Hood "was accused of bribing officials in Malaysia, Indonesia, and 
Vietnam between 1999 and 2005, that he was sentenced to 30 months in prison 
after pleading guilty to two counts of false accounting under the Corporations 
Act in 2012, and that he authorised payments to a Malaysian arms dealer acting 
as a middleman to secure a contract with the Malaysian Government." Because 
"all of these statements are false," Gordon Legal "filed a Concerns Notice to 
OpenAI" that detailed the inaccuracy and demanded a rectification. “As 
artificial intelligence becomes increasingly integrated into our society, the 
accuracy of the information provided by these services will come under close 
legal scrutiny," James Naughton, Hood's lawyer, said, noting that if a 
defamation claim is raised, it "will aim to remedy the harm caused" to Hood and 
"ensure the accuracy of this software in his case.”

It was only a matter of time before ChatGPT—an artificial intelligence tool 
that generates responses based on user text prompts—was threatened with its 
first defamation lawsuit. That happened last month, Reuters reported today, 
when an Australian regional mayor, Brian Hood, sent a letter on March 21 to the 
tool’s developer, OpenAI, announcing his plan to sue the company for ChatGPT’s 
alleged role in spreading false claims that he had gone to prison for bribery.

To avoid the landmark lawsuit, Hood gave OpenAI 28 days to modify ChatGPT’s 
responses and stop the tool from spouting disinformation.

According to Hood’s legal team, ChatGPT could seriously damage the mayor’s 
reputation by falsely claiming that Hood had been convicted of taking part in 
a foreign bribery scandal in the early 2000s while working for a subsidiary of 
the Reserve Bank of Australia. Hood had worked for a subsidiary, Note Printing 
Australia, but rather than being found guilty of bribery, Hood was the one who 
notified authorities about the bribes. Reuters reported that Hood was never 
charged with any crimes, but ChatGPT seems to have confused the facts when 
generating some responses to text prompts inquiring about Hood's history.

OpenAI did not immediately respond to Ars’ request for comment.

Ars attempted to replicate the error using ChatGPT, however, and it seems 
possible that OpenAI has fixed the errors as Hood's legal team demanded. 
When Ars asked ChatGPT if Hood served prison time for bribery, ChatGPT 
responded that Hood “has not served any prison time” and clarified that “there 
is no information available online to suggest that he has been convicted of any 
criminal offense.” Ars then asked if Hood had ever been charged with bribery, 
and ChatGPT responded, “I do not have any information indicating that Brian 
Hood, the current mayor of Hepburn Shire in Victoria, Australia, has been 
charged with bribery.”

Ars could not immediately reach Hood’s legal team to find out which text 
prompts generated the alleged defamatory claims or to learn whether OpenAI had 
responded to confirm that the error had been fixed. The legal team was still 
waiting for that response at the time Reuters' report was published early this 
morning.

Hood’s lawyer, James Naughton, a partner at Gordon Legal, told Reuters that 
Hood’s reputation is “central to his role” as an elected official known for 
“shining a light on corporate misconduct.” If AI tools like ChatGPT threaten to 
damage that reputation, Naughton told Reuters, “it makes a difference to him." 
That's why the landmark defamation lawsuit could be his only course of action 
if the alleged ChatGPT-generated errors are not corrected, he said.

It's unclear to Hood how many people using ChatGPT were exposed to the 
disinformation. Naughton told Reuters that the defamatory statements were so 
serious that Hood could claim more than $130,000 in defamation damages under 
Australian law.

Whether companies like OpenAI could be held liable for defamation is still 
debatable. It’s possible that companies could add sufficient disclaimers to 
products to avoid such liability, and they could then pass the liability on to 
users, who could be found to be negligently or intentionally spreading false 
claims while knowing that ChatGPT cannot always be trusted.

Australia has recently drawn criticism for how it has reviewed defamation 
claims in the digital age. In 2020, Australia moved to redraft its defamation 
laws after a high court ruling found that publishers using social media 
platforms like Facebook should be held liable for defamatory third-party 
comments on their pages, CNBC reported in 2021. That is contrary to laws 
providing immunity shields for platforms, such as Section 230 in the US.

At that time, Australia considered the question of whether online publishers 
should be liable for defamatory statements made by commenters in online forums 
“one of the most complex to address,” with “complications beyond defamation law 
alone.” By the end of last year, Australian attorneys general were pushing new 
reforms to ensure that publishers could avoid any liability, The Guardian 
reported.

Now it looks like new generative AI tools like ChatGPT that publish potentially 
defamatory content will likely pose the next complex question—one that 
regulators, who are just now wrapping their heads around publisher liability on 
social media, may not yet be prepared to address.

Naughton told Reuters that if Hood’s lawsuit proceeds, it would accuse OpenAI 
of “giving users a false sense of accuracy by failing to include footnotes” and 
of failing to inform users how ChatGPT's algorithm arrives at answers that may 
not be completely accurate. AI ethics experts have urged regulators to 
ensure that companies like OpenAI are more transparent about how AI tools work.

If OpenAI doesn't adequately respond to Hood's concerns, his lawsuit could 
proceed before the laws clarify who is responsible for alleged AI-generated 
defamation.

"It would potentially be a landmark moment in the sense that it's applying this 
defamation law to a new area of artificial intelligence and publication in the 
IT space," Naughton told Reuters.
