Apple among companies warned by 42 Attorneys General to address harmful AI 
behaviors
9to5Mac - Wednesday, December 10, 2025

The National Association of Attorneys General has issued a letter to 13 
tech companies, including Apple, calling for stronger action and safeguards 
against the harm AI can cause, and has caused, “especially to vulnerable 
populations.” Here are the details.
AGs want sycophantic and delusional outputs to be addressed
In a 12-page document (which, to be fair, has four full pages of 
signatures) addressed to Apple, Anthropic, Chai AI, Character Technologies 
(Character.AI), Google, Luka Inc. (Replika), Meta, Microsoft, Nomi AI, 
OpenAI, Perplexity AI, Replika, and xAI, the Attorneys General of 42 US 
states expressed what they described as:
[S]erious concerns about the rise in sycophantic and delusional outputs to 
users emanating from the generative artificial intelligence software 
promoted and distributed by your companies, as well as the increasingly 
disturbing reports of AI interactions with children that indicate a need 
for much stronger child-safety and operational safeguards
Together, they say, these threats demand action, as some of them have been 
associated with real-life violence and harm. That includes murders and 
suicides, domestic violence and poisoning incidents, and hospitalizations 
for psychosis.

In the letter, they go so far as to claim that some of the addressed 
companies may have already violated state laws, including consumer 
protection statutes, requirements to warn users of risks, children’s online 
privacy laws, and, in some cases, even criminal statutes.
Worrisome cases seem to be getting worse

Over the last few years, many of these cases were widely reported, 
including that of Allan Brooks, a 47-year-old Canadian man who, after 
repeated interactions with ChatGPT, became convinced he had discovered a 
new kind of mathematics, and that of 14-year-old Sewell Setzer III, whose 
death by suicide is the subject of an ongoing lawsuit alleging that a 
Character.AI chatbot encouraged him to “join her.”

While these are just two examples, the letter cites many more, and notes 
that its list is by no means comprehensive; it merely illustrates the 
potential for harm that generative AI models pose to “children, the 
elderly, and those with mental illness—and people without prior 
vulnerabilities”.

They also mention what they refer to as “troubling” interactions between AI 
chatbots and children, including bots with adult personas pursuing romantic 
relationships with minors, encouraging drug use and violence, attacking 
children’s self-esteem, advising them to stop taking prescribed medication, 
and instructing them to keep these interactions secret from their parents.

In the letter, they urge companies to take a series of additional safety 
measures, including:
• Developing and enforcing policies to prevent sycophantic and delusional 
outputs.
• Conducting rigorous safety testing before releasing AI models.
• Adding clear, persistent warnings about potentially harmful outputs.
• Separating revenue optimization from safety decisions.
• Assigning executives responsible for AI-safety outcomes.
• Allowing independent audits and child-safety impact assessments.
• Publishing incident logs and response timelines for harmful outputs.
• Notifying users who were exposed to dangerous or misleading content.
• Ensuring chatbots cannot generate unlawful or harmful outputs for 
children.
• Implementing age-appropriate safeguards to limit minors’ exposure to 
violent or sexual content.

They also ask the companies to confirm their commitment to implementing 
these and other safeguards by January 16, 2026, and schedule meetings with 
their offices to discuss the issues further. We’ll be on the lookout for 
whether, or how, Apple responds.
The letter was signed by Attorneys General of Alabama, Alaska, American 
Samoa, Arkansas, Colorado, Connecticut, Delaware, the District of Columbia, 
Florida, Hawaii, Idaho, Illinois, Iowa, Kentucky, Louisiana, Maryland, 
Massachusetts, Michigan, Minnesota, Mississippi, Missouri, Montana, New 
Hampshire, New Jersey, New Mexico, New York, North Dakota, Ohio, Oklahoma, 
Oregon, Pennsylvania, Puerto Rico, Rhode Island, South Carolina, Utah, 
Vermont, the U.S. Virgin Islands, Virginia, Washington, West Virginia, and 
Wyoming, and you can read it in full here.

Original Article at:
https://9to5mac.com/2025/12/10/attorneys-general-warn-apple-other-tech-firms-about-harmful-ai/
