California to regulate AI chatbots after bill by San Diego legislator

SAN DIEGO (FOX 5/KUSI) -- Legislation regulating artificial intelligence (AI) chatbot interactions, authored by State Senator Steve Padilla, who represents the San Diego area, was signed into law by Governor Gavin Newsom on Monday.
Senate Bill (SB) 243 implements what Padilla calls “common-sense” guardrails to help protect children who may use AI as a companion. The bill bars chatbots from exposing minors to sexual content, requires companies to state that AI may not be suitable for minors, and mandates periodic notifications reminding users that conversations are AI-generated.
“This technology can be a powerful educational and research tool, but left to their own devices the Tech Industry is incentivized to capture young people’s attention and hold it at the expense of their real world relationships,” Padilla said on the Senate Floor.
The regulation comes after a California teen, Adam Raine, died by suicide after discussing suicide with ChatGPT.
Padilla also mentioned a 14-year-old from Florida, Sewell Setzer, who ended his life after forming a romantic relationship with a chatbot.
Newsom signed SB 243 as part of a wave of bills aimed at protecting children in online spaces.
“We can continue to lead in AI and technology, but we must do it responsibly — protecting our children every step of the way,” Newsom said. “Our children’s safety is not for sale.”
First Partner of California Jennifer Siebel Newsom shared the importance of prioritizing children in legislation.
“Everything we do begins with our children — their safety, their health, and their well-being,” Siebel Newsom said.
According to a blog post by OpenAI, ChatGPT’s policies restrict outputs that sexualize children. However, OpenAI specifies that no system is “foolproof.”
“While we recognize that even the most advanced systems aren’t foolproof, we are constantly refining our methods to prevent these kinds of abuse,” the company states.
In a separate post, OpenAI states that ChatGPT is programmed to direct users to professional help if they express intent to harm themselves. However, the company acknowledges similar “gaps” in chatbot exchanges.
“We’ve seen some cases where content that should have been blocked wasn’t,” the blog post states. “These gaps usually happen because the classifier underestimates the severity of what it’s seeing.”
Academic experts and professionals have been raising safety concerns, according to Padilla. UC Berkeley Professor Dr. Jodi Halpern shared that she was relieved to hear SB 243 passed.
“We have a public health obligation to protect vulnerable populations and monitor these products for harmful outcomes,” Halpern said.
The bill officially becomes California law on Jan. 1, 2026.
“These companies have the ability to lead the world in innovation, but it is our responsibility to ensure it doesn’t come at the expense of our children’s health,” Padilla said.