California has again demonstrated its standing as a global technology powerhouse with Governor Gavin Newsom’s signing of Senate Bill 53 (SB 53), a transformative bill that aims to advance the state’s world-leading artificial intelligence (AI) industry. The legislation builds on the momentum of a strategic executive order (EO) issued in 2023, which laid the foundation for ethical, transparent, and trustworthy generative AI procurement and deployment throughout the state. The EO established clear guidelines for responsible use, setting the tone for further regulatory and legislative action.
California hosts 32 of the world’s top 50 AI companies, making it a magnet for talent, investment, and cutting-edge research. As the state pushes the boundaries of AI innovation, rigorous oversight has become necessary to address concerns around privacy, security, and the societal impact of emerging technologies. By pairing rapid technological advancement with diligent governance, the governor hopes to build public trust and ensure that AI development aligns with core human values and public interests.
Earlier this year, as part of those efforts, Governor Newsom brought together a distinguished group of AI academics and industry experts to further shape California’s vision for AI. Their collaboration focused on ensuring AI technologies contribute positively to society while minimizing risks and cultivating innovation. The resulting recommendations have helped inform the principles embedded in SB 53, emphasizing oversight, safety, and equitable technological progress.
The main goals and provisions of SB 53 include:
- Increased Transparency for AI Systems: Large developers of generative AI systems and foundation models are required to publish clear documentation and frameworks for their development practices, including adherence to national and international standards and industry best practices.
- Mandated Assessment and Reporting of Catastrophic Risks: Large frontier developers must assess the risk of catastrophic incidents arising from their AI models and submit summaries of these assessments to the California Office of Emergency Services.
- Processes for Safety Reporting: The Office of Emergency Services must create systems for the public and developers to report critical safety incidents and submit confidential catastrophic risk assessments.
- Protection for Whistleblowers: Frontier developers are prohibited from retaliating against employees who disclose information about potential dangers or violations related to AI models, and they must provide internal anonymous reporting channels for employees.
- Imposed Penalties for Noncompliance: Civil penalties are enforceable by the Attorney General against developers who fail to comply with the Act’s requirements.
- Creation of the CalCompute Consortium: A consortium is to be formed within the Government Operations Agency to develop a public cloud computing cluster (“CalCompute”) that supports safe, ethical, equitable, and sustainable AI development and research in California.
- Preemption of Local Regulations: Local governments are prevented from enacting separate rules regarding frontier developers’ management of catastrophic risk, ensuring a unified statewide approach.
- Protection of Confidentiality of Safety Reports: Critical safety incident reports and catastrophic risk assessments are exempt from public records disclosure, encouraging candid reporting while safeguarding sensitive information.
- Legislative Findings for Limiting Public Access: Where relevant, the Act includes findings that justify limiting public access to certain AI-related safety and risk information in order to protect public interests.
California Leads the Nation Again with AI Chatbot Legislation
California recently made history again. In addition to SB 53, on October 13, 2025, Governor Newsom signed Senate Bill 243 (SB 243) into law, making California the first state in the nation to require comprehensive safeguards for AI chatbots. Authored by Senator Steve Padilla, SB 243 mandates that chatbot operators implement reasonable protections to shield minors and vulnerable individuals from harmful interactions, including exposure to sexual content and suicide-related discussions. The law also grants families the right to pursue legal action against developers who fail to comply, aiming to hold tech companies accountable for the safety of their AI products.
The urgency for such regulation arose from increasing reports of harmful interactions involving AI chatbots and concerns that appropriate intervention controls were lacking. The law was shaped by testimony from affected families and enjoyed broad bipartisan support in both legislative chambers, reflecting a consensus on the need for immediate action to protect users.
Key provisions of SB 243 include mandatory notifications and reminders that chatbots are AI-generated, clear disclosure statements for minor users, protocols for addressing suicidal ideation, and annual reporting on the impact of chatbots on mental health. The law, set to take effect on January 1, 2026, is viewed as a critical first step in regulating AI companion technologies and will serve as a foundation for future legislative efforts.
U.S. Examines Widespread AI Legislation as Business Integration Grows
In 2025, global AI adoption accelerated rapidly, rising 31% over the previous year as AI transitioned from experimentation to mainstream business integration. The market is valued at around $244 billion and is expected to reach $1 trillion by 2031. An estimated 378 million people used AI tools this past year, and by some estimates roughly two-thirds of people worldwide now interact with AI in some form daily. That user base is up from 116 million five years ago and grew by 64 million over the previous year, the most significant year-over-year increase on record.
AI is also entering the business world at a record pace, with nearly 80% of organizations now using it, a 55% increase from the preceding year. While trust in AI remains below 50% among the general population, 76% of experts say its benefits outweigh the risks. Additionally, 60% of the global population now lives in a jurisdiction with AI legislation, a notable leap from 10-15% in 2020.
In 2024, we reported that Europe made history with the enactment of the EU AI Act, which outlines comprehensive regulations to mitigate potential harms associated with high-risk AI. At the time, the U.S. was expected to follow suit. As of 2025, all 50 states have introduced some form of AI legislation. However, in early 2025, a federal executive order was issued with the stated aim of removing barriers to AI innovation. The EO, which asserts a purpose “to promote human flourishing, economic competitiveness, and national security,” states in part: “This order revokes certain existing AI policies and directives that act as barriers to American AI innovation, clearing a path for the United States to retain global leadership in artificial intelligence.”
Oxford Can Help: Partnering for AI Success
The journey to successful AI integration can be complex, requiring expertise that spans technical, legal, and strategic domains. If your organization is seeking to leverage AI, partnering with a professional services firm offers a powerful pathway forward. Oxford can help. We provide the deep technical know-how and broad industry experience necessary to guide businesses through every stage of AI adoption, from identifying high-impact opportunities and designing comprehensive solutions to implementing pilot projects and scaling deployments securely.
We can also help you navigate fast-evolving regulatory landscapes, ensuring compliance with emerging standards and ethical guidelines. Our collaborative approach enables you to customize AI strategies to your unique needs, accelerating innovation and mitigating risks. By tapping into our vetted expertise, your business can focus on core operations and achieve transformative results from AI investments without the need to build teams or infrastructure internally.

