By Sanjay Vadlamani (CISA & CISM) and Scott Madenburg (CIA, CISA, CRMA)
California's proposed Senate Bill 1047 aims to wrangle the Wild West of AI with regulations based on computational power. But the technology is advancing so fast that imposing effective controls is like lassoing a herd of sprinting horses: challenging, unpredictable, and demanding a strategic approach to avoid chaos. Regulation is needed, yet a one-size-fits-all approach could backfire and hinder the very progress it seeks to foster. Sanjay Vadlamani (CISA & CISM) and Scott Madenburg (CIA, CISA, CRMA) offer a risk management perspective, drawing on insights from AI experts Anjney Midha (General Partner, a16z) and Derrick Harris on the a16z Podcast.
Regulating AI is a balancing act
Let's dissect the situation, focusing on how to achieve effective AI safety monitoring and regulation.
Flawed Foundation of SB 1047:
Netting Minnows, Not Sharks: SB 1047 targets AI models based on the computing power used to train them (measured in FLOPs). Rapid advancements mean this threshold could soon capture many more models, affecting not just tech giants but also startups and academic research. It is like fishing for sharks with a fine-mesh net: you catch harmless fish and miss the real threats.
Floating-point operations (FLOPs) measure the total amount of computation used to train a deep learning model; the related rate, FLOPS (operations per second), measures hardware throughput. SB 1047 expresses its threshold in total training compute.
Outdated Metrics, Unrealistic Expectations: AI technology evolves at breakneck speed. Fixed compute thresholds quickly become obsolete, failing to capture the true risks posed by newer, more efficient models (see the back-of-the-envelope sketch after this list). Think of navigating a modern city with an 1800s map: you'll likely get lost.
Hazardous Capability: The bill defines catastrophic harm broadly, as any harm exceeding $500 million, and adds a catch-all clause for unforeseen incidents. Assessing the hazardous capability of an AI model before deployment is like predicting the damage from a student science experiment gone wrong: supervision is limited, the range of possible outcomes is wide, and the consequences can be costly and far-reaching.
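To make the compute-threshold concern concrete, here is a minimal Python sketch, offered purely as an illustration. It uses the common rule-of-thumb estimate of roughly 6 × parameters × training tokens operations to train a transformer and compares the result to a fixed cutoff of 10^26 operations, the order of magnitude cited in SB 1047. The model names, sizes, and token counts are invented for the example, not figures from the bill or from any real system.

```python
# Illustrative sketch: why a fixed training-compute threshold is a moving target.
# The 6 * params * tokens rule of thumb and the model sizes below are assumptions
# for illustration only; 1e26 reflects the bill's proposed order of magnitude.

THRESHOLD_FLOPS = 1e26  # total training operations used as the regulatory cutoff

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough transformer training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6.0 * n_parameters * n_training_tokens

# Hypothetical models (sizes chosen only to illustrate the point)
models = {
    "large frontier model": (1.0e12, 2.0e13),  # 1T params, 20T tokens
    "mid-size open model":  (7.0e10, 1.5e13),  # 70B params, 15T tokens
    "startup / lab model":  (7.0e9,  2.0e12),  # 7B params, 2T tokens
}

for name, (params, tokens) in models.items():
    flops = estimated_training_flops(params, tokens)
    status = "covered" if flops > THRESHOLD_FLOPS else "not covered"
    print(f"{name}: ~{flops:.1e} FLOPs -> {status}")
```

Under these illustrative numbers only the largest run crosses the line; as training efficiency improves, highly capable models can sit well below it while routine projects drift above it over time, which is exactly the mismatch described above.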
Key Risks of Unregulated AI
Before diving into solutions, let's consider the key risks we need to address now and as AI continues to evolve:
Bias and Discrimination: AI models trained on biased data can perpetuate societal inequalities, like a loan approval system that disproportionately penalizes certain demographics (a simple fairness check is sketched after this list). How can we ensure AI systems reduce, rather than entrench, those inequalities?
Privacy Violations: AI can collect and analyze vast amounts of personal data. How can we ensure data is collected and used appropriately?
Lack of Transparency: The inner workings of complex AI models can be opaque, making it difficult to identify and address potential risks. Can we develop effective frameworks for explainable AI?
Security Vulnerabilities: Malicious actors will exploit vulnerabilities in AI systems to cause harm. How can we build robust security measures?
Job Displacement: AI-powered automation could lead to widespread job losses. How do we ensure a smooth transition for the workforce?
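To ground the bias risk in something testable, here is a minimal Python sketch of a disparate-impact check on loan-approval outcomes, using the widely cited four-fifths (80%) rule of thumb. The records and group labels are invented for illustration; a real audit would use the organization's own outcome data and a broader set of fairness tests.

```python
# Illustrative sketch: a simple disparate-impact check on approval outcomes.
# The records below are invented; the 0.8 cutoff is the common "four-fifths rule" heuristic.

from collections import defaultdict

# Each record: (demographic group, approved?) -- hypothetical data
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
approvals = defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += int(approved)

rates = {g: approvals[g] / totals[g] for g in totals}
reference = max(rates.values())  # highest approval rate serves as the reference

for group, rate in rates.items():
    ratio = rate / reference
    flag = "review for disparate impact" if ratio < 0.8 else "ok"
    print(f"{group}: approval rate {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")
```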
A More Nuanced Approach: The Risk-Based Compliance Framework for AI
This framework prioritizes flexibility and responsiveness, tailoring regulatory requirements to the potential risks and specific use cases of AI models. Here's how it works:
Focus on High-Risk Applications: Regulations would target specific high-risk applications and potential malicious uses of AI, rather than the models themselves (a simple risk-tiering sketch follows this list). Imagine regulating firearms based on their intended use, not simply their existence.
Adaptable Guidance: Establish clear guidelines for training costs, model capabilities, and regulatory thresholds, subject to regular review and updates to keep pace with technological advancements. Think of them as living documents that evolve alongside AI, the way a coach and players adjust their strategy to the changing dynamics of a game.
Collaborative Development: A diverse group of stakeholders would be involved, including governance, risk, and compliance (GRC) professionals, industry experts, academic researchers, and internal auditors. A successful approach takes a village, not a single voice, much as a symphony needs many different instruments playing together to produce a coherent piece of music.
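To show what a use-case-driven, risk-tiered approach might look like in practice, here is a minimal Python sketch that assigns obligations by intended application rather than by model size. The tiers, use cases, and obligations are illustrative assumptions, not provisions of SB 1047 or of any existing framework.

```python
# Illustrative sketch: risk-based obligations keyed to the application, not the model.
# Tiers, use cases, and obligations below are invented for illustration.

RISK_TIERS = {
    "high": {
        "use_cases": {"credit decisions", "medical diagnosis", "critical infrastructure"},
        "obligations": ["pre-deployment risk assessment", "independent audit", "incident reporting"],
    },
    "medium": {
        "use_cases": {"hiring screening", "content recommendation"},
        "obligations": ["documented testing", "bias monitoring"],
    },
    "low": {
        "use_cases": {"spam filtering", "grammar suggestions"},
        "obligations": ["basic documentation"],
    },
}

def obligations_for(use_case: str) -> tuple[str, list[str]]:
    """Return the risk tier and obligations for a given application, regardless of model size."""
    for tier, spec in RISK_TIERS.items():
        if use_case in spec["use_cases"]:
            return tier, spec["obligations"]
    return "unclassified", ["case-by-case review"]

tier, duties = obligations_for("credit decisions")
print(f"credit decisions -> {tier} risk: {', '.join(duties)}")
```

In this model the tier definitions, not the code, are the living document: the stakeholder group reviews and updates them as the technology and its uses change.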
Regulating AI is a balancing act. This framework offers a more nuanced and sustainable path for governing AI development in California, one that prioritizes effective AI safety monitoring and regulation.
Are regulations like SB 1047 the right approach to monitoring AI?