The Surge in AI Advocacy: A Deep Dive into Corporate Lobbying Efforts

Jan 24, 2025 at 9:03 PM
In a year marked by unprecedented regulatory uncertainty, corporate America has ramped up its efforts to shape the future of artificial intelligence. The landscape of AI policy advocacy has seen a dramatic transformation, with companies investing heavily to influence legislation and secure favorable outcomes.

Unleashing Potential: How Companies Are Driving AI Policy Forward

The Escalating Battle for Influence

The past year witnessed a significant escalation in corporate lobbying around artificial intelligence at the federal level. According to data from OpenSecrets, 648 entities engaged in AI-related advocacy in 2024, compared to 458 in 2023, a roughly 41% increase. This surge underscores AI's growing prominence on the national agenda and the intensifying competition within the tech industry to shape it.

Tech giants like Microsoft threw their weight behind initiatives such as the CREATE AI Act, which aims to establish benchmarks for U.S.-developed AI systems. Meanwhile, OpenAI championed the Advancement and Reliability Act, which calls for a dedicated government center focused on AI research. These legislative pushes reflect the diverse priorities within the AI community, with each company aiming to steer the regulatory framework toward its own strategic goals.
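As a quick sanity check on the cited growth figure, the year-over-year percentage change can be computed directly from the OpenSecrets entity counts (a minimal sketch; the helper name is illustrative):

```python
def pct_change(old: float, new: float) -> float:
    """Year-over-year percentage change between two counts."""
    return (new - old) / old * 100

# Figures cited above: entities lobbying on AI at the federal level.
entities_2023 = 458
entities_2024 = 648

growth = pct_change(entities_2023, entities_2024)
print(f"{growth:.0f}% increase")  # prints "41% increase"
```

Going from 458 to 648 entities is an increase of 190, or about 41% relative to the 2023 baseline.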

Pioneering Firms Lead the Charge

Among the most active participants in this lobbying frenzy are specialized AI labs—companies dedicated to commercializing cutting-edge AI technologies. These firms have significantly increased their advocacy spending, demonstrating a heightened commitment to influencing policy outcomes. OpenAI, for instance, boosted its expenditures from $260,000 in 2023 to $1.76 million in 2024. Anthropic, a close competitor, more than doubled its investment from $280,000 to $720,000 over the same period. Cohere, an enterprise-focused startup, also ramped up its spending from $70,000 in 2022 to $230,000 in 2024.

These increases are not just financial; they signify a broader strategic shift. Both OpenAI and Anthropic bolstered their teams with key hires to enhance policymaker outreach. Anthropic welcomed Rachel Appleton, a former Department of Justice official, while OpenAI appointed political veteran Chris Lehane as VP of policy. Together, these moves underscore the critical role that human capital plays in shaping AI policy.

A National and State-Level Tug-of-War

The year was characterized by intense activity in both federal and state legislatures. At the national level, Congress considered over 90 AI-related bills in the first half of the year alone, according to the Brennan Center. Despite this flurry of activity, little tangible progress was made, leading state lawmakers to take matters into their own hands.

Tennessee became the first state to protect voice artists from unauthorized AI cloning, setting a precedent for safeguarding intellectual property rights. Colorado adopted a tiered, risk-based approach to AI regulation, offering a nuanced framework that balances innovation with oversight. California Governor Gavin Newsom signed several safety bills, including measures requiring AI companies to disclose details about their training processes. However, no state managed to implement regulations as comprehensive as international frameworks like the EU's AI Act.

Challenges and Uncertainty Ahead

The path forward remains uncertain, particularly at the federal level. President Donald Trump has signaled his intention to deregulate the AI industry, viewing stringent regulations as obstacles to U.S. dominance in the field. On his first day in office, Trump rescinded an executive order by former President Joe Biden aimed at mitigating AI risks. He further solidified this stance by signing an executive order instructing federal agencies to suspend certain AI policies and programs.

Despite these challenges, some companies remain optimistic. Anthropic has called for targeted federal regulation within the next 18 months, emphasizing the urgency of proactive risk prevention. OpenAI, in its recent policy document, urged the U.S. government to take more substantive action on AI infrastructure, recognizing the need for robust support mechanisms to foster responsible development.