Saturday, March 7, 2026

Silicon Valley Declares War? AI Billionaires Pour Millions to Stop an AI Regulation Law

by Owen Radner

The political controversy surrounding New York Assembly member Alex Bores illustrates how artificial intelligence policy is rapidly becoming a major battleground in U.S. elections. Attack ads targeting Bores highlight his previous employment at Palantir, portraying him as someone who helped develop technologies used by U.S. Immigration and Customs Enforcement. Bores has responded that he left the company in 2019 precisely because of its work with ICE. For YourNewsClub, the dispute shows how AI policy debates are moving beyond regulatory discussions and into direct electoral politics.

Bores is running for Congress in New York’s 12th district and has become a focal point in the emerging conflict between technology investors and lawmakers advocating AI oversight. The ads against him are funded by a super PAC called Leading the Future, backed by prominent Silicon Valley figures including Palantir co-founder Joe Lonsdale, OpenAI president Greg Brockman, venture firm Andreessen Horowitz, and AI search startup Perplexity. The group has reportedly raised about $125 million to support candidates favoring minimal AI regulation and to oppose those promoting stronger oversight. Observers following the campaign through YourNewsClub note that such spending is unusually large for races connected to state-level technology legislation.

Bores attracted the attention of technology investors after sponsoring the RAISE Act, a transparency-focused AI law signed in December. The legislation requires large AI laboratories with revenues above $500 million to publish safety plans and report catastrophic safety incidents involving their systems. Compared with regulatory frameworks in other industries, the law primarily focuses on disclosure rather than strict oversight, yet it has triggered strong opposition from some parts of the tech sector. Jessica Larn, whose work examines the intersection of AI infrastructure and political power, argues that the reaction demonstrates how technological policy can quickly become politicized. According to Larn, once a technology becomes economically strategic, even modest transparency requirements may be interpreted by industry actors as competitive threats.

At the same time, divisions are emerging within the technology sector itself. Another political committee, Public First Action, supported by the AI company Anthropic, has invested resources to support Bores and promote policies centered on transparency and safety. This split suggests that the industry is far from unified on how artificial intelligence should be governed. Maya Renn, who studies the ethics of computing and power distribution in technology ecosystems, believes the debate reflects broader public concerns. “Many people recognize the potential benefits of AI,” Renn notes, “but they also worry about how quickly it is developing and whether governments can keep pace.”

From the perspective of YourNewsClub, the political struggle surrounding Bores offers an early glimpse of how AI governance will shape future elections. Artificial intelligence is increasingly influencing economic policy, labor markets, and digital infrastructure, making regulatory decisions politically consequential.

Looking ahead, policymakers may face growing pressure to establish national standards for AI transparency and safety. For technology companies, constructive engagement with regulation may ultimately prove more sustainable than attempting to block oversight entirely. As YourNewsClub has emphasized in its analysis of emerging AI policy debates, the contest over who defines the rules for artificial intelligence is likely to become one of the defining political issues of the coming decade.
