Latest news with #AIregulation
Yahoo
6 hours ago
- Business
- Yahoo
Moratorium on state AI regulation clears Senate hurdle
A Republican effort to prevent states from enforcing their own AI regulations cleared a key procedural hurdle on Saturday. The rule, as reportedly rewritten by Senate Commerce Chair Ted Cruz in an attempt to comply with budgetary rules, would withhold federal broadband funding from states if they try to enforce AI regulations in the next 10 years.

The rewrite seems to have passed muster, with the Senate Parliamentarian now ruling that the provision is not subject to the so-called Byrd rule — so it can be included in Republicans' 'One Big, Beautiful Bill' and passed with a simple majority, without potentially getting blocked by a filibuster and without requiring support from Senate Democrats.

However, it's not clear how many Republicans will support the moratorium. Republican Senator Marsha Blackburn of Tennessee, for example, recently said, 'We do not need a moratorium that would prohibit our states from stepping up and protecting citizens in their state.' And while the House of Representatives already passed a version of the bill that included a moratorium on AI regulation, far-right Representative Marjorie Taylor Greene subsequently declared that she is 'adamantly OPPOSED' to the provision as 'a violation of state rights' and said it needs to be 'stripped out in the Senate.'

House Speaker Mike Johnson defended the provision by saying it had President Donald Trump's support and arguing, 'We have to be careful not to have 50 different states regulating AI, because it has national security implications, right?'

In a recent report, Americans for Responsible Innovation (an advocacy group for AI regulation) wrote that 'the proposal's broad language could potentially sweep away a wide range of public interest state legislation regulating AI and other algorithmic-based technologies, creating a regulatory vacuum across multiple technology policy domains without offering federal alternatives to replace the eliminated state-level guardrails.'

A number of states do seem to be taking steps toward AI regulation. In California, Governor Gavin Newsom vetoed a high-profile AI safety bill last year while signing a number of less controversial regulations around issues like privacy and deepfakes. In New York, an AI safety bill passed by state lawmakers is awaiting Governor Kathy Hochul's signature. And Utah has passed its own regulations around AI transparency.


The Verge
11-06-2025
- Business
- The Verge
The war is on for Congress' AI law ban
'This is absurd.' That's all Amba Kak, co-executive director of the AI Now Institute, recalls thinking when she first heard about the proposed moratorium on state AI regulation tucked into President Donald Trump's 'big, beautiful bill' — the same funding bill that had Trump and Elon Musk recently trading barbs online.

According to the bill's text, no state 'may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems' for a 10-year period, which would start the same day the bill is passed. The moratorium was worse than doing nothing at all to regulate AI, she remembers thinking. Not only was the proposed rule stopping such regulation in the future, but it was also 'rolling back the very few protections we have.' It could scuttle bills covering anything from data privacy to facial recognition models in Washington, Colorado, and other states. 'It's turning the clock back, and it's freezing it there,' Kak says. Days after she learned about the moratorium, she was called to testify about it at the House Committee on Energy and Commerce.

The AI moratorium passed without issue as part of the House bill, and it's been preserved in the current Senate version with a few changes. But for weeks, it's been making waves among Democrats and Republicans alike, and every day brings new developments and draws new battle lines. That's true not just in Washington, but also for the AI industry and its critics — who are working out what the rule could mean for business and society at large.

The basic contours of the debate are simple. Many Republicans and tech leaders — including OpenAI CEO Sam Altman — think the moratorium will cut through patchwork state AI regulations that could hamper US companies competing against rivals like China's DeepSeek. Many Democrats and AI researchers, on the other hand, believe it will kneecap a broad range of tech regulation as states wait for federal action and pave the way for AI systems even less controllable than the ones we have today.

Within those factions, things are a little more complicated. To Jutta Williams, who has worked in regulatory compliance and responsible AI at tech companies like Google, Reddit, X, and Meta, a 'patchwork quilt' of regulation 'just confuses the issues and makes it impossible to do anything.' Now an advisor to startups focused on social good, Williams says she has spent 25 years working on compliance in all types of industries, mostly in the data governance space, which is similar to AI in many ways. When regulation is done 'in a fragmented sort of way,' she says, 'the net net is a lot of cost, a lot of internal friction, a lot of confusion, and no progress.' Williams says that although the federal government 'has not done their job in managing interstate issues,' states should be focused on societal components and the things they can control instead of regulating AI businesses.

OpenAI lobbied publicly for a moratorium on state laws, citing SB 1047, a California AI safety bill narrowly vetoed by Gov. Gavin Newsom last year. Google, Microsoft, Meta, and Apple have stayed largely quiet and did not respond to requests for comment. Perplexity AI spokesperson Jesse Dwyer offered a relatively neutral statement with a stray shot at the hybrid nonprofit/for-profit OpenAI.
'Some model builders have already shown us they're going to do whatever they want, regardless of regulation—policies governing non-profits for instance—whereas we're confident our commitment to accurate and trustworthy AI positions us well in any regulatory environment,' Dwyer told The Verge. And Josh Gartner, head of comms for Cohere, told The Verge that 'the most effective way to promote the healthy and safe development of AI is a federal standard that provides a consistent and predictable regulatory framework.'

There's one example of high-profile opposition. On June 5th, Anthropic CEO Dario Amodei published an op-ed in The New York Times arguing that while he understood the motivations behind the proposed moratorium, it's 'far too blunt an instrument. A.I. is advancing too head-spinningly fast.' Amodei called instead for a federal transparency standard, under which leading AI companies would be required 'to publicly disclose on their company websites … how they plan to test for and mitigate national security and other catastrophic risks. They would also have to be upfront about the steps they took, in light of test results, to make sure their models were safe before releasing them to the public.'

'I believe that these systems could change the world, fundamentally, within two years; in 10 years, all bets are off,' Amodei wrote. 'Without a clear plan for a federal response, a moratorium would give us the worst of both worlds — no ability for states to act, and no national policy as a backstop.'

For Kak, Amodei's op-ed was important. It was a welcome change to see an industry player stick their neck out and acknowledge the need for any kind of regulation. But, she says, 'We have no such federal standard, and the last 10 years do not inspire confidence that any [such] standards are coming.'

'In this debate, we're going to hear a lot, especially from industry players, around punting the question of regulation to the federal level,' Kak says, adding, 'It can't be industry players acting as the messiahs of dictating that regulatory agenda because there's a clear conflict of interest. That needs to come from industry-independent, public perspectives.'

As for the popular argument that state-level AI regulation will hurt AI startups, Kak says that's a 'smokescreen.' While some proponents of the moratorium have floated a claim that there are about 1,000 state laws regulating AI, that's not the case. Although more than 1,000 pieces of AI-related legislation have been introduced so far in 2025, just over 75 have been adopted or enacted, according to Chelsea Canada of the National Conference of State Legislatures (NCSL). In 2024, out of 500 proposed AI bills, just 84 pieces of legislation were enacted and 12 resolutions or memorials were adopted. What we have now, Kak says, is 'very far from a patchwork of U.S. regulation — it's a straightforward laundry list of targeted rules that get at the worst actors in the market.'

Another key concern, according to Kyle Morse, deputy executive director of the Tech Oversight Project, is that the provision bans a broad range of laws and may prohibit state-level regulation on any sort of 'automated decision system.' 'It would apply to not just AI-specific laws — it would apply to state-level consumer protection laws,' Morse said. 'We're talking about civil rights laws. We're talking about so much more than just AI companies' abilities to do their jobs.'
So many businesses are now billing themselves as operating AI services, Morse said, that it's possible that under the moratorium, 'companies in healthcare or housing can claim to be AI companies [to get out of regulation], where AI is integrated into their business but it's not their core business model.'

The rule's fate in the Senate seems uncertain. On Tuesday, Sen. Edward J. Markey (D-MA), a member of the Commerce, Science, and Transportation Committee, announced he plans to file an amendment to the bill that would block the moratorium. 'Despite the overwhelming opposition to their plan to block states from regulating artificial intelligence for the next decade, Republicans are refusing to back down on this irresponsible and short-sighted provision,' he said in a statement. And last Tuesday, he delivered remarks on the Senate floor calling the provision a 'backdoor AI moratorium' that 'is not serious, it's not responsible, and it's not acceptable.'

'They are choosing Big Tech over kids, families, seniors, and disadvantaged communities across this country,' Markey said. 'We cannot allow that to happen. I am committed to fighting this 10-year ban with every tool at my disposal.'

That same day, a bipartisan group of 260 state lawmakers from all 50 states wrote a letter to Congress opposing the moratorium. Last Wednesday, Americans for Responsible Innovation and a group of policy and advocacy nonprofits announced a campaign to mobilize voters against the moratorium, which gathered 25,000 petitions opposing the provision in two weeks, according to ARI. Rep. Marjorie Taylor Greene (R-GA) publicly opposed the bill due to the moratorium after voting in its favor, and some GOP senators have said they plan to vote against its current text.

On Thursday, the Senate Commerce, Science and Transportation Committee proposed alternative language for the moratorium, moving from a blanket ban on state AI regulation to a condition: states that try to regulate AI would forfeit federal broadband funding. If the Senate does make changes to the bill's language, the House will need to vote on it again. Proponents would also need to overcome the Byrd Rule, which disallows non-budgetary clauses from being included in fiscal-focused bills. For those trying to pass the bill, Kak said, 'the emphasis right now is on finding a way to have a sweeping rollback on state legislation survive the Byrd Rule, given that on the face of it, this proposal would never survive.'

The moratorium, if passed, puts regulating the AI industry at large — a sector that's predicted to surpass $1 trillion in revenue in less than seven years — squarely in the hands of a Congress that has railed against Big Tech but has failed to pass everything from a digital privacy framework to an antitrust overhaul. 'If past is prologue,' Morse said, 'Congress has struggled to get meaningful safeguards and protections over the finish line, and a moratorium isn't the way to do that.'


Forbes
09-06-2025
- Business
- Forbes
Trump's One Big Beautiful Bill: Tech And Cybersecurity Implications
The 'One Big Beautiful Bill Act' is the source of rising tensions on Capitol Hill over its sweeping provisions on taxes, AI regulation and cybersecurity funding.

The 'One Big Beautiful Bill Act' has done more than reshape the policy debate in Washington. It has driven a wedge between two of the most high-profile figures in tech and politics. President Donald Trump and Elon Musk, who once aligned on deregulation and digital innovation, now stand publicly opposed. Musk has called the bill a 'disgusting abomination,' citing its runaway spending and embedded regulatory overreach.

While tax and immigration headlines have dominated the coverage, a deeper story is unfolding beneath the surface. This sweeping legislation, combined with Trump's recent executive order reversing key Biden-era cybersecurity initiatives, signals a tectonic shift in how the federal government approaches digital infrastructure, AI governance and national cyber defense. Together, these actions are redrawing the line between public-sector accountability and centralized federal control in the technology ecosystem.

At over 1,100 pages, the bill narrowly passed the House of Representatives on May 22 by a 215–214 vote. It now awaits Senate action, with Republican leaders pushing to finalize a version by July 4. But with objections coming from fiscal hawks and civil liberties advocates alike, the road ahead is anything but smooth.

Consistent with the administration's recent cybersecurity executive order, which reversed key Biden-era initiatives, the bill delivers a strong boost to military and defense applications while simultaneously dismantling much of the country's civilian cyber defense apparatus. The Department of Defense stands to gain more than $370 million for IT modernization, audit automation and DARPA-led cyber research. These investments will directly benefit contractors specializing in secure software development, threat intelligence platforms and AI-driven analytics.

In stark contrast, the Cybersecurity and Infrastructure Security Agency, now leaderless following the resignation of Jen Easterly in November and the stalled nomination of Sean Plankey amid bipartisan concerns, is facing a proposed $495 million budget cut, nearly 30 percent of its total funding. The plan would eliminate over 1,000 positions across key divisions, gutting the agency's capacity at a time of growing cyber threats. The Cybersecurity Division, which protects federal networks and critical infrastructure, would lose 204 roles and $216 million. Regional field teams and the Integrated Operations Division, which provide direct support to local governments and small businesses, would see $36 million in cuts. Stakeholder Engagement, which includes international partnerships and private-sector collaboration, would be slashed by 62 percent — severely weakening CISA's ability to coordinate with the broader ecosystem it was built to protect.

Programs critical to national cyber coordination are also on the chopping block. The Joint Cyber Defense Collaborative, CyberSentry, Continuous Diagnostics and Monitoring and federal vulnerability assessments would all be defunded or downsized. CISA's election security and risk management efforts would be effectively shut down. The administration's stated rationale is to refocus the agency on its 'core mission' and redirect resources toward 'cost-effective' automated solutions.
The result is a paradox: just as the federal government deepens its investment in artificial intelligence, network automation and digital infrastructure, it is hollowing out the civilian agency responsible for defending those systems.

From an AI perspective, the most contested provision in the bill is a ten-year moratorium on state and local regulation of artificial intelligence systems. This clause would prohibit cities and states from setting their own rules on automated decision-making, algorithmic bias, facial recognition or data privacy in AI applications. Tech industry giants like OpenAI and Anthropic have quietly supported the measure, arguing that a unified federal framework will provide regulatory clarity and prevent innovation from being stifled by a patchwork of state laws.

But opposition has been fierce. Over 260 state legislators from both parties have condemned the move, calling it a blatant overreach that strips communities of the ability to respond to real-world harms. The Senate is now exploring a compromise: tying federal broadband funding to compliance with the AI preemption clause, giving states an incentive but not an outright mandate to align with Washington's standards.

The bill mandates the auction of 600 MHz of broadband spectrum, expected to generate as much as $88 billion in federal revenue. While the windfall is welcome, the underlying policy is aimed at boosting 5G expansion, edge computing and defense communications. With rising geopolitical tensions, control of digital infrastructure is increasingly viewed as a national security imperative.

Additional funding for border technology, totaling $70 billion, includes investments in AI-enabled surveillance towers, drone systems and integrated communication backbones. For IT vendors and cloud providers, this represents a massive opportunity. For privacy advocates, it signals a new era of always-on federal monitoring.

One underreported element of the bill is a clause limiting the ability of federal courts to enforce contempt rulings against government officials. Under the new language, a plaintiff would need to post a financial bond before any enforcement action can proceed. Critics warn that this weakens the rule of law, especially in cases involving data privacy, digital rights and cybersecurity accountability. The provision has already drawn backlash from civil liberties groups and constitutional scholars. Its long-term implications for tech oversight, especially when government agencies fail to comply with judicial rulings, could be profound.

In the tech and cybersecurity arena, winners include defense contractors, AI infrastructure providers and firms that benefit from uniform regulatory environments. The Department of Defense's investment wave favors those who build automation, secure communications and cyber-analytics at scale. But the losers are equally clear. CISA loses not just funding but functional capacity, shedding nearly a third of its workforce and scaling back nearly every major civilian cyber defense program. State and local governments lose their ability to regulate AI, forcing them to wait a decade for tools that match their realities on the ground. And perhaps most publicly, the relationship between Donald Trump and Elon Musk is now a casualty, fractured by fiscal ideology and a growing divide over tech governance.

For the broader tech industry, this bill is both an opportunity and a warning. It centralizes power, favors large players and prioritizes national defense over civic resilience.
Whether that shift creates a safer, more stable digital landscape or leaves key vulnerabilities exposed remains to be seen.


Yahoo
09-06-2025
- Business
- Yahoo
UK ministers delay AI regulation amid plans for more 'comprehensive' bill
Proposals to regulate artificial intelligence have been delayed by at least a year as UK ministers plan a bumper bill to regulate the technology and its use of copyrighted material. Peter Kyle, the technology secretary, intends to introduce a 'comprehensive' AI bill in the next parliamentary session to address concerns about issues including safety and copyright. This will not be ready before the next king's speech and is likely to trigger concerns about delays in regulating the technology. The date for the next king's speech has not been set, but several sources said it could take place in May 2026.

Labour had originally planned to introduce a short, narrowly drafted AI bill within months of entering office, focused on large language models such as ChatGPT. The legislation would have required companies to hand over their models for testing by the UK's AI Security Institute. It was intended to address concerns that AI models could become so advanced that they posed a risk to humanity. This bill was delayed, with ministers choosing to wait and align with Donald Trump's administration in the US because of concerns that any regulation might weaken the UK's attractiveness to AI companies.

Ministers now want to include copyright rules for AI companies as part of the AI bill. 'We feel we can use that vehicle to find a solution on copyright,' a government source said. 'We've been having meetings with both creators and tech people and there are interesting ideas on moving forward. That work will begin in earnest once the data bill passes.'

The government is already locked in a standoff with the House of Lords over copyright rules in a separate data bill, which would allow AI companies to train their models using copyrighted material unless the rights holder opts out. That proposal has caused a fierce backlash from the creative sector, with artists including Elton John, Paul McCartney and Kate Bush throwing their weight behind a campaign to oppose the changes. This week, peers backed an amendment to the data bill that would require AI companies to disclose if they were using copyrighted material to train their models, in an attempt to enforce current copyright law.

Ministers have refused to back down, however, even though Kyle has expressed regret about the way the government has gone about the changes. The government insists the data bill is not the right vehicle for the copyright issue and has promised to publish an economic impact assessment and a series of technical reports on copyright and AI issues. In a letter to MPs on Saturday, Kyle made a further commitment to establish a cross-party working group of parliamentarians on AI and copyright.

Beeban Kidron, the film director and cross-bench peer who has been campaigning on behalf of the creative sector, said on Friday that ministers 'have shafted the creative industries, and they have proved willing to decimate the UK's second-biggest industrial sector'. Kyle told the Commons last month that AI and copyright should be dealt with as part of a separate 'comprehensive' bill.

Most of the UK public (88%) believe the government should have the power to stop the use of an AI product if it is deemed to pose a serious risk, according to a survey published by the Ada Lovelace Institute and the Alan Turing Institute in March. More than 75% said the government or regulators should oversee AI safety rather than private companies alone.
Scott Singer, an AI expert at the Carnegie Endowment for International Peace, said: 'The UK is strategically positioning itself between the US and EU. Like the US, Britain is attempting to avoid overly aggressive regulation that could harm innovation while exploring ways to meaningfully protect consumers. That's the balancing act here.'