
Latest news with #Skynet

The risk is not AI. It is our overreliance on imperfect technology

Indian Express

a day ago

Nowhere is the AI debate more polarised than between the evangelists who see the technology as humanity's next great leap and the sceptics who warn of its profound limitations. Two recent pieces — Sam Altman's characteristically bullish blog and Apple's quietly devastating research paper, 'The Illusion of Thinking' — offer a fascinating window into this divide. As we stand at the threshold of a new technological era, it's worth asking: What should we truly fear, and what is mere hype? And for a country like India, what path does wisdom suggest?

Sam Altman, CEO of OpenAI and a central figure in the AI revolution, writes with the conviction of a true believer that AI will soon rival, if not surpass, human reasoning. Altman's vision is an attractive one. After all, he says that AI can be a true partner in solving the world's hardest problems, from disease to climate change. His argument is not just about technological possibility, but about inevitability. In Altman's world, the march toward artificial general intelligence (AGI) is not just desirable — it's unstoppable.

But then comes Apple's 'The Illusion of Thinking', a paper that lands like a bucket of cold water on AI enthusiasm. Apple's researchers conducted a series of controlled experiments, pitting state-of-the-art large language models (LLMs) against classic logic puzzles. The results deflated much of the enthusiasm around AGI. While these models impressed at low and medium complexity, their performance collapsed as the puzzles grew harder. The models, the paper argues, are not truly 'thinking' but merely extending patterns; when faced with problems that require genuine reasoning, significant gaps remain. Apple's work is a much-needed correction to the narrative that we are on the verge of achieving AGI.

So, who is right? The answer, as is often the case, lies somewhere in between. Altman's optimism is not entirely misplaced. AI has already transformed industries and will continue to do so, especially in domains where pattern recognition and data synthesis matter most. But Apple's critique exposes a fundamental flaw in the current trajectory: conflating statistical ability with genuine understanding or reasoning. There is a world of difference between a machine that can predict the next word in a sentence and one that can reason its way through the Tower of Hanoi or make sense of a complex, real-world dilemma.

What, then, should the world be afraid of? The real danger is not that AI will suddenly become superintelligent and take over, but that we will place too much trust in systems whose limitations are poorly understood. Imagine deploying these models in healthcare, infrastructure, or governance, only to discover that their intelligence is not what it appeared to be. The risk is not Skynet, but systemic failure born of misplaced faith. Billions could be wasted chasing the chimera of AGI, while urgent, solvable problems are neglected. There is often waste in innovation processes. But the scale of resources deployed for AI dwarfs other examples, and hence demands a different sort of caution.

Yet, there are also fears we can safely discard. The existential risk posed by current AI models is, for now, more science fiction than science. These systems are powerful, but they are not autonomous agents plotting humanity's downfall. They are tools — impressive, but fundamentally limited. The real threat, as yet, is not malicious machines, but human hubris.
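The kind of controlled experiment Apple describes is straightforward to picture in code: generate puzzle instances of increasing complexity, ask a model for a solution, and verify it mechanically. Below is a minimal sketch of that setup, assuming a hypothetical `query_model` callable that returns a list of moves; no real model or paper API is implied.

```python
# A minimal sketch of a complexity-scaled puzzle benchmark in the spirit of
# Apple's "The Illusion of Thinking". `query_model` is a hypothetical
# stand-in for whatever LLM is under test.

def hanoi_solution(n, src="A", aux="B", dst="C"):
    """Ground-truth optimal move list for an n-disc Tower of Hanoi."""
    if n == 0:
        return []
    return (hanoi_solution(n - 1, src, dst, aux)
            + [(src, dst)]
            + hanoi_solution(n - 1, aux, src, dst))

def is_valid_solution(n, moves):
    """Replay a proposed move list and check it actually solves the puzzle."""
    pegs = {"A": list(range(n, 0, -1)), "B": [], "C": []}
    for src, dst in moves:
        if not pegs[src]:
            return False                      # moving from an empty peg
        disc = pegs[src].pop()
        if pegs[dst] and pegs[dst][-1] < disc:
            return False                      # larger disc on a smaller one
        pegs[dst].append(disc)
    return pegs["C"] == list(range(n, 0, -1))

def benchmark(query_model, max_discs=12):
    """Accuracy per difficulty level; the paper reports a collapse past a
    model-dependent complexity threshold rather than graceful degradation."""
    results = {}
    for n in range(1, max_discs + 1):
        moves = query_model(f"Solve Tower of Hanoi with {n} discs. "
                            "Answer as a list of (from_peg, to_peg) moves.")
        results[n] = is_valid_solution(n, moves)
    return results
```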
Are there any lessons for India to draw from this? The country stands to gain enormously from AI, particularly in areas like language translation, agriculture, and public service delivery. Here, the strengths of today's AI — pattern recognition, automation, and data analysis — can be used to address real-world, local challenges, which is largely the path India has been taking. But India must resist the temptation to tag along with the AGI hype. Instead, it should invest in human-in-the-loop systems, where AI aids rather than replaces human judgement, especially in domains where officials exercise wide discretion at the point of contact with people, and where the stakes are high. For now, human judgement remains ahead of AI, and we should keep relying on it.

There is also a deeper lesson here, one imparted by control theory. True control — over machines, systems, or societies — requires the ability to adapt, to reason, to respond dynamically to feedback. Current AI models, for all their power, lack this flexibility. They cannot adjust their approach when complexity exceeds their training, and more data and more computing do not solve this problem. In this sense, the illusion of AI control is as dangerous as the illusion of AI thinking.

The future will be shaped neither by those whose faith in AI is blind, nor by those who see only limits, but by those who can navigate the space between. For India, and for the world, the challenge is to harness the real strengths of AI while remaining clear-eyed about its weaknesses. The true danger is not that machines will outthink us, but that we will stop thinking for ourselves. Relatedly, an interesting brain-scan study of ChatGPT users by MIT Media Lab suggested that AI isn't making us more productive, and could instead be harming us cognitively. This is what we need to worry about, at least for now.

The writer is a research analyst in the High-Technology Geopolitics Programme at The Takshashila Institution
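The human-in-the-loop pattern the piece advocates is simple to express. Here is a minimal sketch, with an invented confidence score, threshold, and action names (all illustrative assumptions, not any deployed system):

```python
from dataclasses import dataclass

# Actions too consequential to automate; the names are invented examples.
HIGH_STAKES = {"deny_benefit", "flag_traveller", "cut_power"}

@dataclass
class Recommendation:
    action: str
    confidence: float  # model's score in [0, 1]; hypothetical

def decide(rec: Recommendation, reviewer, threshold: float = 0.9) -> str:
    """Human-in-the-loop gate: the model proposes, a person disposes.
    Low-confidence or high-stakes recommendations always go to the human
    reviewer (any callable returning the final decision)."""
    if rec.confidence >= threshold and rec.action not in HIGH_STAKES:
        return rec.action          # routine case: let the system act
    return reviewer(rec)           # everything else: human judgement decides

# Usage: a benign, confident suggestion passes; a high-stakes one does not.
print(decide(Recommendation("send_reminder", 0.97), lambda r: "escalated"))
print(decide(Recommendation("deny_benefit", 0.97), lambda r: "escalated"))
```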

Caught by a Camera: When Biometrics Replace Visas

Time Business News

14-06-2025

VANCOUVER, BC — The sight of a passport being stamped may soon be as nostalgic as flipping through paper maps. As of 2025, dozens of countries have begun phasing out traditional visas and replacing them with biometric entry systems. A camera, not a consular officer, now determines who crosses international borders. With biometric systems — face scans, iris recognition, and fingerprints — becoming the global standard, the question for travellers, journalists, and privacy advocates alike is straightforward: What happens when your body becomes your visa?

Amicus International Consulting, a global leader in second citizenship programs, identity transformation, and legal relocation, issues this press release to unpack the implications of biometric visas, explore real-world cases, and explain how individuals can still maintain legal mobility and privacy in a world increasingly defined by surveillance.

The Rise of the Biometric Visa System

Biometric data, which includes facial geometry, fingerprints, iris patterns, and voice profiles, is no longer limited to security agencies or intelligence operations. It now forms the backbone of global travel systems, and the replacement of traditional visa procedures with biometric scans is accelerating rapidly. Governments around the world now use biometric systems to:

• Replace or supplement visa paperwork
• Confirm identities at e-gates and customs
• Detect false identities or document forgeries
• Flag individuals on international watchlists
• Enforce no-fly lists and cross-border risk assessments

Unlike traditional visa processes, which require an application, biometric systems are passive: today's systems scan without permission or even awareness.

Global Leaders in Biometric Border Control

As of this year, more than 80 nations have adopted biometric-based entry systems. These include:

• United States: The Department of Homeland Security's Biometric Entry-Exit Program is active at nearly all international airports.
• European Union: The Entry/Exit System (EES) now scans all non-EU travellers using facial recognition and fingerprints.
• China: With its Skynet program, China monitors and records the movement of citizens and foreigners using over 600 million AI-linked cameras.
• United Arab Emirates: Dubai and Abu Dhabi airports utilize biometric e-gates equipped with iris recognition technology.
• India: The Aadhaar-linked eVisa system ties biometric identity to mobile numbers and tax records.
• South Korea and Singapore: Known for early adoption, these nations now operate fully touchless biometric gates that identify and clear travellers in under ten seconds.

Even visa-free nations now require biometric pre-clearance, quietly redefining what it means to be a 'free traveller.'

Case Study: The Journalist Flagged by Algorithm

In early 2024, a Russian journalist who had previously sought asylum in France attempted to visit Germany using a passport from a Caribbean country acquired through a legal citizenship-by-investment program.
At Munich Airport, a biometric gate matched her face to a historic Eurodac asylum database entry. Within minutes, she was detained, questioned, and placed on a return flight — not because her documents were invalid, but because her biometric footprint had been digitally preserved, resurrected, and weaponized. No paper visa was ever denied. No formal notification was issued. Just a camera, a database match, and a door that stayed shut.

The Silent Shift: From Application to Algorithm

This marks a fundamental shift in global mobility.

Old visa system:
• Application forms
• Physical interviews
• Transparent rejections
• Legal appeals

Biometric visa system:
• Passive enrollment via CCTV or e-gate
• Invisible watchlists and scoring algorithms
• Automated denials without explanation
• Little to no recourse or legal clarity

The biometric visa process reverses the burden of proof: travellers must now prove they are not a threat, often without knowing they have been categorized as one.

Hidden Triggers: How Biometrics Flag You

Facial recognition systems don't just read your face — they interpret behavior, prior travel patterns, and associations. Common biometric triggers include:

• Re-entry after applying for asylum
• Previously used aliases, even if legally abandoned
• Visits to politically controversial countries
• Association with flagged phone numbers or social media
• Multiple identities used across jurisdictions

Such data is shared through complex networks like PRUM (EU), INTERPOL's facial data program, and the Five Eyes intelligence alliance.

Case Study: The Whistleblower Trapped in Transit

A Central American cybersecurity contractor exposed evidence of human rights abuses in 2022 and sought safe harbor. He legally obtained a second passport through an investment program in the Caribbean. When he flew to Geneva in 2023, the EU's biometric visa system flagged him as a match to a historic INTERPOL Red Notice issued on questionable political grounds. He was held at Zurich airport for 72 hours before being quietly returned to his departure point. His documentation was valid. His face was not.

When Biometric Data Goes Wrong

Biometric systems are not infallible. Facial recognition algorithms have been criticized for significant error rates, especially among ethnic minorities and women. A 2023 MIT study found that commercial biometric systems had:

• A 34% higher false-positive rate for Black women compared to white men
• A 21% error rate for individuals under age 25 due to changing facial features
• Significant difficulty distinguishing between identical twins or family members

One incident in 2024 saw a Dutch student wrongly detained in Turkey due to a false biometric match with a wanted Balkan fugitive.
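The false matches described above come down to one design choice: the similarity threshold at which two face embeddings are declared the same person. A minimal sketch of that trade-off follows, using generic cosine similarity over embedding vectors; the threshold value and the shape of the watchlist are illustrative assumptions, not any real border system's parameters.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Standard similarity score between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def gate_decision(live_scan: np.ndarray, watchlist: list,
                  threshold: float = 0.85) -> bool:
    """Flag the traveller if any watchlist embedding clears the threshold.
    Lowering the threshold catches more true matches but also raises the
    false-positive rate -- the failure mode in the cases above."""
    return any(cosine_similarity(live_scan, entry) >= threshold
               for entry in watchlist)
```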
Human Rights Concerns

Legal scholars and human rights groups have raised red flags over biometric visa programs:

• Lack of consent: Biometric collection often occurs without informed permission.
• Due process violations: Travellers have no way to appeal or understand denials.
• Surveillance creep: Border technologies are being repurposed for domestic monitoring.
• Chilling effect: Journalists, activists, and dissidents restrict movement out of fear.
• Biometric permanence: Unlike documents, biometrics can't be revoked or reissued.

The concern is that biometric systems silently enforce ideological, political, or economic restrictions under the guise of technological efficiency.

Amicus International's Biometric Risk Services

Amicus International Consulting offers legally compliant solutions for those affected by biometric systems:

• Second Citizenship Programs: Diversify legal identity options for safer travel
• Facial Recognition Advisory: Evaluate current biometric risks and exposure
• Secure Relocation Planning: Choose jurisdictions with limited biometric data sharing
• Case-Based Identity Strategy: Build documentation to reflect current, lawful identity
• Digital Privacy Services: Reduce biometric footprint in global registries

Amicus operates exclusively within legal frameworks and does not engage in document forgery or facial spoofing technologies.

Case Study: Rebuilding After Biometric Surveillance

In 2021, a Middle Eastern human rights advocate living in exile in Malaysia was added to a biometric watchlist following leaked border surveillance documents. Despite holding valid passports, he faced repeated entry refusals. Amicus reviewed his digital trail, prepared a comprehensive dossier of legal name change documentation, and assisted in obtaining a second passport through Grenada's citizenship-by-investment program. Through strategic planning, he relocated to a non-sharing jurisdiction and resumed work under a legal identity with full travel rights. Today, he moves without incident.

What the Future Holds

By 2026, global travel will look radically different:

• Over 150 countries will maintain biometric border databases
• Most eVisas will be auto-issued based on biometric risk scoring
• Visas may become invisible — issued or denied entirely by algorithm
• Biometric-only travel corridors will exclude those with privacy concerns or mismatched histories

Countries like Estonia, Singapore, and the UAE already issue digital 'e-citizenships' tied to biometric blocks on the blockchain — blending identity and surveillance into a single package.

Final Thoughts: No Papers, Just Patterns

Biometric technology is replacing the passport, the visa, and perhaps even the identity card. The camera is no longer a passive observer — it is the gatekeeper. To travel freely in 2025 and beyond, individuals must understand the systems tracking them, the data fueling decisions, and the legal routes available to reclaim autonomy. Amicus International Consulting remains committed to helping clients navigate this new world — not by dodging the law, but by understanding it better than those who write it.

Contact Information
Phone: +1 (604) 200-5402
Email: info@
Website:

MOVIE REVIEW: We decide if AI-themed sci-fi-thriller 'Renner' is worth switching on

Daily Record

13-06-2025

Dull then mean-spirited, you'll want to pull the plug long before the end credits roll.

Sci-fi thriller Renner sees the titular computer genius (Frankie Muniz) create AI life coach Salenus (voiced by Marcia Gay Harden) to help him find love. But when next-door neighbour Jamie (Violett Beane) comes into his well-ordered life, things quickly begin to unravel.

Renner is a strange film that takes topical AI technology and doesn't do much with it until the latter stages. Salenus is a giant glass eye who turns out to be slightly manipulative, but a long way off M3GAN or Skynet.

Most of the runtime in Robert Rippberger's (Those Who Walk Away) flick is full of awkward conversations and, apart from brief ventures into a corridor, his characters never leave their apartments. There is actually a five-minute story about the board game Monopoly - yes, really.

The nervy Renner has OCD and is a germophobe, and Muniz proves capable of portraying this nebbish side, but the incredibly forced angry rants that burst from his mouth at times just don't suit the former Malcolm in the Middle star. Beane was better in Drop and Truth or Dare; here, though, she handles Jamie's character shifts very capably.

Rippberger's use of UV light in certain scenes feels like a feeble effort to make the movie look futuristic - truthfully, it could be set any time in the past few years. Shots of Renner's morning routine become supremely repetitive, and as the film went on it struck me that it would have worked better as a 40-minute Twilight Zone episode. The final-third 'reveal' is less surprising than another Love Island couple break-up. Blood randomly starts flowing thick and fast, which really jars with everything we've previously watched.

Dull and then mean-spirited, you'll want to pull the plug on Renner and its average AI long before the end credits roll.

● Do you have any favourite AI-themed films? Pop me an email at and I will pass on your comments – and any movie or TV show recommendations you have – to your fellow readers. Stevie Bishop got in touch to say: 'Mobland on Amazon Prime is superb. It has such a great cast and the storylines are full of surprises.'

● Renner is screening on Amazon Prime Video now.

Machine Bias: How AI Misidentifies and Grounds Travellers

Time Business News

12-06-2025

Amicus International Consulting Warns That Algorithmic Errors in Border Security Systems Are Costing Innocent Travellers Their Freedom

FOR IMMEDIATE RELEASE

VANCOUVER, Canada – Artificial intelligence is rapidly transforming the way borders are managed. Facial recognition cameras, predictive surveillance, and AI-driven immigration databases now control who boards a plane, who is flagged for inspection, and who is denied entry. But in 2025, these automated systems are not infallible, and their mistakes are grounding innocent travellers.

Amicus International Consulting, a global authority on legal identity change, biometric resistance, and international relocation, has published an urgent report examining how machine bias is leading to travel bans, wrongful detentions, and permanent digital mislabeling of law-abiding individuals.

'We've seen a staggering rise in AI-driven misidentifications,' said a spokesperson for Amicus. 'Clients have been barred from flights, detained at borders, or added to watchlists simply because an algorithm made an assumption — and no human bothered to double-check.'

The Rise of Border AI: Fast, Scalable — and Flawed

Artificial intelligence (AI) is now a central component of border security across most developed nations. The shift toward automated clearance has been touted as a triumph of speed and safety. At major airports, passengers walk through biometric corridors where cameras match faces against centralized identity databases. Algorithms assess risk, detect discrepancies, and generate alerts.

Examples of AI in border control:

• CBP's Biometric Entry/Exit system scans the faces of travellers entering and leaving the United States.
• The EU's ETIAS and EES systems use predictive algorithms to assess threat levels before issuing electronic travel authorizations.
• Singapore's Changi Airport uses facial recognition at every stage of the passenger journey.
• China's Skynet surveillance grid integrates facial, gait, and behavioural recognition with state security databases.

However, this level of automation comes with a critical flaw: machine bias.

What Is Machine Bias?

Machine bias refers to systematic errors in decision-making by artificial intelligence systems due to flawed training data, design assumptions, or operational contexts. These biases disproportionately affect:

• People of colour
• Women
• Transgender and non-binary individuals
• Children and elderly travellers
• Individuals with medical conditions or facial disfigurements

Unlike human errors, machine bias can replicate itself across systems at scale, affecting thousands — or millions — before anyone notices.

Case Study: Wrongfully Flagged at Heathrow

In 2024, a U.K. citizen of Middle Eastern descent was detained at Heathrow Airport after facial recognition systems identified him as a suspected terrorist. In reality, the man shared facial features with another individual whom Interpol had flagged, but the software failed to distinguish between them. Amicus was contacted after the man missed his international connection, was interrogated for 11 hours, and faced travel bans from five partner countries — all based on an AI-generated false positive. It took months to clear his name.

How AI Gets It Wrong: The Technical Reality
1. Poor Training Data

Facial recognition algorithms are often trained on limited datasets. When these datasets underrepresent certain ethnicities or genders, the system becomes less accurate for those groups. A 2023 MIT study found that facial recognition software misidentified Black women at rates up to 35% higher than white men.

2. Static Rules in a Dynamic World

AI lacks context. It cannot account for recent legal name changes, updated citizenship, or medical changes in appearance, especially after gender reassignment surgery or reconstructive procedures.

3. Dependency on Legacy Systems

Border AIs are often linked to outdated or incorrect watchlists, including expired INTERPOL notices, unverifiable alerts, or flawed database merges.

4. Feedback Loop Contamination

When an individual is misidentified, the system often treats that error as confirmed data, reinforcing the false flag and pushing it across multiple countries' databases.

The Real-World Consequences of AI Error

• Missed Flights and Detainment: Innocent travellers are frequently stopped, interrogated, and denied boarding because their biometric scans generate false alerts.
• Visa Rejections and Travel Bans: Once flagged by an AI system, individuals often face rejection on visa applications, even after the mistake is corrected.
• Social and Financial Fallout: Some clients have lost job opportunities, had business contracts cancelled, or faced reputational harm due to travel disruption.
• Permanent Surveillance Labels: In many cases, an error that triggers machine alerting results in long-term inclusion in border 'alert' categories, even after the issue is resolved.

Case Study: Facial Mismatch Denies Family Reunion

A woman travelling from South Africa to Canada to reunite with her children was stopped at Pearson International Airport in 2023. The AI scanner failed to recognize her updated appearance following chemotherapy-related facial changes. Although she had valid documents and matching fingerprints, the system flagged her as a 'mismatch.' It took 48 hours, legal intervention, and biometric reevaluation to clear her identity, delaying her travel and causing significant emotional distress.
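The failure modes above are measurable. Auditing false-positive rates per demographic group is how disparities like the MIT figures get surfaced; here is a minimal sketch with invented toy records, not any agency's data.

```python
from collections import defaultdict

def false_positive_rates(records):
    """records: iterable of (group, flagged: bool, actually_on_list: bool).
    Returns each group's false-positive rate: how often travellers who are
    NOT on any list were flagged anyway. Divergent rates across groups are
    the 'machine bias' the report describes."""
    flagged = defaultdict(int)
    innocent = defaultdict(int)
    for group, was_flagged, on_list in records:
        if not on_list:                      # only innocent travellers count
            innocent[group] += 1
            flagged[group] += was_flagged
    return {g: flagged[g] / innocent[g] for g in innocent if innocent[g]}

# Usage with toy numbers: a gap like this is what an audit would surface.
sample = [("group_a", True, False)] * 3 + [("group_a", False, False)] * 97 \
       + [("group_b", True, False)] * 9 + [("group_b", False, False)] * 91
print(false_positive_rates(sample))   # {'group_a': 0.03, 'group_b': 0.09}
```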
Amicus' Response: Legal Identity and Biometric Strategy

Amicus International Consulting has developed an advanced suite of services designed to protect clients from AI-driven border control failures. These services include:

• Legal Name and Gender Change Documentation: Court-recognized changes supported by digital identity updates across systems.
• Second Citizenship Acquisition: Providing clean legal identities not associated with old errors or politically sensitive data.
• Facial Recognition Defence Using AI Tools: Use of tools like Fawkes and LowKey to subtly distort publicly available facial data and prevent AI learning.
• Red Notice Review and Removal Support: Challenging and removing invalid Interpol Red Notices that fuel wrongful alerts.
• Human Rights Advisory: For travellers from vulnerable populations, Amicus provides documentation support and risk profiling to mitigate entry disputes.

'We don't just fix identities — we prevent errors before they happen,' said the Amicus spokesperson. 'In an AI-first world, the best protection is proactive legal and biometric management.'

Where AI Border Errors Are Most Common

Based on client case studies and Amicus research, the following regions pose the highest risk of machine bias and AI error at the border:

• United States: Particularly in major hubs like JFK, LAX, and Atlanta, where facial scanning is mandatory.
• European Union (Schengen Zone): Automated systems under EES frequently flag biometric mismatches.
• United Kingdom: Heathrow and Gatwick use controversial facial databases with high false-positive rates.
• Singapore and South Korea: High-tech but inflexible systems unable to accommodate nuanced identity profiles.
• United Arab Emirates: Broad data sharing and surveillance integration with allied states.

Countries with lower technological enforcement or more flexible human review tend to have fewer reported AI errors.

Case Study: Dual Citizen Blocked from Transit

A Canadian-Iranian dual citizen was flagged while transiting through Frankfurt due to name similarity with a blacklisted individual. The AI system failed to detect different birth dates and citizenships. He was removed from his flight, interrogated, and required to return to his point of origin. Only after Amicus provided documentary proof of his name change, clean record, and legal travel authorization was he cleared to fly again.

AI Is Not the Judge — But It Decides Who Gets Judged

In 2025, border AI is not just an assistant to human officers — it is the first and sometimes only filter determining who gets a second look. Human oversight has been reduced as systems become more 'efficient.'

'If the algorithm flags you, you're already guilty until proven innocent,' said the Amicus spokesperson. 'Even if you prove it, the delay, damage, and data trail remain.'

Amicus' Solutions: Travel Risk Management in the AI Era

For high-risk clients, Amicus provides:

• Pre-travel biometric risk analysis
• AI compatibility tests against known global systems
• Biometric minimalism coaching for low-detection appearance and behaviour
• Client flag removal assistance in global watchlists
• Emergency relocation strategy in the event of wrongful denial or detainment

Amicus acts as a legal firewall between clients and the machine errors that would otherwise derail their rights.

Conclusion: In the Age of AI, Mistaken Identity Is a Matter of Code

AI-powered borders may promise security, but their errors are increasingly a threat to lawful travellers. The risk is not just technical — it's existential for those seeking freedom from political targeting, surveillance, or violence.
Amicus International Consulting stands at the intersection of privacy, legality, and human dignity, offering those most vulnerable the ability to move safely, legally, and free from algorithmic discrimination. In a world where machines make the first call, having Amicus on your side may be the difference between being cleared or permanently flagged.

📞 Contact Information
Phone: +1 (604) 200-5402
Email: info@
Website:

A Political Battle Is Brewing Over Data Centers

WIRED

10-06-2025

Jun 10, 2025 2:56 PM

An AI-related provision in the 'Big Beautiful Bill' could restrict state-level legislation of energy-hungry data centers — and is raising bipartisan objections across the US.

A data center in Sterling, Virginia. Photograph: Gerville/Getty Images

A 10-year moratorium on state-level AI regulation included in President Donald Trump's 'Big Beautiful Bill' has brushed up against a mounting battle over the growth of data centers. On Thursday, Representative Thomas Massie, a Kentucky Republican, posted on X that the megabill's 10-year block on states regulating artificial intelligence could 'make it easier for corporations to get zoning variances, so massive AI data centers could be built in close proximity to residential areas.'

Massie, who did not vote for the bill, followed up his initial tweet with a screenshot of a story on a proposed data center in Oldham County, Kentucky, which downsized and changed locations following local pushback. 'This isn't a conspiracy theory; this was a recent issue in my Congressional district,' he wrote of concerns over the placement of data centers. 'It was resolved at the local level because local officials had leverage. The big beautiful bill undermines the ability of local communities to decide where the AI data centers will be built.'

The same day, the National Conference of State Legislatures, a nonpartisan group representing state lawmakers around the country, sent a letter to the Senate urging it to reject the AI provision. Barrie Tabin, the legislative director of the NCSL, told WIRED that the organization had heard directly from multiple state lawmakers who were concerned about how the moratorium might affect data center legislation. Laws passed by local legislatures, the letter states, 'empower communities to weigh in on data center sitings, protecting ratepayers from increasing utility costs, preserving local water resources, and maintaining grid stability.'

Representative Marjorie Taylor Greene, who admitted that she hadn't read the provision in the bill when she voted for it, posted a long response to Massie in which she compared AI to Skynet, the fictional AI from the Terminator movie franchise. 'I'm not voting for the development of skynet and the rise of the machines by destroying federalism for 10 years by taking away state rights to regulate and make laws on all AI,' she wrote on X. 'Forcing eminent domain on people's private properties to link the future skynet is not very Republican.'

Since its introduction in the House Energy and Commerce Committee, the AI moratorium has drawn widespread criticism, including from some major AI companies, for what some say is heavy-handed regulation of all state AI laws for the next decade. Supporters of the moratorium, on the other hand — including White House AI adviser and venture capital investor David Sacks — say that the continued proliferation of state-level AI laws is creating a patchwork of policies that will stifle innovation.

A senior official directly involved in negotiations in the Energy and Commerce Committee told WIRED that restricting states' rights over data centers, including the use of water, is not the intent of the moratorium — something lawmakers should have 'communicated better.' Rather, the goal was to establish a framework for regulating AI models at the federal level and to avoid the confusion that might come with a patchwork of state policies. 'I think it's the right policy, for us to take a national standard,' they said.
While the intent of the AI moratorium may not have been to regulate physical infrastructure, the reaction from Massie illustrates just how much of a hot-button issue data centers are becoming across the country. The rapid growth in the number of data centers across the US has seen a corresponding rise in local pushback against them. While the projects bring in tax dollars, they often use massive amounts of electricity and water. A recent BloombergNEF analysis found that AI's electricity demand in the US is expected to triple by 2035, while in Virginia data centers currently use as much electricity as 60 percent of the households in the state.

A recent report from Data Center Watch, a project run by AI intelligence firm 10a Labs, found that local opposition to data centers has blocked or delayed their development in many places across the country over the past two years, with Data Center Watch counting more than 140 activist groups working across 24 states. The report noted that pushback against data center construction is 'bipartisan,' with both Republican and Democratic politicians making public statements opposing data centers in their districts. 'From noise and water usage to power demands and property values, server farms have become a new target in the broader backlash against large-scale development,' the report notes. 'The landscape of local resistance is shifting — and data centers are squarely in the crosshairs.'

In Virginia, data centers have already reshaped political battle lines: in Prince William County, the chair of county supervisors was ousted in 2023 following community opposition to a new data center complex. Data centers also played a starring role at a recent debate for the Republican primary for Virginia's 21st state House district, with candidates focusing on issues around tax rates and zoning for data centers. Whoever wins that Republican primary later this month will face incumbent Josh Thomas, a Democrat, in the election for the seat in November.

Thomas says that data centers have become a front-and-center issue since he took office in 2022. 'I wanted to run to help give families a place to live and help women keep their reproductive rights, but turns out, data centers ended up being local issue number one,' he says. Thomas has filed several pieces of legislation around data center growth since taking office; one passed with bipartisan support this spring but was vetoed by Governor Glenn Youngkin.

The AI moratorium in the megabill, sources tell WIRED, was spearheaded in the House Energy and Commerce Committee by Representative Jay Obernolte, a California Republican. Obernolte is the chair of the bipartisan House Task Force on Artificial Intelligence, which worked over the course of 2024 to form policy recommendations for how to sponsor and address the growth of AI at the federal level. While the group's final report did not mention state-level data center laws specifically, it did acknowledge the 'challenges' of AI's high energy demand and made recommendations around energy consumption, including strengthening 'efforts to track and project AI data center power usage.' In March, Obernolte described the recommendations in the Task Force's report as a 'future checklist' at an event hosted by the Cato Institute, a right-wing think tank.
Obernolte, who said at the event that he had conferred with White House advisers including Sacks on AI policy, also said that states were 'acting on their own' with regard to legislating AI models — a situation, he added, that made it imperative for Congress to begin regulating AI as soon as possible. 'We need to make it clear to the states what the guardrails are,' Obernolte said. 'We need to do this all at once.'

It's not clear whether the moratorium will survive the Senate. On Friday, Punchbowl News reported that Senator Josh Hawley, a Missouri Republican, will work with Democrats to remove the AI moratorium from the final bill text.
