
The Biggest Existential Threat Calls For Philosophers, Not AI Experts
Geoffrey Hinton, Google's former AI chief, distinguishes between two ways in which AI poses an existential threat to humanity. (Photo by Jonathan Nackstrand/AFP)
Geoffrey Hinton, Nobel laureate and former AI chief at Google, recently distinguished between two ways in which AI poses an existential threat to humanity: when humans misuse it, and when it becomes smarter than us and acts against us on its own.
Hinton cites cyberattacks, the creation of viruses, the corruption of elections, and the creation of echo chambers as examples of the first way AI poses an existential threat, and deadly autonomous weapons and superintelligent AI that realizes it doesn't need us and therefore decides to kill us as examples of the second.
But there is a third existential threat that neither Hinton nor his AI peers seem to worry about. And unlike the threats they warn against, this third threat is eroding human existence without making any headlines.
The third way AI poses an existential threat to humanity unfolds when we stop asking the existential questions that make us human and let AI answer for us.
The simplest definition of an existential threat is 'a threat to something's very existence'. But to know whether humanity's existence is threatened, we must know what it means to exist as a human. And the AI experts don't.
Ever since Alan Turing refused to consider the question 'Can machines think?', AI experts have deftly avoided defining basic human traits such as thinking, consciousness, and creativity. No one knows how to define these things, they say. And they are right. But they are wrong to use that lack of definitions as an excuse for not taking the question of what it means to be human seriously. And they add to the existential threat to humanity by using terms like 'human-level intelligence' when talking about AI.
German philosopher Martin Heidegger said that our relationship with technology puts us in constant danger of losing touch with technology, reality, and ourselves. (Photo by Fritz Eschen/ullstein bild)
What Existential Threat Really Means
Talking about when and how AI will reach human-level intelligence, or outsmart us altogether, without having any idea how to understand human thinking, consciousness, and creativity is not only optimistic. It also erodes our shared understanding of ourselves and our surroundings. And this may very well turn out to be the biggest existential threat of all: that we lose touch with our humanity.
In his 1954 lecture 'The Question Concerning Technology', German philosopher Martin Heidegger argued that our relationship with technology puts us in constant danger of losing touch with technology, reality, and ourselves. Unless we get a better grip on what he called the essence of technology, he warned, we are bound to do exactly that.
When I interviewed Neil Lawrence, DeepMind Professor of Machine Learning at the University of Cambridge, for 'An AI Professor's Guide To Saving Humanity From Big Tech' last year, he agreed that Heidegger's prediction has proven frighteningly accurate. But instead of pointing to the essence of technology, he said that 'the people who are in control of the deployment of [technology] are perhaps the least socially intelligent people we have on the planet.'
Whether that's why AI experts conveniently avoid talking about the third existential threat is not for me to say. But as long as we focus on them and their speculations about what it takes for machines to reach human-level intelligence, we are not focusing on ourselves and what it takes for us to exist and evolve as humans.
Existential Philosophers On Existential Threats
Unlike AI research, the existential philosophy that Heidegger helped pioneer has not received billions of dollars in annual investment since the 1950s. Quite the contrary. While the AI industry has exploded, interest and investment in the humanities have declined worldwide. In other words, humanity has for decades invested heavily in understanding and developing artificial intelligence while neglecting to understand and develop ourselves as humans.
But although existential philosophers like Heidegger, Jean-Paul Sartre, and Maurice Merleau-Ponty have not received grants as large as those of their colleagues in computer science departments, they have contributed insights that are more helpful when it comes to understanding and dealing with the existential threats posed by AI.
In Being and Nothingness, French philosopher Jean-Paul Sartre places human consciousness, or no-thingness (néant), in opposition to being, or thingness (être). Pictured at the Sorbonne, Paris, May 22, 1968. (Photo by Pierre Blouzard/Gamma-Rapho)
Just as different AI experts believe in different routes to human-level intelligence, different existential philosophers describe human existence in different ways. But unlike AI experts, they don't consider the lack of definitions a problem. On the contrary, they consider the lack of definitions, theories, and technical solutions an important piece in the puzzle of understanding what it means to be human.
Existential philosophers have realized that consciousness, creativity, and the other human qualities we struggle to define are not an expression of 'something', that is, of a core, function, or feature that distinguishes us from animals and machines. Rather, they are an expression of 'nothing'. Unlike other creatures, we humans not only exist; we also question our existence. We ask why and for how long we will be here. We exist knowing that at some point we will cease to exist. That we are limited in time and space, and therefore have to ask why, how, and with whom we live our lives.
For existential philosophers, AI does not pose an existential threat to humanity because it might exterminate all humans. It poses an existential threat because it offers answers faster than humans can ask the questions that help them contemplate their existence. And when humans stop asking existential questions, they stop being human.
AI Experts Agree: Existential Threats Call For Philosophy
While existential philosophers insist on understanding the existential part of existential threats, AI experts skip the existential questions and go straight to technical and political answers for how the threats can be contained. That's why we keep hearing about responsible AI and regulation: because that's the part that calls for technical expertise. That's the part where the AI experts are still needed.
Demis Hassabis, CEO of Google DeepMind, recently called for new, great philosophers to understand the implications of developments in AI. (Photo for TIME)
AI experts know how to design and develop 'something', but they have no idea how to deal with 'nothing'. That's probably what Hinton realized when he retired to spend more time on what he described as 'more philosophical work.' That also seems to be what Demis Hassabis, CEO of Google DeepMind, suggests when he says that 'we need new great philosophers to come about to understand the implications of this.' And that's certainly what Nick Bostrom hinted at in my interview with him about his latest book, Deep Utopia, when he declared that some questions are 'beyond his pay grade'.
What 20th-century existential philosophy teaches us is that we don't have to wait for the AI experts to retire or for new great philosophers to emerge to deal with the existential threats posed by AI. All we have to do is remind ourselves and each other to ask how we want – and don't want – to live our lives before we trust AI to know the answer.
Related Articles


TechCrunch
The stablecoin evangelist: Katie Haun's fight for digital dollars
In 2018, when Bitcoin was trading around $4,000 and most Americans, at least, thought cryptocurrency was a fad, Katie Haun found herself on a debate stage in Mexico City opposite Paul Krugman, the Nobel Prize-winning economist who had dismissed digital assets as near worthless. As Krugman focused on Bitcoin's wild price swings, Haun steered the conversation toward something else: stablecoins. 'Stablecoins are really interesting and really important to this ecosystem to hedge against that volatility,' she argued on stage, explaining how digital tokens pegged to the U.S. dollar could offer the benefits of blockchain technology without the volatility of traditional cryptocurrencies. Krugman dismissed the idea entirely.

It wasn't exactly a turning point in Haun's career, but it was one moment among others that have helped define it. A former federal prosecutor who had spent more than a decade investigating financial crimes, including creating the government's first cryptocurrency task force and leading investigations into the Mt. Gox hack and corrupt agents in the Silk Road case, Haun had an unusual background for a crypto champion. She wasn't a libertarian ideologue or a tech founder. Coming instead from law enforcement, she understood both the criminal potential and the legitimate uses of digital assets. By 2018, she had already made history as the first female partner at Andreessen Horowitz, where she co-led the firm's crypto funds. Since founding Haun Ventures in 2022, with more than $1.5 billion in assets under management (its team is now investing from a brand-new set of funds that have yet to officially close), she has been even freer to pursue her convictions about the future of money.

The leap to hanging her own shingle hasn't been without its complexities. Despite her role at a16z and the industry connections that came with it, the two haven't publicly co-invested in anything since early 2022, shortly after she launched her fund. Haun, who joined the board of Coinbase in 2017, stepped off it last year, while Marc Andreessen, who took colleague Chris Dixon's seat in 2020, remains a director. When asked Wednesday night at TechCrunch's StrictlyVC event about her relationship with Andreessen Horowitz, she downplayed any potential friction while acknowledging they aren't exactly collaborators. 'There's no gentleman's agreement,' she said, echoing this editor's question about whether there's any understanding to avoid competing with her former employer. 'In fact, I still talk to Andreessen Horowitz. You're right that we haven't really done any deals together of late.'

The apparent lack of co-investment could reflect a cutthroat industry, or the challenges of leaving one of Silicon Valley's most prominent firms to compete directly with former colleagues.
Whatever the case, Haun is now charting her own course, and at the heart of it are stablecoins: cryptocurrencies designed to maintain a stable value by being pegged to traditional assets like the U.S. dollar. Unlike Bitcoin or Ethereum, which can swing wildly in value, stablecoins like Circle's USDC or Tether's USDT are meant to trade at exactly $1, creating a digital representation of traditional currency that can move on blockchain networks.

Fast-forward to today, and Haun's belief in stablecoins looks increasingly prescient. Stablecoins, which barely existed in 2015, now represent a quarter of a trillion dollars in value. Collectively, their issuers have become the 14th largest holder of U.S. Treasuries globally, recently surpassing both Germany and Norway. And for the first time this year, stablecoin transaction volume exceeded Visa's.

'I think people who looked at stablecoins a few years ago thought, what is the value prop?' Haun said Wednesday night. 'You've asked me this before. You said, 'Why do I need stablecoins?' And I said, 'I refer to this as an 'If it works for me, it works for everyone' problem.' In reality, for most Americans, the existing financial system works reasonably well. We have Venmo, bank accounts, credit cards. But Haun, drawing on her prosecutor's understanding of global financial systems, says she has long been aware that the U.S. experience isn't universal. In countries with unstable currencies or limited banking infrastructure, she argues, stablecoins offer something unique: instant access to stable, dollar-denominated value that can be sent anywhere in the world for pennies. 'People in Turkey don't think of Tether as a cryptocurrency,' she said Wednesday. 'They think of Tether as money.'

The technology has certainly evolved dramatically since those early debates. Stablecoins once cost $12 to send internationally. And Circle says its USDC stablecoin is fully backed one-to-one by dollars held in JP Morgan bank accounts and audited by Big Four accounting firms. It's important to note, though, that while Circle and Tether are committed to having enough reserves to support their tokens, unlike traditional bank deposits, those reserves carry no government insurance.

Still, the corporate world is taking notice in a big way. Walmart and Amazon are reportedly exploring stablecoins, as are other goliaths like Uber, Apple, and Airbnb. The reason is simple economics: stablecoins provide a way to move the value of U.S. dollars using cryptocurrency rails instead of traditional banking infrastructure, potentially saving these retail-heavy companies billions in processing fees.

But the shift has critics worried about economic chaos. If major corporations can issue their own currencies, what happens to monetary policy and banking regulation? The concerns run deeper than economic disruption. Not all stablecoins are created equal, and many lack the backing and oversight that companies like Circle provide. While well-regulated stablecoins like USDC are backed by actual dollars and U.S. Treasury securities, others operate with less transparency or rely on complex algorithmic mechanisms that have proven vulnerable to collapse. (TerraUSD has had the most spectacular crash to date, wiping out $60 billion in value when it nosedived.)
Corruption concerns in particular came into sharp focus recently when President Donald Trump's family issued its own stablecoin, a move that highlighted potential conflicts of interest in an industry where political influence can directly impact market value and regulatory outcomes. These concerns came to a head as Congress debated the GENIUS Act, legislation that would create a federal framework for stablecoin regulation. The bill passed the Senate early last week with bipartisan support, with 14 Democrats crossing party lines to support it. It now awaits a House vote before potentially reaching the president's desk.

But Senator Elizabeth Warren, the ranking member on the Senate Banking Committee, has been particularly vocal in her opposition, calling the legislation a 'superhighway for Donald Trump's corruption.' Her criticism centers on a notable gap in the bill: while it prohibits members of Congress and senior executive branch officials from issuing stablecoin products, it says nothing about their family members.

Asked about Warren's concerns on Wednesday night, Haun practically rolled her eyes. 'I think it's really ironic that Elizabeth Warren or other Democrats who do call this corruption are not running to pass crypto legislation,' she said. 'Had there been rules of the road in place [already], there would have been a framework, there would have been clear rules for what's a security, what's a commodity, and what are the consumer protections around that.'

Haun, whose venture capital firm has made numerous stablecoin investments, including Bridge (acquired by Stripe for reportedly 10 times forward revenue), is unsurprisingly supportive of the legislation. But she has one notable criticism: the bill's prohibition on yield-bearing stablecoins. 'I'm not sure that yield-bearing stablecoins are a good idea for consumers in the U.S., but I'm not sure that a prohibition is a good idea,' she told StrictlyVC attendees. The issue comes down to who profits from the interest earned on stablecoin reserves. Currently, that money goes to companies like Circle and Coinbase. But Haun wonders why consumers shouldn't get that yield, just as they would with a savings account. 'If you had a savings account or checking account and you're getting yield on that, you're getting interest,' she explained. 'What if you just said, 'No, the bank gets interest, not you,' and they're lending out your money?'

Haun was less nuanced about another Warren concern: that if the GENIUS Act is signed into law, stablecoins could become a vehicle for money laundering and terrorism financing. 'Criminals are great beta testers of all technologies,' said Haun. 'But this technology is highly traceable, way more traceable than cash. The largest criminal instrument is the dollar bill.' (According to Haun, the Treasury Department has testified that 99.9% of money laundering crimes succeed using traditional bank wires, not cryptocurrency.) Meanwhile, she said, the regulatory clarity that legislation like the GENIUS Act provides could actually make the system safer by distinguishing legitimate, well-backed stablecoins from more experimental or risky variants.

In fact, as the stablecoin ecosystem continues to mature, Haun sees even bigger changes ahead. She envisions a future where all kinds of assets, from money market funds to real estate to private credit, get 'tokenized' and made available 24/7 to global markets. 'It's just a digital representation of a physical asset,' she explains.
'BlackRock, Franklin Templeton, they've already tokenized their money market funds. That's already happened.' According to Haun, tokenized assets could democratize access to investments in ways similar to how Netflix democratized entertainment. Instead of having to be wealthy enough to meet minimum investment thresholds, someone with $25 and a smartphone could buy fractional ownership in a share of Apple or Amazon, for example. 'Just because something's inevitable doesn't mean it's imminent,' Haun said on Wednesday. But she's confident the transformation is coming, driven by the same forces that made stablecoins successful: they're faster, cheaper, and more accessible than traditional alternatives.

Looking back at that 2018 debate with Krugman, Haun's persistence seems to have paid off. The major question now isn't whether digital dollars will reshape the financial system, but whether regulators can keep pace with the technology while addressing legitimate concerns about corruption, consumer protection, and financial stability. Haun doesn't seem concerned. While critics point to the fact that stablecoins represent just 2% of global payments, questioning their product-market fit, Haun sees this as a familiar tech adoption story, one that has played out repeatedly and often takes longer than people initially imagine. 'We think it's really early days,' she told the crowd.


Forbes
From Cognitive Debt To Cognitive Dividend: 4 Factors
When an eye-catching (not yet peer-reviewed) MIT Media Lab paper, Your Brain on ChatGPT, landed this month, the headline sounded almost playful. The data are anything but. Over four months, students who leaned on a large language model to draft SAT-style essays showed the weakest neural connectivity, lowest memory recall, and flattest writing style of three comparison groups. The authors dub this hidden cost cognitive debt: each time we let a machine think for us, natural intelligence quietly pays interest.

Is it time to quit the AI train while we still can, or is this the moment to adopt a more thoughtful yet pragmatic alternative to blind offloading? We can deliberately offset cognitive debt with intentional mental effort, switching between solo thinking and AI-assisted modes to stretch neural networks rather than letting them atrophy. Drawing on insights from physiology, this might be the moment to adopt cognitive high-intensity interval training. To get started, think in terms of four sequential guardrails, the 4 A-Factors, that convert short-term convenience into the long-term dividend of hybrid intelligence.

Attitude: Set The Motive Before You Type (Or Vibe Code)

Mindset shapes outcome. In a company memo published on 17 June 2025, Amazon chief executive Andy Jassy urged employees to 'be curious about AI, educate yourself, attend workshops, and experiment whenever you can'. Curiosity can frame the system as a colleague rather than a cognitive crutch. Before opening a prompt window, write one sentence that explains why you are calling on the model, for example: 'I am using the chatbot to prototype ideas that I will refine myself.' The pause anchors ownership. Managers can reinforce that habit by rewriting briefs: swap verbs such as generate or replace for verbs that imply collaboration, like co-design or stress-test. Meetings that begin with a shared intention end with fewer rewrites and stronger ideas.

Approach: Align Aspirations, Actions And Algorithms

Technology always follows incentives. If we measure only speed or click-through, that is what machines will maximize, often at the expense of originality or empathy. It does not have to be an either-or equation. MIT Sloan research on complementary capabilities highlights that pattern recognition is silicon's strength while judgment and ethics remain ours. Teams therefore need a habit of alignment. First, trace how a desired human outcome, say customer trust, translates into day-to-day actions such as transparent messaging. Then confirm that the optimization targets inside the model reward those very actions, not merely throughput. When aspirations, actions, and algorithms pull in one direction, humans stay in the loop where values matter and machines are tailored with a prosocial intention to accelerate what we value.

Ability: Build Double Literacy

Tools do not level the playing field; they raise the ceiling for those who can question them. An EY Responsible AI Pulse survey released in June 2025 reported that fewer than one-third of C-suite leaders feel highly confident that their governance frameworks can spot hidden model errors. Meanwhile, an Accenture study shows that ninety-two percent of leaders consider generative AI essential to business reinvention. The gap is telling. Closing it requires double literacy: fluency in interpersonal, human interplay as well as in machine logic.
On the technical side, managers should know how to read a model card, notice spurious correlations, and ask for confidence intervals. On the human side, they must predict how a redesigned workflow changes trust, autonomy, or diversity of thought. Promotions and pay should reward people who speak both languages, because the future belongs to translators, not spectators.

Ambition: Scale Humans Up, Not Out

The goal is not to squeeze people out but to stretch what people can do. MIT Sloan's Ideas Made to Matter recently profiled emerging 'hybrid intelligence' systems that amplify and augment human capability rather than replace it. Ambition reframes metrics. Instead of chasing ten-percent efficiencies, design for ten-fold creativity. Include indicators such as learning velocity, cross-domain experimentation, and employee agency alongside traditional return on investment. When a firm treats AI as a catalyst for human ingenuity, the dividend compounds: faster product cycles, richer talent pipelines, and reputational lift.

4 Quick Takeaways

Attitude → Write the 'why' before the prompt; the pause keeps you in charge.
Approach → Harmonize values and tools; adjust the tool when it drifts away from the values you believe in as a human, offline, not the other way around.
Ability → Learn to challenge numbers and narratives; double literacy begins with you.
Ambition → Audit metrics quarterly to be sure they elevate human potential.

Cognitive Debt Is Not Destiny

Attitude steers intention, approach ties goals to code, ability equips people to question what the code does, and ambition keeps the whole endeavor pointed at humane progress. Run every digital engagement through the 4 A-Factor grid, and yesterday's mental mortgage turns into tomorrow's dividend in creativity, compassion, and shared humanistic value for all stakeholders.
Yahoo
Moratorium on state AI regulation clears Senate hurdle
A Republican effort to prevent states from enforcing their own AI regulations cleared a key procedural hurdle on Saturday. The rule, as reportedly rewritten by Senate Commerce Chair Ted Cruz in an attempt to comply with budgetary rules, would withhold federal broadband funding from states if they try to enforce AI regulations in the next 10 years. And the rewrite seems to have passed muster, with the Senate Parliamentarian now ruling that the provision is not subject to the so-called Byrd rule, so it can be included in Republicans' 'One Big, Beautiful Bill' and passed with a simple majority, without potentially getting blocked by a filibuster, and without requiring support from Senate Democrats.

However, it's not clear how many Republicans will support the moratorium. For example, Republican Senator Marsha Blackburn of Tennessee recently said, 'We do not need a moratorium that would prohibit our states from stepping up and protecting citizens in their state.' And while the House of Representatives already passed a version of the bill that included a moratorium on AI regulation, far-right Representative Marjorie Taylor Greene subsequently declared that she is 'adamantly OPPOSED' to the provision, calling it 'a violation of state rights' that needs to be 'stripped out in the Senate.' House Speaker Mike Johnson defended the provision by saying it had President Donald Trump's support and arguing, 'We have to be careful not to have 50 different states regulating AI, because it has national security implications, right?'

In a recent report, Americans for Responsible Innovation, an advocacy group for AI regulation, wrote that 'the proposal's broad language could potentially sweep away a wide range of public interest state legislation regulating AI and other algorithmic-based technologies, creating a regulatory vacuum across multiple technology policy domains without offering federal alternatives to replace the eliminated state-level guardrails.'

A number of states are indeed taking steps toward AI regulation. In California, Governor Gavin Newsom vetoed a high-profile AI safety bill last year while signing a number of less controversial regulations around issues like privacy and deepfakes. In New York, an AI safety bill passed by state lawmakers is awaiting Governor Kathy Hochul's signature. And Utah has passed its own regulations around AI transparency.