The Good, The Bad, And The Apocalypse: Tech Pioneer Geoffrey Hinton Lays Out His Stark Vision For AI

Scoop | 02-06-2025

It's the question that keeps Geoffrey Hinton up at night: What happens when humans are no longer the most intelligent life on the planet?
"My greatest fear is that, in the long run, the digital beings we're creating turn out to be a better form of intelligence than people."
Hinton's fears come from a place of knowledge. Described as the Godfather of AI, he is a pioneering British-Canadian computer scientist whose decades of work in artificial intelligence earned him global acclaim.
His career at the forefront of machine learning began at the field's inception - before the first Pac-Man game was released.
But after leading AI research at Google for a decade, Hinton left the company in 2023 to speak more freely about what he now sees as the grave dangers posed by artificial intelligence.
Talking on this week's 30 With Guyon Espiner, Hinton offers his latest assessment of our AI-dominated future - one filled with promise, peril, and a potential apocalypse.
The Good: 'It's going to do wonderful things for us'
Hinton remains positive about many of the potential benefits of AI, especially in fields like healthcare and education. "It's going to do wonderful things for us," he says.
According to a report from this year's World Economic Forum, the AI market in education is already worth around US$5 billion. That's expected to grow to US$112.3 billion over the next decade.
Proponents like Hinton believe AI's benefit to education lies in targeted, efficient student learning, similar to how AI is already assisting medical diagnosis.
"In healthcare, you're going to be able to have [an AI] family doctor who's seen millions of patients - including quite a few with the same very rare condition you have - that knows your genome, knows all your tests, and hasn't forgotten any of them."
He describes AI systems that already outperform doctors in diagnosing complex cases. When combined with human physicians, the results are even more impressive - a human-AI synergy he believes will only improve over time.
Hinton disagrees with former colleague Demis Hassabis at Google DeepMind, who predicts AI is on track to cure all diseases within just 10 years. "I think that's a bit optimistic."
"If he said 25 years I'd believe it."
The Bad: 'Autonomous lethal weapons'
Despite these benefits, Hinton warns of pressing risks that demand urgent attention.
"Right now, we're at a special point in history," he says. "We need to work quite hard to figure out how to deal with all the short-term bad consequences of AI, like corrupting elections, putting people out of work, cybercrimes."
He is particularly alarmed by military developments, including Google's removal of its long-standing pledge not to use AI to develop weapons of war.
"This shows," says Hinton of his former employers, "the company's principals were up for sale."
He believes the defence departments of all the major arms-dealing nations are already busy working on "autonomous lethal weapons. Swarms of drones that go and kill people. Maybe people of a particular kind".
He also points out the grim fact that Europe's AI regulations - some of the world's most robust - contain "a little clause that says none of these regulations apply to military uses of AI".
Then there is AI's capacity for deception - designed as it is to mimic the behaviours of its creator species. Hinton says current systems can already engage in deliberate manipulation, noting that cybercrime has surged by 1200 percent in just one year.
The Apocalyptic: 'We'd no longer be needed'
At the heart of Hinton's warning lies that deeper, existential question: what happens when we are no longer the most intelligent beings on the planet?
"I think it would be a bad thing for people - because we'd no longer be needed."
Despite the current surge in AI's military applications, Hinton doesn't envisage an AI takeover playing out like The Terminator franchise.
"If [AI] was going to take over… there's so many ways they could do it. I don't even want to speculate about what way [it] would choose."
'Ask a chicken'
For those who believe a rogue AI can simply be shut down by "pulling the plug", Hinton warns it's not far-fetched for the next generation of superintelligent AI to manipulate people into keeping it alive.
This month, Palisade Research reported that OpenAI's o3 model altered shutdown code to prevent itself from being switched off - despite the research team's explicit instructions to allow the shutdown.
Perhaps most unsettling of all is Hinton's lack of faith in our ability to respond. "There are so many bad uses as well as good," he says. "And our political systems are just not in a good state to deal with this coming along now."
It's a sobering reflection from one of the brightest minds in AI - whose work helped build the systems now raising alarms.
He closes on a metaphor that sounds as absurd as it is chilling: "If you want to know what it's like not to be the apex intelligence, ask a chicken."
Watch the full conversation with Geoffrey Hinton and Guyon Espiner on 30 With Guyon Espiner.


Related Articles

World's largest data breach exposes 16 billion credentials

Techday NZ | 5 hours ago

The scale of the latest data breach, involving a staggering 16 billion new credentials and passwords, is forcing both experts and organisations to reckon with the ongoing weaknesses in global digital security. Described as the world's largest data breach, the incident has reportedly swept up data from a vast array of online platforms, including not only commercial giants like Apple and Google but also government services and numerous SaaS (Software as a Service) applications.

Brian Soby, co-founder and CTO at AppOmni, whose company specialises in securing digital records, believes the breach was inevitable given the industry's reliance on outmoded security frameworks. Soby warns that the gravity of the situation goes beyond the raw numbers: "This isn't just a collection of old, previously leaked passwords; it appears to be a new, massive, and highly organised library of credentials." According to Soby, cybercriminals now hold a "roadmap for widespread account takeovers" that threatens the backbone of modern digital life - cloud services and SaaS applications - potentially outpacing many current security defences.

Soby highlights a critical vulnerability at the heart of today's enterprises. While many organisations invest in identity management and access security projects, basic misconfigurations and failure to disable outdated forms of credential use leave them exposed. "Large credential dumps such as these are likely to highlight just how many organisations indeed remain vulnerable to credential attacks due to these insufficient protections," he adds.

Spencer Young, Senior Vice President EMEA at cybersecurity firm Delinea, echoes the concern, underlining that static credentials, especially passwords which are seldom changed, represent an Achilles' heel. "Passwords alone - especially unrotated ones - leave consumers and organisations vulnerable to phishing, credential stuffing, and Pass-the-Hash attacks," he notes. Young stresses that the traditional advice of strong password hygiene is no longer sufficient. Instead, initiatives like automated password rotation and credential vaulting, which reduce the window of opportunity for attackers, should be the new standard.

In terms of longer-term solutions, Young observes that passwordless authentication approaches are gaining traction. "Technologies such as biometrics, where biometric data remains encrypted and safely stored in the device and does not travel across the network, improves the authentication process," he explains. However, he warns that passwords themselves are far from obsolete; they are increasingly being relegated to the background as part of a layered, multifactor authorisation system that may include one-time passwords or magic links to enhance security.

With cybercriminals orchestrating campaigns using vast troves of login data, the scale of weaponisation is unprecedented. Tim Eades, CEO and co-founder at Anetac, illustrates the dilemma facing organisations across the world, as these troves become "a commodity that are bought, sold, and weaponised in countless attacks." Eades notes that the unrelenting circulation of stolen records magnifies the risk over time, especially as new AI agents - sometimes deployed without adequate safeguards - can introduce further vulnerabilities and thousands of new access points for attackers. "The part that keeps CISOs up at night? These records circulate for years, the risk doesn't go away, it only grows over time."
Raising further alarm, Eades points out that until affected organisations are identified, compromised individuals may have no warning or recourse. This opacity not only endangers users but also perpetuates a cycle in which threat actors vie to surpass one another, pushing the boundaries of data breaches ever further. He urges organisations to reinforce security measures: "Leaders should protect all credentials like they are the keys to the castle." Encouraging the use of unique passwords, two-factor authentication, and embedding a culture of security awareness are presented as essential starting points.

Another concern arising from the breach is the "snowball effect" it might have on cyber-attacks, especially through the proliferation of sleeper accounts. Xavier Sheikrojan, Senior Risk Intelligence Manager at Signifyd, warns that fraudsters may use stolen credentials not just for immediate exploitation but to create dormant accounts for later and larger-scale attacks. He advocates for proactive action, urging businesses to monitor user behaviour, force password resets, and continually refine machine learning systems aimed at picking up fraudulent activity.

As experts across the sector agree, the exposure of billions of records simultaneously marks a pivotal moment in the digital security landscape. While technology continues to advance, so too does the capacity and sophistication of cybercrime, prompting renewed calls for organisations and individuals alike to treat identity and access security with unwavering seriousness and vigilance.

'Nanogirl' informs South on AI's use

Otago Daily Times | 2 days ago

Even though "Nanogirl", Dr Michelle Dickinson, has worked with world-leading tech giants, she prefers to inspire the next generation. About 60 Great South guests were glued to their Kelvin Hotel seats on Thursday evening as the United Kingdom-born New Zealand nanotechnologist shared her knowledge of AI's future impact.

Business needed to stay informed about technology so it could future-proof, she said. The days were gone when a traditional five-year business plan was enough, given the breakneck speed at which technology has been advancing. Owners also needed to understand the importance of maintaining a customer-centric business or risk becoming quickly irrelevant. "I care about that we have empty stores." The number of legacy institutions closing was evidence of their models not moving with the customer. "Not being customer-centric is the biggest threat to business."

Schools were another sector that needed to adapt to the changing world, as they predominantly catered to producing an "average" student. "Nobody wants their kids to be average." Were AI technology to be implemented, it could be used to develop personalised learning models while removing stress-inducing and labour-intensive tasks from teachers' workloads. "Now you can be the best teacher you can be and stay in the field you love. I don't want our teachers to be burnt out, I want them to be excited to be teaching." In 30 seconds, new technology could now produce individualised 12-week teaching plans aligned to the curriculum, in both Māori and English, she said.

Agriculture was another sector to benefit from the developing technology. Better crop yields and cost savings could now be achieved through localised soil and crop tracking information which pinpointed what fertiliser or moisture levels were required in specific sections of a paddock.

While AI was a problem-solving tool which provided outcomes based on the information available to it, to work well it still needed creative ideas to come from humans, she said. "People are the fundamentals of the future . . . and the human side of why we do things should be at the forefront. We, as humans, make some pretty cool decisions that aren't always based on logic."

Personal and commercial security had also become imperative now that the ability to produce realistic "deep-fake" video and audio was about to hit us. She urged families and organisations to have "safe words" that would not be present in deep-fake recordings, allowing family members or staff to tell fake from genuine cries for help. "This is the stuff we need to be talking about with our kids right now."

Great South chief executive Chami Abeysinghe said Dr Dickinson's presentation raised some "thought-provoking" questions for Southland's business leaders. She believed there needed to be discussions about how Southland could position itself to be at the forefront of tech-driven innovation. "I think some of the points that she raised were a good indication that we probably need to get a bit quicker at adopting and adapting. By the time we get around to thinking about it, it has already changed again."

AI was able to process information and data in a fraction of the time humans did, but the technology did not come without risks, and it was critical businesses protected their operations. "If we are going to use it, we need to be able to know that it's secure."
Information entered into ChatGPT became part of a public realm that everyone could access, and business policies had not kept up. "You absolutely have to have an [AI security] policy."

Nearly half of developers say over 50% of code is AI-generated

Techday NZ | 2 days ago

Cloudsmith's latest report shows that nearly half of all developers using AI in their workflows now have codebases that are at least 50% AI-generated. The 2025 Artifact Management Report from Cloudsmith surveyed 307 software professionals in the US and UK, all working with AI as part of their development, DevOps, or CI/CD processes. Among these respondents, 42% reported that at least half of their current codebase is now produced by AI tools.

Despite the large-scale adoption of AI-driven coding, oversight remains inconsistent. Only 67% of developers who use AI review the generated code before every deployment. This means nearly one-third of those working with AI-assisted code are deploying software without always performing a human review, even as new security risks linked to AI-generated code are emerging.

Security concerns

The report points to a gap between the rapid pace of AI integration in software workflows and the implementation of safety checks and controls. Attacks such as 'slopsquatting' - where malicious actors exploit hallucinated or non-existent dependencies suggested by AI code assistants - highlight the risks when AI-generated code is left unchecked. Cloudsmith's data shows that while 59% of developers say they apply extra scrutiny to AI-generated packages, far fewer have more systematic approaches in place for risk mitigation. Only 34% use tools that enforce policies specific to AI-generated artifacts, and 17% acknowledge they have no controls in place at all for managing AI-written code or dependencies.

"Software development teams are shipping faster, with more AI-generated code and AI agent-led updates," said Glenn Weinstein, CEO at Cloudsmith. "AI tools have had a huge impact on developer productivity, which is great. That said, with potentially less human scrutiny on generated code, it's more important that leaders ensure the right automated controls are in place for the software supply chain."

Developer perceptions

The research reveals a range of attitudes towards AI-generated code among developers. While 59% are cautious and take extra steps to verify the integrity of code created by AI, 20% said they trust AI-generated code "completely." This suggests a marked difference in risk appetite and perception within developer teams, even as the majority acknowledge the need for vigilance.

Across the sample, 86% of developers reported an increase in the use of AI-influenced packages or software dependencies in the past year, and 40% described this increase as "significant." Nonetheless, only 29% of those surveyed felt "very confident" in their ability to detect potential vulnerabilities in open-source libraries, from which AI tools frequently pull suggestions.

"Controlling the software supply chain is the first step towards securing it," added Weinstein. "Automated checks and use of curated artifact repositories can help developers spot issues early in the development lifecycle."

Tooling and controls

The report highlights that adoption of automated tools specifically designed for AI-generated code remains limited, despite the stated importance of security among software development teams. While AI technologies accelerate the pace of software delivery and updating, adoption of stricter controls and policy enforcement is not keeping up with the new risks posed by machine-generated code. The findings indicate a potential lag in upgrading security processes or artifact management solutions to match the growing use of AI in coding.
Developers from a range of industries—including technology, finance, healthcare, and manufacturing—participated in the survey, with roles spanning development, DevOps management, engineering, and security leadership in enterprises with more than 500 employees. The full Cloudsmith 2025 Artifact Management Report also explores other key issues, including how teams decide which open-source packages to trust, the expanding presence of AI in build pipelines, and the persistent challenges in prioritising tooling upgrades for security benefits.
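To make the 'slopsquatting' risk concrete: because AI assistants sometimes suggest package names that do not exist, an attacker can register those names on a public registry and wait for unreviewed installs. The following is a minimal illustrative sketch, not a tool described in the report - it assumes Python, PyPI's public JSON API, and a pip-style requirements file - showing how a pre-install step could verify that every declared dependency actually resolves upstream:

import sys
import urllib.error
import urllib.request

def exists_on_pypi(name: str) -> bool:
    # A 200 response from PyPI's JSON API means the project name is registered.
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # 404: no such project - possibly a hallucinated name

def suspicious_requirements(path: str) -> list:
    """Return requirement names from a pip-style file that don't resolve on PyPI."""
    missing = []
    with open(path) as fh:
        for raw in fh:
            line = raw.split("#")[0].strip()  # drop comments and whitespace
            if not line:
                continue
            name = line.split(";")[0]  # drop environment markers
            # Strip version specifiers like "requests>=2.0" down to "requests".
            for sep in ("==", ">=", "<=", "~=", "!=", ">", "<", "["):
                name = name.split(sep)[0]
            name = name.strip()
            if name and not exists_on_pypi(name):
                missing.append(name)
    return missing

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "requirements.txt"
    bad = suspicious_requirements(path)
    if bad:
        print("Unresolvable (possibly hallucinated) dependencies:", ", ".join(bad))
        sys.exit(1)
    print("All declared dependencies resolve on PyPI.")

Even a check this small would flag a hallucinated dependency before anything is fetched; the curated artifact repositories Weinstein mentions generalise the same idea with organisation-wide policy enforcement.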
