Crossed Wires: Artificial Intelligence reality check – not as smart as it thinks it is

Daily Maverick | 5 days ago

Apple researchers' recent paper, The Illusion of Thinking, challenges the hype around AI, revealing its limitations in solving complex problems.
If one is to believe Sam Altman and other AI boosters and accelerationists, the era of abundance is almost upon us. AI is about to relieve us all of drudgery, ill health, poverty and many other miseries before leading us to some promised land where we will shed our burdens and turn our attention to loftier concerns. Any day now.
And so the publication of a paper by Apple researchers this month arrived as a refreshing dose of realism. It was titled The Illusion of Thinking and it broke the AI Internet. It concluded that ChatGPT-style GenAI models (like Claude, Gemini, DeepSeek and others) can only solve a constrained set of problems and tend to collapse spectacularly when complexity is introduced. The implications of the paper are clear – the underlying technologies that have so supercharged the AI narrative and fuelled so much hyperbole have a long way to go before anyone attains the holy grail of Artificial General Intelligence (AGI) and the imagined utopia of techno-optimists.
For anyone with time and grit, here is the paper. One of the examples cited concerns the well-known 'Tower of Hanoi' problem, which involves moving a stack of different-sized disks from one vertical rod to another, one disk at a time, without ever placing a larger disk on a smaller one. Any reasonably smart nine-year-old can find a solution, and a very short computer program can describe it, but, left to its own devices, GenAI cannot come up with a general solution to the problem. As more and more disks are added, the AI becomes a blithering idiot. It has no idea what it is doing. It is not able to 'generalise' from a few disks to many.
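To make that point concrete, here is the sort of 'very short computer program' the column has in mind: a minimal sketch of the standard recursive Tower of Hanoi solution, written in Python for illustration. The function name and rod labels are mine, not taken from the Apple paper.

```python
def hanoi(n, source, target, spare):
    """Print the moves that shift n disks from source to target.

    The classic recursive solution: move n-1 disks out of the way,
    move the largest disk, then restack the n-1 disks on top of it.
    """
    if n == 0:
        return
    hanoi(n - 1, source, spare, target)   # clear the way
    print(f"move disk {n} from {source} to {target}")
    hanoi(n - 1, spare, target, source)   # restack on the largest disk

hanoi(3, "A", "C", "B")  # 2**3 - 1 = 7 moves
```

The same dozen lines solve the puzzle for any number of disks, which is exactly the kind of generalisation the paper found the 'reasoning' models unable to sustain.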
This leads to the inescapable conclusion that, if a child or a very short algorithm can best the most advanced 'reasoning' models from ChatGPT and Claude, then we are far from AGI. No matter what Sam Altman says.
It is not as if a whole slew of clever researchers are blind to this fact. There are some researchers busy trying to embed ethics and alignment into AI so that humans can survive its evolution without too much pain or possible extinction. There are some researchers who are taking what we have now and applying it to current real-world problems in science, education, healthcare or the sludge of institutional processes. And there are some who are saying: This version of AI, this 'deep learning' machine that has captured everyone's attention – it is simply not good enough. They are looking to invent something that breaks free of the constraints which Apple's paper so brutally highlights.
There are some clever band-aids available to patch over the obvious weaknesses of current AI models, such as the widely used technique of Reinforcement Learning (RL), which refines a model's behaviour after its initial training. But these partial fixes do not address the basic weakness of the core architecture – they address the symptoms and not the cause.
It doesn't take an expert to know that humans learn in many different ways, all the way back to our warm launchpad in the womb. We have genetic programs gifted by our ancestors, we learn from our senses, we learn by example, we learn by trial and error, we learn by being taught by others, we learn by accident, we learn by intent, and then we also learn to reason, to generalise, to deduce, to infer. It is probably fair to say that we humans are learning machines – running all day, every day, from the moment of conception. Our learning may well be faulty, our memories inaccurate, our lessons sometimes haphazard, our failures manifold – but learn we do, always and forever.
It is in this area that the current crop of AI techniques is exposed as having only a thin veneer of competence. Take ChatGPT, at least in its text version. It has learnt how to predict the next word from a historical store of human-created documents reduced to gigantic matrices of statistically related words. There is not much more to it than that, even though its usefulness has astounded everyone.
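As a deliberately crude illustration of next-word prediction from statistical co-occurrence, here is a toy sketch in Python that counts which word tends to follow which. Real systems like ChatGPT learn these relationships with neural networks trained on vast corpora rather than an explicit count table; the sample text and function name below are invented for the example.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the 'historical store of human-created documents'.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word (bigram statistics).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most frequently observed follower of `word`, or None if unseen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # 'cat' ('mat' ties; insertion order breaks the tie)
print(predict_next("cat"))  # 'sat' ('slept' ties; insertion order breaks the tie)
```

Feeble as it is, the toy captures the basic move: guess the next word from what has statistically followed it before.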
But really, compare this with what our species does as we go about our daily business – learning, learning, learning, both to our benefit and sometimes to our detriment – all the time, unable to stop for even a microsecond. AI models are simply embarrassing next to that. Babies are smarter, primates are smarter. Dogs are smarter. The challenge of 'continuous autonomous learning' has yet to be met in current AI models.
Before I go overboard about the absurdity of the AGI-is-nearly-here claim, I should throw some light on what has been achieved, especially via the GenAI technologies. These are sometimes confusingly called Large Language Models (they now go way beyond mere language). What they can do is truly unprecedented. I use them all day, every day. They are novel and brilliant assistants. They are much, much smarter or faster than I am at doing a whole slew of important things. But I am much, much smarter than they are when it comes to a huge number of other things.
AGI, as now commonly defined, means the point at which AI equals (or betters) humans at all cognitive (as opposed to physical) tasks. I spend a large part of my day reading about the advances at the edge of this fabulous field, which is probably the most important technological development in human history. There is remarkable stuff coming down the line. A cure for cancer, perhaps. Infinite cheap energy. Healthy and long lives.
But will it be better than humans at all cognitive tasks? Not today. Not this year. Not next year. Not until AI is spawned as we are and learns as we do.
Like the witches' riddle in Shakespeare's Macbeth, perhaps only when AI is of woman born. DM
