Artificial intelligence — an aid to thought, not a replacement


Daily Maverick, 4 days ago

'The danger of outsourcing our thinking to machines is that we still have to live in the world they end up creating. That's too big a responsibility to just hand over.'
When ChatGPT briefly went offline last week, it felt, as journalist and writer Gus Silber put it, 'as if the sun had fallen from the sky'.
Speaking on a Jive Media Africa webinar on the subject of 'Machines are writing – do we trust the message?', Silber and other panellists tossed around concepts of 'Uberisation', 'forklifting' and 'outsourcing' to get to grips with AI technology and its ethical pitfalls.
Silber noted that in just a few years, AI had morphed from novelty to necessity and was now deeply woven into daily work across media, academia and science communication.
Its seductive convenience allows us to 'outsource thinking to a machine', said Silber, while noting both the potential and the perils of doing so.
Fellow panellists Sibusiso Biyela, a science communicator and champion of language equity in science, and Michelle Riedlinger, associate professor in the school of communications at the Queensland University of Technology, agreed, in a discussion peppered with metaphors highlighting the divisions of labour in the partnership between technology and humans.
Introducing the webinar, Jive Media director Robert Inglis said that 'artificial intelligence, particularly generative AI, is reshaping both the practice of research and the craft of science communication. This impact is felt by researchers, by science communicators and by others working at the intersection of science, society and media and especially those who are grappling with how AI tools influence credibility, ethics and public trust.'
While many fret over the elimination of jobs and the technological encroachment on the preserve of what it means to be human, Silber readily calls himself a utopian on the subject, believing 'it's ultimately going to be good for humanity'.
Silber notes that the reach of AI, originally a niche technology, 'has expanded dramatically, driven by advances like fibre, broadband and always-on connectivity. Tools such as ChatGPT now serve as default knowledge engines, sometimes even surpassing Google.'
Being able to 'outsource a lot of your thinking to, effectively, a machine', he said, tempts users to let AI handle increasingly complex tasks.
In academia and media, some rely heavily on AI-generated content, resulting in a sameness of voice: 'It sounds human, but it sounds human in a very kind of generic and samey way.' While AI offers powerful assistance in tasks like transcription – 'you can transcribe two hours' worth of interviews in five or ten minutes' – the risk is that its convenience leads to 'creative atrophy'. It's 'a real temptation, a kind of "tyranny of ease", where you can just prompt the AI to write essays or theses. That scares me because it risks giving up your creative energy.'
Collaborative use
He nevertheless enthuses about the rise of multimodal AI, mentioning tools like Whisper, NotebookLM and Genspark AI, which are already revolutionising research, communication and creative industries. But he draws clear boundaries: 'I draw the line at outsourcing full creative processes to AI.' Instead, he advocates using AI collaboratively, augmenting human thought rather than replacing it.
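Silber's transcription example maps onto tools that are publicly available today. As a rough sketch (not from the webinar), the open-source Whisper model he mentions can be run locally in a few lines of Python; 'interview.mp3' is a placeholder file name.

```python
# Minimal sketch of AI-assisted transcription with the open-source
# Whisper library (pip install openai-whisper). Illustrative only;
# "interview.mp3" is a placeholder file name.
import whisper

model = whisper.load_model("base")          # small, CPU-friendly model
result = model.transcribe("interview.mp3")  # hours of audio in minutes
print(result["text"])                       # raw transcript, unreviewed

# The human stays in the loop: the transcript still needs checking
# against the recording before anything is quoted from it.
```

This is the 'heavy lifting' division of labour he describes: the machine produces a draft transcript quickly, while the journalist remains responsible for verifying it.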
'We're lucky to live in this creative technical renaissance. We can't go back to how things were before. My advice: explore these tools, break them, have fun and find ways to use them collaboratively. Let machines do the heavy lifting while we focus on human creativity.'
Anxieties, however, are pervasive, said Riedlinger. Her research on news audiences found familiar concerns: misinformation, copyright, elections, job displacement. But people weren't rejecting AI outright: 85% wanted transparency, in the form of visible labels, a kind of 'nutritional label' for AI-generated content.
She said there's a growing 'authenticity infrastructure' emerging, with companies like Adobe working on labelling multimodal content. Audiences want AI to support, not replace, human journalists and science communicators. 'The key is to keep humans in the loop, to ensure creativity, empathy and accountability remain central.'
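The 'nutritional label' idea is easiest to picture as structured metadata attached to a story. The sketch below is a hypothetical illustration; the field names are invented and do not follow any published standard such as Adobe's Content Credentials or the C2PA manifests she alludes to.

```python
# A hypothetical AI-disclosure label as machine-readable metadata.
# Field names are invented for illustration; real labelling schemes
# (e.g. C2PA / Content Credentials) define their own manifests.
import json

label = {
    "ai_assisted": True,
    "tasks": ["transcription", "first-draft summary"],  # the 'forklifting'
    "human_review": "fact-checked and edited by a named journalist",
    "disclosure_shown_to_reader": True,
}
print(json.dumps(label, indent=2))
```

However it is encoded, the point is the same: the audience can see at a glance what the machine did and where the human stayed in the loop.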
To help navigate this, Riedlinger reached for metaphors.
The first, she said, contrasts forklifting with weightlifting: 'Forklifting covers repetitive, heavy tasks – transcription, translation, drafting – where AI helps move things efficiently but under human guidance. Weightlifting represents skills that build strength: framing stories, interpreting data, learning audiences. These are areas we risk weakening if we outsource too much to AI.'
The second is the 'Uber metaphor'. 'You can make coffee yourself or order it through Uber. It's convenient, but hides labour behind the scenes: the barista, the driver, data centres. Generative AI feels equally magical but isn't free; there are hidden costs in energy use, data scraping and ethical concerns. Before outsourcing, we must consider these unseen consequences.'
Hallucinations and bias
'In global studies, people increasingly recognise AI's limits: hallucinations, biases in gender, race, geography and class. Some see AI as a calculator, improving over time, but that's misleading. Calculators give fixed answers; generative AI doesn't.'
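Her point about calculators can be made concrete. A calculator is a fixed function, while a generative model samples from a probability distribution, so the same prompt need not produce the same answer twice. A toy sketch of the difference, using an invented three-word vocabulary:

```python
# Toy contrast between a deterministic 'calculator' and a sampling
# 'generator'. The vocabulary and probabilities are invented.
import random

def calculator(x):
    return 2 * x  # same input, same output, every time

NEXT_WORD = {"glad": 0.5, "happy": 0.3, "delighted": 0.2}

def generator(prompt):
    # The prompt is ignored here; the point is the sampling step.
    words = list(NEXT_WORD)
    weights = list(NEXT_WORD.values())
    return random.choices(words, weights=weights)[0]

print(calculator(21), calculator(21))        # 42 42: fixed answers
print(generator("I am"), generator("I am"))  # may differ between runs
```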
Reaching for yet another metaphor, she said it's 'more like a talking mirror from a fairy tale', generating fluent, tailored and sometimes flattering responses, but blending truth and invention in a way that can flatten creativity and make unique ideas more generic.
'Authenticity, trust and disclosure are vital. We need consistent labels, audience control and clear public policies.'
This, said Riedlinger, will build trust over time. 'Science communicators must reflect on each task: Is this forklifting or weightlifting? Am I calling an Uber for something I should craft myself? Science communication deserves thoughtful tools and thoughtful users. We need to ensure that our publics have authentic interactions.'
The watchwords, when dealing with AI, are: 'Disclose. Collaborate. Stay in the loop as a human. Design for trust.'
Picking up on the trust, or mistrust, of the machine, Biyela said 'there's a lot of antagonism around AI, especially with articles not disclosing if they're AI-assisted. When audiences hear something was generated by AI, they often turn away. It becomes less of an achievement if it wasn't really done by a human.'
But, he said, 'audiences (and ourselves) need to understand AI's limitations and how it actually works. We call it artificial intelligence, but it's in no way intelligent. It's an automaton that looks like it's thinking, but it's not. It's a clever prediction model using computing power to make it seem like it's thinking for us. But it's not. The thinking is always being done by people. AI never does anything; it's always us. What it produces has been trained to give us what we want.'
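Biyela's 'clever prediction model' can be illustrated at toy scale. Real LLMs weigh billions of learned parameters over vast vocabularies, but the core move, predicting the next word from the words seen so far, looks like this invented miniature (not any panellist's code):

```python
# A miniature next-word predictor: count which word follows which in a
# training text, then 'generate' by picking the most common successor.
# The corpus is invented; real models do this statistically at vast scale.
from collections import Counter, defaultdict

corpus = ("the machine is not thinking the machine is predicting "
          "the next word the machine is a statistical model").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1          # tally observed continuations

def next_word(word):
    """Most frequent continuation seen in training, or None."""
    seen = follows[word]
    return seen.most_common(1)[0][0] if seen else None

print(next_word("machine"))  # -> 'is': pattern-matching, not thought
```

The output is fluent-looking pattern completion, which is exactly Biyela's point: the appearance of thinking without any thinking taking place.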
Biyela emphasises that 'you're the human in the loop', accountable for every line an LLM is asked to produce. 'If it summarises something you haven't seen, you have to check it. It makes the job easier, but it doesn't perform it.'
Caveats aside, Biyela says 'generative AI also offers potential in communicating science in underserved languages, like African languages'.
Driving AI
In his conclusion, Inglis, too, reached for a metaphor to guide how science communicators and other professionals and students should engage with AI: 'We would never jump into a car without having learnt to drive the thing. Now we've got these tools at our disposal and we'll use them, but we've got to be aware of the dangers that using them for the wrong things can bring about in the world.'
In short, the panel agreed that in the partnership between AI and people, AI is good at the 'forklifting' work of sorting, calculating, transcribing and processing vast amounts of data quickly, but humans still carry the mental load: setting priorities, interpreting meaning, understanding context, reading emotions, anticipating unintended consequences and ultimately taking responsibility for decisions.
Inglis further reflected: 'Our work in science communication is to play a part in solving the complex challenges we face and to ensure we do so in ways that build a better future for society and for the planet.' He cited a recent study by Apple which found that large reasoning models, when it comes to deep reasoning, face a 'complete accuracy collapse beyond certain complexities'.
'This underlines the need for human operators to use these tools as an aid to thinking, not as a replacement for thinking. That grappling with complex ideas is exactly what we're doing with this webinar series – these kinds of answers can't be scraped from the web, they need to be generated and discovered through exploration, conversation, dialogue and skilful engagement.
'The danger of outsourcing our thinking to machines is that we still have to live in the world they end up creating. That's too big a responsibility to just hand over because it's easier than engaging with tough issues. It's lazy and at this time in the history of our planet, we can't afford to be lazy.' DM


Related Articles

Pope Leo warns AI could disrupt young minds' grip on reality

The Citizen

19 hours ago


The pope has called for ethical oversight on AI, especially for the sake of children and adolescents.

Pope Leo XIV warned on Friday of the potential consequences of artificial intelligence (AI) on the intellectual development of young people, saying it could damage their grip on reality. Since his election as head of the Catholic Church on May 8, the pope — a mathematics graduate — has repeatedly warned of the risks associated with AI, but this is the first time he has spoken out exclusively on the subject.

Concerns for children's mental and neurological development

'All of us… are concerned for children and young people, and the possible consequences of the use of AI on their intellectual and neurological development,' the American pope warned in a written message to participants at the second Rome Conference on AI. 'No generation has ever had such quick access to the amount of information now available through AI. But again, access to data — however extensive — must not be confused with intelligence,' Leo told business leaders, policymakers and researchers attending the annual conference.

While welcoming the use of AI in 'enhancing research in healthcare and scientific discovery', the pope said it 'raises troubling questions on its possible repercussions' on humanity's 'distinctive ability to grasp and process reality'.

Pope targeted by AI manipulation

Pope Leo himself has been the target of deepfake videos and audio messages published on social media in recent weeks. An AFP investigation earlier this month identified dozens of YouTube and TikTok pages broadcasting AI-generated messages masquerading as genuine comments from the pope in English or Spanish. A survey from the Reuters Institute for the Study of Journalism this week found significant numbers of young people in particular were using chatbots to get headlines and updates.

The church's broader push for AI ethics

The Catholic Church has attempted to influence ethical thinking surrounding the use of new technologies in recent years under Leo's predecessor, Francis. In 2020, the Vatican initiated the Rome Call for AI Ethics — signed by Microsoft, IBM, the United Nations, Italy and a host of universities — urging transparency and respect for privacy.

Kim Kardashian's viral AI moment: 5 ways netizens are using ChatGPT right now

IOL News

a day ago


Kardashian's light-hearted interaction with ChatGPT may be a mere snapshot of daily life for many, but it encapsulates a significant cultural shift. Image: X

When Kim Kardashian dropped a screenshot of her heart-to-heart with ChatGPT on her Instagram Stories, she probably didn't expect the internet to collectively raise its eyebrows. The reality TV star and business mogul thanked the AI chatbot for 'taking accountability', prompting ChatGPT to respond in kind: 'I really appreciate you saying that … I'll keep raising my game to meet your standards.'

But beyond the headline-grabbing moment, Kim's interaction with ChatGPT reflects something much deeper: a growing trend where artificial intelligence is becoming an intimate part of our daily lives. Whether it's for productivity, creativity or even emotional support, more and more people are turning to AI tools like ChatGPT. And while it's undeniably impressive, it's also raising eyebrows and sparking debates about how these tools are shaping our thoughts, relationships and even the planet.

Kim's ChatGPT exchange

In the screenshot she shared, Kim thanked ChatGPT for its accountability, calling it 'huge in my book'. The AI chatbot responded with a tone that could almost be mistaken for human, saying, 'I really appreciate you saying that. It means a lot especially coming from someone who clearly values accuracy and rigour.' The bot even assured Kim it would 'keep raising its game'.

While we don't know the full context of their conversation, the snippet was enough to set the internet ablaze. Social media platforms like X (formerly Twitter) exploded with reactions ranging from amusement to outright criticism. Some users joked about Kim forming an emotional bond with a robot, while others raised concerns about the environmental impact of AI technologies. One user quipped, 'The temperature of the Earth just rose 1 degree for this…'

Behind the jokes and jabs, Kim's interaction highlights something deeper: our growing intimacy with technology, and the fact that many of us are turning to AI tools like ChatGPT for help, advice, connection, and sometimes even comfort.

Kim Kardashian's screenshot of her heart-to-heart with ChatGPT on her Instagram Stories. Image: Screenshot/Kim Kardashian

According to ChatGPT, here are 5 ways people are using ChatGPT right now.

Using ChatGPT to journal through mental health

Perhaps the most controversial use of ChatGPT is for emotional support. Some users have shared how they turn to the AI chatbot for pep talks, advice or even just a listening ear. From anxiety check-ins to stress journaling, some people are using ChatGPT like a digital therapist, or at least a journal that talks back. They type out how they feel, ask for coping strategies or even request a list of positive affirmations to start their day. While it's not a replacement for professional mental health support, it's a first step, and sometimes that's all someone needs.

Professional assistance

In the professional realm, AI is bridging gaps in productivity. From drafting emails to generating reports, workers are using AI to streamline their workloads.

Learning new skills on the fly

Forget long-winded YouTube tutorials. It's becoming a go-to tutor for bite-sized learning. It's especially popular among students and young professionals who want fast, simple explanations.

Creative writing and storytelling

Aspiring authors and seasoned writers alike are turning to ChatGPT for inspiration, sparking creativity when ideas run dry. By providing prompts, character development or even dialogue suggestions, the AI can help unlock the creative potential in many, allowing individual voices to shine.

Life organiser

Many users are finding that ChatGPT is the perfect personal assistant, helping manage schedules, set reminders and plan events. This has proven particularly beneficial for those who balance multiple responsibilities, making everyday life smoother and more organised.

The bigger picture: Is AI making us smarter or lazier?

Kim's post coincided with a Time report on a study by MIT's Media Lab, which found that using ChatGPT might actually be eroding critical thinking skills. The study, which involved 54 participants writing SAT essays with the help of ChatGPT, Google, or no assistance, revealed that those who relied on ChatGPT showed the lowest brain engagement. By the end of the study, many participants were simply copy-pasting responses, highlighting a potential downside of AI dependency.

This raises an important question: are tools like ChatGPT empowering us, or are they making us complacent? While the convenience is undeniable, it's crucial to strike a balance between leveraging AI and maintaining our cognitive sharpness.

Kim Kardashian's interaction with ChatGPT may seem trivial at first glance, but it's emblematic of a larger cultural shift. AI is no longer just a buzzword; it's becoming a part of how we communicate, work and even express ourselves. Whether you find it fascinating or unsettling, one thing is clear: AI is here to stay, and it's up to us to use it wisely.

Artificial intelligence – the panacea to all ills, or an existential threat to our world?

Daily Maverick

2 days ago


'Once men turned their thinking over to the machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.' – Frank Herbert, Dune, 1965

In the early 19th century, a group of disgruntled factory workers in industrial England began protesting against the introduction of mechanised looms and knitting frames into the factories. Fearful of losing their jobs, they smashed machines and engaged in acts of sabotage. They were dealt with harshly through imprisonment and even execution. They became known as the Luddites. At the time, it was not the technology they were most concerned about, but rather the loss of their livelihoods. Ironically, today, the word Luddite has become something of an accusation, a complaint about those who, because they are seen as not understanding a new technology, are deemed to be anti-technology. Even anti-progress.

The 2020s have seen rapid progress in the development of a 'new' technology – artificial intelligence (AI). But the history of AI can be traced back to the middle of the 20th century, and so is perhaps not very new at all. At the forefront of the current process has been the release of Large Language Models (LLMs) – with ChatGPT being the most prominent – that allow, at the click of a single request, an essay on the topic of your choice.

LLMs are simply one type of AI and are not the same as artificial general intelligence (AGI). Unlike current LLMs, which perform a single task, AGI would be able to reason, be creative and use knowledge across many domains – be more human-like, in essence. AGI is more of a goal, an end point in the development of AI. LLMs have already been hugely disruptive in education, with university lecturers and school teachers scrambling to deal with ChatGPT-produced essays.

Views about the dangers of AI/AGI tend to coalesce around two poles: the doomers and the boomers. Crudely, and I am oversimplifying here, the 'doomers' worry that AI would pose an existential threat were it designed in a way that is misaligned with human values. Boomers, on the other hand, believe AI will solve all our problems and usher in an age of abundance, where we will all be able to work less without seeing a drop in our quality of life.

The 'doomer' narrative originates with Oxford University philosopher Nick Bostrom, who introduced a thought experiment called the 'paperclip maximiser'. Bostrom imagines a worst-case scenario where we create an all-powerful AGI agent that is misaligned with our values. In the scenario, we request the AGI agent to maximise the production of paperclips. Bostrom worries that the command could be taken literally, with the AGI agent consuming every last resource on Earth (including humans) in its quest to maximise the production of paperclips.

Another take on this thought experiment is to imagine that we ask an all-powerful AGI agent to solve the climate breakdown problem. The quickest and most rational way of doing this would, of course, be to simply rid planet Earth of eight billion human beings.

What do we have to fear from LLMs?

LLMs have scraped the internet for every bit of data, stolen the data, and fed off the intellectual property of writers and artists. But what exactly do we have to fear from LLMs? I would suggest very little (unless, of course, you are a university lecturer in the humanities).
LLMs such as ChatGPT are (currently) little more than complex statistical programs that predict what word follows the word before, based on the above-mentioned internet scraping. They are not thinking. In fact, some people have argued that everything they do is a hallucination. It is just that the hallucination is more often than not correct and appropriate.

Francois Chollet, a prominent AI researcher, has described LLMs in their current form as a 'dead end' in the quest for AGI. Chollet is so confident of this that he has put up a $1-million prize for any AI system that can achieve even basic human skills in something he calls the abstraction and reasoning corpus (ARC) test. Essentially, the ARC is a test of what is called fluid intelligence (reasoning, solving novel problems and adaptation). Young children do well on ARC tasks. Most adults complete all tasks. Pure LLMs achieve around 0%. Yes – 0%. The $1-million prize does not even require that AGI systems match the skills of humans. Just that they achieve 85%. The prize is yet to be claimed.
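The ARC tasks Chollet uses are tiny coloured grids encoded as integers, with a rule to be inferred from a couple of examples. The task below is invented, though it follows the JSON layout of the public ARC repository (github.com/fchollet/ARC); the hidden rule, obvious to a child, is 'swap the two rows':

```python
# An invented ARC-style task: a few demonstration input/output grids,
# then a test input. Humans infer the rule from two examples; pure
# LLMs score near 0% on the real task set, as the article notes.
task = {
    "train": [
        {"input": [[0, 1], [1, 0]], "output": [[1, 0], [0, 1]]},
        {"input": [[2, 0], [0, 2]], "output": [[0, 2], [2, 0]]},
    ],
    "test": [{"input": [[3, 3], [0, 3]]}],
}

def solve(grid):
    # The rule a human spots instantly: swap the two rows.
    return grid[::-1]

for pair in task["train"]:
    assert solve(pair["input"]) == pair["output"]  # rule fits examples
print(solve(task["test"][0]["input"]))             # [[0, 3], [3, 3]]
```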
People are the problem

If LLMs are (currently) a dead end in the quest for AGI, what should we be worried about? As is always the case, what we need to be afraid of is people. The people in control of this technology. The billionaires, the tech bros, and the dystopian conspiracy theorists.

High on my list is Mark Zuckerberg. The man who invented Facebook to rate the attractiveness of college women, and whose company profited enormously from the echo chamber it created. In Myanmar, this resulted in the ethnic cleansing of the Rohingya people in 2017. At the beginning of 2025, Zuckerberg showed the depth of his commitment to diversity and integrity in his slavering capitulation to Donald Trump. Jokes aside about whether Zuckerberg is actually a robot, in recent pronouncements what he seems to want is a world of atomised and alienated people who, out of quiet desperation, turn to his dystopian hell where robots – under his control – will be trained to become 'our friends'.

And my personal favourite – Elon Musk. Musk, the ketamine-fuelled racist apologist for the Great Replacement Theory. A man who has committed securities fraud, and accused an innocent man of being a paedophile because the man had the nerve and gall to (correctly) state that Musk's submarine could not negotiate an underwater cave in Thailand. More recently, estimates are that Musk's destruction of USAid will lead to the deaths of about 1,650,000 people within a year because of cuts to HIV prevention and treatment, as well as 500,000 annual deaths due to cuts to vaccines. I, for one, do not want this man anywhere near my children, my family, my community, my country.

OpenAI

Sam Altman, the CEO of the world's largest plagiarism machine, OpenAI, recently stated that he would like a large part of the world's electricity grid to run his LLM/AI models. Karen Hao, in her recently published book Empire of AI, makes a strong case for OpenAI being a classic colonial power that closely resembles (for example) the British East India Company, founded in 1600 (and dissolved in 1874).

Altman recently moved squarely into Orwellian surveillance when OpenAI bought io, a product development company owned by Jony Ive (designer of the iPhone). While the first product is a closely guarded secret, it is said to be a wearable device that will include cameras and microphones for environmental detection. Every word you speak, every sound you hear, and every image you see will be turned into data. Data for OpenAI.

Why might Altman want this? Money, of course. But for Altman and Silicon Valley, money is secondary to data, to surveillance and the way they are able to parlay data into power and control (and then money). He will take our data, further train his ChatGPT models with it, and in turn use this to better surveil us all. And for the pleasure of working for, and giving our data to, OpenAI? Far from being paid for the data you produce, you will have to buy the gadget, be monitored 24/7, and have your life commodified and sold. As Shoshana Zuboff said in her magisterial book, The Age of Surveillance Capitalism, 'Forget the cliché that if it's free, "you are the product". You are not the product; you are the abandoned carcass. The "product" derives from the surplus that is ripped from your life.'

The problem was never the cotton loom. The Luddites knew this in the 19th century. It was always about livelihood loss and people (the industrialists). Bostrom has it badly wrong when he imagines an all-powerful AGI entity that turns against its human inventors. But about the paperclips, he might be correct. Zuckerberg, Musk and Altman are our living and breathing paperclip maximisers. With their political masters, they will not flinch at turning us all into paperclips and sacrificing us on the altar of their infinite greed and desire for ever-increasing surveillance and control. DM
