Emile Ormond: South Africa unready for AI-era job disruption

Emerging economies like South Africa may be partially shielded from the initial waves of AI automation, but when it inevitably arrives, the country could be especially vulnerable due to its large, predominantly young labour force, writes Emile Ormond.
As artificial intelligence (AI) grows more sophisticated and pervasive, its potential to disrupt labour markets demands urgent attention.
Will AI displace workers? Could it trigger unprecedented unemployment? There has been an influx of news articles, predictions, and expert claims that AI will be highly disruptive to the workforce. For instance, McKinsey estimates 400-800 million people globally may need new jobs by 2030, while a BCG survey found 42% of workers fear their roles may vanish within a decade.
For South Africa, with an unemployment rate of 32.9% (46.5% among youth), these predictions are dire. The country simply cannot afford large-scale job losses without jeopardising fragile social stability, deepening poverty and inequality, increasing crime, and threatening fiscal sustainability. As the government of national unity prioritises 'inclusive growth and job creation', understanding AI's impact on jobs is not just critical; it's urgent.
Impact yet to materialise
Despite these warnings, evidence of current AI-driven job losses remains limited. In advanced economies like the US and EU, unemployment is near historic lows. Research has found that, for now, AI's impact on employment is minimal, often boosting productivity instead. In South Africa, high unemployment predates AI, rooted in structural economic challenges.
So far, AI has not significantly shrunk job markets globally or locally.
Historical technological leaps, like the Industrial Revolutions, sparked similar fears of mass labour market disruption but ultimately resulted in substantially higher employment and productivity. For instance, more than two-thirds of the world's population lived in extreme poverty before the Industrial Revolution; today, it is less than 10%.
This precedent, combined with AI's limited impact to date, may have bred complacency among South Africans, especially policymakers, who assume AI's impact will be manageable and a net positive. However, this view is shortsighted and lacks nuance. Rapid advances in areas such as multi-modal and agentic AI are poised to transform workplaces.
The vast majority of organisations plan to introduce or expand their use of AI. This shift will demand new skills from workers, create new roles, and eliminate others. While the balance of these changes is debated, massive labour market disruption is almost certain.
This time is different
AI's unique traits, distinct from past technologies, will amplify its impact on jobs. These features include:
Cognitive capabilities: Unlike earlier automation that targeted manual tasks, AI can handle complex cognitive work, such as analysis and decision-making.
General-purpose technology: Like electricity or the internet, AI's application spans all sectors, driving broad economic impact and fuelling productivity at an unrivalled pace.
Self-improvement: AI can help enhance future iterations of itself, unlike previous technologies. For instance, the most advanced nuclear reactor cannot design new reactors, but AI can make better AI.
Democratised access: Many AI tools are freely or cheaply available, unlike costly previous industrial technologies that were often limited to large, wealthy organisations.
Rapid adoption: Generative AI, for example, surged from obscurity to global prominence in just three years. Now, South African workers use generative AI more than those in the US and UK.
These characteristics illustrate why AI will disrupt labour markets at an unprecedented pace and scale, but not all countries and groups are equally vulnerable.
SA has breathing room
High-income countries, with more white-collar jobs, face earlier AI-driven disruption. For instance, 34% of European Union jobs are exposed to AI automation, compared to 19% in the African Union, according to the International Labour Organisation (ILO).
Ageing populations and high labour costs may also accelerate AI adoption in developed markets. Young workers, often in entry-level roles, are particularly at risk. The ILO notes that youth hold jobs most susceptible to automation, potentially blocking their entry into the labour force.
This is particularly pressing for Africa, with 350 million young Africans expected to reach working age by 2050.
In other words, emerging economies like South Africa may be partially shielded from the initial waves of AI automation, but when it inevitably arrives, the country could be especially vulnerable due to its large, predominantly young labour force. At the same time, AI will likely drive massive productivity gains and create new, currently unforeseen jobs, but the transition period could be long and hard. Moreover, it could ultimately further entrench South Africa's world-leading inequality.
Charting a path forward
South Africa has a narrow window, as short as two to three years, to harness AI's productivity gains while mitigating its fallout. Key actions stakeholders can take include:
Policy development: Political leaders must move beyond vague rhetoric and adopt nuanced, thoughtful policy positions on AI. The government should finalise its national AI strategy, released for comment in mid-2024, to address labour market impacts.
Digital infrastructure: Expand reliable, high-speed internet nationwide, resolving disputes over providers like Starlink to ensure equitable AI access.
Reskilling programmes: Invest in large-scale training to equip workers with AI-relevant skills and update school and tertiary education curricula for emerging roles.
Responsible AI governance: Regulators and organisations should integrate AI oversight into corporate governance, aligning innovation with national development goals. Moreover, AI needs to be a cross-cutting responsibility in government.
Social protections: Plans for displaced workers need to be considered now – there are nearly 19 million grant recipients, compared to a tax base of 7 million. Growth measures and/or new revenue sources will need to be found if AI-driven displacement swells those numbers.
South Africa stands at the edge of an epoch-defining labour shift. The question is whether we act proactively or react in a crisis.
- Dr Emile Ormond has an interest in policy analysis and risk management.


Related Articles

Sword Health Now Valued At $4 Billion, Announces Expansion Into Mental Health Services

Sword Health announced Tuesday that it had raised $40 million in a recent funding round, giving it a $4 billion valuation. Founded in 2015, the healthcare startup has focused on helping people manage chronic pain at home. Using AI tools, the platform connects users with expert clinicians who then provide patients with tools for digital physical therapy, pelvic health, and overall mobility health. However, the company says this new round of funding will largely go towards developing a mental health arm of its program called Mind.

"Today, nearly 1 billion people worldwide live with a mental health condition. Yet care remains fragmented, reactive, and inaccessible," Sword said in the announcement. "Mind redefines mental health care delivery with a proactive, 24/7 model that integrates cutting-edge AI with licensed, Ph.D-level mental health specialists. Together, they provide seamless, contextual, and responsive support any time people need it, not just when they have an appointment."

Sword CEO Virgílio Bento told CNBC, "[Mind is] really a breakthrough in terms of how we address mental health, and this is only possible because we have AI."

Users will be equipped with a wearable device called an M-band, which will measure their environmental and physiological signals so that experts can reach out proactively as needed. The program will also offer access to services like traditional talk therapy. Bento told CNBC that a human is "always involved" in patients' care in each of its programs, and that AI is not making any clinical decisions. For example, if a Sword patient has an anxiety attack, AI will identify it through the wearable and bring it to the attention of a clinician, who can then provide an appropriate care plan.

"You have an anxiety issue today, and the way you're going to manage is to talk about it one week from now? That just doesn't work," Bento told CNBC. "Mental health should be always on, where you have a problem now, and you can have immediate help in the moment."

According to Bento, Sword Mind already has a waiting list and is being tested by some of its partners, who appreciate its "personalized approach and convenience." "We believe that it is really the future of how mental health is going to be delivered in the future, by us and by other companies," he told CNBC. "AI plays a very important role, but the use of AI — and I think this is very important — needs to be used in a very smart way."

The rest of the cash raised in the funding round, which was led by General Catalyst, will go towards acquisitions, global expansion, and AI development, Sword Health says.
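The division of labour Bento describes, where the model only screens wearable signals and a human makes every clinical call, can be pictured as a simple event pipeline. The sketch below is a hypothetical illustration: the field names, the threshold, and the notify_clinician() hook are assumptions made for the example, not Sword's published design.

```python
from dataclasses import dataclass

# Hypothetical reading from a wearable such as Sword's M-band.
# The fields and the threshold below are illustrative assumptions.
@dataclass
class WearableReading:
    heart_rate_bpm: float
    skin_conductance_us: float  # microsiemens, a common stress proxy

def looks_like_anxiety_event(r: WearableReading) -> bool:
    """Crude stand-in for the AI model that screens incoming signals."""
    return r.heart_rate_bpm > 110 and r.skin_conductance_us > 8.0

def notify_clinician(patient_id: str, reading: WearableReading) -> None:
    """Hypothetical hook: route the flagged event to a licensed human.
    Per Bento, the AI only flags; it makes no clinical decisions."""
    print(f"Alert for {patient_id}: {reading} -> clinician review")

def process_reading(patient_id: str, reading: WearableReading) -> None:
    if looks_like_anxiety_event(reading):
        notify_clinician(patient_id, reading)  # the human decides the care plan

process_reading("patient-42", WearableReading(heart_rate_bpm=118, skin_conductance_us=9.2))
```

The design point the sketch captures is the one Bento stresses: detection and escalation are automated, but the decision layer stays human.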

A Jobseeker Says Reddit Paints A Bleak Job Market. But Then Admits People Are Still Getting 'Hired Every Single Day. That's A Fact'

After spending time in multiple career-related subreddits, one Reddit user had a realization that registered with people: Reddit sometimes makes the job market look worse than it really is. 'I've been trying to switch careers recently and joined a bunch of subreddits — tech, healthcare, education, engineering, etc.,' the original poster wrote. 'And in every single one, it's the same thing: "No jobs." "The market is dead." "Everything's saturated." "You should've started 10 years ago."'

But they pushed back on the despair. 'People get hired every single day. That's a fact,' they said. 'The people who are getting jobs aren't posting here. The ones who are stuck are the ones who are venting.'

Their perspective resonated, especially as others chimed in with their experiences. 'I got laid off at the beginning of the year and was terrified because I'm here lurking a lot,' one person commented. 'Luckily, I'm pretty good at interviewing and landed a [work-from-home] job maybe two weeks after. I never posted about how fast I was able to find work, so what you say is true.'

Others said the negativity isn't universal across fields. 'Tech jobs in education, medical, and finance are booming right now. I moved companies earlier this year and did not have any trouble finding another fully remote position for a significant raise,' one person added.

Still, the thread also highlighted the brutal side of the market. Many shared long stretches of unemployment and feelings of defeat. One mid-level developer said they'd sent out over 100 applications in four weeks and heard back from only five. 'I'm not the best interviewee and am a poster child for, "if it wasn't for bad luck, I'd have no luck at all."'

New graduates, in particular, seemed to bear the brunt of the pain. 'Some have literally been unemployed for 2-3 years now,' another person said of recent tech grads. 'One of [my friends] is a camp counselor at a coding camp. The other, working IT at a warehousing startup.' He described them as 'Smart kids, high 90's in HS and 3.8 and above GPA in university.'

The nursing and teaching sectors drew mixed responses. Many users acknowledged that these fields continue to experience high demand due to staffing shortages, burnout, and high turnover. However, some pointed out that employers often prefer experienced workers, leaving recent graduates without opportunities to gain that very experience. Others emphasized how working conditions and pay in these sectors contribute to why positions remain unfilled, with some describing the workload and pressure as overwhelming despite the steady demand for workers.

Reddit's tendency to skew toward doom and gloom was a recurring point. 'Reddit as a collective has the mentality of a depressed 16-year-old. It definitely shouldn't be used as a barometer for anything,' one person joked. Another added, 'It's like reading reviews on Amazon. People only post something negative, while positive is rarely posted.'

In the end, the original poster urged job seekers to stay the course. 'Don't let [Reddit] convince you that nothing is working anywhere for anyone. That's just not true. If you're feeling discouraged, I get it. But keep going. You're probably doing better than you think.'

Recent data from the U.S. Bureau of Labor Statistics paints a mixed yet still functional employment picture. In May, employers added 139,000 nonfarm payroll jobs, keeping the unemployment rate steady at 4.2%. Job gains were led by health care, leisure and hospitality, and social assistance. While federal government payrolls declined, private-sector hiring continued. Though slower than prior months, growth continues, supporting the idea that 'people get hired every single day.'

Why is AI hallucinating more frequently, and how can we stop it?

The more advanced artificial intelligence (AI) gets, the more it "hallucinates" and provides incorrect and inaccurate information. Research conducted by OpenAI found that its latest and most powerful reasoning models, o3 and o4-mini, hallucinated 33% and 48% of the time, respectively, when tested by OpenAI's PersonQA benchmark. That's more than double the rate of the older o1 model. While o3 delivers more accurate information than its predecessor, it appears to come at the cost of more inaccurate hallucinations.

This raises a concern over the accuracy and reliability of large language models (LLMs) such as AI chatbots, said Eleanor Watson, an Institute of Electrical and Electronics Engineers (IEEE) member and AI ethics engineer at Singularity University. "When a system outputs fabricated information — such as invented facts, citations or events — with the same fluency and coherence it uses for accurate content, it risks misleading users in subtle and consequential ways," Watson told Live Science.

The issue of hallucination highlights the need to carefully assess and supervise the information AI systems produce when using LLMs and reasoning models, experts say.

The crux of a reasoning model is that it can handle complex tasks by essentially breaking them down into individual components and coming up with solutions to tackle them. Rather than seeking to kick out answers based on statistical probability, reasoning models come up with strategies to solve a problem, much like how humans think. In order to develop creative, and potentially novel, solutions to problems, AI needs to hallucinate; otherwise it's limited by the rigid data its LLM ingests.

"It's important to note that hallucination is a feature, not a bug, of AI," Sohrob Kazerounian, an AI researcher at Vectra AI, told Live Science. "To paraphrase a colleague of mine, 'Everything an LLM outputs is a hallucination. It's just that some of those hallucinations are true.' If an AI only generated verbatim outputs that it had seen during training, all of AI would reduce to a massive search problem. You would only be able to generate computer code that had been written before, find proteins and molecules whose properties had already been studied and described, and answer homework questions that had already been asked before. You would not, however, be able to ask the LLM to write the lyrics for a concept album focused on the AI singularity, blending the lyrical stylings of Snoop Dogg and Bob Dylan."

In effect, LLMs and the AI systems they power need to hallucinate in order to create, rather than simply serve up existing information. It is similar, conceptually, to the way that humans dream or imagine scenarios when conjuring new ideas.

However, AI hallucinations present a problem when it comes to delivering accurate and correct information, especially if users take the information at face value without any checks or oversight. "This is especially problematic in domains where decisions depend on factual precision, like medicine, law or finance," Watson said. "While more advanced models may reduce the frequency of obvious factual mistakes, the issue persists in more subtle forms. Over time, confabulation erodes the perception of AI systems as trustworthy instruments and can produce material harms when unverified content is acted upon."

And this problem looks to be exacerbated as AI advances. "As model capabilities improve, errors often become less overt but more difficult to detect," Watson noted. "Fabricated content is increasingly embedded within plausible narratives and coherent reasoning chains. This introduces a particular risk: users may be unaware that errors are present and may treat outputs as definitive when they are not. The problem shifts from filtering out crude errors to identifying subtle distortions that may only reveal themselves under close scrutiny."

Kazerounian backed this viewpoint up. "Despite the general belief that the problem of AI hallucination can and will get better over time, it appears that the most recent generation of advanced reasoning models may have actually begun to hallucinate more than their simpler counterparts — and there are no agreed-upon explanations for why this is," he said.

The situation is further complicated because it can be very difficult to ascertain how LLMs come up with their answers; a parallel could be drawn here with how we still don't really know, comprehensively, how a human brain works. In a recent essay, Dario Amodei, the CEO of AI company Anthropic, highlighted a lack of understanding in how AIs come up with answers and information. "When a generative AI system does something, like summarize a financial document, we have no idea, at a specific or precise level, why it makes the choices it does — why it chooses certain words over others, or why it occasionally makes a mistake despite usually being accurate," he wrote.

The problems caused by AI hallucinating inaccurate information are already very real, Kazerounian noted. "There is no universal, verifiable way to get an LLM to correctly answer questions being asked about some corpus of data it has access to," he said. "The examples of non-existent hallucinated references, customer-facing chatbots making up company policy, and so on, are now all too common."

Both Kazerounian and Watson told Live Science that, ultimately, AI hallucinations may be difficult to eliminate. But there could be ways to mitigate the issue. Watson suggested that "retrieval-augmented generation," which grounds a model's outputs in curated external knowledge sources, could help ensure that AI-produced information is anchored by verifiable data.

"Another approach involves introducing structure into the model's reasoning. By prompting it to check its own outputs, compare different perspectives, or follow logical steps, scaffolded reasoning frameworks reduce the risk of unconstrained speculation and improve consistency," Watson said, noting this could be aided by training to shape a model to prioritize accuracy, and by reinforcement training from human or AI evaluators to encourage an LLM to deliver more disciplined, grounded responses.

"Finally, systems can be designed to recognise their own uncertainty. Rather than defaulting to confident answers, models can be taught to flag when they're unsure or to defer to human judgement when appropriate," Watson added. "While these strategies don't eliminate the risk of confabulation entirely, they offer a practical path forward to make AI outputs more reliable."

Given that AI hallucination may be nearly impossible to eliminate, especially in advanced models, Kazerounian concluded that ultimately the information that LLMs produce will need to be treated with the "same skepticism we reserve for human counterparts."
