Revolution or risk? How AI is redefining broadcasting and raising red flags


TimesLIVE | 06-06-2025

Imagine watching the evening news, only to find out later that the images, voices or even the person reporting were not real.
This is not fiction any more; generative artificial intelligence (GenAI) is transforming the broadcasting industry and not without consequences.
Prof Nelishia Pillay, an AI expert at the University of Pretoria, says while the technology is opening exciting opportunities for content creation, it also raises serious ethical concerns.
"GenAI creates new content based on what it learns from online data," she said. "While it doesn't come up with truly original ideas – that creativity is still reserved for humans – it does help reshape existing ones."
Used widely, GenAI has made life easier for broadcasters. Journalists can now create engaging visuals using just voice prompts, producers can generate music or video clips in minutes, and subtitles can be translated into different languages in just a few clicks. AI-driven text-to-speech likewise helps broadcasters do more with fewer resources.
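To make the subtitle workflow concrete, here is a minimal sketch in Python. It parses SRT subtitle blocks with the standard library and routes each caption through a `translate` function, which is a hypothetical stand-in for whatever GenAI translation service a broadcaster might actually call.

```python
import re

def parse_srt(srt_text):
    """Split an SRT subtitle file into (index, timing, text) blocks."""
    blocks = []
    for chunk in re.split(r"\n\s*\n", srt_text.strip()):
        lines = chunk.splitlines()
        if len(lines) >= 3:
            # index line, timing line, then one or more caption lines
            blocks.append((lines[0], lines[1], " ".join(lines[2:])))
    return blocks

def translate(text, target_lang):
    # Hypothetical stand-in: a real system would send `text` to a
    # machine-translation model or API here.
    return f"[{target_lang}] {text}"

def translate_srt(srt_text, target_lang):
    """Rebuild the SRT file with every caption translated."""
    out = []
    for idx, timing, text in parse_srt(srt_text):
        out.append(f"{idx}\n{timing}\n{translate(text, target_lang)}")
    return "\n\n".join(out)

sample = """1
00:00:01,000 --> 00:00:03,000
Good evening and welcome.

2
00:00:03,500 --> 00:00:06,000
Here are tonight's headlines."""

print(translate_srt(sample, "fr"))
```

The parsing step is real SRT handling; only the `translate` stub is a placeholder for a production translation service.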
However, this convenience brings ethical concerns, especially around "deepfakes": digitally forged images or videos convincing enough to pose a threat to truth and trust in the media.
"A challenge that comes with GenAI is how to ensure the ethical use of these tools," she said. "Deepfakes can be used to produce fake news and harmful cybersecurity attacks on businesses."
Pillay also highlighted how tailoring content through AI can reinforce biases if left unchecked.
To address such risks, tools are emerging to detect GenAI misuse. According to the International News Media Association, AI has already shown success in detecting the unethical use of GenAI, with machine learning being used to detect fake news.
Tools such as Checkmate, a real-time fact-checking system that flags claims in videos and checks them against reliable sources, and Turnitin, used in the academic world to detect student plagiarism, are also evolving.
"Such tools need to be embedded in GenAI systems in the broadcasting industry to detect the unethical use of GenAI," said Pillay.
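A toy version of the claim-checking idea behind such systems can be sketched as a bag-of-words similarity search: a flagged claim is compared against a corpus of trusted reports and matched to the closest one. This is a deliberately simplified illustration of the general technique, not the actual Checkmate system; production fact-checkers use far richer models.

```python
import math
import re
from collections import Counter

def tokens(text):
    """Lowercase word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    overlap = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return overlap / norm if norm else 0.0

def best_match(claim, trusted_reports):
    """Return (similarity, report) for the trusted report closest to the claim."""
    claim_vec = Counter(tokens(claim))
    scored = [(cosine(claim_vec, Counter(tokens(r))), r) for r in trusted_reports]
    return max(scored)

# Illustrative mini-corpus of "trusted" reporting.
trusted = [
    "The reserve bank held interest rates steady this month.",
    "Heavy rainfall caused flooding in coastal towns on Tuesday.",
]
score, report = best_match(
    "Interest rates were left unchanged by the reserve bank", trusted
)
print(round(score, 2), "->", report)
```

A low best-match score against every trusted source is one cheap signal that a claim may need human review.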
Beyond fake news, there are deeper ethical questions. Who owns content created by machines? Is it fair to train these tools on information scraped from social media platforms? And what impact does personalised content have on audiences?
As AI makes it harder to tell the difference between human and machine creation, media organisations need to come up with clear rules protecting intellectual property and privacy, especially when they build or use training datasets.
"Broadcasters need strict guidelines to respect privacy rights of individuals when creating images or video," Pillay said.


Related Articles

AI in African education: We need to adapt it before we adopt it
Mail & Guardian | an hour ago

Using AI without critical reflection widens the gap between relevance and convenience.

Imagine a brilliant student from rural Limpopo. She presents a thorough case study to her class that is locally relevant and grounded in real-world African issues. Her classmate submits a technically perfect paper filled with American examples and Western solutions that don't apply to a rural African setting. The difference? Her classmate prompted ChatGPT and submitted a paraphrased version of its response.

This example highlights an uncomfortable truth: generative AI is reshaping teaching and learning in higher education but, without critical reflection, it risks widening the gap between relevance and convenience.

This poses obvious risks, such as the unintended consequences of imposing Global North solutions onto vastly different educational, technological and socio-economic contexts. For example, an AI tool calibrated for English-speaking, well-resourced school systems could reinforce exclusion in multilingual classrooms or among students with limited internet access.

A more subtle, longer-term concern is the growing influence of digital colonialism: the way global tech platforms shape what knowledge is visible, whose voices matter and how learning happens. In higher education, this risks weakening our academic independence and deepening reliance on systems that were never built with our contexts, or our students, in mind.

Banning AI tools is not a solution. The question isn't whether to use AI, it's how to do so with care, strategy and sovereignty. Too often, institutions swing between extremes of uncritical techno-optimism ('AI will solve everything') and fearful rejection ('Ban it before it breaks us'). Lost in the middle are students who lack guidance on responsibly working with these tools and shaping them for African futures.

When an African law student queries ChatGPT, they're often served US case law.
Ask for economic models, and the results tend to assume Western market conditions. Request cultural insights and Western assumptions are frequently presented as universal truths. It's not that AI tools can't provide localised or African-specific information, but without proper prompting and a trained awareness of the tools' limitations, most users will get default outputs shaped by largely Western training data. Our African perspective risks being overshadowed.

This is the hidden curriculum of imported AI: it quietly reinforces the idea that knowledge flows from the North to the South. African students and lecturers become unpaid contributors, feeding data and insights into systems they don't own, while Silicon Valley collects the profits.

So, what's the alternative? What is needed is a technocritical approach: a mindset that acknowledges both AI's promise and its pitfalls in our context. The five core principles are:

  • Participatory design: Students and academic staff are not just users but co-creators, shaping how AI is embedded in their learning.
  • Critical thinking: Learners are taught to critically interrogate all AI outputs. What data is presented here? Whose voices are missing?
  • Contextual learning: Assignments require comparing AI outputs to local realities, to identify more nuanced insights and to acknowledge blind spots.
  • Ongoing dialogue: Hold open and candid conversations about how AI influences knowledge in and beyond our classrooms.
  • Ethics of care: Advance African perspectives and protect against harm by ensuring that AI use in education is guided by inclusion and people's real needs, not just speed or scale.

The shape of AI in African education isn't pre-ordained. It will be defined by our choices. Will we passively apply foreign tools or actively shape AI to reflect our values and ambitions? We don't need to choose between relevance and progress. With a technocritical approach, we can pursue both, on our terms.
Africa cannot afford to adopt AI without adaptation, nor should students be passive users of systems that do not reflect their reality. This is about more than access. It's about digital self-determination: equipping the next generation to engage critically, challenge defaults and build AI futures that reflect African voices, knowledge and needs. AI will shape the future of education, but we must shape AI first. Africa has the opportunity not just to consume technology, but to co-create it in a relevant way. A technocritical approach reminds us that true innovation doesn't mean catching up to the Global North; it means confidently charting our own course.

Dr Miné de Klerk is the dean of curricula and research and Dr Nyx McLean is the head of research and postgraduate studies at Eduvos.

Pope Leo warns AI could disrupt young minds' grip on reality
The Citizen | 2 days ago

The pope has called for ethical oversight of AI, especially for the sake of children and adolescents.

Pope Leo XIV warned on Friday of the potential consequences of artificial intelligence (AI) for the intellectual development of young people, saying it could damage their grip on reality. Since his election as head of the Catholic Church on May 8, the pope, a mathematics graduate, has repeatedly warned of the risks associated with AI, but this is the first time he has spoken out exclusively on the subject.

Concerns for children's mental and neurological development

'All of us… are concerned for children and young people, and the possible consequences of the use of AI on their intellectual and neurological development,' the American pope warned in a written message to participants at the second Rome Conference on AI. 'No generation has ever had such quick access to the amount of information now available through AI. 'But again, access to data — however extensive — must not be confused with intelligence,' Leo told business leaders, policymakers and researchers attending the annual conference.

While welcoming the use of AI in 'enhancing research in healthcare and scientific discovery', the pope said it 'raises troubling questions on its possible repercussions' on humanity's 'distinctive ability to grasp and process reality'.

Pope targeted by AI manipulation

Pope Leo himself has been the target of deepfake videos and audio messages published on social media in recent weeks. An AFP investigation earlier this month identified dozens of YouTube and TikTok pages broadcasting AI-generated messages masquerading as genuine comments from the pope in English or Spanish. A survey from the Reuters Institute for the Study of Journalism this week found significant numbers of young people in particular were using chatbots to get headlines and updates.

The church's broader push for AI ethics

The Catholic Church has attempted to influence ethical thinking surrounding the use of new technologies in recent years under Leo's predecessor Francis. In 2020, the Vatican initiated the Rome Call for AI Ethics — signed by Microsoft, IBM, the United Nations, Italy and a host of universities — urging transparency and respect for privacy.

Charles Hoskinson wants to make ADA an AI crypto – what does it mean?
Mail & Guardian | 3 days ago

Crypto enthusiasts know there are few figures as influential as Charles Hoskinson, co-founder of Ethereum and founder of Cardano. Hoskinson has been at the forefront of blockchain innovation for years, and lately he has been making waves with his plans to transform ADA, Cardano's native token, into the backbone of a decentralized artificial intelligence ecosystem. Experts believe this is an ambitious and bold step in the convergence of two of the most transformative technologies of the moment: blockchain and artificial intelligence.

Discussing the vision

Charles Hoskinson's latest statements and project directions reveal that he is keenly interested in combining blockchain and AI to resist monopolization by big tech companies and preserve decentralization. At the core of his plan to make ADA an AI crypto lies the desire to democratize AI development and deployment, and to ensure that no single entity can control its outcomes or growth. Hoskinson has a vision for the future of AI systems:

  • He wants them to be transparent, especially when it comes to decision-making
  • He wants decentralized communities to govern them ethically
  • He wants blockchain technology to be used to create immutable and verifiable systems
  • He plans to use decentralized computational infrastructure to power them

To this end, the ADA token and Cardano's blockchain are positioned to serve as the foundational elements in this emerging AI economy. Cardano could become the first AI-native blockchain ecosystem if it leverages ADA for governance, computation rewards and data transactions.

Is Cardano suitable for AI?

This question is worth answering, because the project must be suited to artificial intelligence for these plans to succeed. The Cardano team, together with Hoskinson, believes Cardano has some unique features that make it well-suited to hosting AI systems.
The list includes:

  • Formal methods and peer review. Cardano has always stood apart in the crypto sector for its academic rigor. All features and updates are peer-reviewed and grounded in formal mathematical principles. This approach builds trust, which is critical when dealing with sensitive AI decisions, such as autonomous vehicles and medical diagnoses.
  • Scalability via Hydra. Scalability has always been a sticking point for blockchain adoption. Cardano's solution is Hydra, a layer-2 scaling protocol designed to significantly increase transaction throughput. For AI, which often requires high-speed data processing, Hydra helps Cardano support the necessary bandwidth and latency.
  • Interoperability. Cardano's approach to interoperability allows it to communicate with other blockchains and data sources, which could enable AI systems to draw from a wider pool of information. Data is paramount for the success of AI, so this feature is critical.
  • Governance with Voltaire. Voltaire, Cardano's governance system, allows token holders to vote on protocol upgrades and project funding. Applied to AI, this democratic model could allow communities to decide how AI systems are trained, how their outputs are used, and how ethical dilemmas are resolved.

Reviewing the key components of Hoskinson's AI strategy

Charles Hoskinson has a multi-layered approach to developing an AI-centric crypto ecosystem. Let's have a look at his plan.

1. Integrating AI Agents with Smart Contracts

Cardano's smart contract platform, Plutus, will support AI-powered agents – autonomous bots that can negotiate, trade, and make decisions without human input. These agents could interact with decentralized finance (DeFi) protocols, manage supply chains, and even create new AI models by pooling resources.

2. Decentralized AI Marketplaces

Using the Cardano blockchain, decentralized marketplaces for AI models, datasets, and computation power can be developed.
Developers could offer AI models for rent, users could share data anonymously, and validators could earn ADA for contributing GPU power to AI computations.

3. AI for Blockchain Governance

One of the more futuristic aspects of Hoskinson's vision is the use of AI in managing blockchain governance. AI could help detect fraudulent proposals, recommend funding allocations based on network data, and provide predictive analytics for protocol evolution.

4. Partnership with SingularityNET

A cornerstone of this plan is Cardano's close collaboration with SingularityNET, a decentralized AI platform founded by Dr Ben Goertzel. SingularityNET, which moved part of its operations from Ethereum to Cardano, is developing a decentralized network of AI services. The AGIX token (native to SingularityNET) is being bridged to Cardano, allowing for smoother integration between the two ecosystems. SingularityNET and Cardano share a common vision of democratized, ethical AI. Together, they aim to build artificial general intelligence (AGI) in a decentralized way, potentially changing how AI is developed and controlled on a global scale.

ADA could become the fuel for the AI economy

Hoskinson sees ADA:

  • As a payment method for AI services (e.g. model inference, training time, access to datasets)
  • As a tool to incentivize annotation and data sharing
  • As a reward method for decentralized computation providers
  • As a tool to vote on AI-related governance decisions

If this happens, ADA will gain intrinsic value beyond speculation. Demand for decentralized artificial intelligence grows by the day, so demand for the cryptocurrency that powers it should follow the same trajectory, reinforcing its position as a key player in the new economy.

A short look at the roadmap

Hoskinson's roadmap for an AI-powered Cardano isn't expected to materialize overnight. Development is expected to progress in stages throughout 2025 and beyond.
Key milestones will include:

  • Launch of new AI-integrated dApps
  • Expansion of the SingularityNET partnership
  • Rollout of tools for AI agent development
  • Community governance of AI-related proposals
  • Integration of off-chain AI computation with Cardano nodes

As expected, academic research, community involvement and transparent development will continue to underpin the transformation. If Cardano manages to enter the AI sphere, it could move from being a competitor in the smart contract sector to a pioneer in a new class of decentralized intelligence infrastructure.
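The token-weighted voting described for Voltaire-style governance can be sketched in a few lines: each holder's choice counts in proportion to the ADA they stake. All names, stakes and the simple plurality rule below are illustrative assumptions, not Cardano's actual on-chain mechanism.

```python
from collections import defaultdict

def tally(votes):
    """Sum staked ADA per choice: each vote counts in proportion to stake."""
    totals = defaultdict(float)
    for holder, choice, stake in votes:
        totals[choice] += stake
    return dict(totals)

def winner(votes):
    """Choice with the largest total stake behind it (simple plurality)."""
    totals = tally(votes)
    return max(totals, key=totals.get)

# Hypothetical ballot on funding an AI training proposal.
votes = [
    ("alice", "approve", 12_000),
    ("bob",   "reject",   4_500),
    ("carol", "approve",  3_000),
    ("dan",   "reject",   8_000),
]
print(tally(votes), "->", winner(votes))
```

Note that "approve" wins here despite being backed by fewer voters than tokens would suggest: stake, not head count, decides, which is exactly the property (and the criticism) of token-weighted governance.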
