Crossed Wires: Artificial intelligence slouches towards the advertising industry


Daily Maverick · 5 hours ago

And what rough beast, its hour come round at last, Slouches towards Bethlehem to be born? — WB Yeats, The Second Coming
Perhaps it's a bit of overkill to link AI's looming encroachment on the advertising industry to Yeats' darkly foreboding poem. Yet, having just returned from Cannes, host of the Cannes Lions, the global ad industry's biggest event, I found it clear that AI hung like a shadow over the proceedings: not visible to everyone, perhaps, but obvious at least to those certain of the disruption to come. They were the ones who looked like deer caught in the headlights, standing startled and paralysed amid the glitz and glamour of the event.
I was there to present a paper titled 'AI in Advertising: Governance, Regulation and Other Troubles' on behalf of the Icas (International Council for Advertising Self-Regulation) Global Think Tank. I was not the only one talking about AI in Cannes; the conversations and presentations were everywhere. One disquieting question didn't have to be articulated: has the advertising industry arrived at its Fleet Street moment?
The question refers to the collapse of the printed newspaper business in the mid-'90s, catalysed by digitisation and the internet, which brutally upended an industry that had remained largely unchanged for more than a century. There were many casualties and only a few survivors in its wake — which is what is likely to happen in advertising.
Quite suddenly, AI is shredding long-established norms everywhere in this vaunted industry. One of the most startling developments has been the release of Google's Veo 3, a text-to-video application that arrived a few weeks ago and has to be seen to be believed (just go to YouTube and search for Veo 3). The quality of the video, the AI 'actors' and the locations is indistinguishable from footage shot with cameras and populated by human actors and extras. With Veo 3, the user describes the scene they want to see, gives the actors a 'script' and 'directions', and Veo 3 does the rest. (Veo 3 is not the only text-to-video app, just the latest.)
Professional-level text-to-video is a brand-new strand of Generative AI. There are, of course, grumbles. It has limitations. Currently, Veo 3 can only render eight seconds of video. Some visual elements are difficult to control or 'not quite right'. It is expensive.
Expensive? Consider this: A marketing director will brief an agency to deliver a 30-second video commercial. The agency then refines the brief, perhaps with a rough storyboard and brand/campaign context, and passes it on to a few video production companies. One of those companies comes up with a creative approach and pitches a treatment: three days of shooting, four locations, three actors, 10 extras, two weeks of post-production. Budget? $1.5-million.
Or the agency can use Veo 3 in the hands of a single tech-savvy director and perhaps a good human Veo 3 expert. Cost? $150,000, with 10 differently flavoured commercials rendered for presentation to the client within two weeks.
It doesn't take a rocket scientist to see where this is going. It signals the end of video production companies, except for live events or productions with celebrity actors. One estimate I heard at the conference predicted 3,000 production company bankruptcies globally within two years. And it may mean the end of some ad agencies if some corporations decide to plough the money they're saving in production costs into forming new in-house agencies.
Dystopian scenario
This scenario isn't even the worst of it. Meta CEO Mark Zuckerberg recently spelt out the following audacious and dystopian scenario:
'We're going to get to a point where you're a business, you come to us, you tell us what your objective is, you connect to your bank account, you don't need any creative, you don't need any targeting demographic, you don't need any measurement, except to be able to read the results that we spit out. I think that's going to be huge, I think it is a redefinition of the category of advertising.'
Here is his vision: a business comes to Meta with a product and a few ideas, and then Meta takes over and does everything: creative concept, production, media strategy, analytics. AI then refines the ad continuously, in near-real time, until it performs at maximum efficiency. Zuckerberg, somewhat brutally, implied that in the future advertising agencies will not be required.
There are those who will strenuously object, who will talk about brand strategy and management, understanding client product roadmaps, and other assumed sacred cows — the 'deep' cores of the agency proposition. These too, I submit, will fall to AI as soon as it learns from hundreds of thousands of successful brand case studies and is able to generate a plethora of its own novel approaches.
Audience targeting
Finally, there is the matter of audience targeting. The holy grail of the advertising industry has long been the idea of the perfectly relevant ad — one that is pitched directly and only to individual consumers who are looking to buy that very product or service. Consumers have also sought the same thing: ads that matter to them and do not waste their attention. It has been assumed to be a perfect match of incentives.
But AI is now able to understand much more about individuals than we are comfortable with. By analysing our internet behaviour, our social media behaviour, our friends, our devices, our buying patterns, even the tenor of our emotional states when we post, AI can paint a near-perfect picture of who we are at any moment. This intrusion is a privacy nightmare, one demanding regulation, which may not be properly enforceable in a fast-fracturing and chaotic landscape.
There will, of course, be some advertising agencies that grasp the nettle and shed their old skins to embrace and exploit AI, perhaps pivoting quickly enough to other business models to avoid obsolescence.
Others will end up like the celluloid film editors I used to know, obstinately and proudly refusing to submit to the newfangled video editing systems that started arriving in the late 1990s.
They were brave and foolhardy, and they died alone. DM

