Marketing Agencies Urged To Pivot As Meta Moves Toward Fully Automated Advertising By 2026


Scoop – 10-06-2025

Press Release – Alexanders Digital Marketing
With Meta announcing its ambition to fully automate advertising campaigns by 2026 using artificial intelligence, social media marketing agencies are quaking in their boots and being urged to rethink their role in a fast-evolving digital landscape.
According to a recent Reuters report, Meta is investing heavily in AI systems that will plan, purchase, and optimise ad campaigns with minimal human input, promising 30-40% better results at 10% of the cost, which could wipe out much of the creative industry around social media. The announcement signals a dramatic acceleration toward a future where media buying and ad creative are machine-led.
This shift is already being felt across the industry. AI tools like ChatGPT, Canva, and Meta's own Advantage+ are allowing small and mid-sized businesses to produce marketing content and run campaigns in-house, reducing their reliance on traditional agencies for execution.
'Clients no longer need an agency to write every post, design every banner, or set up every ad campaign,' said Rachel Alexander, founder of Alexanders, Christchurch's first digital marketing agency. 'They have Canva, ChatGPT, HeyGen, MidJourney & Meta automation. What they need now is someone to help them make sense of it all,' she said.
Agencies that once focused on deliverables like social posts and Google Ads are now being challenged to step into a new role: strategic enablers (helping clients convert leads into customers).
A recent YouTube vlog, 'Meta just killed the creative industry: The 2026 Automation Apocalypse', by Julia McCoy, CEO at First Movers and an AI thought leader, describes this well. 'Agencies must pivot from being tactical executors to strategic advisors, bringing clarity, structure, and prioritisation to an increasingly overwhelming landscape,' said McCoy.
'Business marketers need to think of their agency as a marketing generalist doctor: diagnosing weak points, recommending tailored treatments, and coaching internal teams through implementation,' said Alexander.
With many SMEs building internal marketing teams and experimenting with DIY tools, the opportunity for agencies lies in offering higher-value services such as sales enablement, CRM integration, AI content workflows, and conversion strategy.
'It's less about deliverables, more about direction. Less about content calendars, more about conversion journeys…The marketing agency of the future is less like a factory and more like a consultancy,' said McCoy.
Alexander said she is mindful, but not anxious, because 'we've always been a hybrid between a marketing consultancy and a marketing agency'.
For New Zealand agencies looking to adapt, this means embracing AI rather than competing with it, and reasserting their value as interpreters, integrators, and insight-driven advisors.
'AI has been a disruptive technology, but being agile is the key to success. It's helped us survive for 28 years. Time to pivot again!' said Alexander.


Related Articles

Varonis boosts ChatGPT Enterprise security with compliance tools

Techday NZ

4 days ago



Varonis has announced the integration of its Data Security Platform with the OpenAI ChatGPT Enterprise Compliance API, aiming to provide enhanced data protection and compliance monitoring for enterprise users of ChatGPT. The integration is designed to help organisations using ChatGPT Enterprise automatically identify sensitive data uploads, monitor the content of prompts and responses, and mitigate the risks of data breaches and compliance violations. ChatGPT Enterprise currently serves over 3 million business users, offering productivity tools that are enhanced by access to organisational data. As these AI models become more embedded in daily workflows, maintaining strict data governance becomes increasingly important for companies managing sensitive or regulated information.

Expanded security measures

The Varonis integration is intended to offer added protection against risks such as compromised accounts, insider threats, and accidental misuse, all of which can result in data security problems or regulatory penalties. The platform supports ongoing adjustment of user permissions and continuously monitors interactions within ChatGPT to limit unnecessary data flows and alert security teams to potentially risky or abnormal behaviours. "ChatGPT is becoming a critical part of how modern teams work. With Varonis, security teams can embrace this shift without losing visibility or control over their sensitive data," said Varonis EVP of Engineering and Chief Technology Officer David Bass. Through its partnership with OpenAI, Varonis delivers both automated security protocols and 24/7 data monitoring, allowing organisations to adopt artificial intelligence-based solutions while maintaining their obligations around privacy and data protection.

Key functions

The new offering brings several technical capabilities with a focus on automation and real-time oversight. Automated data classification allows Varonis to detect and label sensitive materials that are either uploaded to or generated by ChatGPT Enterprise. Continuous session monitoring ensures that any prompt or response within the ChatGPT environment is reviewed for compliance, preventing inappropriate or risky data from being uploaded or shared inadvertently. The platform also uses behaviour-based threat detection to flag unusual activity, such as large-scale file uploads or unauthorised changes to administrative access, which could indicate a potential breach.

Focus on compliance and privacy

The integration is positioned to offer both preventative and detective controls for AI-powered environments. These measures aim to ensure that users maximise the operational value of AI tools, such as ChatGPT, while minimising the risks associated with data exposure. The Varonis solution is described as complementing existing OpenAI security and privacy controls, rather than replacing them. This approach enables organisations to deploy generative AI models more confidently, even in regulated sectors or areas handling highly confidential information.

Availability and assessment

Customers will have access to Varonis for ChatGPT Enterprise in a private preview phase. As part of this launch, organisations can request a Varonis Data Risk Assessment, which reviews current practices and assesses an organisation's readiness for adopting AI in a secure and compliant way. Varonis continues to develop its portfolio of integrations and security tools as part of its core offering. The Data Security Platform sees application across numerous cloud environments, with a focus on automating security outcomes, data detection and response, data loss prevention, and insider risk management.
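Varonis does not publish its classification internals, but the general idea of screening prompts for sensitive material before they reach an AI service can be sketched in a few lines. This is a minimal illustration with hypothetical, pattern-based rules, not the vendor's actual approach; real platforms use far richer classifiers.

```python
import re

# Hypothetical detection rules -- illustrative only.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify_prompt(text: str) -> list[str]:
    """Return the labels of sensitive-data patterns found in a prompt."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

def review_prompt(text: str) -> tuple[bool, list[str]]:
    """Allow the prompt only if no sensitive pattern matches."""
    labels = classify_prompt(text)
    return (len(labels) == 0, labels)
```

A monitoring layer like this would sit between the user and the AI service, blocking or flagging the session before any sensitive content is uploaded.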

The rise of agentic AI and what it means for ANZ enterprise

Techday NZ

5 days ago



As much as we don't want to admit it, a lot of our working day can be spent on tedious, repetitive tasks that eat away at our time and mental energy, preventing us from focusing on truly strategic work. Agentic AI is changing that equation, and research has found that the technology is rapidly taking off in Australia and New Zealand. According to a study from YouGov and Salesforce, 69% of ANZ c-suite executives who prioritise AI are focused on implementing agentic AI over the next 12 months, and 38% say they're already implementing the technology. Agentic AI is seen by many as the new frontier of AI innovation, because these agents can automate tedious or repetitive processes without direct prompting from a human user, which opens up a wide array of possible applications. An AI agent could, for example, provide expert-level advice to customers, perform administrative work for finance or HR departments, or execute complex data analysis, among other potential use cases. In order to adopt AI agents securely and efficiently, however, organisations across ANZ and beyond will have to do more to secure and optimise the data that powers agentic tools. Without strong data security and governance, agents won't work effectively or securely, which can harm productivity and create unnecessary risk.

What is agentic AI? Setting the record straight

What is an AI agent? Microsoft defines it as an "[application] that automates and executes business processes, acting as [a] digital colleague to assist or even perform tasks on behalf of users or teams." Salesforce, meanwhile, calls it a "type of artificial intelligence (AI) that can operate independently, making decisions and performing tasks without human intervention," and IBM calls it "an artificial intelligence system that can accomplish a specific goal with limited supervision."
While these definitions might not be perfectly identical (and there's definitely been some healthy debate in the industry!), the core concept is consistent: an AI agent is an AI system that can act intelligently and autonomously, without direct, continuous prompting from a human. It's this autonomy and advanced reasoning power that truly sets them apart from AI assistants like ChatGPT, Google Gemini, or Microsoft 365 Copilot. Think of it this way: an assistant helps you write, while an agent writes the report for you. This opens up a world of possibilities: expert-level customer advice, automated administrative work for finance or HR, or even executing complex data analysis on its own. For example, just this week I asked an AI agent to put together a report for me comparing software product features against an international standard and then provide suggestions for additional functionality. This saved me about three days of research, and I could spend that valuable time analysing the results.

Why stronger data governance makes better, safer AI agents

Agentic AI has unique benefits, but it also presents unique risks, and as more organisations adopt agentic AI, they're discovering that robust data governance (the establishment of policies, roles, and technology to manage and safeguard an organisation's data assets) is essential when it comes to ensuring that these systems function securely and effectively. That's why, according to a recent study from Drexel University, 71% of organisations have a data governance programme, compared to 60% in 2023. Effective governance is on the rise because it helps address critical AI-related security and productivity issues, such as preventing data breaches and reducing AI-related errors. Without strong data governance measures, agents may inadvertently expose sensitive information or make flawed autonomous decisions.
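The distinction above (an assistant helps you write, an agent does the work for you) comes down to chaining tools toward a goal without prompting between steps. As a toy illustration of the author's feature-comparison example, the sketch below defines two stand-in "tools" and an agent that calls them in sequence; every function and data value here is hypothetical, not any vendor's actual agent framework.

```python
# A toy "agent" run: given a goal, it gathers requirements, analyses gaps,
# and drafts suggestions -- with no human prompting between steps.

def lookup_standard(topic: str) -> list[str]:
    """Stand-in 'tool': returns requirements for a standard (canned data)."""
    return {"password-policy": ["min length 12", "mfa required"]}.get(topic, [])

def compare_features(features: list[str], requirements: list[str]) -> list[str]:
    """Stand-in 'tool': requirements the product does not yet meet."""
    return [r for r in requirements if r not in features]

def run_agent(goal_topic: str, product_features: list[str]) -> dict:
    """Autonomously chains the tools toward the goal and reports the result."""
    requirements = lookup_standard(goal_topic)               # step 1: gather
    gaps = compare_features(product_features, requirements)  # step 2: analyse
    return {"requirements": requirements, "gaps": gaps,      # step 3: report
            "suggestions": [f"add support for: {g}" for g in gaps]}
```

A production agent would replace the canned lookup with real tool calls (search, document retrieval, APIs) and an LLM planner deciding which tool to invoke next, but the autonomy pattern is the same.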
With strong data governance measures, organisations can proactively safeguard their data by implementing comprehensive governance policies and deploying technologies to monitor AI runtime environments. This not only enhances security but also ensures that agentic AI tools operate optimally, delivering significant value with minimal risk. Key elements of this approach include:

• Securing data without a human in the loop: Agents rely on the data they consume and often don't have a human in the mix to ensure that data is consumed and dispensed correctly. This means it's crucial that this data is accurately categorised to ensure relevance and mitigate risk. When a human isn't in the loop, strong data governance measures can step in to ensure that AI agents cannot access or repeat sensitive data.

• Preventing errors and breaches: Robust governance frameworks help agents avoid "hallucinations" (instances where AI generates incorrect information) and protect sensitive content from accidental exposure by improving the quality of AI data. This significantly lowers the chances of autonomous agents making harmful decisions.

To grapple with these and other AI-related challenges, Gartner now recommends that organisations apply its AI TRiSM (trust, risk, and security management) framework to their data environments. Data and information governance are a key part of this framework, along with AI governance and AI runtime inspection and enforcement technology. The very existence of this new framework underscores the immense potential, and the equally immense risks, of agentic AI.

Securing the future with AI

The future of work is here, and it's powered by agentic AI. While the wave of adoption is clearly building across ANZ, organisations must prioritise robust data security and governance. This isn't just about managing risk; it's about optimising the data that fuels these powerful tools, ensuring they work effectively and securely.
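The "no human in the loop" point above amounts to putting a policy gate between the agent and the data it reads. As a minimal sketch (all record names, labels, and clearances are hypothetical), an agent's data access might pass through a check like this:

```python
# Hypothetical governance gate: an agent's data reads pass through a policy
# check, and records above its clearance are withheld rather than exposed.

RECORDS = {
    "q3-sales.csv": {"classification": "internal", "body": "region,revenue"},
    "payroll.xlsx": {"classification": "restricted", "body": "name,salary"},
}

AGENT_CLEARANCE = {"reporting-agent": {"public", "internal"}}

def governed_read(agent: str, record_id: str) -> str:
    """Return record content only if the agent's clearance covers its label."""
    record = RECORDS[record_id]
    if record["classification"] in AGENT_CLEARANCE.get(agent, set()):
        return record["body"]
    # Deny, returning a marker the agent can surface instead of the data.
    return f"[REDACTED: {record['classification']} data blocked for {agent}]"
```

In a real deployment the classification labels would come from automated data categorisation and the denials would feed an audit log, but the principle is the same: governance enforces the boundary even when no human reviews each request.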
Organisations cannot afford to be left behind, so more must be done to manage the risks and ensure this powerful tooling is effective.
