Cloudera urges telcos to invest in AI or risk falling behind


Techday NZ, 13 June 2025

Cloudera has issued a warning to telecommunications companies that those failing to adopt AI-driven networks risk being left behind, amid concerns that data fragmentation and scaling challenges are hampering progress in the sector.
Use cases for artificial intelligence in telecommunications are broad, such as predictive maintenance, automated anomaly detection, real-time network optimisation, and proactive service delivery. However, Anthony Behan, Global Managing Director, Communications, Media & Entertainment at Cloudera, says a lack of modernised data infrastructure could see organisations struggle to keep pace in a market experiencing sluggish growth.
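As a loose illustration of the anomaly-detection use case mentioned above, a minimal sketch might flag latency spikes in network telemetry using a rolling z-score. This is not Cloudera's implementation; the window size, threshold, and sample data are arbitrary assumptions, and a production pipeline would use streaming, seasonality-aware models.

```python
from statistics import mean, stdev

def flag_anomalies(latencies_ms, window=10, z_threshold=3.0):
    """Flag samples whose z-score against the trailing window exceeds a threshold.

    A toy stand-in for automated anomaly detection on telemetry; window and
    threshold values here are illustrative assumptions only.
    """
    anomalies = []
    for i in range(window, len(latencies_ms)):
        baseline = latencies_ms[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (latencies_ms[i] - mu) / sigma > z_threshold:
            anomalies.append(i)  # index of the suspect sample
    return anomalies

# Steady ~20 ms latency with one spike at index 12
samples = [20, 21, 19, 20, 22, 20, 21, 19, 20, 21, 20, 19, 95, 20, 21]
print(flag_anomalies(samples))  # → [12]
```

Even a simple gate like this only works if the telemetry feeding it is unified and trustworthy, which is the data-foundation point Behan makes below.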
Cloudera works with 80 of the world's top 100 telecom providers and reports that telcos are under increasing pressure to reduce costs, modernise infrastructure, and deliver better customer experiences, all while transforming their networks to meet new demands. The company stresses that scalable AI cannot happen without unified, reliable data; without AI, Behan warns, telcos could lose ground to competitors. "Telcos are drowning in vast volumes of operational and telemetry data – yet they can't act on it effectively," says Behan.
Behan further highlights, "Regulatory compliance, cyber threats, and the slow pace of network virtualisation show just how overstretched networks already are. AI can really help, and the problem isn't a lack of data – it's that it's siloed, unstructured, and untrusted. Without strong data foundations, telcos can't scale AI."
Cloudera has recently joined the AI-RAN Alliance, a coalition including global companies such as Dell, NVIDIA, SoftBank, and T-Mobile, aiming to advance the integration of AI in developing telecommunications infrastructure.
Behan notes the importance of scaling AI applications, stating, "The next phase of AI will be about scale and production. Private AI allows for that kind of automation in the network, at carrier scale."
Barriers to adoption
Data across telecommunications networks is often siloed and managed through disparate systems, creating significant hurdles for organisations wanting to deploy AI at scale. Cloudera's advice to telecom operators includes supporting hybrid workload mobility across both cloud and on-premises environments via Private AI; establishing unified data governance covering both data platform domains and BSS/OSS stacks; allowing AI workloads to be trained on-premises and deployed either in the cloud or directly in the network; and reducing vendor lock-in by running workloads where it makes the most business sense.
Recent research from Cloudera shows that AI is already being utilised in some areas within telecommunications, including customer service (49%), experience management (44%), and security monitoring (49%). However, Cloudera points out that extending the benefits of AI to more advanced network functions such as predictive maintenance and real-time optimisation will depend on a scalable data and AI infrastructure.
AI-native opportunities
With improved data foundations, networks could unlock AI's greater potential, including automation of operations, performance gains for 5G and edge, and development of new revenue streams such as smart city solutions and support for autonomous technologies.
Looking ahead, Behan outlines his vision for the future of telecom networks: "If I could wave a magic wand and build the ideal telecom network, it would have GPUs in every base station and use AI not just for communication, but for distributed, sovereign, local intelligence. That's where Private AI comes in – you can't run everything in the public cloud, especially with sensitive data. You need on-premises capabilities for control and security, but also the flexibility to use the cloud where it makes sense. The network would be highly secure, fast, and elastic – capable of spinning up virtual resources automatically to handle congestion or block fraud in real time. While this vision is still perhaps five to ten years away, telcos must begin laying the groundwork now. More investment and experimentation are needed today to realise the network of tomorrow."


Related Articles

Google search changes turning web into 'wild west'

RNZ News, 12 hours ago

Google is transforming online search, and businesses wanting to get their websites in front of customers must change with it, according to a leading digital marketer. The vast majority of online searches are done on Google, and the tech company began incorporating AI into its searches a little over a year ago. Last month its CEO announced a further step in which the typical experience of getting links to websites would be gone entirely, replaced with an AI-generated article answering the search question. Auckland digital marketer Richard Conway says he has had to overhaul his business, moving from a focus on search engine optimisation to 'generative engine optimisation'. He says the ongoing changes to Google search are turning the web into something of a 'wild west' for those who operate businesses online.

‘Nanogirl' informs South on AI's use

Otago Daily Times, 4 days ago

Even though "Nanogirl", Dr Michelle Dickinson, has worked with world-leading tech giants, she prefers to inspire the next generation. About 60 Great South guests were glued to their Kelvin Hotel seats on Thursday evening as the United Kingdom-born New Zealand nanotechnologist shared her knowledge of AI's future impact. Businesses needed to stay informed about technology so they could future-proof, she said. The days were gone when a traditional five-year business plan would be enough to future-proof, given the breakneck speed at which technology has been advancing. Owners also needed to understand the importance of maintaining a customer-centric business or risk becoming quickly irrelevant. "I care about that we have empty stores." The number of legacy institutions closing was evidence of their models not moving with the customer. "Not being customer-centric is the biggest threat to business." Schools were another sector which needed to adapt to the changing world, as they predominantly catered to producing an "average" student. "Nobody wants their kids to be average." Were AI technology to be implemented, it could be used to develop personalised learning models while removing stress-inducing and labour-intensive tasks from teachers' workloads. "Now you can be the best teacher you can be and stay in the field you love. "I don't want our teachers to be burnt out, I want them to be excited to be teaching." In 30 seconds, new technology could now produce individualised 12-week teaching plans aligned to the curriculum, in both Māori and English, she said. Agriculture was another sector to benefit from the developing technology. Better crop yields and cost savings could now be achieved through localised soil and crop tracking information which pinpointed the fertiliser and moisture needs of specific sections of a paddock.
While AI was a problem-solving tool which provided outcomes based on the information available to it, the creative ideas still needed to come from humans for it to work well, she said. "People are the fundamentals of the future . . . and human side of why we do things should be at the forefront. "We, as humans, make some pretty cool decisions that aren't always based on logic." Personal and commercial security had also become imperative now that the ability to produce realistic "deep-fake" video and audio was about to hit. She urged families and organisations to have "safe words" that would not be present in deep-fake recordings, allowing family members or staff to tell fake cries for help from genuine ones. "This is the stuff we need to be talking about with our kids right now." Great South chief executive Chami Abeysinghe said Dr Dickinson's presentation raised some "thought-provoking" questions for Southland's business leaders. She believed there needed to be discussions about how Southland could position itself to be at the forefront of tech-driven innovation. "I think some of the points that she really raised was a good indication that we probably need to get a bit quicker at adopting and adapting. "By the time we get around to thinking about it, it has already changed again." AI was able to process information and data in a fraction of the time humans could, but the technology did not come without risks, and it was critical businesses protected their operations. "If we are going to use it, we need to be able to know that it's secure." Information entered into ChatGPT became part of a public realm that anyone could access, and business policies had not kept up. "You absolutely have to have an [AI security] policy."

Nearly half of developers say over 50% of code is AI-generated

Techday NZ, 4 days ago

Cloudsmith's latest report shows that nearly half of all developers using AI in their workflows now have codebases that are at least 50% AI-generated. The 2025 Artifact Management Report from Cloudsmith surveyed 307 software professionals in the US and UK, all working with AI as part of their development, DevOps, or CI/CD processes. Among these respondents, 42% reported that at least half of their current codebase is now produced by AI tools. Despite the large-scale adoption of AI-driven coding, oversight remains inconsistent. Only 67% of developers who use AI review the generated code before every deployment. This means nearly one-third of those working with AI-assisted code are deploying software without always performing a human review, even as new security risks linked to AI-generated code are emerging.

Security concerns

The report points to a gap between the rapid pace of AI integration in software workflows and the implementation of safety checks and controls. Attacks such as 'slopsquatting' – where malicious actors exploit hallucinated or non-existent dependencies suggested by AI code assistants – highlight the risks when AI-generated code is left unchecked. Cloudsmith's data shows that while 59% of developers say they apply extra scrutiny to AI-generated packages, far fewer have more systematic approaches in place for risk mitigation. Only 34% use tools that enforce policies specific to AI-generated artifacts, and 17% acknowledge they have no controls in place at all for managing AI-written code or dependencies. "Software development teams are shipping faster, with more AI-generated code and AI agent-led updates," said Glenn Weinstein, CEO at Cloudsmith. "AI tools have had a huge impact on developer productivity, which is great. That said, with potentially less human scrutiny on generated code, it's more important that leaders ensure the right automated controls are in place for the software supply chain."
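The slopsquatting risk can be reduced with quite simple supply-chain controls. As an illustrative sketch only (this is not Cloudsmith's tooling; the allowlist and package names are invented), AI-suggested dependencies can be checked against a curated set, such as an internal artifact repository's approved packages, before anything is installed:

```python
# Toy gate for AI-suggested dependencies: anything not in the curated
# allowlist is held for human review instead of being installed blindly.
# The allowlist contents and the package names below are hypothetical.

APPROVED = {"requests", "numpy", "flask"}  # e.g. mirrored from a curated artifact repo

def vet_dependencies(suggested):
    """Split AI-suggested package names into approved vs. needs-review."""
    approved = [p for p in suggested if p in APPROVED]
    flagged = [p for p in suggested if p not in APPROVED]
    return approved, flagged

# "flask-utils-pro" stands in for a plausible-sounding, possibly hallucinated name
ok, review = vet_dependencies(["requests", "flask-utils-pro"])
print(ok, review)  # → ['requests'] ['flask-utils-pro']
```

Gating on a curated set rather than on the public registry matters here: a slopsquatted package may genuinely exist upstream, so "it installs" is not evidence that it is safe.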
Developer perceptions

The research reveals a range of attitudes towards AI-generated code among developers. While 59% are cautious and take extra steps to verify the integrity of code created by AI, 20% said they trust AI-generated code "completely." This suggests a marked difference in risk appetite and perception within developer teams, even as the majority acknowledge the need for vigilance. Across the sample, 86% of developers reported an increase in the use of AI-influenced packages or software dependencies in the past year, and 40% described this increase as "significant." Nonetheless, only 29% of those surveyed felt "very confident" in their ability to detect potential vulnerabilities in open-source libraries, from which AI tools frequently pull suggestions. "Controlling the software supply chain is the first step towards securing it," added Weinstein. "Automated checks and use of curated artifact repositories can help developers spot issues early in the development lifecycle."

Tooling and controls

The report highlights that adoption of automated tools specifically designed for AI-generated code remains limited, despite the stated importance of security among software development teams. While AI technologies accelerate the pace of software delivery and updating, adoption of stricter controls and policy enforcement is not keeping up with the new risks posed by machine-generated code. The findings indicate a potential lag in upgrading security processes or artifact management solutions to match the growing use of AI in coding. Developers from a range of industries – including technology, finance, healthcare, and manufacturing – participated in the survey, with roles spanning development, DevOps management, engineering, and security leadership in enterprises with more than 500 employees.
The full Cloudsmith 2025 Artifact Management Report also explores other key issues, including how teams decide which open-source packages to trust, the expanding presence of AI in build pipelines, and the persistent challenges in prioritising tooling upgrades for security benefits.
