
Datadog Broadens AI Security Features To Counter Critical Threats
Press Release – Datadog
Launch of Code Security and new security capabilities strengthen posture across the AI stack, from data and AI models to applications.
AUCKLAND – JUNE 11, 2025 – Datadog, Inc. (NASDAQ: DDOG), the monitoring and security platform for cloud applications, today announced new capabilities to detect and remediate critical security risks across customers' AI environments—from development to production—as the company further invests to secure its customers' cloud and AI applications.
AI has created a new security frontier in which organisations need to rethink existing threat models as AI workloads foster new attack surfaces. Every microservice can now spin up autonomous agents that can mint secrets, ship code and call external APIs without any human intervention. This means one mistake could trigger a cascading breach across the entire tech stack. The latest innovations to Datadog's Security Platform, presented at DASH, aim to deliver a comprehensive solution to secure agentic AI workloads.
'AI has exponentially increased the ever-expanding backlog of security risks and vulnerabilities organisations deal with. This is because AI-native apps are not deterministic; they're more of a black box and have an increased surface area that leaves them open to vulnerabilities like prompt or code injection,' said Prashant Prahlad, VP of Products, Security at Datadog. 'The latest additions to Datadog's Security Platform provide preventative and responsive measures—powered by continuous runtime visibility—to strengthen the security posture of AI workloads, from development to production.'
Securing AI Development
Developers increasingly rely on third-party code repositories, which expose them to poisoned code and hidden vulnerabilities, including those introduced by AI or LLM models, that are difficult to detect with traditional static analysis tools.
To address this problem, Datadog Code Security, now Generally Available, empowers developer and security teams to detect and prioritise vulnerabilities in their custom code and open-source libraries, and uses AI to drive remediation of complex issues in both AI and traditional applications—from development to production. It also prioritises risks based on runtime threat activity and business impact, empowering teams to focus on what matters most. Deep integrations with developer tools, such as IDEs and GitHub, allow developers to remediate vulnerabilities without disrupting development pipelines.
Hardening Security Posture of AI Applications
AI-native applications act autonomously in non-deterministic ways, which makes them inherently vulnerable to new types of attacks that attempt to alter their behaviour such as prompt injection. To mitigate these threats, organisations need stronger security controls—such as separation of privileges, authorisation bounds, and data classification across their AI applications and the underlying infrastructure.
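As a generic illustration of the kind of input control described above (this is a toy sketch, not Datadog's implementation; the pattern list and function names are invented for the example), a minimal guardrail might screen prompts for known injection phrasings before they ever reach a model:

```python
import re

# Hypothetical patterns; production guardrails use trained classifiers and
# contextual analysis rather than regexes, but the pre-model check is the same idea.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard .* system prompt",
    r"you are now .* unrestricted",
]

def is_suspicious(prompt: str) -> bool:
    """Return True if the prompt matches a known injection pattern."""
    lowered = prompt.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def guarded_call(prompt: str) -> str:
    """Block suspicious prompts; otherwise hand the prompt onward."""
    if is_suspicious(prompt):
        return "Request blocked by input guardrail."
    # In a real system, the vetted prompt would be forwarded to the LLM here.
    return "Prompt forwarded to model."
```

Checks like this complement, rather than replace, the privilege separation and authorisation bounds mentioned above, since injection attempts can be arbitrarily paraphrased.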
Datadog LLM Observability, now Generally Available, monitors the integrity of AI models and performs toxicity checks that look for harmful behaviour across prompts and responses within an organisation's AI applications. In addition, with Datadog Cloud Security, organisations are able to meet AI security standards such as the NIST AI framework out-of-the-box. Cloud Security detects and remediates risks such as misconfigurations, unpatched vulnerabilities, and unauthorised access to data, apps, and infrastructure. And with Sensitive Data Scanner (SDS), organisations can prevent sensitive data—such as personally identifiable information (PII)—from leaking into LLM training or inference datasets, with support for AWS S3 and RDS instances now available in Preview.
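The idea behind scrubbing PII before it reaches training or inference datasets can be shown with a minimal, generic redaction pass. This is a toy sketch, not Datadog's SDS; the two patterns below are illustrative assumptions, and real scanners cover many more data categories with validated, context-aware detection:

```python
import re

# Toy patterns for two common PII types (email address, US SSN).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace detected PII spans with a [REDACTED:<type>] placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text
```

Running records through a pass like this before they land in a training corpus is what keeps accidental PII leakage out of model weights and inference logs.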
Securing AI at Runtime
The evolving complexity of AI applications is making it even harder for security analysts to triage alerts, distinguish real threats from noise and respond in time. AI apps are particularly vulnerable to unbounded consumption attacks that lead to system degradation or substantial economic losses.
The Bits AI Security Analyst, a new AI agent integrated directly into Datadog Cloud SIEM, autonomously triages security signals—starting with those generated by AWS CloudTrail—and performs in-depth investigations of potential threats. It provides context-rich, actionable recommendations to help teams mitigate risks more quickly and accurately. It also helps organisations save time and costs by providing preliminary investigations and guiding Security Operations Centers to focus on the threats that truly matter.
Finally, Datadog's Workload Protection helps customers continuously monitor the interaction between LLMs and their host environment. With new LLM Isolation capabilities, available in preview, it detects and blocks the exploitation of vulnerabilities, and enforces guardrails to keep production AI models secure.
To learn more about Datadog's latest AI Security capabilities, please visit: https://docs.datadoghq.com/security/.
Code Security, new tools in Cloud Security, Sensitive Data Scanner, Cloud SIEM, Workload and App Protection, as well as new security capabilities in LLM Observability were announced during the keynote at DASH, Datadog's annual conference. The replay of the keynote is available here. During DASH, Datadog also announced launches in AI Observability, Applied AI, Log Management and released its Internal Developer Portal.