Datadog Broadens AI Security Features To Counter Critical Threats


Scoop | 10-06-2025

AUCKLAND – JUNE 11, 2025 – Datadog, Inc. (NASDAQ: DDOG), the monitoring and security platform for cloud applications, today announced new capabilities to detect and remediate critical security risks across customers' AI environments, from development to production, as the company further invests in securing its customers' cloud and AI applications.
AI has created a new security frontier in which organisations need to rethink existing threat models as AI workloads foster new attack surfaces. Every microservice can now spin up autonomous agents that can mint secrets, ship code and call external APIs without any human intervention. This means one mistake could trigger a cascading breach across the entire tech stack. The latest innovations to Datadog's Security Platform, presented at DASH, aim to deliver a comprehensive solution to secure agentic AI workloads.
'AI has exponentially increased the ever-expanding backlog of security risks and vulnerabilities organisations deal with. This is because AI-native apps are not deterministic; they're more of a black box and have an increased surface area that leaves them open to vulnerabilities like prompt or code injection,' said Prashant Prahlad, VP of Products, Security at Datadog. 'The latest additions to Datadog's Security Platform provide preventative and responsive measures—powered by continuous runtime visibility—to strengthen the security posture of AI workloads, from development to production.'
Securing AI Development
Developers increasingly rely on third-party code repositories, exposing them to poisoned code and hidden vulnerabilities, including flaws that stem from AI or LLM models and that are difficult to detect with traditional static analysis tools.
To address this problem, Datadog Code Security, now Generally Available, empowers developer and security teams to detect and prioritise vulnerabilities in their custom code and open-source libraries, and uses AI to drive remediation of complex issues in both AI and traditional applications—from development to production. It also prioritises risks based on runtime threat activity and business impact, empowering teams to focus on what matters most. Deep integrations with developer tools, such as IDEs and GitHub, allow developers to remediate vulnerabilities without disrupting development pipelines.
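For illustration, the class of flaw such scanners typically flag can be as simple as untrusted input reaching a shell command. The sketch below is generic Python, not Datadog-specific code, and shows the vulnerable pattern alongside the remediation a tool might suggest.

import subprocess

def archive_logs_unsafe(archive_name: str) -> None:
    # Vulnerable: untrusted input is interpolated into a shell string, so a
    # value such as "app; rm -rf /tmp/x" is executed (command injection).
    subprocess.run(f"tar czf /backups/{archive_name}.tar.gz /var/log/app", shell=True)

def archive_logs_safe(archive_name: str) -> None:
    # Remediated: validate the input and pass arguments as a list, with no shell.
    if not archive_name.isalnum():
        raise ValueError("archive name must be alphanumeric")
    subprocess.run(
        ["tar", "czf", f"/backups/{archive_name}.tar.gz", "/var/log/app"],
        check=True,
    )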
Hardening Security Posture of AI Applications
AI-native applications act autonomously in non-deterministic ways, which makes them inherently vulnerable to new types of attacks, such as prompt injection, that attempt to alter their behaviour. To mitigate these threats, organisations need stronger security controls—such as separation of privileges, authorisation bounds, and data classification—across their AI applications and the underlying infrastructure.
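As a rough sketch of what authorisation bounds can look like in practice (hypothetical roles and tool names, not a Datadog API), an agent framework can check every model-proposed action against an explicit allow-list before executing it:

from typing import Callable

# Hypothetical roles and tools: each agent role may only invoke the tools it
# has been explicitly granted, so a prompt-injected instruction cannot widen
# the agent's privileges.
ALLOWED_TOOLS: dict[str, set[str]] = {
    "support_agent": {"search_docs", "create_ticket"},
    "billing_agent": {"lookup_invoice"},
}

def execute_tool(role: str, tool_name: str,
                 registry: dict[str, Callable[..., str]], **kwargs) -> str:
    if tool_name not in ALLOWED_TOOLS.get(role, set()):
        raise PermissionError(f"{role} is not authorised to call {tool_name}")
    return registry[tool_name](**kwargs)

# Even if a malicious prompt tells the support agent to issue a refund, the
# call is rejected because "issue_refund" lies outside its authorisation bounds.
registry = {"search_docs": lambda query: f"results for {query}"}
print(execute_tool("support_agent", "search_docs", registry, query="reset password"))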
Datadog LLM Observability, now Generally Available, monitors the integrity of AI models and performs toxicity checks that look for harmful behaviour across prompts and responses within an organisation's AI applications. In addition, with Datadog Cloud Security, organisations are able to meet AI security standards such as the NIST AI framework out of the box. Cloud Security detects and remediates risks such as misconfigurations, unpatched vulnerabilities, and unauthorised access to data, apps, and infrastructure. And with Sensitive Data Scanner (SDS), organisations can prevent sensitive data—such as personally identifiable information (PII)—from leaking into LLM training or inference datasets, with support for AWS S3 and RDS instances now available in Preview.
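The idea behind keeping PII out of training and inference datasets can be illustrated with a minimal redaction pass. The patterns and labels below are illustrative only and are not how Sensitive Data Scanner is implemented.

import re

# Illustrative patterns only; real scanners use far richer detection rules.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(record: str) -> str:
    # Replace each match with a labelled placeholder before the record is
    # written to a training or inference dataset.
    for label, pattern in PII_PATTERNS.items():
        record = pattern.sub(f"[REDACTED_{label}]", record)
    return record

print(redact("Ticket from jane.doe@example.com, SSN 123-45-6789"))
# -> Ticket from [REDACTED_EMAIL], SSN [REDACTED_SSN]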
Securing AI at Runtime
The evolving complexity of AI applications is making it even harder for security analysts to triage alerts, separate threats from noise and respond in time. AI apps are particularly vulnerable to unbounded consumption attacks that lead to system degradation or substantial economic losses.
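A simple mitigation for unbounded consumption is a per-caller budget enforced before requests reach the model; the following is a hypothetical sketch, not a Datadog feature.

import time
from collections import defaultdict

# Hypothetical guardrail: cap the tokens any single caller can spend per hour
# before a request is forwarded to the model.
HOURLY_TOKEN_BUDGET = 50_000
_usage: dict[str, list[tuple[float, int]]] = defaultdict(list)

def charge_tokens(caller_id: str, tokens_requested: int) -> None:
    now = time.time()
    recent = [(t, n) for t, n in _usage[caller_id] if now - t < 3600]
    if sum(n for _, n in recent) + tokens_requested > HOURLY_TOKEN_BUDGET:
        raise RuntimeError(f"{caller_id} exceeded the hourly token budget")
    recent.append((now, tokens_requested))
    _usage[caller_id] = recent

charge_tokens("tenant-42", 1_200)    # accepted
# charge_tokens("tenant-42", 60_000)  # would raise: budget exhausted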
The Bits AI Security Analyst, a new AI agent integrated directly into Datadog Cloud SIEM, autonomously triages security signals—starting with those generated by AWS CloudTrail—and performs in-depth investigations of potential threats. It provides context-rich, actionable recommendations to help teams mitigate risks more quickly and accurately. It also helps organisations save time and costs by providing preliminary investigations and guiding Security Operations Centers to focus on the threats that truly matter.
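To make the triage idea concrete, a heavily simplified rule-based pass over CloudTrail records might look like the sketch below; the real analyst performs far deeper, context-aware investigation.

# Field names follow the CloudTrail record format; severity labels are
# invented for the example.
SUSPICIOUS_EVENTS = {"DeleteTrail", "StopLogging", "CreateAccessKey", "PutBucketPolicy"}

def triage(event: dict) -> str:
    name = event.get("eventName", "")
    identity = event.get("userIdentity", {})
    if identity.get("type") == "Root":
        return f"HIGH: root credentials performed {name}"
    if name in SUSPICIOUS_EVENTS:
        return f"MEDIUM: {name} by {identity.get('arn', 'unknown')} needs review"
    return "LOW: routine activity"

print(triage({"eventName": "StopLogging",
              "userIdentity": {"type": "IAMUser",
                               "arn": "arn:aws:iam::123456789012:user/dev"}}))
# -> MEDIUM: StopLogging by arn:aws:iam::123456789012:user/dev needs review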
Finally, Datadog's Workload Protection helps customers continuously monitor the interaction between LLMs and their host environment. With new LLM Isolation capabilities, available in preview, it detects and blocks the exploitation of vulnerabilities, and enforces guardrails to keep production AI models secure.
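Conceptually, isolating an LLM from its host means vetting any model-suggested interaction with the system before it runs. The sketch below uses a hypothetical allow-list, not Datadog's mechanism, to illustrate the idea for shell commands.

import shlex

# Hypothetical allow-list: the agent may only run a handful of read-only
# binaries and may never touch protected paths on its host.
READ_ONLY_BINARIES = {"cat", "ls", "grep", "head"}
PROTECTED_PATHS = ("/etc", "/root", "~/.ssh")

def is_command_allowed(command: str) -> bool:
    parts = shlex.split(command)
    if not parts or parts[0] not in READ_ONLY_BINARIES:
        return False
    return not any(arg.startswith(p) for arg in parts[1:] for p in PROTECTED_PATHS)

print(is_command_allowed("cat /var/log/app.log"))  # True
print(is_command_allowed("cat ~/.ssh/id_rsa"))     # False
print(is_command_allowed("rm -rf /"))              # False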
To learn more about Datadog's latest AI Security capabilities, please visit: https://docs.datadoghq.com/security/.
Code Security, new tools in Cloud Security, Sensitive Data Scanner, Cloud SIEM, Workload and App Protection, as well as new security capabilities in LLM Observability were announced during the keynote at DASH, Datadog's annual conference. The replay of the keynote is available here. During DASH, Datadog also announced launches in AI Observability, Applied AI, Log Management and released its Internal Developer Portal.


Related Articles

C-suite divisions slow GenAI adoption due to security worries

Techday NZ | 5 days ago

A new report from NTT DATA has highlighted a misalignment among senior executives regarding the adoption and security implications of generative artificial intelligence (GenAI) in organisations globally. NTT DATA's report, "The AI Security Balancing Act: From Risk to Innovation," is based on survey responses from more than 2,300 senior GenAI decision makers, including over 1,500 C-level executives across 34 countries. The findings underscore a gap between the optimism of CEOs and the caution of Chief Information Security Officers (CISOs) concerning GenAI deployment.

C-Suite perspectives

The report indicates that 99% of C-Suite executives are planning to increase their GenAI investments over the next two years, with 67% of CEOs preparing for significant financial commitments. In comparison, 95% of Chief Information Officers (CIOs) and Chief Technology Officers (CTOs) report that GenAI is already influencing, or will soon drive, greater spending on cybersecurity initiatives. Improved security was named among the top three benefits realised from GenAI adoption in the past year.

Despite these high expectations, a considerable number of CISOs express reservations. Nearly half (45%) of CISOs surveyed shared negative sentiments about GenAI rollouts, identifying security gaps and the challenge of modernising legacy infrastructure as primary barriers. The report also finds differences in the perception of policy clarity. More than half of CISOs (54%) stated that internal GenAI policies are unclear, compared with just 20% of CEOs. This suggests a disconnect between business leaders' strategic vision and concerns raised by operational security managers.

"As organisations accelerate GenAI adoption, cybersecurity must be embedded from the outset to reinforce resilience. While CEOs champion innovation, ensuring seamless collaboration between cybersecurity and business strategy is critical to mitigating emerging risks," said Sheetal Mehta, Senior Vice President and Global Head of Cybersecurity at NTT DATA, Inc. "A secure and scalable approach to GenAI requires proactive alignment, modern infrastructure and trusted co-innovation to protect enterprises from emerging threats while unlocking AI's full potential."

Operational and skills challenges

The study highlights that, while 97% of CISOs consider themselves GenAI decision makers, 69% acknowledge their teams currently lack the necessary skills to work effectively with GenAI technologies. Only 38% of CISOs said their organisation's GenAI and cybersecurity strategies are aligned, compared with 51% of CEOs. Another area of concern identified is the absence of clearly defined policies for GenAI use within organisations. According to the survey, 72% of respondents had yet to implement a formal GenAI usage policy, and just 24% of CISOs strongly agreed their company has an adequate framework for balancing the risks and rewards of GenAI adoption.

Infrastructure and technology barriers

Legacy technology also poses a significant challenge to GenAI integration. The research found that 88% of security leaders believe outdated infrastructure is negatively affecting both business agility and GenAI readiness. Upgrading systems such as Internet of Things (IoT), 5G, and edge computing was identified as crucial for future progress. To address these obstacles, 64% of CISOs reported prioritising collaboration with strategic IT partners and co-innovation, rather than relying on proprietary AI solutions.

When choosing GenAI technology partners, security leaders ranked end-to-end service integration as their most important selection criterion. "Collaboration is highly valued by line-of-business leaders in their relationships with CISOs. However, disconnects remain, with gaps between the organisation's desired risk posture and its current cybersecurity capabilities," said Craig Robinson, Research Vice President, Security Services at IDC. "While the use of GenAI clearly provides benefits to the enterprise, CISOs and Global Risk and Compliance leaders struggle to communicate the need for proper governance and guardrails, making alignment with business leaders essential for implementation."

Survey methodology

The report's data derives from a global survey of 2,300 senior GenAI decision makers. Of these respondents, 68% were C-suite executives, with the remainder comprising vice presidents, heads of department, directors, and senior managers. The research, conducted by Jigsaw Research, aimed to capture perspectives on both the opportunities and risks associated with GenAI across different regions and sectors. The report points to the need for structured governance, clarity in strategic direction, and investment in modern infrastructure to ensure successful and secure GenAI deployments in organisations.

Secure Code Warrior unveils free AI security rules for developers

Techday NZ | 5 days ago

Secure Code Warrior has released AI Security Rules on GitHub, offering developers a free resource aimed at improving code security when working with AI coding tools. The resource is designed for use with a variety of AI coding tools, including GitHub Copilot, Cline, Roo, Cursor, Aider, and Windsurf. The newly available rulesets are structured to provide security-focused guidance to developers who are increasingly using AI to assist with code generation and development processes. Secure Code Warrior's ongoing goal is to enable developers to produce more secure code from the outset when leveraging AI, aligning with broader efforts to embed security awareness and best practices across development workflows. The company emphasises that developers who possess a strong understanding of security can potentially create much safer and higher-quality code with AI assistance, compared to those who lack such proficiency.

Security within workflow

"These guardrails add a meaningful layer of defence, especially when developers are moving fast, multitasking, or find themselves trusting AI tools a little too much," said Pieter Danhieux, Secure Code Warrior Co-Founder & CEO. "We've kept our rules clear, concise and strictly focused on security practices that work across a wide range of environments, intentionally avoiding language or framework-specific guidance. Our vision is a future where security is seamlessly integrated into the developer workflow, regardless of how code is written. This is just the beginning."

The AI Security Rules offer what the company describes as a pragmatic and lightweight baseline that can be adopted by any developer or organisation, regardless of whether they are a Secure Code Warrior customer. The rules are presented in a way that reduces reliance on language- or framework-specific advice, allowing broad applicability.

Features and flexibility

The rulesets function as secure defaults, guiding AI tools away from hazardous coding patterns and well-known security pitfalls such as unsafe use of functions like eval, insecure authentication methods, or queries built without parameterisation. The rules are grouped by development domain—including web frontend, backend, and mobile—so that developers in varied environments can benefit. They are designed to be adaptable and can be incorporated with AI coding tools that support external rule files. Another feature highlighted is the public availability and ease of adjustment, meaning development teams of any size or configuration can tailor the rules to their workflow, technology stack, or project requirements. This is intended to foster consistency and collaboration within and between development teams when reviewing or generating AI-assisted code.

Supplementary content

The introduction of the AI Security Rules follows several recent releases from Secure Code Warrior centred around artificial intelligence and large language model (LLM) security. These include four new courses—such as "Coding With AI" and "OWASP Top 10 for LLMs"—along with six interactive walkthrough missions, upwards of 40 new AI Challenges, and an expanded set of guidelines and video content. All resources are available on-demand within the Secure Code Warrior platform. This rollout represents the initial phase of a broader initiative to provide ongoing training and up-to-date resources supporting secure development as AI technologies continue to be integrated into software engineering practices.

The company states that additional related content is already in development and is expected to be released in the near future. Secure Code Warrior's efforts align with increasing industry focus on the intersection of AI and cybersecurity, as the adoption of AI coding assistants becomes widespread. The emphasis on clear, practical security rules is intended to help mitigate common vulnerabilities that can be introduced through both manual and AI-assisted programming. The AI Security Rules are publicly available on GitHub for any developers or organisations wishing to incorporate the guidance into their existing development operations using compatible AI tools.
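One of the pitfalls the rules target, SQL built by string interpolation rather than parameterised queries, can be shown in a few lines. This example uses Python's sqlite3 purely to keep the demonstration self-contained and is not taken from the published rulesets.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # Hazardous pattern: a value like "x' OR '1'='1" returns every row.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Preferred pattern: the driver binds the value, so it is never parsed as SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_unsafe("x' OR '1'='1"))  # [('alice', 'admin')] - injection succeeds
print(find_user_safe("x' OR '1'='1"))    # [] - injection fails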
