
Gemini Code Assist launches for all, delivers 2.5x boost
Gemini Code Assist for individuals and for GitHub is now generally available and powered by Gemini 2.5.
Gemini Code Assist, Google's free AI coding assistant, launched in public preview for individuals a few months ago, alongside a code review agent for GitHub. According to Damith Karunaratne, Group Product Manager for Gemini Code Assist, the company has spent the time since the February preview announcement "requesting input, listening to feedback and shipping capabilities developers are asking for."
Gemini 2.5 now powers both the free and paid versions of Gemini Code Assist. The tool is designed to deliver stronger coding performance and to help developers with tasks such as creating visually compelling web applications, as well as code transformation and editing. Gemini Code Assist for individuals and Gemini Code Assist for GitHub are both now generally available, and developers can get started in under a minute.
Karunaratne said, "Now we're announcing that Gemini Code Assist for individuals and Gemini Code Assist for GitHub are generally available, and developers can get started in less than a minute. Gemini 2.5 now powers both the free and paid versions of Gemini Code Assist, features advanced coding performance, and helps developers excel at tasks like creating visually compelling web apps, along with code transformation and editing."
Recognising that developers often spend significant time personalising their coding environments for efficiency and collaboration, Google has focused the latest updates to Gemini Code Assist on expanded customisation. The company states that all versions now offer more options to accommodate individual and team preferences, including workflow customisation, the ability to resume tasks from where they were paused, and new tools to enforce team coding standards, style guides and architectural patterns.
"We know developers spend a lot of time personalizing their coding environment so they can be more efficient and work better in team settings. Our latest updates to Gemini Code Assist, across all versions, give more customization options for you and your team's preferences. This includes more ways to customize workflows to fit different project needs, the ability to more easily pick up tasks exactly from where you were left off, and new tooling to enforce a team's coding standards, style guides and architectural patterns," said Karunaratne.
Some recent updates to Gemini Code Assist include the ability to resume work and explore new directions using chat history and threads, to shape the AI's responses by specifying persistent rules such as "always add unit tests," and to automate repetitive tasks with custom commands like "generate exception handling logic." Developers can also review and accept chat code suggestions in parts, across files, or all at once, with improvements aimed at streamlining the code review and suggestion process.
Karunaratne outlined, "Here are some examples of recent updates you can explore in Gemini Code Assist:
- Quickly resume where you left off and jump into new directions with chat history and threads.
- Shape Gemini's responses by specifying rules (i.e., 'always add unit tests') that you want applied to every AI generation in the chat.
- Automate repetitive tasks by creating custom commands (i.e., 'generate exception handling logic').
- Save time by choosing to review and accept chat code suggestions in parts, across files, or accept all together. Reviewing and accepting code suggestions is now significantly improved."
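To picture what a custom command such as "generate exception handling logic" might produce, here is a minimal sketch of that kind of boilerplate in Python. The decorator name, logging choices and example function are illustrative assumptions, not Gemini Code Assist's actual output.

```python
import logging
from functools import wraps

logger = logging.getLogger(__name__)

def with_exception_handling(func):
    """Wrap a function in catch-log-reraise boilerplate.

    Illustrative only: the kind of repetitive code a custom
    'generate exception handling logic' command could emit.
    """
    @wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except ValueError as exc:
            logger.warning("Invalid input to %s: %s", func.__name__, exc)
            raise
        except Exception:
            logger.exception("Unexpected error in %s", func.__name__)
            raise
    return wrapper

@with_exception_handling
def parse_port(value: str) -> int:
    port = int(value)  # raises ValueError on non-numeric input
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port
```

The appeal of automating this is consistency: every wrapped function logs and re-raises the same way, which is exactly the sort of team convention the rules feature is meant to enforce.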
The company also confirmed that when a 2 million token context window becomes available on Vertex AI, Gemini Code Assist Standard and Enterprise customers will have access to it as well. The expanded window is intended to help with complex, large-scale development work such as bug tracing, code transformations, and generating onboarding guides for developers new to a large codebase.
"And when we make a 2 million token context window available on Vertex AI, Gemini Code Assist Standard and Enterprise developers will get it too. This expanded context window will help developers with complex tasks at large scale, like bug tracing, code transformations, and generating comprehensive onboarding guides for people new to a vast codebase," Karunaratne explained.
New data provided by the company shows notable improvements in developer productivity when using Gemini Code Assist. In an internal experiment comparing developers using the tool with developers working without any coding assistance software, Gemini Code Assist increased developers' odds of successfully completing common development tasks by 2.5 times.
"New data shows that Gemini Code Assist significantly helps developers get things done. In an experiment comparing developers using Gemini Code Assist to developers without any coding assistance tools, we found that Gemini Code Assist significantly boosts developers' odds of success in completing common development tasks by 2.5 times," Karunaratne noted.
The Gemini Code Assist extension is available for download in both Visual Studio Code and JetBrains integrated development environments, and the code review agent is accessible through the GitHub app. The service is now also available in Android Studio, allowing businesses to utilise Gemini at every phase of the Android development lifecycle.
Related Articles


Techday NZ - 5 days ago
Secure Code Warrior unveils free AI security rules for developers
Secure Code Warrior has released AI Security Rules on GitHub, offering developers a free resource aimed at improving code security when working with AI coding tools. The resource is designed for use with a variety of assistants, including GitHub Copilot, Cline, Roo, Cursor, Aider, and Windsurf.

The newly available rulesets are structured to provide security-focused guidance to developers who are increasingly using AI to assist with code generation and development. Secure Code Warrior's ongoing goal is to enable developers to produce more secure code from the outset when leveraging AI, aligning with broader efforts to embed security awareness and best practices across development workflows. The company emphasises that developers with a strong understanding of security can create much safer, higher-quality code with AI assistance than those who lack such proficiency.

Security within workflow

"These guardrails add a meaningful layer of defence, especially when developers are moving fast, multitasking, or find themselves trusting AI tools a little too much," said Pieter Danhieux, Secure Code Warrior Co-Founder & CEO. "We've kept our rules clear, concise and strictly focused on security practices that work across a wide range of environments, intentionally avoiding language or framework-specific guidance. Our vision is a future where security is seamlessly integrated into the developer workflow, regardless of how code is written. This is just the beginning."

The AI Security Rules offer what the company describes as a pragmatic, lightweight baseline that any developer or organisation can adopt, whether or not they are a Secure Code Warrior customer. The rules avoid language- and framework-specific advice, allowing broad applicability.

Features and flexibility

The rulesets function as secure defaults, steering AI tools away from hazardous coding patterns and well-known security pitfalls such as unsafe use of functions like eval, insecure authentication methods, or SQL queries built without parameterisation. The rules are grouped by development domain, including web frontend, backend, and mobile, so that developers in varied environments can benefit, and they can be incorporated into any AI coding tool that supports external rule files.

The rules are also publicly available and easy to adjust, meaning development teams of any size or configuration can tailor them to their workflow, technology stack, or project requirements. This is intended to foster consistency and collaboration within and between development teams when reviewing or generating AI-assisted code.

Supplementary content

The introduction of the AI Security Rules follows several recent releases from Secure Code Warrior centred on artificial intelligence and large language model (LLM) security. These include four new courses, such as "Coding With AI" and "OWASP Top 10 for LLMs", along with six interactive walkthrough missions, more than 40 new AI Challenges, and an expanded set of guidelines and video content. All resources are available on demand within the Secure Code Warrior platform. This rollout represents the initial phase of a broader initiative to provide ongoing training and up-to-date resources supporting secure development as AI technologies continue to be integrated into software engineering practices.
The company states that additional related content is already in development and is expected to be released in the near future. Secure Code Warrior's efforts align with increasing industry focus on the intersection of AI and cybersecurity, as the adoption of AI coding assistants becomes widespread. The emphasis on clear, practical security rules is intended to help mitigate common vulnerabilities that can be introduced through both manual and AI-assisted programming. The AI Security Rules are publicly available on GitHub for any developers or organisations wishing to incorporate the guidance into their existing development operations using compatible AI tools.
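One of the pitfalls the rulesets call out, SQL queries built without parameterisation, is easy to illustrate. The sketch below uses Python's standard sqlite3 module and is a generic example of the pattern such rules steer towards, not an excerpt from Secure Code Warrior's rulesets:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

user_input = "alice' OR '1'='1"  # hostile input a rule should defend against

# Unsafe: string interpolation lets the input rewrite the query (SQL injection),
# turning the WHERE clause into a condition that matches every row:
# rows = conn.execute(f"SELECT * FROM users WHERE name = '{user_input}'")

# Safe: a parameterised query treats the input strictly as data.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,))
print(rows.fetchall())  # [] -- the injection attempt matches nothing
```

A rule file encoding "always use parameterised queries" gives an AI assistant a default that holds regardless of which language or framework the generated code targets.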


Scoop - 10-06-2025
Datadog Broadens AI Security Features To Counter Critical Threats
AUCKLAND – JUNE 11, 2025 – Datadog, Inc. (NASDAQ: DDOG), the monitoring and security platform for cloud applications, today announced new capabilities to detect and remediate critical security risks across customers' AI environments, from development to production, as the company further invests in securing its customers' cloud and AI applications.

AI has created a new security frontier in which organisations need to rethink existing threat models, as AI workloads create new attack surfaces. Every microservice can now spin up autonomous agents that mint secrets, ship code and call external APIs without any human intervention, which means one mistake could trigger a cascading breach across the entire tech stack. The latest additions to Datadog's Security Platform, presented at DASH, aim to deliver a comprehensive solution for securing agentic AI workloads.

'AI has exponentially increased the ever-expanding backlog of security risks and vulnerabilities organisations deal with. This is because AI-native apps are not deterministic; they're more of a black box and have an increased surface area that leaves them open to vulnerabilities like prompt or code injection,' said Prashant Prahlad, VP of Products, Security at Datadog. 'The latest additions to Datadog's Security Platform provide preventative and responsive measures—powered by continuous runtime visibility—to strengthen the security posture of AI workloads, from development to production.'

Securing AI Development

Developers increasingly rely on third-party code repositories, which exposes them to poisoned code and hidden vulnerabilities, including those that stem from AI or LLM models and are difficult to detect with traditional static analysis tools. To address this problem, Datadog Code Security, now generally available, enables developer and security teams to detect and prioritise vulnerabilities in their custom code and open-source libraries, and uses AI to drive remediation of complex issues in both AI and traditional applications, from development to production. It also prioritises risks based on runtime threat activity and business impact, helping teams focus on what matters most. Deep integrations with developer tools, such as IDEs and GitHub, allow developers to remediate vulnerabilities without disrupting development pipelines.

Hardening Security Posture of AI Applications

AI-native applications act autonomously in non-deterministic ways, which makes them inherently vulnerable to new types of attacks that attempt to alter their behaviour, such as prompt injection. To mitigate these threats, organisations need stronger security controls, such as separation of privileges, authorisation bounds, and data classification, across their AI applications and the underlying infrastructure. Datadog LLM Observability, now generally available, monitors the integrity of AI models and performs toxicity checks that look for harmful behaviour across prompts and responses within an organisation's AI applications. In addition, Datadog Cloud Security lets organisations meet AI security standards such as the NIST AI framework out of the box, detecting and remediating risks such as misconfigurations, unpatched vulnerabilities, and unauthorised access to data, apps, and infrastructure.
With Sensitive Data Scanner (SDS), organisations can also prevent sensitive data, such as personally identifiable information (PII), from leaking into LLM training or inference datasets, with support for AWS S3 and RDS instances now available in preview.

Securing AI at Runtime

The growing complexity of AI applications is making it harder for security analysts to triage alerts, distinguish threats from noise and respond on time. AI apps are particularly vulnerable to unbounded consumption attacks that lead to system degradation or substantial economic losses. The Bits AI Security Analyst, a new AI agent integrated directly into Datadog Cloud SIEM, autonomously triages security signals, starting with those generated by AWS CloudTrail, and performs in-depth investigations of potential threats. It provides context-rich, actionable recommendations to help teams mitigate risks more quickly and accurately, and it saves time and costs by running preliminary investigations that let Security Operations Centres focus on the threats that truly matter.

Finally, Datadog's Workload Protection helps customers continuously monitor the interaction between LLMs and their host environment. With new LLM Isolation capabilities, available in preview, it detects and blocks the exploitation of vulnerabilities and enforces guardrails to keep production AI models secure.

Code Security, new tools in Cloud Security, Sensitive Data Scanner, Cloud SIEM, Workload and App Protection, and new security capabilities in LLM Observability were announced during the keynote at DASH, Datadog's annual conference, where Datadog also announced launches in AI Observability, Applied AI and Log Management, and released its Internal Developer Portal. A replay of the keynote is available.
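For readers unfamiliar with the technique, the sketch below shows the general shape of sensitive-data scanning before text reaches an LLM: pattern-match, then redact. It is a toy Python illustration, with the regexes and placeholder format chosen for the example; it does not represent Datadog's Sensitive Data Scanner or its API.

```python
import re

# Toy patterns for two common PII categories; production scanners use far
# broader rule libraries and context-aware matching.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched PII with a category placeholder before the text
    is logged, stored, or forwarded to an LLM."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Contact jane.doe@example.com, SSN 123-45-6789, about the refund."
print(redact(prompt))
# Contact [EMAIL REDACTED], SSN [SSN REDACTED], about the refund.
```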


Techday NZ - 10-06-2025
Datadog unveils AI-powered security tools for cloud & code
Datadog has introduced a suite of artificial intelligence security tools designed to detect and mitigate risks across cloud and AI environments.

New AI agent

The company has launched Bits AI Security Analyst, an AI agent that autonomously investigates potential threats and supports teams in managing risks with greater efficiency and accuracy. Integrated into Datadog Cloud SIEM, this agent triages security signals—starting with those generated by AWS CloudTrail—and performs detailed investigations into possible threats. Actionable, context-driven recommendations are then provided to help security teams respond more swiftly.

"AI has exponentially increased the ever-expanding backlog of security risks and vulnerabilities organizations deal with. This is because AI-native apps are not deterministic; they're more of a black box and have an increased surface area that leaves them open to vulnerabilities like prompt or code injection," said Prashant Prahlad, Vice President of Products, Security at Datadog. "The latest additions to Datadog's Security Platform provide preventative and responsive measures—powered by continuous runtime visibility—to strengthen the security posture of AI workloads, from development to production."

Enhancing code security

Datadog Code Security, now generally available, aims to help developers and security teams detect and prioritise vulnerabilities not just in proprietary code but also within open-source libraries. The platform is specifically designed to uncover issues that may be present in large language model (LLM) integrations and AI-powered code, as these can be difficult to identify using traditional static analysis tools. The solution also uses artificial intelligence to facilitate the remediation of complex problems and ranks risks based on runtime activity and business impact. Deep integrations with widely used developer environments, including integrated development environments (IDEs) and GitHub, are intended to allow faster remediation workflows without interrupting established development processes.

Strengthening AI application security

With AI-native applications operating autonomously and often in unpredictable ways, new types of attacks such as prompt injection have become more prevalent. Datadog's updated security offerings include features to help organisations implement stronger security controls through measures such as separation of privileges, finely tuned authorisation, and data classification throughout their AI application landscape and infrastructure. Datadog LLM Observability, now also generally available, monitors the integrity of AI models, with tools to identify harmful or toxic behaviours across prompts and responses in enterprise AI applications. Other updates to Datadog Cloud Security support compliance with standards such as the NIST AI framework. This suite can uncover and remediate misconfigurations, unpatched vulnerabilities, and instances of unauthorised data or infrastructure access. The Sensitive Data Scanner, now supporting AWS S3 and RDS instances in preview, helps prevent personal or sensitive information from inadvertently being incorporated into LLM training data or inference processes.

Monitoring runtime risks

The complexity of AI-based applications increases the challenge for security analysts to manage alerts, distinguish credible threats from benign signals, and respond in a timely manner.
According to Datadog, AI applications are at particular risk of attacks that could lead to resource exhaustion or financial damage if not detected early. Bits AI Security Analyst is designed to reduce the workload on Security Operations Centres by providing initial investigations and filtering for more relevant threats. The new solution aims to enable teams to act on rich context and prioritised guidance so they can focus resources where they matter most. Additional updates include Datadog Workload Protection, which now features LLM Isolation capabilities in preview. This enables continuous monitoring of interactions between LLMs and their host environments, helping to detect and prevent exploitation of vulnerabilities while enforcing controls to protect production AI models. Datadog's new security features encompass Code Security, updated Cloud Security tools, Sensitive Data Scanner, Cloud SIEM, Workload and Application Protection, and expanded abilities within LLM Observability. These updates are designed to give organisations multiple layers of risk mitigation as they increasingly deploy AI in critical workflows.