
Cloud Security Alliance launches Valid-AI-ted tool for STAR checks
The Cloud Security Alliance has launched Valid-AI-ted, an AI-powered tool providing automated quality checks of STAR Level 1 self-assessments for cloud service providers.
Valid-AI-ted integrates large language model (LLM) technology to offer an automated assessment of assurance information in the STAR Registry, aiming to improve transparency and trust in cloud security declarations.
Jim Reavis, Chief Executive Officer and Co-Founder, Cloud Security Alliance, said, "With agile, vendor-neutral programs and a global network of industry experts, CSA is uniquely positioned to develop authoritative AI tools that address the real-world challenges of cloud service providers. Our focus on security-conscious innovation led to the creation of Valid-AI-ted and will continue to see us deliver forward-looking initiatives that will push the boundaries of secure, AI-driven technology."
CSA members can use Valid-AI-ted without charge and submit assessments as frequently as needed. Non-member providers are limited to ten resubmissions and can remediate their entries based on feedback provided by the tool. If assessments meet the required standard, providers receive a STAR Level 1 Valid-AI-ted badge for display on the STAR Registry as well as on their own platforms.
Assessment process
Valid-AI-ted uses AI-driven evaluation to systematically grade responses to the STAR Level 1 questionnaire, producing detailed reports with scores for each question and domain. Reports are delivered privately to the submitter and contain granular feedback that identifies strengths and areas for improvement.
The automation, according to CSA, is unique in the cloud security assurance landscape, as it offers objective, rapid, and scalable validation of self-assessment submissions. The process utilises a standardised scoring model informed by the Cloud Controls Matrix (CCM), which underpins CSA's approach to cloud security best practices.
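CSA has not published the internals of the grading pipeline, but the output it describes (a score and feedback per question, rolled up by CCM domain) maps naturally onto a simple aggregation. The sketch below is purely illustrative: the Answer type, the toy grade_answer heuristic standing in for the LLM call, and the example question identifiers are assumptions, not CSA's implementation.

```python
# Hypothetical sketch of rubric-driven grading with per-domain roll-up.
# All names and the scoring heuristic are illustrative, not CSA's.
from dataclasses import dataclass
from collections import defaultdict
from statistics import mean

@dataclass
class Answer:
    question_id: str   # e.g. a questionnaire item identifier (assumed format)
    domain: str        # CCM domain the question maps to (assumed mapping)
    text: str          # the provider's self-assessment response

def grade_answer(answer: Answer) -> tuple[float, str]:
    """Placeholder for the LLM grading call: returns a 0-100 score and feedback.

    A real system would send the answer plus a CCM-derived scoring rubric
    to a language model; here we only stub the shape of the result.
    """
    score = 80.0 if len(answer.text.split()) > 10 else 40.0  # toy heuristic
    note = "Detailed response." if score > 50 else "Answer lacks specifics."
    return score, note

def build_report(answers: list[Answer]) -> dict:
    """Aggregate per-question scores into per-domain and overall scores."""
    per_question, by_domain = {}, defaultdict(list)
    for a in answers:
        score, note = grade_answer(a)
        per_question[a.question_id] = {"score": score, "feedback": note}
        by_domain[a.domain].append(score)
    per_domain = {d: mean(scores) for d, scores in by_domain.items()}
    return {
        "questions": per_question,
        "domains": per_domain,
        "overall": mean(per_domain.values()),
    }

report = build_report([
    Answer("IAM-01.1", "IAM", "We enforce MFA for all administrative "
           "access and review privileged accounts quarterly."),
    Answer("DSP-04.2", "DSP", "Yes."),
])
print(report["domains"], report["overall"])
```

Under this shape, resubmission becomes a loop: act on the per-question feedback, strengthen the weak answers, and regrade.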
A key feature of Valid-AI-ted is its support for continuous improvement: organisations can revise and resubmit assessments, which CSA highlights as beneficial for those pursuing STAR certification or looking to enhance their transparency with customers and regulators.
Comparative advantages
CSA highlights several advantages of Valid-AI-ted over traditional STAR Level 1 evaluations. The tool is intended to improve assurance by reducing variability in response quality; traditional self-assessments leave customers to interpret the answers themselves.
With Valid-AI-ted, users receive qualitative analysis and actionable feedback aligned with established CCM guidance. This approach is positioned to support organisations in maturing their processes and can serve as a stepping stone towards the more rigorous STAR Level 2 third-party assessments.
The STAR Level 1 Valid-AI-ted badge, awarded to successful assessment submissions, is intended to offer heightened recognition for providers. CSA says this distinction can help providers stand out to customers, partners, and regulators by demonstrating a commitment to more than basic compliance requirements.
STAR Registry context
The STAR Registry is an online resource that publicly lists the security and privacy controls of cloud providers. It enables organisations to demonstrate compliance with various regulations and standards while supporting transparency and reducing the need for multiple customer questionnaires. The registry is based on principles detailed in the Cloud Controls Matrix, including transparency, auditing, and harmonisation of standards.
The Valid-AI-ted tool and STAR Level 1 evaluations are part of a suite of assessments that build on these principles, aiming to support both providers and customers in understanding cloud security postures.
Licensing and integration
Solution providers interested in incorporating Valid-AI-ted grading into governance, risk, and compliance (GRC) solutions can obtain access to the relevant scoring rubric and prompts by securing a CCM licence from CSA.
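CSA does not publicly document the integration interface, but a minimal sketch of how a GRC platform might assemble a grading prompt from licensed rubric material is shown below. The file path, function names, and the call_llm stub are all hypothetical, not part of CSA's actual licensing deliverables.

```python
# Illustrative-only sketch of consuming a licensed scoring rubric in a
# GRC tool; every name here is an assumption, not CSA's interface.
from pathlib import Path

def load_rubric(path: str) -> str:
    """Read licensed rubric text from wherever the integrator stores it."""
    return Path(path).read_text(encoding="utf-8")

def build_grading_prompt(rubric: str, question: str, answer: str) -> str:
    """Combine rubric, question, and provider answer into one LLM prompt."""
    return (
        f"Rubric:\n{rubric}\n\n"
        f"Question: {question}\n"
        f"Provider answer: {answer}\n"
        "Return a 0-100 score and one paragraph of feedback."
    )

def call_llm(prompt: str) -> str:
    """Stub for whatever model endpoint the GRC platform already uses."""
    raise NotImplementedError("wire this to your LLM provider")
```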
While Valid-AI-ted is available to CSA members at no charge, non-members can access the service for $595. Discounts are also available for participants attending CSA's Cloud Trust Summit, who will receive a code for a $200 reduction (bringing the fee to $395) through the end of June.
With the launch of Valid-AI-ted, CSA seeks to provide automated, standardised, and actionable assurance assessment, utilising AI to address the evolving demands of cloud security and compliance.
