Latest news with #MLSecOps


Forbes
3 days ago
How To Implement MLSecOps In Your Organization
Neel Sendas is a Principal Technical Account Manager at Amazon Web Services (AWS).

As a cloud operations professional focusing on machine learning (ML), my work helps organizations grasp ML systems' security challenges and develop strategies to mitigate risks throughout the ML lifecycle. One of the key aspects of solving these challenges is machine learning security operations (MLSecOps), a framework that helps organizations integrate security practices into their ML development, deployment and maintenance. Let's look at ML systems' unique security challenges and how MLSecOps can help address them.

Understanding vulnerabilities and implementing robust security measures throughout the ML lifecycle is crucial for maintaining system reliability and performance. For instance, data poisoning, adversarial attacks and transfer learning attacks pose critical security risks to ML systems. Cornell research shows that data poisoning can degrade model accuracy by up to 27% in image recognition and 22% in fraud detection. Likewise, subtle input modifications during inference, known as adversarial attacks, can cause a model to completely misclassify its inputs. Transfer learning attacks exploit pre-trained models, enabling malicious model replacement during fine-tuning.

MLSecOps, which relies on effective collaboration between security teams, engineers and data scientists, is an important part of addressing these evolving challenges. The framework protects models, data and infrastructure by implementing security at every stage of the ML lifecycle. Its foundation includes threat modeling, data security and secure coding, enhanced by techniques like protected model integration, secure deployment, continuous monitoring, anomaly detection and incident response. Implementing MLSecOps effectively requires a systematic approach to ensure comprehensive security throughout the ML lifecycle.
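To make the data-poisoning threat concrete, here is a minimal, self-contained sketch (not from the article; the toy dataset and nearest-centroid classifier are illustrative assumptions). It shows how a label-flipping attack on a small fraction of training samples shifts a classifier's decision boundary and degrades test accuracy:

```python
def centroid_classifier(train):
    """Fit a 1-D nearest-centroid classifier: predict the class whose
    training-set mean is closest to the input."""
    sums, counts = {}, {}
    for x, y in train:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    centroids = {y: sums[y] / counts[y] for y in sums}
    return lambda x: min(centroids, key=lambda y: abs(x - centroids[y]))

def accuracy(predict, data):
    return sum(predict(x) == y for x, y in data) / len(data)

# Toy training set: class A clusters near 0, class B near 5.
clean = [(0.0, "A"), (0.5, "A"), (1.0, "A"), (1.5, "A"),
         (4.0, "B"), (4.5, "B"), (5.0, "B"), (5.5, "B")]

# Label-flipping poisoning: the attacker relabels two A samples as B,
# dragging B's centroid toward A and shifting the decision boundary.
poisoned = [(x, "B" if x in (1.0, 1.5) else y) for x, y in clean]

test = [(0.2, "A"), (0.8, "A"), (1.8, "A"), (2.5, "A"),
        (3.5, "B"), (4.2, "B"), (5.0, "B")]

clean_acc = accuracy(centroid_classifier(clean), test)
pois_acc = accuracy(centroid_classifier(poisoned), test)
print(f"clean accuracy:    {clean_acc:.2f}")
print(f"poisoned accuracy: {pois_acc:.2f}")
```

Even in this tiny example, corrupting a quarter of the training labels moves the learned boundary enough to misclassify test points that the clean model handled correctly, which is the mechanism behind the accuracy degradation the research cited above measures at scale.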
The process begins with assessing the security needs of ML systems: identifying data sources and infrastructure, evaluating potential risks and threats, conducting thorough risk assessments and defining clear security objectives. Working with large organizations, I've found that incorporating MLSecOps into an organization's existing security practices and tools can be complex, requiring a deep understanding of both traditional cybersecurity practices and ML-specific security considerations. Additionally, certain industries and jurisdictions have specific regulations and guidelines regarding the use of AI and ML systems, particularly in areas like finance, healthcare and criminal justice. Understanding these regulations and ensuring compliance can be challenging for those without MLSecOps expertise.

Next, you'll need to establish a cross-functional security team that combines data scientists, ML engineers and security experts. Once the team has been established, define comprehensive policies and procedures, including security policies, incident response procedures and clear documentation and communication guidelines. Implementing such policies can be challenging, as it requires orchestrating teams with diverse expertise and aligning their efforts toward a common goal. To address this challenge, organizations can develop a clear governance model that outlines the roles, responsibilities, decision-making processes and communication channels for all parties involved. This governance framework should be regularly reviewed and updated as necessary.

I recommend that the team take a step back to adapt MLSecOps to their organizational needs. One way to do this is to study the five pillars of MLSecOps laid out by Ian Swanson in a Forbes Technology Council article, namely supply chain vulnerability, ML model provenance, model governance and compliance, trusted AI and adversarial ML, and assess how each will impact your organization.
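The ML model provenance pillar mentioned above can be made tangible with a small sketch. This is an illustrative assumption, not a prescribed implementation: a deployment step that records a SHA-256 digest of a model artifact at training time and refuses to load any artifact whose digest no longer matches, which catches the kind of malicious model swap described earlier:

```python
import hashlib
import os
import tempfile

def sha256_of(path, chunk_size=8192):
    """Stream a file through SHA-256 and return its hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            digest.update(block)
    return digest.hexdigest()

def load_model_if_trusted(path, expected_digest):
    """Refuse to load a model artifact whose digest doesn't match the
    one recorded when the model was trained and approved."""
    if sha256_of(path) != expected_digest:
        raise ValueError("model digest mismatch; refusing to load")
    with open(path, "rb") as f:
        return f.read()  # a real pipeline would deserialize the model here

# Demo with a stand-in "model" file.
with tempfile.NamedTemporaryFile(delete=False, suffix=".bin") as f:
    f.write(b"model weights v1")
    model_path = f.name

trusted_digest = sha256_of(model_path)   # recorded at training time
weights = load_model_if_trusted(model_path, trusted_digest)

# Simulate a malicious model swap between training and deployment.
with open(model_path, "ab") as f:
    f.write(b"backdoor")

try:
    load_model_if_trusted(model_path, trusted_digest)
    tamper_detected = False
except ValueError:
    tamper_detected = True

os.remove(model_path)
print("tamper detected:", tamper_detected)
```

In practice this check would sit in the deployment pipeline, with trusted digests stored in a model registry rather than computed on the spot, but the principle is the same: an artifact's provenance is verified before it is ever loaded.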
Once you've understood those specific processes, ensure that you've built a secure development lifecycle through integrated security measures and secure coding. Security monitoring and response activities are also essential: deploy monitoring tools and incident response plans to watch ML workloads and detect threats. Beyond tooling, companies like Netflix use "chaos engineering," injecting failures into production to validate security controls and incident response effectiveness.

Regular audits and assessments are crucial, but employee training on risks and vigilant practices is one of the most important tasks, and one of the most difficult to achieve. Securing buy-in for training programs from non-security stakeholders is often complicated. To overcome this, I've found that collaboration with leadership can help position security training as a strategic, shared responsibility. I've also found that tailoring programs to specific roles makes the training more relevant and engaging.

In my experience, implementing comprehensive security throughout the ML lifecycle requires a combination of strategic planning, collaboration across teams, continuous learning and adaptation, and a strong focus on building a security-conscious culture. Organizations that follow established security best practices can achieve end-to-end protection for their ML operations, safeguarding against threats such as data poisoning, injection attacks, adversarial attacks and model inversion attempts. As ML applications grow more complex, continuous monitoring and proactive security measures become even more important. A robust security framework enables organizations to scale their ML operations confidently while protecting assets and accelerating growth.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.
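The continuous-monitoring and anomaly-detection step described above can be sketched with a deliberately simple building block. The feature values and z-score threshold here are illustrative assumptions, not part of any specific vendor's tooling; the idea is that a monitor fitted on training-time statistics flags inference inputs that drift far from what the model was trained on:

```python
import statistics

class InputMonitor:
    """Flag inference-time inputs that drift far from the training
    distribution -- a simple building block for ML anomaly detection."""

    def __init__(self, training_values, threshold=3.0):
        self.mean = statistics.fmean(training_values)
        self.std = statistics.stdev(training_values)
        self.threshold = threshold  # z-score cutoff

    def is_anomalous(self, value):
        z_score = abs(value - self.mean) / self.std
        return z_score > self.threshold

# Fit the monitor on one feature's training-time values (hypothetical).
monitor = InputMonitor([9.8, 10.1, 10.0, 9.9, 10.2, 10.0])

normal_flagged = monitor.is_anomalous(10.05)  # in-distribution input
outlier_flagged = monitor.is_anomalous(42.0)  # far outside training range
print("normal flagged:", normal_flagged)
print("outlier flagged:", outlier_flagged)
```

Production monitoring stacks use richer statistics (multivariate drift tests, embedding distances) and route flagged inputs to the incident response process, but even a per-feature z-score check like this one can surface crude adversarial probing or data-pipeline corruption early.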
Yahoo
29-04-2025
- Business
Palo Alto Networks to buy Protect AI to extend AI security
Cybersecurity firm Palo Alto Networks has agreed to acquire enterprise AI security provider Protect AI to bolster its position in next-generation cybersecurity. Protect AI specializes in securing the use of AI and machine learning applications and models. It provides a comprehensive suite of products for advanced AI scanning, LLM security and GenAI red teaming that operate on a single, enterprise-ready platform. By integrating with existing AI and security operations, Protect AI helps organisations protect their AI investments, enhance operational efficiency, adopt MLSecOps and meet global data security and privacy standards.

Founded by AI leaders from Oracle and Amazon, Protect AI is backed by investors including boldstart ventures, Acrew Capital, Evolution Equity Partners, Knollwood Capital, Pelion Ventures, 01 Advisors, Samsung, StepStone Group and Salesforce Ventures.

Palo Alto Networks expects the integration of Protect AI's solutions and expertise to significantly enhance its newly unveiled Prisma AIRS AI security platform. Prisma AIRS aims to give customers better protection throughout the entire AI development lifecycle, addressing the need for model scanning, risk assessment, GenAI runtime security, posture management and AI agent security. The platform is designed to empower organisations to confidently integrate AI into their operational processes.

Palo Alto Networks senior vice-president and general manager Anand Oswal said: 'By extending our AI security capabilities to include Protect AI's innovative solutions for Securing for AI, businesses will be able to build AI applications with comprehensive security.
'With the addition of Protect AI's existing portfolio of solutions and team of experts, Palo Alto Networks will be well-positioned to offer a wide range of solutions for customers' current needs, and also be able to continue innovating on delivering new solutions that are needed for this dynamic threat landscape.'

Upon completion of the transaction, the CEO, founders and employees of Protect AI are expected to join Palo Alto Networks. Subject to standard closing conditions, including regulatory approvals, the acquisition is expected to close by the first quarter of Palo Alto Networks' fiscal year 2026.

Protect AI co-founder and CEO Ian Swanson said: 'Joining forces with Palo Alto Networks will enable us to scale our mission of making the AI landscape more secure for users and organisations of all sizes. We are excited for the opportunity to unite with a company that shares our vision and brings the operational scale and cybersecurity prowess to amplify our impact globally.'

In 2024, Palo Alto Networks and Deloitte expanded their strategic alliance into the EMEA and JAPAC regions, making Palo Alto Networks' AI-powered cybersecurity solutions and joint offerings available to Deloitte clients globally.

"Palo Alto Networks to buy Protect AI to extend AI security" was originally created and published by Verdict, a GlobalData owned brand.