
Latest news with #ML

Turkish competition authority launches probe into Google's PMAX

Time of India

a day ago

  • Business
  • Time of India

Turkish competition authority launches probe into Google's PMAX

Turkey's antitrust authority will launch a probe into Google's Performance Max (PMAX) to determine if the AI-powered ad campaign product violates competition laws, it said on Friday. In a statement, the competition board said the probe will examine whether Google has engaged in unfair practices against advertisers and if it has hindered competition through data consolidation with PMAX. Google's Performance Max uses AI to automatically find the best placements for a brand's ads across Google services, including email, search and YouTube.

How To Implement MLSecOps In Your Organization

Forbes

3 days ago

  • Forbes

How To Implement MLSecOps In Your Organization

Neel Sendas is a Principal Technical Account Manager at Amazon Web Services (AWS). As a cloud operations professional focusing on machine learning (ML), my work helps organizations grasp ML systems' security challenges and develop strategies to mitigate risks throughout the ML lifecycle. One of the key aspects of solving these challenges is machine learning security operations (MLSecOps), a framework that helps organizations integrate security practices into their ML development, deployment and maintenance. Let's look at ML systems' unique security challenges and how MLSecOps can help to address them.

Understanding vulnerabilities and implementing robust security measures throughout the ML lifecycle is crucial for maintaining system reliability and performance. For instance, data poisoning, adversarial attacks and transfer learning attacks pose critical security risks to ML systems. Cornell research shows that data poisoning can degrade model accuracy by up to 27% in image recognition and 22% in fraud detection. Likewise, subtle input modifications during inference, known as adversarial attacks, can cause a model to misclassify inputs entirely (a minimal sketch of such an attack appears below). Transfer learning attacks exploit pre-trained models, enabling malicious model replacement during fine-tuning.

MLSecOps, which relies on effective collaboration between security teams, engineers and data scientists, is an important aspect of addressing these evolving challenges. The framework protects models, data and infrastructure by implementing security at every stage of the ML lifecycle. The foundation includes threat modeling, data security and secure coding, and it is enhanced by techniques like protected model integration, secure deployment, continuous monitoring, anomaly detection and incident response.

Implementing MLSecOps effectively requires a systematic approach to ensure comprehensive security throughout the ML lifecycle. The process begins with assessing the security needs of ML systems: identifying data sources and infrastructure, evaluating potential risks and threats, conducting thorough risk assessments and defining clear security objectives. Working with large organizations, I've found that incorporating MLSecOps into an organization's existing security practices and tools can be complex, requiring a deep understanding of both traditional cybersecurity practices and ML-specific security considerations. Additionally, certain industries and jurisdictions have specific regulations and guidelines regarding the use of AI and ML systems, particularly in areas like finance, healthcare and criminal justice. Understanding these regulations and ensuring compliance may be challenging for those without MLSecOps expertise.

Next, you'll need to establish a cross-functional security team that combines data scientists, ML engineers and security experts. Once the team has been established, define comprehensive policies and procedures, including security policies, incident response procedures and clear documentation and communication guidelines. Implementing such policies can be challenging, as it requires orchestrating teams with diverse expertise and aligning their efforts toward a common goal. To address this challenge, organizations can develop a clear governance model that outlines the roles, responsibilities, decision-making processes and communication channels for all parties involved. This governance framework should be reviewed regularly and updated as necessary.
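To make the adversarial-attack risk mentioned above concrete, here is a minimal sketch of a fast-gradient-sign (FGSM) style perturbation. It is illustrative only: the PyTorch dependency, the toy two-layer model and the random input are assumptions standing in for a production classifier and real data.

```python
# Minimal FGSM-style adversarial perturbation sketch (assumes PyTorch is installed).
# The model and data are placeholders, not a real production system.
import torch
import torch.nn as nn

# Hypothetical stand-in for a deployed classifier (e.g. a fraud or image model).
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
model.eval()

loss_fn = nn.CrossEntropyLoss()
x = torch.randn(1, 20, requires_grad=True)   # one input example
y = torch.tensor([1])                        # its true label

# Compute the gradient of the loss with respect to the *input*, not the weights.
loss = loss_fn(model(x), y)
loss.backward()

# FGSM: nudge the input in the direction of the gradient's sign, bounded by epsilon.
epsilon = 0.1
x_adv = x + epsilon * x.grad.sign()

with torch.no_grad():
    clean_pred = model(x).argmax(dim=1).item()
    adv_pred = model(x_adv).argmax(dim=1).item()

print(f"prediction on clean input:     {clean_pred}")
print(f"prediction on perturbed input: {adv_pred}")
```

If the two predictions differ for a small epsilon, the model is vulnerable to this class of attack; teams can run checks like this as part of pre-deployment robustness testing.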
I recommend that the team take a step back to adapt MLSecOps to their organizational needs. One way to do this is to study the five pillars of MLSecOps laid out by Ian Swanson in a Forbes Technology Council article (supply chain vulnerability, ML model provenance, model governance and compliance, trusted AI and adversarial ML) and understand how each will impact your organization. Once you've understood those specific processes, ensure that you've built a secure development lifecycle through integrated security measures and secure coding.

Security monitoring and response activities are also essential; these involve deploying monitoring tools and incident response plans to watch ML workloads and detect threats (a minimal monitoring sketch appears at the end of this article). Beyond using tools, companies like Netflix use "chaos engineering," injecting failures into production to validate security controls as well as the effectiveness of incident response. Regular audits and assessments will be crucial, but implementing employee training on risks and vigilant practices is one of the most important tasks, and one of the most difficult to achieve. Securing buy-in for training programs from non-security stakeholders is often complicated. To overcome this, I've found that collaboration with leadership can help position security training as a strategic, shared responsibility. I've also found that tailoring programs to specific roles makes the training more relevant and engaging.

In my experience, implementing comprehensive security throughout the ML lifecycle requires a combination of strategic planning, collaboration across teams, continuous learning and adaptation, and a strong focus on building a security-conscious culture. Organizations seeking comprehensive MLOps security can achieve end-to-end protection by following established security best practices. This approach safeguards against various threats, including data poisoning, injection attacks, adversarial attacks and model inversion attempts. As ML applications grow more complex, continuous monitoring and proactive security measures become crucial. A robust security framework enables organizations to scale their ML operations confidently while protecting assets and accelerating growth.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.
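As a closing illustration of the continuous-monitoring step described above, here is a minimal sketch of drift detection on model confidence scores using a population stability index (PSI). The 0.2 threshold, the window sizes and the print-based alert are assumptions; a production setup would wire a check like this into the organization's monitoring and incident-response tooling.

```python
# Minimal drift-monitoring sketch for an ML workload (assumes NumPy is installed).
# Compares live model confidence scores against a baseline window via PSI.
import numpy as np

def population_stability_index(baseline, live, bins=10):
    """Rough PSI between two score distributions (higher = more drift)."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    # Clip to avoid division by zero / log(0) for empty bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - base_pct) * np.log(live_pct / base_pct)))

def check_for_drift(baseline_scores, live_scores, psi_threshold=0.2):
    # 0.2 is a commonly used rule-of-thumb threshold, not a universal standard.
    psi = population_stability_index(baseline_scores, live_scores)
    if psi > psi_threshold:
        # Placeholder for a real incident-response hook (pager, ticket, etc.).
        print(f"ALERT: score drift detected (PSI={psi:.3f})")
    else:
        print(f"OK: scores stable (PSI={psi:.3f})")

# Simulated data: healthy baseline confidences vs. a drifted live window.
rng = np.random.default_rng(0)
baseline = rng.beta(8, 2, size=5000)   # high-confidence baseline scores
live = rng.beta(4, 4, size=1000)       # suspiciously lower live confidence
check_for_drift(baseline, live)
```

The same pattern can be applied to input feature distributions to flag potential data poisoning or upstream data-quality issues.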

Artificial intelligence may cause mass unemployment, says Geoffrey Hinton; 'Godfather of AI' reveals 'safe' jobs

Mint

4 days ago

  • Business
  • Mint

Artificial intelligence may cause mass unemployment, says Geoffrey Hinton; 'Godfather of AI' reveals 'safe' jobs

The 'Godfather of AI', Geoffrey Hinton, recently stated that some professions are safer than others when it comes to being replaced by AI. In an interview on the podcast "Diary of a CEO", which aired on Monday, Hinton said AI has the potential to cause mass joblessness, especially in white-collar jobs. Hinton reiterated his point on AI superiority. "I think for mundane intellectual labour, AI is just going to replace everybody," he said, using "mundane intellectual labour" to refer to white-collar work. He also said that one person working with an AI assistant would do the work that 10 people did previously. Hinton said that he would be "terrified" to work in a call centre right now due to the potential for automation.

However, he pointed out that blue-collar work would take longer to be replaced by AI. "I'd say it's going to be a long time before AI is as good at physical manipulation," Hinton said in the podcast. "So, a good bet would be to be a plumber." Hinton also challenged the notion that AI would create new jobs, arguing that if AI automated intellectual tasks, there would be few jobs left for people to do. "A person has to be very skilled to have a job that AI just couldn't do," Hinton said. Geoffrey Hinton, 78, is known as the 'Godfather of AI' for his work on neural networks, which he started in the late 1970s. He won the 2024 Nobel Prize in Physics for his work on machine learning (ML) and is currently teaching computer science at the University of Toronto.

The interview comes just after OpenAI announced its restructuring plans, in which the company's for-profit arm will become a public benefit corporation (PBC), in an attempt to appease the company's investors. OpenAI said that the plan will allow it to raise more capital to keep pace in the expensive AI race, reported Reuters. However, a group of critics, including Geoffrey Hinton and former OpenAI employees, raised concerns, claiming that the plan "might be a step in the right direction" yet does not adequately ensure that OpenAI sticks to its original mission to develop artificial intelligence for the benefit of humanity. They objected to OpenAI's proposed reorganisation because they said it would put investors' profit motives ahead of the public good. OpenAI co-founder Elon Musk, who is now a competitor through his company xAI, also objected to the proposal on the same grounds, and is suing OpenAI for breaching the company's founding contract, reported Reuters.

Political row erupts in Buxar over reconstruction of roads

Time of India

4 days ago

  • Politics
  • Time of India

Political row erupts in Buxar over reconstruction of roads

Buxar: A political controversy has erupted in the Dumraon subdivision of Buxar district over reconstruction of 114 roads under the rural works department, with the project estimated to cost Rs 189 crore. CPI (ML) MLA Ajit Kumar Singh had recently laid foundation stones for several road projects in the Chaugain and Nawanagar blocks under his constituency. However, the plaques bearing his name at these project sites were vandalised. JD(U) leader and former Dumraon councillor Dheeraj Kumar accused the MLA of attempting to take undue credit for govt-funded initiatives. He alleged that the MLA was seeking to claim ownership of projects sanctioned by chief minister Nitish Kumar by holding ceremonial inaugurations. Ajit refuted the allegations, stating that it is the responsibility of opposition legislators to recommend development projects. He claimed that these roadworks were approved based on his proposals and accused the govt of selectively discriminating against initiatives put forth by opposition members. "I have fought a long battle to get these roads sanctioned and have complete documentation to support this," he said. "If anyone has evidence to the contrary, they are welcome to present it," he added. He strongly condemned the vandalism of the inauguration plaques. "These plaques serve as public markers of development, showing which representative initiated the work. When members of the ruling party destroy them, it's not just the destruction of a stone slab—it is an affront to the people's mandate, to elected representatives, and ultimately, to democracy itself," he said. He further alleged that district officials are being pressured by senior department officials and a state minister at the behest of local ruling party leaders.
