Latest news with #Crowdstrike


Mint
10-06-2025
- Business
- Mint
India prepares reporting standard as AI failures may hold clues to managing risks
India is framing guidelines for companies, developers and public institutions to report artificial intelligence-related incidents as the government seeks to create a database to understand and manage the risks AI poses to critical infrastructure. The proposed standard aims to record and classify problems such as AI system failures, unexpected results, or harmful effects of automated decisions, according to a new draft from the Telecommunications Engineering Centre (TEC). Mint has reviewed the document released by the technical arm of the Department of Telecommunications (DoT). The guidelines will ask stakeholders to report events such as telecom network outages, power grid failures, security breaches, and AI mismanagement, and to document their impact, according to the draft.

"Consultations with stakeholders are going on pertaining to the draft standard to document such AI-related incidents. TEC's focus is primarily on the telecom and other critical digital infrastructure sectors such as energy and power," said a government official, speaking on the condition of anonymity. "However, once a standard to record such incidents is framed, it can be used interoperably in other sectors as AI is being used everywhere." The plan is to create a central repository and pitch the standard globally to the United Nations' International Telecommunication Union, the official said.

Recording and analysing AI incidents is important because system failures, bias, privacy breaches, and unexpected results have raised concerns about how the technology affects people and society. "AI systems are now instrumental in making decisions that affect individuals and society at large," TEC said in the document proposing the draft standard. "Despite their numerous benefits, these systems are not without risks and challenges." Queries emailed to TEC didn't elicit a response till press time.

Incidents similar to the recent CrowdStrike incident, the largest IT outage in history, can be reported under India's proposed standard. Any malfunction in chatbots, cyber breaches, telecom service quality degradation, IoT sensor failures, and the like will also be covered. The draft requires developers, companies, regulators, and other entities to report the name of the AI application involved in an incident, the cause, location, and industry/sector affected, as well as the severity and kind of harm it caused.

Like OECD AI Monitor

The TEC's proposal builds on a recommendation from a MeitY sub-committee on 'AI Governance and Guidelines Development'. The panel's report in January had called for the creation of a national AI incident database to improve transparency, oversight, and accountability. MeitY is also developing a comprehensive AI governance framework for the country, with a focus on fostering innovation while ensuring responsible and ethical development and deployment of AI.

According to the TEC, the draft defines a standardized scheme for AI incident databases in telecommunications and critical digital infrastructure. "It also establishes a structured taxonomy for classifying AI incidents systematically. The schema ensures consistency in how incidents are recorded, making data collection and exchange more uniform across different systems," the draft document said.

India's proposed framework is similar to the AI Incidents Monitor of the Organization for Economic Co-operation and Development (OECD), which documents incidents to help policymakers, AI practitioners, and other stakeholders worldwide gain valuable information about the real-world risks and harms posed by the technology.

"So far, most of the conversations have been primarily around first principles of ethical and responsible AI. However, there is a need to have domain and sector-specific discussions around AI safety," said Dhruv Garg, a tech policy lawyer and partner at the Indian Governance and Policy Project (IGAP). "We need domain specialist technical bodies like TEC for setting up a standardized approach to AI incidents and risks of AI for their own sectoral use cases," Garg said. "Ideally, the sectoral approach may feed into the objective of the proposed AI Safety Institute at the national level and may also be discussed internationally through the network of AI Safety Institutes."

Need for self-regulation

In January, MeitY announced the IndiaAI Safety Institute under the ₹10,000 crore IndiaAI Mission to address AI risks and safety challenges. The institute focuses on risk assessment and management, ethical frameworks, deepfake detection tools, and stress testing tools.

"Standardisation is always beneficial as it has generic advantages," said Satya N. Gupta, former principal advisor at the Telecom Regulatory Authority of India (Trai). "Telecom and Information and Communication Technology (ICT) cuts across all sectors and, therefore, once standards to mitigate AI risks are formed here, then other sectors can also take a cue."

According to Gupta, recording AI issues should start with guidelines and self-regulation, as enforcing these norms would increase the compliance burden on telecom operators and other companies. The MeitY sub-committee had recommended that the AI incident database should not be started as an enforcement tool and that its objective should not be to penalise people who report AI incidents. "There is a clarity within the government that the plan is not to do fault finding with this exercise but help policy makers, researchers, AI practitioners, etc., learn from the incidents to minimize or prevent future AI harms," the official cited above said.
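As an illustration only, a record under such a reporting standard might capture the fields the draft names: the AI application involved, the cause, location, affected sector, severity, and kind of harm. The class, field names, and severity scale below are hypothetical and not taken from the TEC document; this is a minimal sketch of what one submission to a central repository could look like.

```python
from dataclasses import dataclass, asdict
from enum import Enum
import json


class Severity(Enum):
    """Illustrative severity scale; the TEC draft's actual taxonomy has not been published."""
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    CRITICAL = "critical"


@dataclass
class AIIncidentReport:
    """Hypothetical incident record covering the fields the draft reportedly requires."""
    application: str   # name of the AI application involved in the incident
    cause: str         # e.g. faulty model update, data drift, misconfiguration
    location: str      # where the incident occurred
    sector: str        # affected industry/sector, e.g. telecom, power
    severity: Severity
    harm_type: str     # e.g. service outage, biased decision, privacy breach

    def to_json(self) -> str:
        """Serialise to a uniform JSON shape for exchange with a central repository."""
        record = asdict(self)
        record["severity"] = self.severity.value
        return json.dumps(record, indent=2)


if __name__ == "__main__":
    report = AIIncidentReport(
        application="Network traffic-optimisation model",
        cause="Faulty model update pushed to production",
        location="Maharashtra, India",
        sector="Telecom",
        severity=Severity.HIGH,
        harm_type="Regional service-quality degradation",
    )
    print(report.to_json())
```

Keeping every submission in one common shape like this is one way a central repository could make data collection and exchange uniform across sectors, which is the stated aim of the draft schema.
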
Yahoo
06-06-2025
- Business
- Yahoo
CrowdStrike (CRWD) Price Target Raised to $515 as AI Cybersecurity Demand Soars
In this article, we take a look at where CrowdStrike Holdings, Inc. (NASDAQ:CRWD) stands against other AI stocks on Wall Street's radar. On June 2nd, Rosenblatt analyst Catherine Trebnick raised the price target on CrowdStrike Holdings, Inc. (NASDAQ:CRWD) to $515.00 (from $450.00) while maintaining a 'Buy' rating. The revision reflects the firm's optimism about CrowdStrike's financial outlook. According to the analyst, the growing trend toward IT consolidation is improving CrowdStrike's performance: annual recurring revenue (ARR) and revenue growth are anticipated to align with market estimates, projecting 21% and 20% increases, respectively. The firm further noted that businesses, despite being careful with spending, are choosing CrowdStrike for its comprehensive AI-powered security solutions. CrowdStrike's Q1 report is anticipated on June 3rd, with analysts estimating an 'inline to marginally better quarter, fueled by the persistent IT consolidation trend.' The firm also noted that its increased target multiple on the shares is backed by the 31% expansion in cybersecurity sector multiples over the past two months, as well as optimism in CrowdStrike's 'strong execution and broad platform tailored to the key IT consolidation trend.' CrowdStrike Holdings, Inc. (NASDAQ:CRWD) is a leader in AI-driven endpoint and cloud workload protection. This article was originally published at Insider Monkey.

CNBC
05-06-2025
- Business
- CNBC
CrowdStrike CEO talks DOJ inquiry: 'We stand by the accounting of those transactions'
In a Wednesday interview with CNBC's Jim Cramer, CrowdStrike CEO George Kurtz indicated the cybersecurity outfit is confident in its finances as it faces a government inquiry about information related to a massive outage last year, along with deals and other matters. "Someone asks a question, we're going to cooperate. It's an inquiry, and we'll give them the answers they need, and we'll go from there," Kurtz said. "We stand by the accounting of those transactions." Last July, CrowdStrike suffered a major IT outage that disrupted businesses around the world, including airlines, hospitals and financial services firms. CrowdStrike attributed the issues to a faulty software update, "not a security incident or cyberattack," Kurtz said at the time. CrowdStrike released its quarterly report Tuesday night, which sent shares plummeting during Wednesday's session, down 5.77% by the close. Although the company posted solid earnings and revenue, it disappointed Wall Street with a weaker-than-expected revenue forecast for the current quarter. Kurtz told Cramer that customers have been sticking with the company despite the outage, saying CrowdStrike has seen a 97% retention rate. He also explained that the company's package to help customers and partners deal with the issue led to an $11 million loss in the quarter. "I think we handled it the right way, I think customers respect us for that," Kurtz said. "And, ultimately, we gained greater intimacy with those customers, and they're buying more through Falcon Flex."


CNBC
04-06-2025
- Business
- CNBC
Three Stock Lunch: CrowdStrike, Microsoft and Tesla
Eddie Ghabour, managing partner at Key Advisors Wealth Management, joins 'Power Lunch' to share his investing take on three stocks: CrowdStrike, Microsoft and Tesla.


TechCrunch
04-06-2025
- Business
- TechCrunch
CrowdStrike's former CTO on cyber rivalries and how automation can undermine security for early-stage startups
'One of the biggest vulnerabilities in companies is actually humans,' CrowdStrike co-founder and former CTO Dmitri Alperovitch told TechCrunch in this week's episode of Equity. 'The more you automate, the more opportunities there are for people to find vulnerabilities in your system.' With the $50 billion Chinese AI market potentially slipping out of reach for U.S. chipmakers like Nvidia, and with cyber threats escalating from state actors and criminal groups alike, it's a strong reminder of how tightly tech, security, and geopolitics are intertwined. On TechCrunch's Equity podcast, Rebecca Bellan sits down with Alperovitch, who is chairman of the Silverado Policy Accelerator, to talk about the evolving cybersecurity landscape, the role of startups, and why he says we're living in a 'World on the Brink.'

Listen to the full episode to hear about:
- What early-stage startup founders are missing when it comes to secure-by-design development, maintaining security while building quickly, and crisis management.
- How AI export controls and global rivalries are reshaping innovation.
- What investors are really looking for when backing cybersecurity startups today.

Equity is TechCrunch's flagship podcast, produced by Theresa Loconsolo, and posts every Wednesday and Friday.