Latest news with #Operator

Mint
5 days ago
- Business
- Mint
Mint Primer: AI's twin impact: Better security, worse dangers
AI and generative AI are proving to be double-edged swords, boosting cyber defences while also enabling threats like deepfakes, voice cloning and even attacks by autonomous AI agents. With over two-thirds of Indian firms hit by such threats last year, how do we keep up?

What sets AI-powered cyberthreats apart?
AI-powered cyberthreats supercharge traditional attacks, making phishing, malware, and impersonation faster, stealthier, and more convincing. GenAI tools create deepfakes, generate personalized phishing emails, and produce polymorphic malware that mutates constantly. AI bots test stolen credentials, bypass the puzzle-based CAPTCHAs designed to detect them, and scan networks for vulnerabilities. Tools like ChatGPT have been used to send 100,000 spam emails for just $1,250. Symantec researchers have shown how AI agents like OpenAI's Operator can run a phishing attack via email with little human intervention.

How big is this threat for India?
Nearly 72% of Indian firms faced AI-driven cyberattacks in the past year, reveals an IDC–Fortinet report. Key threats include insider risks, zero-day exploits (attacks launched before developers can fix software bugs, leaving zero days of defence), phishing, ransomware, and supply chain attacks. These threats are rising fast: 70% of firms saw cases double, and 12% saw a threefold surge. The attacks are also harder to detect. The fallout is costly: 56% suffered financial losses, and 20% lost over $500,000, the report noted. Data theft (60%), trust erosion (50%), regulatory fines (46%), and operational disruptions (42%) are the other top business impacts.

The threats are evolving. Are we?
Only 14% of firms feel equipped to handle AI-driven threats, while 21% can't track them at all, notes IDC. Skills and tool gaps persist, mainly in detecting adaptive threats and in using GenAI for red teaming (when ethical hackers mimic real attackers to test a firm's cyber defences). Other gaps include lean security teams and too few chief information security officers.

What about laws on AI-led cybercrime?
Most countries are addressing AI-related cybercrime using existing laws and evolving AI frameworks. In India, efforts rely on the IT Act, the Indian Computer Emergency Response Team, cyber forensics labs, global ties, and the Indian Cybercrime Coordination Centre under the Union home ministry, which oversees a cybercrime portal logging 6,000 cases daily. The draft Digital India Act may tackle AI misuse. While several states are forming AI task forces, a national AI cybersecurity framework may also be needed.

How to build cyber defence for AI threats?
Evolving AI threats call for AI-savvy governance, regular training, and simulations. Firms must adopt an 'AI vs AI' defence: train staff on phishing and deepfakes, enforce Zero Trust (every access request must be verified) and multi-factor authentication, and conduct GenAI red-team drills. Airtel, for instance, now uses AI to block spam and scam links in real time; Darktrace uses self-learning AI to detect threats without prior data. Cyber insurance must also cover reputational and regulatory risks.

Travel + Leisure
12-06-2025
- Business
- Travel + Leisure
This Budget Airline Is Canceling All U.S. Flights—What Travelers Should Know
It's the final boarding call for U.S. flights from a popular low-cost airline. Iceland-based Play Airlines recently announced it will stop operations to and from the United States, and the rest of North America, this fall. 'All flights to North America cease as of October 2025,' the airline confirmed in a statement on its website.

The airline first launched flights to the U.S. in 2021 and currently operates routes from Baltimore, Boston, and New York to Reykjavik, Iceland. Once in Iceland, travelers could connect to a variety of European destinations including Berlin, Copenhagen, Dublin, London, and Porto. Although October is the announced end date for operations, the airline has already stopped selling tickets for travel from New York to Iceland after Sept. 1, 2025. Tickets on that route for travel on Sept. 1 are currently going for as little as €174 one-way (approximately $201).

While the airline operates flights out of New York, it does not use the area's main airports, LaGuardia Airport (LGA), John F. Kennedy International Airport (JFK), or Newark Liberty International Airport (EWR). Instead, it uses New York Stewart International Airport (SWF) in New Windsor, New York, approximately 77 miles north of New York City. Although that is a significant distance from the city, the airport often offers discounted fares and a regular shuttle service.

A representative for the airline told Travel + Leisure that Play would contact all affected passengers to modify trips or issue refunds as needed. Beyond ending its North America flights, Play will also restructure, switching from its existing Iceland-based Air Operator Certificate to a Maltese-based one. The airline will also delist from the stock exchange, fly to fewer destinations, and lease aircraft to other operators.

Play's decision to end U.S. flights comes at a time when other airlines have reduced routes or shut down. Silver Airways, for example, a regional airline operating flights throughout the Bahamas, the Caribbean, and Florida, recently announced a sudden shutdown as well.
Yahoo
12-06-2025
- Business
- Yahoo
Investment CEO Tells Convention Audience That 60 Percent of Them Will Be Unemployed Next Year Due to AI
Although hundreds of billions of dollars have been poured into AI development, nearly 75 percent of businesses have failed to see the return on investment promised to them. The hyped-up tech is notoriously buggy and in some ways now actually getting worse, with project failure rates on the rise. Despite staring into the maw of a colossal money pit, tech CEOs are doubling down, announcing plans to increase spending on AI development and going as far as laying off armies of workers to cut expenditures.

And while some investors footing the bill for big tech's AI bacchanalia are starting to wonder when they'll see cash start trickling back into their pockets, private equity billionaire Robert Smith isn't one of them. Speaking at the SuperReturn conference in Berlin last week, Smith told a crowd of 5,500 of his fellow ultrarich investors that at least 60 percent of them would be out on the street within a year thanks to the power of AI.

"We think that next year, 40 percent of the people at this conference will have an AI agent and the remaining 60 percent will be looking for work," Smith lectured. "There are 1 billion knowledge workers on the planet today and all of those jobs will change. I'm not saying they'll all go away, but they will all change."

"You will have hyperproductive people in organizations, and you will have people who will need to find other things to do," the investor ominously intoned.

Smith was speaking primarily about "AI agents," a vague sales term that mostly seems to mean "a large language model that can complete tasks on its own." OpenAI's Operator, for example, rolled out earlier this year, is an agent meant to complete tasks in a web browser, while the company's Deep Research tool was pitched as compiling material from across the web into detailed analytical reports.

There's only one issue with the billionaire's prediction: AI agents so far remain absolutely awful at doing all but the simplest tasks, and there's little indication the industry is about to rapidly revolutionize their potential anytime soon. (OpenAI's Operator is no exception, often conflating internet rumor with scholarly fact.) Meanwhile, in the real world, a growing number of businesses that rushed to replace workers with AI agents, like the financial startup Klarna, have come to regret the decision as it blows up in their faces.

It doesn't take an AI agent to scrape together another explanation for Smith's claim. His private equity fund, Vista Equity Partners, is among the largest in the world and deals almost exclusively in software and tech. Smith has a cozy relationship with OpenAI CEO Sam Altman and just raised $20 billion, his largest fund to date, for AI spending. Now responsible for billions of dollars in investments tied to a disappointing AI industry, it's really just a matter of time before Smith's claims either pay out or the chickens come home to roost.

Yahoo
11-06-2025
- Business
- Yahoo
Sam Altman thinks AI will have 'novel insights' next year
In a new essay published Tuesday called "The Gentle Singularity," OpenAI CEO Sam Altman shared his latest vision for how AI will change the human experience over the next 15 years. The essay is a classic example of Altman's futurism: hyping up the promise of AGI, and arguing that his company is quite close to the feat, while simultaneously downplaying its arrival. The OpenAI CEO frequently publishes essays of this nature, cleanly laying out a future in which AGI disrupts our modern conception of work, energy, and the social contract. But often, Altman's essays contain hints about what OpenAI is working on next.

At one point in the essay, Altman claimed that next year, in 2026, the world will "likely see the arrival of [AI] systems that can figure out novel insights." While this is somewhat vague, OpenAI executives have recently indicated that the company is focused on getting AI models to come up with new, interesting ideas about the world. When announcing OpenAI's o3 and o4-mini AI reasoning models in April, co-founder and President Greg Brockman said these were the first models that scientists had used to generate new, helpful ideas. Altman's blog post suggests that in the coming year, OpenAI itself may ramp up its efforts to develop AI that can generate novel insights.

OpenAI certainly wouldn't be the only company focused on this effort. Several of OpenAI's competitors have shifted their focus to training AI models that can help scientists come up with new hypotheses, and thus novel discoveries about the world. In May, Google released a paper on AlphaEvolve, an AI coding agent that the company claims has generated novel approaches to complex math problems. FutureHouse, a startup backed by former Google CEO Eric Schmidt, claims its AI agent tool has made a genuine scientific discovery. Also in May, Anthropic launched a program to support scientific research. If successful, these companies could automate a key part of the scientific process and potentially break into massive industries with science at their core, such as drug discovery and materials science.

This wouldn't be the first time Altman has tipped his hat about OpenAI's plans in a blog post. In January, Altman wrote another post suggesting that 2025 would be the year of agents. His company then proceeded to release its first three AI agents: Operator, Deep Research, and Codex.

But getting AI systems to generate novel insights may be harder than making them agentic. The broader scientific community remains somewhat skeptical of AI's ability to generate genuinely original insights. Earlier this year, Hugging Face's Chief Science Officer Thomas Wolf wrote an essay arguing that modern AI systems cannot ask great questions, which is key to any great scientific breakthrough. Kenneth Stanley, a former OpenAI research lead, also previously told TechCrunch that today's AI models cannot generate novel hypotheses. Stanley is now building out a team at Lila Sciences, a startup that raised $200 million to create an AI-powered laboratory specifically focused on getting AI models to come up with better hypotheses. This is a difficult problem, according to Stanley, because it involves giving AI models a sense for what is creative and interesting.

Whether OpenAI truly creates an AI model capable of producing novel insights remains to be seen. Still, Altman's essay may feature something familiar: a preview of where OpenAI is likely headed next. This article originally appeared on TechCrunch.