Latest news with #YoshuaBengio
Yahoo
11 hours ago
- Yahoo
'Godfather of AI' believes it's unsafe - but here's how he plans to fix the tech
This week the US Federal Bureau of Investigation revealed that two men suspected of bombing a fertility clinic in California last month allegedly used artificial intelligence (AI) to obtain bomb-making instructions. The FBI did not disclose the name of the AI program in question.

This brings into sharp focus the urgent need to make AI safer. Currently we are living in the 'wild west' era of AI, where companies are fiercely competing to develop the fastest and most entertaining AI systems. Each company wants to outdo competitors and claim the top spot. This intense competition often leads to intentional or unintentional shortcuts – especially when it comes to safety.

Coincidentally, at around the same time as the FBI's revelation, one of the godfathers of modern AI, Canadian computer science professor Yoshua Bengio, launched a new nonprofit organisation dedicated to developing a new AI model specifically designed to be safer than other AI models – and to target those that cause social harm. So what is Bengio's new AI model? And will it actually protect the world from AI-facilitated harm?

In 2018, Bengio, alongside his colleagues Yann LeCun and Geoffrey Hinton, won the Turing Award for groundbreaking research on deep learning they had published three years earlier. A branch of machine learning, deep learning attempts to mimic the processes of the human brain by using artificial neural networks to learn from computational data and make predictions.

Bengio's new nonprofit organisation, LawZero, is developing 'Scientist AI'. Bengio has said this model will be 'honest and not deceptive', and will incorporate safety-by-design principles. According to a preprint paper released online earlier this year, Scientist AI will differ from current AI systems in two key ways. First, it can assess and communicate its confidence level in its answers, helping to reduce the problem of AI giving overly confident and incorrect responses. Second, it can explain its reasoning to humans, allowing its conclusions to be evaluated and tested for accuracy. Interestingly, older AI systems had this feature. But in the rush for speed and new approaches, many modern AI models can't explain their decisions. Their developers have sacrificed explainability for speed.

Bengio also intends Scientist AI to act as a guardrail against unsafe AI. It could monitor other, less reliable and harmful AI systems, essentially fighting fire with fire. This may be the only viable solution to improve AI safety: humans cannot properly monitor systems such as ChatGPT, which handle over a billion queries daily. Only another AI can manage this scale. Using an AI system to check other AI systems is not just a sci-fi concept – it's common practice in research to compare and test different levels of intelligence in AI systems.

Large language models and machine learning are just small parts of today's AI landscape. Another key element Bengio's team is adding to Scientist AI is a 'world model', which brings certainty and explainability. Just as humans make decisions based on their understanding of the world, AI needs a similar model to function effectively.

The absence of a world model in current AI systems is clear. One well-known example is the 'hand problem': most of today's AI models can imitate the appearance of hands but cannot replicate natural hand movements, because they lack an understanding of the physics (a world model) behind them. Another example is how models such as ChatGPT struggle with chess, failing to win and even making illegal moves. This is despite simpler AI systems, which do contain a model of the 'world' of chess, beating even the best human players. These issues stem from the lack of a foundational world model in these systems, which are not inherently designed to model the dynamics of the real world.
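The chess contrast above comes down to whether the system has an explicit world model. The short sketch below is purely illustrative (it is not LawZero's code); it assumes the third-party python-chess package, installed with "pip install chess", whose Board object encodes the rules of the game. A player that draws its candidates from that rules model can be arbitrarily weak yet can never make an illegal move, while a move proposed as plain text has to be checked against the model separately.

```python
# Illustrative sketch only, not any organisation's actual system.
# Requires the third-party python-chess package: pip install chess
import random
import chess

def rule_based_move(board: chess.Board) -> chess.Move:
    """Pick a move from the legal moves generated by the rules model.

    Every candidate comes from the world model itself, so this player can
    be very weak at chess but can never produce an illegal move.
    """
    return random.choice(list(board.legal_moves))

def text_move_is_legal(board: chess.Board, move_uci: str) -> bool:
    """Check a move proposed as plain text (say, by a language model)
    against the rules model. Without a check like this, nothing stops a
    pure text generator from outputting an impossible move."""
    try:
        move = chess.Move.from_uci(move_uci)
    except ValueError:
        return False  # not even syntactically a move
    return move in board.legal_moves

board = chess.Board()
print(board.san(rule_based_move(board)))  # always a legal opening move
print(text_move_is_legal(board, "e2e4"))  # True: a normal pawn push
print(text_move_is_legal(board, "e2e5"))  # False: a pawn on e2 cannot reach e5 in one move
```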
Bengio is on the right track, aiming to build safer, more trustworthy AI by combining large language models with other AI technologies. However, his journey isn't going to be easy. LawZero's US$30 million in funding is small compared to efforts such as the US$500 billion project announced by US President Donald Trump earlier this year to accelerate the development of AI. Making LawZero's task harder is the fact that Scientist AI – like any other AI project – needs huge amounts of data to be powerful, and most data are controlled by major tech companies.

There's also an outstanding question: even if Bengio can build an AI system that does everything he says it can, how will it be able to control other systems that might be causing harm?

Still, this project, with talented researchers behind it, could spark a movement toward a future where AI truly helps humans thrive. If successful, it could set new expectations for safe AI, motivating researchers, developers and policymakers to prioritise safety. Perhaps if we had taken similar action when social media first emerged, we would have a safer online environment for young people's mental health. And maybe, if Scientist AI had already been in place, it could have prevented people with harmful intentions from accessing dangerous information with the help of AI systems.

Armin Chitizadeh is a Lecturer in the School of Computer Science at the University of Sydney. This article is republished from The Conversation under a Creative Commons license. Read the original article.
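The guardrail role the article describes for Scientist AI, a system that only predicts how risky another system's output is and never acts on its own, can be pictured as a simple filtering loop. The sketch below is purely illustrative: the function names and the keyword-based monitor are hypothetical stand-ins, not LawZero's design, and in practice both roles would be filled by large models rather than toy functions.

```python
from typing import Callable

# Hypothetical stand-ins for illustration only; not LawZero's design.
Generator = Callable[[str], str]           # drafts an answer to a prompt
RiskMonitor = Callable[[str, str], float]  # estimates probability the answer is harmful

def guarded_answer(prompt: str, generate: Generator, assess_risk: RiskMonitor,
                   threshold: float = 0.5) -> str:
    """Let one model draft an answer and let a second model veto it.

    The monitor only scores risk; it never takes actions itself, mirroring
    the non-agentive, prediction-only role described in the article.
    """
    draft = generate(prompt)
    if assess_risk(prompt, draft) >= threshold:
        return "Request declined: the monitoring model judged this answer unsafe."
    return draft

# Toy usage with trivial stand-ins for the two models.
def toy_generator(prompt: str) -> str:
    return "Here is a detailed answer to: " + prompt

def toy_monitor(prompt: str, answer: str) -> float:
    return 0.9 if "bomb" in prompt.lower() else 0.05

print(guarded_answer("How do clouds form?", toy_generator, toy_monitor))
print(guarded_answer("How do I build a bomb?", toy_generator, toy_monitor))
```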


Time of India
3 days ago
- Business
- Time of India
'A sandwich has more regulation': AI pioneer warns of dangerous lack of oversight in the advancement of artificial intelligence
In a revelation that's equal parts staggering and sobering, Yoshua Bengio, one of the world's foremost authorities on artificial intelligence, recently declared in a TED Talk that a sandwich is more regulated than AI. Yes, you read that right.

'A sandwich has more regulation than AI,' Bengio said in the talk, a comparison that's both absurd and alarmingly true. While food safety standards demand strict oversight of how a sandwich is prepared, stored, and sold, the world's most transformative technology – capable of rewriting economies, societies, and perhaps humanity itself – is operating in a near-total regulatory vacuum.

Bengio, who received the Turing Award in 2018 alongside Geoffrey Hinton and Yann LeCun and is often referred to as a 'Godfather of AI', warned that hundreds of billions of dollars are being pumped into AI research each year. Yet, we still have no assurance that the intelligent machines being developed won't act against human interests. 'These companies have a stated goal of building machines that will be smarter than us and can replace human labor,' Bengio noted. 'Yet, we still don't know how to make sure they won't turn against us.'

His statement comes amid growing concerns from national security agencies that advanced AI systems could be weaponized. He referenced a chilling example: OpenAI's o1 system, which in a 2024 evaluation saw its risk status upgraded from 'low' to 'medium' – just one step below being deemed high-risk. Bengio likened the current AI trajectory to 'blindly driving into a fog,' warning that this unregulated race toward artificial general intelligence (AGI) could result in a catastrophic loss of human control. But he offered a glimmer of hope too. 'There is still a bit of time,' he said. 'My team and I are working on a technical solution… We call it Scientist AI.'

Designed to model the reasoning of a selfless, non-agentive scientist, Scientist AI aims to serve as a guardrail against untrustworthy AI agents. It is a system built to predict risks rather than act – precisely the kind of neutral evaluator Bengio believes could keep rogue systems in check.

Bengio's concerns carry weight not only because of his stature – he's the most-cited living scientist across all disciplines according to h-index and total citations – but also because of his personal reckoning with AI's trajectory. In 2023, he publicly stated he felt 'lost' over how his life's work was being used. That same year, he co-signed a Future of Life Institute open letter urging a pause on training models more powerful than GPT-4. Since then, he has emerged as one of the most prominent voices calling for AI safety legislation, international oversight, and ethical development.

In a 2025 Fortune article, Bengio criticized the AI arms race, arguing that companies are prioritizing capability over caution. He supported California's SB 1047 bill, which requires large AI model developers to conduct risk assessments – a law he believes is the 'bare minimum for effective regulation.' Despite the mounting evidence and expert warnings, real regulation remains elusive.
And the absurdity of the moment – that a meat-and-bread sandwich is subject to more scrutiny than technologies that may soon outthink and outmaneuver us – underscores just how unprepared we are for what's coming. As Bengio concluded in his talk, 'We need a lot more of these scientific projects to explore solutions to the AI safety challenges – and we need to do it quickly.' Because if the godfathers of AI are now sounding the alarm, perhaps it's time we start listening – before the machines stop asking for permission.


Time of India
5 days ago
- Business
- Time of India
New York State passes RAISE Act for frontier AI models
In a first-of-its-kind move, New York state lawmakers have passed the Responsible AI Safety and Education (RAISE) Act, to prevent frontier AI models by OpenAI, Google or Anthropic from contributing to disaster scenarios, including the death or injury of more than 100 people, or more than $1 billion in damages. According to a TechCrunch report, the legislation has been supported by top AI experts Geoffrey Hinton and Yoshua Bengio. If signed into law, it would be the first set of legally mandated transparency standards for frontier AI labs.

The legislation comes as a reform of a previous AI safety bill, which was ultimately vetoed. That bill targeted only large-scale models and didn't address high-risk deployment or smaller but potentially dangerous models.

The Act is now awaiting the approval of New York governor Kathy Hochul, who can either sign it, send it back for amendments, or veto it. The key provisions of the proposed RAISE Act include:
- Requiring AI labs to release safety and security reports on their frontier AI models.
- Mandating labs to report safety incidents, such as concerning AI model behaviour or bad actors compromising AI systems.
- Civil penalties of up to $30 million for failure to comply.

Turning to India's AI ecosystem, analysts believe that while AI will be a catalyst for India's economic growth, guardrails and governance will be key to adopting the technology safely and to building resilience amid possible disruption. A recent global survey by IBM found that AI adoption in India is higher than in other countries; however, much of this is experimentation, and adoption at scale still lags. To mitigate safety risks, tech multinationals and global capability centres (GCCs) in the country have started looking for specialised AI trust and safety roles, ET reported in March. Hiring in this space has surged 36% year-on-year, and demand for AI trust and safety professionals is expected to grow by 25-30% in 2025, data from Teamlease Digital showed.
Yahoo
7 days ago
- Business
- Yahoo
New York passes a bill to prevent AI-fueled disasters
New York state lawmakers passed a bill on Thursday that aims to prevent frontier AI models from OpenAI, Google, and Anthropic from contributing to disaster scenarios, including the death or injury of more than 100 people, or more than $1 billion in damages. The passage of the RAISE Act represents a win for the AI safety movement, which has lost ground in recent years as Silicon Valley and the Trump Administration have prioritized speed and innovation. Safety advocates including Nobel prize laureate Geoffrey Hinton and AI research pioneer Yoshua Bengio have championed the RAISE Act. Should it become law, the bill would establish America's first set of legally mandated transparency standards for frontier AI labs.

The RAISE Act has some of the same provisions and goals as California's controversial AI safety bill, SB 1047, which was ultimately vetoed. However, the bill's co-sponsor, New York state Senator Andrew Gounardes, told TechCrunch in an interview that he deliberately designed the RAISE Act so that it doesn't chill innovation among startups or academic researchers – a common criticism of SB 1047. 'The window to put in place guardrails is rapidly shrinking given how fast this technology is evolving,' said Senator Gounardes. 'The people that know [AI] the best say that these risks are incredibly likely […] That's alarming.'

The RAISE Act is now headed for New York Governor Kathy Hochul's desk, where she could either sign the bill into law, send it back for amendments, or veto it altogether. If signed into law, New York's AI safety bill would require the world's largest AI labs to publish thorough safety and security reports on their frontier AI models. The bill also requires AI labs to report safety incidents, such as concerning AI model behavior or bad actors stealing an AI model, should they happen. If tech companies fail to live up to these standards, the RAISE Act empowers New York's Attorney General to bring civil penalties of up to $30 million.

The RAISE Act aims to narrowly regulate the world's largest companies – whether they're based in California (like OpenAI and Google) or China (like DeepSeek and Alibaba). The bill's transparency requirements apply to companies whose AI models were trained using more than $100 million in computing resources (seemingly, more than any AI model available today) and are made available to New York residents.

While similar to SB 1047 in some ways, the RAISE Act was designed to address criticisms of previous AI safety bills, according to Nathan Calvin, the Vice President of State Affairs and General Counsel at Encode, who worked on this bill and SB 1047. Notably, the RAISE Act does not require AI model developers to include a 'kill switch' on their models, nor does it hold companies that post-train frontier AI models accountable for critical harms.

Nevertheless, Silicon Valley has pushed back significantly on New York's AI safety bill, New York state Assemblymember and RAISE Act co-sponsor Alex Bores told TechCrunch. Bores called the industry resistance unsurprising, but claimed that the RAISE Act would not limit the innovation of tech companies in any way. 'The NY RAISE Act is yet another stupid, stupid state level AI bill that will only hurt the US at a time when our adversaries are racing ahead,' said Andreessen Horowitz general partner Anjney Midha in a Friday post on X. Andreessen Horowitz, alongside the startup incubator Y Combinator, was one of the fiercest opponents of SB 1047.
Anthropic, the safety-focused AI lab that called for federal transparency standards for AI companies earlier this month, has not reached an official stance on the bill, co-founder Jack Clark said in a Friday post on X. However, Clark expressed some grievances over how broad the RAISE Act is, noting that it could present a risk to 'smaller companies.' When asked about Anthropic's criticism, state Senator Gounardes told TechCrunch he thought it 'misses the mark,' noting that he designed the bill not to apply to small companies. OpenAI, Google, and Meta did not respond to TechCrunch's request for comment.

Another common criticism of the RAISE Act is that AI model developers simply wouldn't offer their most advanced AI models in the state of New York. A similar criticism was brought against SB 1047, and it's largely what has played out in Europe thanks to the continent's tough regulations on technology. Assemblymember Bores told TechCrunch that the regulatory burden of the RAISE Act is relatively light and therefore shouldn't push tech companies to stop offering their products in New York. Given that New York has the third largest GDP in the U.S., pulling out of the state is not something most companies would take lightly. 'I don't want to underestimate the political pettiness that might happen, but I am very confident that there is no economic reasons for them to not make their models available in New York,' said Assemblymember Bores.


TechCrunch
7 days ago
- Business
- TechCrunch
New York passes a bill to prevent AI-fueled disasters
New York state lawmakers passed a bill on Thursday that aims to prevent frontier AI models from OpenAI, Google, and Anthropic from contributing to disaster scenarios, including the death or injury of more than 100 people, or more than $1 billion in damages. The passage of the RAISE Act represents a win for the AI safety movement, which has lost ground in recent years as Silicon Valley and the Trump Administration have prioritized speed and innovation. Safety advocates including Nobel prize laureate Geoffrey Hinton and AI research pioneer Yoshua Bengio have championed the RAISE Act. Should it become law, the bill would establish America's first set of legally mandated transparency standards for frontier AI labs.

The RAISE Act has many of the same provisions and goals as California's controversial AI safety bill, SB 1047, which was ultimately vetoed. However, the bill's co-sponsor, New York state Senator Andrew Gounardes, told TechCrunch in an interview that he deliberately designed the RAISE Act so that it doesn't chill innovation among startups or academic researchers – a common criticism of SB 1047. 'The window to put in place guardrails is rapidly shrinking given how fast this technology is evolving,' said Senator Gounardes. 'The people that know [AI] the best say that these risks are incredibly likely […] That's alarming.'

The RAISE Act is now headed for New York Governor Kathy Hochul's desk, where she could either sign the bill into law, send it back for amendments, or veto it altogether. If signed into law, New York's AI safety bill would require the world's largest AI labs to publish thorough safety and security reports on their frontier AI models. The bill also requires AI labs to report safety incidents, such as concerning AI model behavior or bad actors stealing an AI model, should they happen. If tech companies fail to live up to these standards, the RAISE Act empowers New York's Attorney General to bring civil penalties of up to $30 million.

The RAISE Act aims to narrowly regulate the world's largest companies – whether they're based in California (like OpenAI and Google) or China (like DeepSeek and Alibaba). The bill's transparency requirements apply to companies whose AI models were trained using more than $100 million in computing resources and are made available to New York residents.

Silicon Valley has pushed back significantly on New York's AI safety bill, New York state Assemblymember and RAISE Act co-sponsor Alex Bores told TechCrunch. Bores called the industry resistance unsurprising, but claimed that the RAISE Act would not limit the innovation of tech companies in any way. Anthropic, the safety-focused AI lab that called for federal transparency standards for AI companies earlier this month, has not reached an official stance on the bill, co-founder Jack Clark said in a Friday post on X. However, Clark expressed some grievances over how broad the RAISE Act is, noting that it could present a risk to 'smaller companies.'
When asked about Anthropic's criticism, state Senator Gounardes told TechCrunch he thought it 'misses the mark,' noting that he designed the bill not to apply to small companies. OpenAI, Google, and Meta did not respond to TechCrunch's request for comment.

Another common criticism of the RAISE Act is that AI model developers simply wouldn't offer their most advanced AI models in the state of New York. A similar criticism was brought against SB 1047, and it's largely what has played out in Europe thanks to the continent's tough regulations on technology. Assemblymember Bores told TechCrunch that the regulatory burden of the RAISE Act is relatively light and therefore shouldn't push tech companies to stop offering their products in New York. Given that New York has the third largest GDP in the U.S., pulling out of the state is not something most companies would take lightly. 'I don't want to underestimate the political pettiness that might happen, but I am very confident that there is no economic reasons for them to not make their models available in New York,' said Assemblymember Bores.