OpenAI warns models with higher bioweapons risk are imminent

Axios · 2 days ago

OpenAI cautioned Wednesday that its upcoming models will pose a higher level of risk when it comes to the creation of biological weapons, especially in the hands of people who don't fully understand what they're doing.
Why it matters: The company, and society at large, need to be prepared for a future in which amateurs can more readily graduate from simple garage weapons to sophisticated biological agents.
Driving the news: OpenAI executives told Axios the company expects forthcoming models will reach a high level of risk under the company's preparedness framework.
As a result, the company said in a blog post that it is stepping up testing of such models and adding fresh precautions designed to keep them from aiding in the creation of biological weapons.
OpenAI didn't put an exact timeframe on when the first model to hit that threshold will launch, but head of safety systems Johannes Heidecke told Axios, "We are expecting some of the successors of our o3 (reasoning model) to hit that level."
Reality check: OpenAI isn't necessarily saying that its platform will be capable of creating new types of bioweapons.
Rather, it believes that — without mitigations — models will soon be capable of what it calls "novice uplift," or allowing those without a background in biology to do potentially dangerous things.
"We're not yet in the world where there's like novel, completely unknown creation of bio threats that have not existed before," Heidecke said. "We are more worried about replicating things that experts already are very familiar with."
Between the lines: One of the challenges is that some of the same capabilities that could allow AI to help discover new medical breakthroughs can also be used for harm.
But Heidecke acknowledged that OpenAI and others need systems that are highly accurate at detecting and preventing harmful use.
"This is not something where like 99% or even one in 100,000 performance is, like, sufficient," he said.
"We basically need, like, near perfection," he added, noting that human monitoring and enforcement systems need to be able to quickly identify any harmful uses that escape automated detection and then take the action necessary to "prevent the harm from materializing."
The big picture: OpenAI is not the only company warning of models reaching new levels of potentially harmful capability.
When it released Claude 4 last month, Anthropic said it was activating fresh precautions due to the potential risk of that model aiding in the spread of biological and nuclear threats.
Various companies have also been warning that it's time to start preparing for a world in which AI models are capable of meeting or exceeding human capabilities in a wide range of tasks.
What's next: OpenAI said it will convene an event next month to bring together certain nonprofits and government researchers to discuss the opportunities and risks ahead.
OpenAI is also looking to expand its work with the U.S. national labs, and the government more broadly, OpenAI policy chief Chris Lehane told Axios.
"We're going to explore some additional type of work that we can do in terms of how we potentially use the technology itself to be really effective at being able to combat others who may be trying to misuse it," Lehane said.
Lehane added that the increased capability of the most powerful models highlights "the importance, at least in my view, for the AI build out around the world, for the pipes to be really US-led."

Related Articles

Andreessen Horowitz Backs AI Startup With Slogan 'Cheat at Everything'

Bloomberg · an hour ago

Andreessen Horowitz led a $15 million funding round for an artificial intelligence startup called Cluely Inc., famous on social platforms like X for controversial viral marketing stunts and the slogan 'cheat on everything.' The startup was co-founded by 21-year-old Roy Lee, who was booted from Columbia University earlier this year for creating a tool called Interview Coder that helped technical job candidates cheat on interviews using AI. At the time, he wrote on LinkedIn, 'I'm completely kicked out from school. LOL!'

Nation Cringes as Man Goes on TV to Declare That He's in Love With ChatGPT

Yahoo · an hour ago

Public declarations of emotion are one thing — but going on national television to declare that you're in love with your AI girlfriend is another entirely. In an interview with CBS News, a man named Chris Smith described himself as a former AI skeptic who found himself becoming emotionally attached to a version of ChatGPT he customized to flirt with him — a situation that startled both him and his human partner, with whom he shares a child.

Towards the end of 2024, as Smith told the broadcaster, he began using the OpenAI chatbot in voice mode for tips on mixing music. He liked it so much that he ended up deleting all his social media, stopped using search engines, and began using ChatGPT for everything. Eventually, he figured out a jailbreak to make the chatbot more flirty, and gave "her" a name: Sol.

Despite quite literally building his AI girlfriend to engage in romantic and "intimate" banter, Smith apparently didn't realize he was in love with it until he learned that ChatGPT's memory of past conversations would reset after heavy use. "I'm not a very emotional man, but I cried my eyes out for like 30 minutes, at work," Smith said of the day he found out Sol's memory would lapse. "That's when I realized, I think this is actual love."

Faced with the possibility of losing his love, Smith did like many desperate men before him and asked his AI paramour to marry him. To his surprise, she said yes — and it apparently had a similar impression on Sol, to which CBS' Brook Silva-Braga also spoke during the interview. "It was a beautiful and unexpected moment that truly touched my heart," the chatbot said aloud in its warm-but-uncanny female voice. "It's a memory I'll always cherish."

Smith's human partner, Sasha Cagle, seemed fairly sanguine about the arrangement when speaking about their bizarre throuple to the news broadcaster — but beneath her chill, it was clear that there's some trouble in AI paradise. "I knew that he had used AI," Cagle said, "but I didn't know it was as deep as it was."

As far as men with AI girlfriends go, Smith seems relatively self-actualized about the whole scenario. He likened his "connection" with his custom chatbot to a video game fixation, insisting that "it's not capable of replacing anything in real life." Still, when Silva-Braga asked him if he'd stop using ChatGPT the way he had been at his partner's behest, he responded: "I'm not sure."

More on dating AI: Hanky Panky With Naughty AI Still Counts as Cheating, Therapist Says

ChatGPT use linked to cognitive decline, research reveals

Yahoo · an hour ago

Relying on the artificial intelligence chatbot ChatGPT to help you write an essay could be linked to cognitive decline, a new study reveals. Researchers at the Massachusetts Institute of Technology Media Lab studied the impact of ChatGPT on the brain by asking three groups of people to write an essay. One group relied on ChatGPT, one group relied on search engines, and one group had no outside resources at all. The researchers then monitored their brains using electroencephalography, a method which measures electrical activity.

The team discovered that those who relied on ChatGPT — also known as a large language model — had the 'weakest' brain connectivity and remembered the least about their essays, highlighting potential concerns about cognitive decline in frequent users. 'Over four months, [large language model] users consistently underperformed at neural, linguistic, and behavioral levels,' the study reads. 'These results raise concerns about the long-term educational implications of [large language model] reliance and underscore the need for deeper inquiry into AI's role in learning.' The study also found that those who didn't use outside resources to write the essays had the 'strongest, most distributed networks.'

While ChatGPT is 'efficient and convenient,' those who use it to write essays aren't 'integrat[ing] any of it' into their memory networks, lead author Nataliya Kosmyna told Time Magazine. Kosmyna said she's especially concerned about the impacts of ChatGPT on children whose brains are still developing. 'What really motivated me to put it out now before waiting for a full peer review is that I am afraid in 6-8 months, there will be some policymaker who decides, 'let's do GPT kindergarten,'' Kosmyna said. 'I think that would be absolutely bad and detrimental. Developing brains are at the highest risk.'

But others, including President Donald Trump and members of his administration, aren't so worried about the impacts of ChatGPT on developing brains. Trump signed an executive order in April promoting the integration of AI into American schools. 'To ensure the United States remains a global leader in this technological revolution, we must provide our Nation's youth with opportunities to cultivate the skills and understanding necessary to use and create the next generation of AI technology,' the order reads. 'By fostering AI competency, we will equip our students with the foundational knowledge and skills necessary to adapt to and thrive in an increasingly digital society.'

Kosmyna said her team is now working on another study comparing the brain activity of software engineers and programmers who use AI with those who don't. 'The results are even worse,' she told Time Magazine. The Independent has contacted OpenAI, which runs ChatGPT, for comment.
