Corpse flower countdown: Stink Floyd nears bloom

Axios, May 14, 2025

Stink Floyd, the corpse flower at Iowa State's Reiman Gardens, is this close to unleashing its signature stench.
The big picture: When? "That's the million-dollar question!" Reiman spokesperson Andrew Gogerty tells Axios.
"We know it's close, but that's all we know."
The intrigue: Corpse flowers, endangered plants originating from Sumatra, are the drama queens of the plant world — taking up to a decade to bloom for the first time and remaining unpredictable.

Related Articles

OpenAI warns models with higher bioweapons risk are imminent

Axios, 5 days ago

OpenAI cautioned Wednesday that upcoming models will reach a higher level of risk when it comes to the creation of biological weapons, especially in the hands of people who don't really understand what they're doing.

Why it matters: The company, and society at large, need to be prepared for a future in which amateurs can more readily graduate from simple garage weapons to sophisticated agents.

Driving the news: OpenAI executives told Axios the company expects forthcoming models to reach a high level of risk under its preparedness framework. As a result, the company said in a blog post, it is stepping up the testing of such models and adding fresh precautions designed to keep them from aiding in the creation of biological weapons.

OpenAI didn't put an exact timeframe on when the first model to hit that threshold will launch, but head of safety systems Johannes Heidecke told Axios, "We are expecting some of the successors of our o3 (reasoning model) to hit that level."

Reality check: OpenAI isn't necessarily saying its platform will be capable of creating new types of bioweapons. Rather, it believes that, without mitigations, models will soon be capable of what it calls "novice uplift": allowing those without a background in biology to do potentially dangerous things.

"We're not yet in the world where there's like novel, completely unknown creation of bio threats that have not existed before," Heidecke said. "We are more worried about replicating things that experts already are very familiar with."

Between the lines: One of the challenges is that some of the same capabilities that could allow AI to help discover new medical breakthroughs can also be used for harm. Heidecke acknowledged that OpenAI and others need systems that are highly accurate at detecting and preventing harmful use. "This is not something where, like, 99% or even one-in-100,000 performance is sufficient," he said.

"We basically need, like, near perfection," he added, noting that human monitoring and enforcement systems need to be able to quickly identify any harmful uses that escape automated detection and then take the action necessary to "prevent the harm from materializing."

The big picture: OpenAI is not the only company warning of models reaching new levels of potentially harmful capability. When it released Claude 4 last month, Anthropic said it was activating fresh precautions due to the potential risk of that model aiding in the spread of biological and nuclear threats. Various companies have also warned that it's time to start preparing for a world in which AI models can meet or exceed human capabilities across a wide range of tasks.

What's next: OpenAI said it will convene an event next month to bring together certain nonprofits and government researchers to discuss the opportunities and risks ahead. OpenAI is also looking to expand its work with the U.S. national labs, and the government more broadly, policy chief Chris Lehane told Axios.

"We're going to explore some additional type of work that we can do in terms of how we potentially use the technology itself to be really effective at being able to combat others who may be trying to misuse it," Lehane said. He added that the increased capability of the most powerful models highlights "the importance, at least in my view, for the AI build out around the world, for the pipes to be really US-led."

Study: ChatGPT's creativity gap

Axios, 5 days ago

AI can generate a larger volume of creative ideas than any human, but those ideas are too much alike, according to research newly published in Nature Human Behaviour.

Why it matters: AI makers say their tools are "great for brainstorming," but experts find that chatbots produce a more limited range of ideas than a group of humans.

How it works: Study participants were asked to brainstorm product ideas for a toy involving a brick and a fan, using either ChatGPT, their own ideas, or their own ideas combined with web searches. Ninety-four percent of ideas from those who used ChatGPT "shared overlapping concepts." Participants who relied on their own ideas with the help of web searches produced the most "unique concepts," meaning a group of one or more ideas that did not overlap with any other ideas in the set.

Researchers used ChatGPT 3.5 and ChatGPT-4 and reported that while ChatGPT-4 creates more diverse ideas than 3.5, it still falls short ("by a lot") relative to humans.

Case in point: Nine participants using ChatGPT independently named their toy "Build-a-Breeze Castle."

The big picture: Wharton professors Gideon Nave and Christian Terwiesch and Wharton researcher Lennart Meincke found that subjects came up with a broader range of creative ideas when they used their own thoughts and web searches than when they used ChatGPT. Groups that used ChatGPT tended to converge on similar concepts, reducing overall idea diversity.

"We're not talking about diversity as a DEI type of diversity," Terwiesch told Axios. "We're talking about diversity in terms of the ideas being different from each other. Like in biology, we need a diverse ecosystem."

Zoom in: A 2024 study found similar results. Participants were asked to write short fiction with and without ChatGPT, and generative AI-enabled stories were found to be more similar to each other than stories written by humans alone.

Yes, but: ChatGPT can be used as part of the brainstorming process. Terwiesch says idea variance comes from using ChatGPT to generate ideas while also coming up with your own ideas and collecting original ideas from others.

Terwiesch also recommends "chain of thought prompting": asking your chatbot to generate several ideas, but also specifically asking the bot to make those ideas different from each other.

"If I just sit back and let ChatGPT do the work, I'm not taking the full advantage of what this tool has to offer. I can do better than that," Terwiesch told Axios.

A spokesperson from OpenAI shared best practices for prompting ChatGPT, advice from writers on how to use the tool, and a student's guide to writing with ChatGPT.

Amazon boosts Washington's space workforce

Axios, 6 days ago

Aerospace jobs are booming in Washington state, and Amazon is helping some frontline employees trade warehouse gigs for the stars.

Why it matters: Washington is becoming a hub in the commercial space race, and Amazon's education benefits are helping train a new generation of satellite-savvy workers for the company's Project Kuiper and beyond.

By the numbers: Redmond-based companies produce more than half of the satellites in Earth's orbit, according to the Washington State Department of Commerce and the Redmond Space District, a business consortium of local aerospace companies. Statewide, the space sector supports more than 13,000 jobs, generates $4.6 billion in economic activity, and brings in nearly $80 million in annual state taxes, per the state.

The latest: Three of the nine March graduates of Lake Washington Institute of Technology's new aerospace manufacturing and assembly certificate programs were Amazon workers, said company spokesperson Max Gleber. The programs, developed with input from Amazon, are open to the public, but eligible Amazon employees have their tuition fully paid by the company, he said.

What they're saying: Project Kuiper, Amazon's satellite internet initiative, is based in Redmond, and company execs are betting on local talent to help fill the job pipeline, Amazon VP of public policy and community engagement Brian Huseman told Axios. "Washington state is becoming the Silicon Valley of space, and we want that to continue," he said. Certificate programs like those at LWTech help residents "learn those skills and get those jobs."

Catch up quick: Project Kuiper is Amazon's plan to launch thousands of low-Earth-orbit satellites to expand global broadband access.

Zoom in: Dezmond Hernandez, 24, spent about three years in Amazon fulfillment centers, earning around $15 an hour, before applying for an inventory job with Project Kuiper to get his foot in the door, he told Axios. While working full time, he enrolled in aerospace manufacturing and assembly courses at LWTech.

Now he works at the company's space simulation lab in Redmond, testing satellites in vacuum chambers, reviewing data, and troubleshooting systems. His salary has more than doubled, he said. "It really is life-changing," he told Axios last week. "I always had an interest in space, but I never thought I'd be working on satellites."
