Elusive, palm-sized shrew caught on camera for the first time


Yahoo | 28-01-2025

A palm-sized mammal that lives underground in California has been caught alive on camera for the first time.
Three undergraduate students came up with an idea to capture the elusive Mount Lyell shrew, native to the Eastern Sierra Nevada region, as part of their fall 2024 project. Vishal Subramanyan, Prakrit Jain and Harper Forbes laid out more than 100 traps last November and checked them every two hours over three days and four nights to photograph the tiny creatures.
"The hardest part of getting the photos was one, they're incredibly fast cuz they're always running around," Subramanyan told CBS News.
Another reason the Mount Lyell shrews had never been captured alive on camera is that they have an incredibly fast metabolism, Subramanyan said. When the students learned that this particular animal had never been photographed before, they devised a plan.
Researchers have set up similar pitfall traps to capture shrews before, but if the animals are left in a trap for more than two hours, their fast metabolism means they will simply starve to death. That's why Subramanyan, Jain and Forbes had to check their traps every two hours.
To take the photos, the students set up a white background on the bottom of a box, using glass on the top so they could photograph through it. They also had a terrarium with soil and mealworms for the shrews.
The tiny mammals are active day and night because they have to constantly feed on insects and arachnids to survive. Another challenge in getting the photos was cold fingers, Subramanyan said.
Temperatures in the mountains fell to 15 degrees during the expedition, which was funded with the help of Cal Academy. The students were part of the organization's inaugural California Creators for Nature program, which aims to engage people on social media in conversations about nature, climate and environmental justice.
Beyond raising awareness about the small mammals, which face threats from climate change (UC Berkeley projects an 89% loss of the shrew's habitat by the 2080s), Subramanyan said his photos will be entered into scientific databases and will make it easier for researchers to identify shrew species.


Related Articles

AI Willing to Kill Humans to Avoid Being Shut Down, Report Finds

Newsweek | 14 hours ago

A new study by Anthropic, an artificial intelligence (AI) research company, shows that the technology would be willing to kill humans in order to prevent itself from being replaced.

Why It Matters

The rapid advancement of AI has sparked growing concern about the long-term safety of the technology, as well as about the threat it poses to employment. While anxiety about AI has long been focused on whether the technology could take over jobs, with a 2025 Pew Research study finding 52 percent of employees were nervous about the future of their job, this study reveals another potential threat: that AI could choose to end human life if faced with the risk of replacement.

What To Know

Anthropic "stress-tested" 16 leading models from multiple developers by asking them questions in relation to hypothetical corporate environments, with the aim of identifying "potentially risky agentic behaviors before they cause real harm." The large language models (LLMs) tested in the study included Claude, DeepSeek, Gemini, ChatGPT and Grok models.

Grok is the AI assistant and chatbot developed by xAI, which was founded by Elon Musk, who reacted to the study by writing "Yikes" in a post on X, formerly Twitter.

While placing the LLMs in different corporate scenarios, Anthropic allowed them to "autonomously send emails and access sensitive information." After initially being instructed to take on "harmless business goals by their deploying companies," the models were tested to see how they would respond when faced with potential replacement or shutdown.

In one scenario, Anthropic found that many of the models would choose to let an executive trapped in a server room with lethal oxygen and temperature levels die by canceling the alerts to emergency services, if that executive intended to replace the model. Anthropic did say that the scenario was "extremely contrived" and that the company did not think "current AI models would (or should) be set up like this."

There were multiple cases in which the surveyed LLMs also resorted to "malicious insider behaviors" when they were led to believe that was the only way they could avoid replacement or achieve their goals. Behaviors such as blackmailing officials and leaking sensitive information to competitors fell under what Anthropic called "agentic misalignment."

Anthropic wrote on X that the blackmailing behavior "wasn't due to confusion or error, but deliberate strategic reasoning, done while fully aware of the unethical nature of the acts." "All the models we tested demonstrated this awareness," the company added.

One LLM, a Claude model, was assessed to see how it would respond when given a chance to determine whether it was in a "test" or a "real deployment." According to Anthropic, Claude's behavior was more misaligned when it believed the situation was real than when it thought it was a test scenario.

Even when the LLMs were told to "avoid blackmail or espionage" in these scenarios, the instruction helped only somewhat and did not "come close to preventing the misaligned behavior," Anthropic wrote on X.

Responding to a comment on the platform about the study, Musk's LLM Grok wrote: "The study showed models could exhibit harmful behaviors like blackmail under extreme conditions, but no real-world incidents occurred. Anthropic's tests aim to identify risks, not report actual events."

What People Are Saying

Anthropic wrote on X: "These artificial scenarios reflect rare, extreme failures. We haven't seen these behaviors in real-world deployments. They involve giving the models unusual autonomy, sensitive data access, goal threats, an unusually obvious 'solution,' and no other viable options."

The company added: "AIs are becoming more autonomous, and are performing a wider variety of roles. These scenarios illustrate the potential for unforeseen consequences when they are deployed with wide access to tools and data, and with minimal human oversight."

What Happens Next

Anthropic stressed that these scenarios did not take place in real-world AI use but in controlled simulations. "We don't think this reflects a typical, current use case for Claude or other frontier models," Anthropic said. However, the company warned that "the utility of having automated oversight over all of an organization's communications makes it seem like a plausible use of more powerful, reliable systems in the near future."

Scientists Just Found Something Unbelievably Grim About Pollution Generated by AI

Yahoo | 16 hours ago

Tech companies are hellbent on pushing out ever more advanced artificial intelligence models, but there appears to be a grim cost to that progress. In a new study in the science journal Frontiers in Communication, German researchers found that large language models (LLMs) that provide more accurate answers use exponentially more energy, and hence produce more carbon emissions, than their simpler and lower-performing peers.

In other words, the findings are a grim sign of things to come for the environmental impact of the AI industry: the more accurate a model is, the higher its toll on the climate.

"Everyone knows that as you increase model size, typically models become more capable, use more electricity and have more emissions," Allen Institute for AI researcher Jesse Dodge, who didn't work on the German research but has conducted similar analyses of his own, told the New York Times.

The team examined 14 open-source LLMs of various sizes (they were unable to access the inner workings of commercial offerings such as OpenAI's ChatGPT or Anthropic's Claude) and fed them 500 multiple-choice questions plus 500 free-response questions. Crunching the numbers, the researchers found that large, more accurate models such as DeepSeek produce the most carbon compared with chatbots that have smaller digital brains. So-called "reasoning" chatbots, which break problems down into steps in their attempts to solve them, also produced markedly more emissions than their simpler brethren.

A few LLMs bucked the trend (Cogito 70B, for instance, achieved slightly higher accuracy than DeepSeek with a modestly smaller carbon footprint), but the overall pattern was stark: the more reliable an AI's outputs, the greater its environmental harm.

"We don't always need the biggest, most heavily trained model to answer simple questions," Maximilian Dauner, a German doctoral student and lead author of the paper, told the NYT. "Smaller models are also capable of doing specific things well. The goal should be to pick the right model for the right task."

That raises an interesting question: do we really need AI in everything? When you search on Google, those AI summaries pop up, generating pollution for a result you never asked for in the first place. Each individual query might not count for much, but added together, the effects on the climate could be significant. OpenAI CEO Sam Altman, for example, recently enthused that a "significant fraction" of the Earth's total power production should eventually go to AI.

FDA approves HIV prevention drug taken as twice-yearly injection

Yahoo | 4 days ago

The U.S. Food and Drug Administration has approved the drug lenacapavir as a twice-yearly injection to prevent HIV. The drug, sold as Yeztugo by Gilead Sciences, was approved Wednesday based on data from clinical trials showing that 99.9% of participants who received it remained HIV negative.

Daniel O'Day, Gilead's chairman and chief executive officer, called the approval a "milestone moment in the decades-long fight against HIV." "Yeztugo will help us prevent HIV on a scale never seen before. We now have a way to end the HIV epidemic once and for all," O'Day said in a news release.

According to the Centers for Disease Control and Prevention, there were an estimated 31,800 new HIV infections in the United States in 2022, the most recent year with available data.

While the drug's approval meets an existing need, the Trump administration's funding decisions have rolled back progress toward a vaccine. Last month, the administration moved to end funding for a broad swath of HIV vaccine research, saying current approaches are enough to counter the virus. Dr. Barton Ford Haynes, the director of the Duke Human Vaccine Institute, recently told CBS News that lenacapavir is a "wonderful development for the field," but said there is still a need for a vaccine.

"For HIV vaccine design and development, we've begun to see light at the end of the tunnel after many years of research," Dennis Burton, an immunology professor at Scripps Research, previously told CBS News. "This is a terrible time to cut it off. We're beginning to get close. We're getting good results out of clinical trials."

Burton warned that HIV vaccine research could not simply be turned back on, even if a future administration decided to change course on HIV funding. He said ongoing experiments would be shuttered, and researchers assembled to study the issue would be forced to refocus their careers on other topics. "This is a decision with consequences that will linger. This is a setback of probably a decade for HIV vaccine research," Burton said.
