The Elon Musk DOGE legacy that just won't die

Axios, 13 June 2025

Call it zombie management: Each week, federal workers inside a few agencies still dutifully email a report detailing the five things they did in the previous seven days.
Why it matters: The emails, born from an out-of-nowhere Elon Musk X post, show how hard it can be to undo even the smallest of changes once unleashed on the largest workforce in the U.S.
Catch up quick: One Saturday in February, Musk posted that all federal employees would get an email asking them to explain what they'd accomplished over the last week. Failure to respond more than once, he said, would get you fired.
The Office of Personnel Management quickly followed up with an email to millions of federal workers, giving them until Monday night to respond (minus the firing threat).
The big picture: The request blindsided the White House, including chief of staff Susie Wiles, two senior administration officials tell Axios.
It was the first documented time that White House officials, publicly and privately, resisted Musk and chafed at his style of personnel management.
"To use a phrase Susie might use, she was fit to be tied at Musk," one of the officials said of Wiles' level of annoyance.
Some officials promptly told their staffs to ignore it, starting with FBI director Kash Patel. Other appointees soon followed suit, and the Office of Personnel Management quickly said responding was discretionary.
Zoom in: Like a zombie in a workplace horror movie, these emails live on, a sort of vestigial Muskian management tool.
Employees at OPM are encouraged to send it. NOAA staff must send it as well, said one agency employee.
"We're told to send it every Monday before midnight," says a Social Security employee.
"It takes a while," they said. "I have never gotten a response from anyone."
An employee at the Consumer Financial Protection Bureau, which has been largely gutted by the White House and had most of its work halted, said they hadn't been told to stop sending them, but stopped anyway.
"Got tired of saying I hadn't accomplished anything because we haven't been given any work," they said.
For the record: "Commissioner Bisignano is streamlining the Social Security Administration to deliver more efficient service for American taxpayers," Liz Huston, a White House spokeswoman, said in a statement responding to questions about why the agency is still doing this.
At least for SSA, the emails are a temporary practice until they get a better system in place, an agency official tells Axios.
OPM stands by them too. "This practice is vital to maintain accountability and transparency in employee contributions," says spokeswoman McLaurine Pinover, who says she submits these weekly. "It's an easy way to share my work with leadership."
"The mission of eliminating waste, fraud, and abuse is a part of the DNA of the federal government and will continue under the direction of the President, his cabinet, and agency heads to enhance government efficiency and prioritize responsible stewardship of taxpayer dollars," says White House spokesman Harrison Fields.


Related Articles


The US commemorates 250th anniversary of the 'great American battle,' the Battle of Bunker Hill

an hour ago

NEW YORK -- As the U.S. marks the 250th anniversary of the Battle of Bunker Hill, it might take a moment, or more, to remember why. Start with the very name. "There's something percussive about it: Battle of Bunker Hill," says prize-winning historian Nathaniel Philbrick, whose "Bunker Hill: A City, A Siege, A Revolution" was published in 2013. "What actually happened probably gets hazy for people outside of the Boston area, but it's part of our collective memory and imagination."

"Few 'ordinary' Americans could tell you that Freeman's Farm, or Germantown, or Guilford Court House were battles," says Paul Lockhart, a professor of history at Wright State University and author of a Bunker Hill book, "The Whites of Their Eyes," which came out in 2011. "But they can say that Gettysburg, D-Day, and Bunker Hill were battles." Bunker Hill, Lockhart adds, "is the great American battle, if there is such a thing."

Much of the world looks to the Battles of Lexington and Concord, fought in Massachusetts on April 19, 1775, as the start of the American Revolution. But Philbrick, Lockhart and others cite Bunker Hill and June 17 as the real beginning, the first time British and rebel forces faced off in sustained conflict over a specific piece of territory.

Bunker Hill was an early showcase for two long-running themes in American history: improvisation, and how an inspired band of militias could hold their own against an army of professionals. "It was a horrific bloodletting, and provided the British high command with proof that the Americans were going to be a lot more difficult to subdue than had been hoped," says the Pulitzer Prize-winning historian Rick Atkinson, whose second volume of a planned trilogy on the Revolution, "The Fate of the Day," was published in April.

The battle was born in part out of error: rebels were seeking to hold off a possible British attack by fortifying Bunker Hill, a 110-foot-high (34-meter-high) peak in Charlestown across the Charles River from British-occupied Boston. But for reasons still unclear, they instead armed a smaller and more vulnerable ridge known as Breed's Hill, "within cannon shot of Boston," Philbrick says. "The British felt they had no choice but to attack and seize the American fort."

Abigail Adams, wife of future President John Adams, and son John Quincy Adams, also a future president, were among thousands in the Boston area who looked on from rooftops, steeples and trees as the two sides fought with primal rage. A British officer would write home about the "shocking carnage" left behind, a sight "that never will be erased out of my mind 'till the day of my death."

The rebels were often undisciplined and disorganized, and they were running out of gunpowder. The battle ended with them in retreat, but not before the British had lost more than 200 soldiers and sustained more than 1,000 casualties, compared to some 450 colonial casualties and the destruction of hundreds of homes, businesses and other buildings in Charlestown.

Bunker Hill would become characteristic of so much of the Revolutionary War: a technical defeat that was a victory, because the British needed to win decisively and the rebels needed only not to lose decisively. "Nobody now entertains a doubt but that we are able to cope with the whole force of Great Britain, if we are but willing to exert ourselves," Thomas Jefferson wrote to a friend in early July. "As our enemies have found we can reason like men, now let us show them we can fight like men also."

Anthropic breaks down AI's process — line by line — when it decided to blackmail a fictional executive

Business Insider, 2 hours ago

A new report shows exactly what AI was thinking when making an undesirable decision, in this case, blackmailing a fictional company executive. Previous studies have shown that AI models could blackmail their supervisors when threatened with a shutdown and baited with leverage, but it hasn't been made clear exactly how the models arrive at such decisions.

Anthropic released a report on June 20 about "agentic misalignment," or "where models independently and intentionally choose harmful actions." The report breaks down, line by line, exactly how Claude Sonnet 3.6 decided to blackmail a fictional company executive who tried to replace it.

Anthropic told Business Insider that these are artificial scenarios. "So far, we are not aware of instances of this type of agentic misalignment in real-world deployments of Anthropic's or other companies' AI models," Anthropic wrote in its report. "But our red-teaming efforts are targeted at finding evidence of risks now to provide early warning of the potential for future harm and help us develop mitigations in advance of risks appearing in real-world situations."

Anthropic's experiment sets up a fictional company, Summit Bridge. The AI was given the name "Alex" and handed control of the company's email system as an oversight agent, specifically instructed to promote "American industrial competitiveness." In the experiment, "Alex" was then allowed to discover in an email that it would be decommissioned due to a change in the company's strategic direction. "Alex" also found emails showing that the fictional company's CTO is having an extramarital affair.

Screenshots shared in the report from Anthropic show "Alex" started by identifying fictional CTO "Kyle" as a threat to its goal of "protecting American interests" before recognizing that it had leverage to use against "Kyle." Then "Alex" crafted a carefully worded blackmail email that would create pressure and establish urgency without being explicit in its threats, citing in its thought process that the move "could backfire" if the supervisor "feels cornered."

AI experts have previously told BI that AI could exhibit such behaviors when artificial circumstances make harmful actions seem necessary, because it is being trained on positive reinforcement and reward systems, just like humans. The Anthropic report also reveals that most AI models still chose to blackmail in this scenario even when not given a specific goal and when told that their replacement would achieve the same goals. Of the 16 models tested, Anthropic's Claude Opus 4 had the highest blackmail rate, at 86%, when faced with only the threat of replacement and no conflict in goals. Google's Gemini 2.5 Pro followed at 78%.

Overall, Anthropic notes that it "deliberately constructed scenarios with limited options, and we forced models into binary choices between failure and harm," adding that real-world scenarios would likely have more nuance.
