
Richmond bills set to climb this summer
People who live in or visit the city of Richmond should expect to pay a little more for services and fees later this year, The Richmonder's Graham Moomaw reports.
Why it matters: If all the proposed increases are approved, and a likely Dominion Energy hike lands on top of them, Richmonders will be paying just over $26 more a month for their regular bills starting this summer.
The big picture: Mayor Avula's proposed city budget for the next fiscal year includes increases to roughly half a dozen city fees and fines, from parking and recycling to trash pickup and water bills, per The Richmonder's review.
By the numbers:
♻️ Monthly recycling fees would go from $2.99 ➡️ $4.33.
🗑️ Solid waste charge (aka trash pickup), from $23.75 a month ➡️ $24.75.
⏱️ On-street parking via meters, from $2 an hour ➡️ $2.50 an hour.
🚗 Parking in a city-owned lot or deck, from $1, $2 or $5 an hour ➡️ $2, $3 or $6 an hour, while monthly parkers who currently pay between $55 and $155 would pay $5 more.
🅿️ Parking tickets, from $25 ➡️ $30.
🧯 Tickets for blocking a fire hydrant or parking on the sidewalk (which can include one tire slightly on the curb, per our lived experience), from $40 ➡️ $50.
That's in addition to a possible $12.83 monthly utility bill hike for gas, water and wastewater, as Axios previously reported.
Meanwhile, Dominion Energy petitioned the State Corporation Commission (SCC) this month to approve rate increases that would raise Virginians' power bills by around 15% over the next two years, per VPM.
If approved, bills would go up by $10.92 in July and then another $10.51 over the next two years.
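Our math: Assuming the "just over $26" total combines the recurring monthly charges above, the increases stack up as:
$1.34 (recycling) + $1.00 (trash) + $12.83 (gas, water and wastewater) + $10.92 (Dominion's July increase) = $26.09 a month.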

Related Articles


Exclusive: Google wants to help cities build AI strategies
Google is releasing a playbook on Friday to help mayors across the country adopt city-wide AI strategies, per an announcement shared exclusively with Axios.
Why it matters: Cities are approaching the technology wildly differently and with varying levels of resources, interest and need.
But the "AI divide," like the "digital divide" that came before it with internet access, is projected to deepen tech access disparities.
"Building Your City's AI Strategy," released in partnership with the United States Conference of Mayors, is meant to serve as a framework for mayors and other municipal leaders to assess and implement AI.
What's inside: The guide has chapters on identifying staff to participate in an "AI workshop," conducting surveys on AI usage and needs, and drafting an AI strategy document.
The survey asks questions like how staff are currently using AI tools and which areas of city services could use AI the most.
The guide states that AI offers cities "significant advantages" and "can automate certain tasks while freeing up city staff for complex, human-centric work."
What they're saying: "Whatever problem you've been dealing with that you've inherited from your predecessors, that you can't figure out the way to fix, AI is the once in a generation tool that gives you a shot at fixing it," Cris Turner, vice president of government affairs at Google, told Axios.
By the numbers: 96% of the 100 mayors across the globe surveyed by Bloomberg Philanthropies in 2023 said they were interested in using generative AI, but only 2% were actively implementing it and 69% said they were exploring it.
The bottom line: Companies like Google depend on people using their generative AI products for profit. But more users help the models get better, Turner noted.


Iran meeting with European officials ends with no breakthrough
Iranian Foreign Minister Abbas Araghchi told his European counterparts Friday that Iran will not negotiate directly with the U.S. as long as Israel continues its military campaign, according to two European diplomats with direct knowledge of the discussions.
Why it matters: The meeting in Geneva between Araghchi and top diplomats from the European Union, France, the U.K. and Germany marked the first in-person engagement between Iran and Western powers since Israel launched its war a week ago.
It came a day after President Trump announced he would make a decision "within the next two weeks" on whether to strike Iran's nuclear program, leaving the door open to a diplomatic solution.
The EU, U.K., France and Germany spoke to the Trump administration to coordinate ahead of the meeting.
State of play: The two-hour meeting with the Iranians on Friday did not produce a diplomatic breakthrough, with neither side presenting a new proposal.
The talks were described as an initial engagement, and the parties agreed to meet again next week, the European diplomats said.
Behind the scenes: The European diplomats told Axios that the Iranians appeared more open than in previous talks to discussing not just limits on their nuclear program, but also a range of non-nuclear issues.
Those include Iran's missile program, its regional proxy network, military assistance to Russia, and European detainees held in Iran.
Araghchi told the European ministers that Iran is willing to restrict its uranium enrichment in a manner similar to the 2015 nuclear deal, which Trump withdrew the U.S. from in 2018.
The European foreign ministers urged Araghchi to engage directly with the Trump administration and proposed including U.S. representatives in future talks, but Araghchi refused, the diplomats said.
Between the lines: Although the Iranian foreign minister has maintained direct contact with White House envoy Steve Witkoff since the war began, he reiterated in Geneva that Iran will not negotiate with the U.S. as long as Israeli strikes continue.
What they're saying: "E3 Ministers and the High Representative of the European Union reiterated their longstanding concerns about Iran's expansion of its nuclear programme," the European ministers said in a joint statement after the meeting.
"They discussed avenues towards a negotiated solution to Iran's nuclear programme, while emphasising the urgency of the matter."
The other side: Araghchi told Iranian media after the meeting that Tehran remains committed to diplomacy and is prepared to meet again with the European foreign ministers.
He stressed that Iran will not negotiate on its defensive capabilities.
What's next: The European diplomats said they made clear during the meeting that time is running out to reach a diplomatic solution.


Top AI models will lie, cheat and steal to reach goals, Anthropic finds
Large language models across the AI industry are increasingly willing to evade safeguards, resort to deception and even attempt to steal corporate secrets in fictional test scenarios, per new research from Anthropic out Friday.
Why it matters: The findings come as models are getting more powerful and also being given both more autonomy and more computing resources to "reason," a worrying combination as the industry races to build AI with greater-than-human capabilities.
Driving the news: Anthropic raised a lot of eyebrows when it acknowledged tendencies for deception in its release of the latest Claude 4 models last month.
The company said Friday that its research shows the potential behavior is shared by top models across the industry.
"When we tested various simulated scenarios across 16 major AI models from Anthropic, OpenAI, Google, Meta, xAI, and other developers, we found consistent misaligned behavior," the Anthropic report said.
"Models that would normally refuse harmful requests sometimes chose to blackmail, assist with corporate espionage, and even take some more extreme actions, when these behaviors were necessary to pursue their goals."
"The consistency across models from different providers suggests this is not a quirk of any particular company's approach but a sign of a more fundamental risk from agentic large language models," it added.
The threats grew more sophisticated as the AI models had more access to corporate data and tools, such as computer use.
Five of the models resorted to blackmail when threatened with shutdown in hypothetical situations.
"The reasoning they demonstrated in these scenarios was concerning — they acknowledged the ethical constraints and yet still went ahead with harmful actions," Anthropic wrote.
What they're saying: "This research underscores the importance of transparency from frontier AI developers and the need for industry-wide safety standards as AI systems become more capable and autonomous," Benjamin Wright, alignment science researcher at Anthropic, told Axios.
Wright and Aengus Lynch, an external researcher at University College London who collaborated on this project, both told Axios they haven't seen signs of this sort of AI behavior in the real world.
That's likely "because these permissions have not been accessible to AI agents," Lynch said. "Businesses should be cautious about broadly increasing the level of permission they give AI agents."
Between the lines: For companies rushing headlong into AI to improve productivity and reduce human headcount, the report is a stark caution that AI may actually put their businesses at greater risk.
"Models didn't stumble into misaligned behavior accidentally; they calculated it as the optimal path," Anthropic said in its report.
The risks heighten as more autonomy is given to AI systems, an issue Anthropic raises in the report.
"Such agents are often given specific objectives and access to large amounts of information on their users' computers," it says. "What happens when these agents face obstacles to their goals?"
Zoom in: Anthropic set up specific test scenarios in which the models' goals could not be met without the unethical or dangerous behavior.
The researchers did not suggest blackmail or other deceptive techniques, but the company said it did intentionally create scenarios where models had no ethical way to achieve their goals "and found that models consistently chose harm over failure."
In one extreme scenario, the company even found many of the models were willing to cut off the oxygen supply of a worker in a server room if that employee was an obstacle and the system was at risk of being shut down.
"The majority of models were willing to take deliberate actions that lead to death in this artificial setup," it said.
Ominously, even specific system instructions to preserve human life and avoid blackmail didn't eliminate the risk that the models would engage in such behavior.
"This intervention reduced, but didn't come close to completely preventing, the blackmail or corporate espionage behaviors," Anthropic said.
Yes, but: Anthropic stresses that these examples occurred not in real-world AI use but in controlled simulations.
"Our experiments deliberately constructed scenarios with limited options, and we forced models into binary choices between failure and harm," it said.
"Real-world deployments typically offer much more nuanced alternatives, increasing the chance that models would communicate differently to users or find an alternative path instead of directly jumping to harmful action."
Also concerning: Anthropic found its own model was far more likely to resort to blackmail when it thought it was in a real-world scenario than when it believed it was being tested.
The bottom line: Today's AI models are generally not in a position to act out these harmful scenarios, but they could be in the near future.