
Richmond homeowners face tax chaos after 33,000 billing errors
It's a confusing time to be a Richmond homeowner.
Why it matters: The past few months have been full of city flubs, which include thousands of property owners receiving the wrong tax rebate checks and getting real estate tax bills meant for their mortgage lenders.
The latest: Those real estate tax bills were sent over the weekend after a system error messed up at least 33,000 taxpayer records, per a city release.
The mistake left multiple residents, including Mayor Danny Avula, stressed and confused over whether the bill was real and they owed money, or whether they should ignore it.
One bill obtained by Axios instructed the recipient to pay by June 14 to avoid late fees and interest.
Zoom in: Officials are now telling taxpayers whose mortgage lenders typically handle these bills to do nothing. To make sure the payment is covered, you can call your lender.
Taxpayers who've accidentally paid twice can request a refund by calling 311 or logging into their online account.
If you don't have a mortgage lender and typically pay your real estate tax bills yourself, do so before June 14, the city says.
Meanwhile, thousands of Richmond homeowners are still awaiting the tax rebate checks initially promised early this year (now arriving by June 30) after:
The city sent checks last year to the wrong people and to properties that didn't exist.
Some of the correctly issued checks bounced.
What's next: The Department of Finance, in which many Richmonders have lost trust after years of failures, and the city's real estate tax billing vendor are investigating the system error and working to fix it.
Avula, in a statement Tuesday, said he's "personally spending time" with finance department staff to "understand the breakdowns that occurred."
He also said he plans to bring in an expert on improving communication and processes "to prevent this type of issue from happening again."
The department just finished fixing more than 200 Richmonders' incorrect personal property tax bills, which are due June 5.
