
Child care costs squeeze Virginia families
The cost of child care in Virginia and around the nation is skyrocketing, with a new report finding that it's outpacing overall inflation.
Why it matters: Rising child care costs put a huge financial strain on families, Axios' Emily Peck reports.
They can force some parents — typically women — to either ratchet back their working hours or leave the labor force entirely.
For single parents, the calculus can be even more painful.
By the numbers: The average annual cost of daycare tuition in Virginia for two children — one toddler and one infant — rose to $30,680 last year, according to data from Child Care Aware, an advocacy group.
That's the highest price tag for child care in the Southeast by far — and roughly $8,000 more a year than it costs in North Carolina.
That's about 32% of Virginia's median annual household income, based on Census data released in 2024.
Stunning stat: The cost of center-based child care for an infant would eat up 39% of a single parent's salary, per the report.
Meanwhile, the average annual cost of daycare tuition nationwide for two children was $28,168 — about 35% of the U.S. median annual household income.
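For readers who want to trace the math, here's a rough back-of-the-envelope sketch in Python. It uses only the figures cited above; the "implied" median incomes it derives are for illustration and are not values reported by Child Care Aware or the Census.

```python
# Back-of-the-envelope check of the cost-to-income shares cited above.
# Dollar figures and percentages come from the article; the "implied"
# median incomes are derived here purely for illustration.

def implied_median_income(annual_cost: float, share_of_income: float) -> float:
    """Median household income implied by an annual cost and the share of income it represents."""
    return annual_cost / share_of_income

# Virginia: $30,680 for an infant and a toddler, ~32% of median household income
va_income = implied_median_income(30_680, 0.32)

# Nationwide: $28,168 for two children, ~35% of median household income
us_income = implied_median_income(28_168, 0.35)

print(f"Implied Virginia median household income: ${va_income:,.0f}")  # ~$95,875
print(f"Implied U.S. median household income:     ${us_income:,.0f}")  # ~$80,480
```

Dividing each tuition figure by its reported share of income implies a median household income of roughly $96,000 in Virginia and about $80,000 nationally, which is why Virginia's higher dollar cost still works out to a smaller share of income.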
Zoom out: The U.S. doesn't have publicly funded universal child care.
However, the federal government does put money into the system for low-income kids through block grants to the states, as well as Head Start, the decades-old federal program that provides child care, nutrition assistance and other services to the nation's poorest families.
There were worries that the White House would stop funding Head Start, but the administration has said that won't happen.
