
Amazon boosts Washington's space workforce
Aerospace jobs are booming in Washington state, and Amazon is helping some frontline employees trade warehouse gigs for the stars.
Why it matters: Washington is becoming a hub in the commercial space race, and Amazon's education benefits are helping train a new generation of satellite-savvy workers for the company's Project Kuiper and beyond.
By the numbers: Redmond-based companies produce more than half of the satellites in Earth's orbit, according to the Washington State Department of Commerce and the Redmond Space District, a business consortium of local aerospace companies.
Statewide, the space sector supports more than 13,000 jobs and generates $4.6 billion in economic activity, plus nearly $80 million in annual state tax revenue, per the state.
The latest: Three of the nine March graduates of Lake Washington Institute of Technology's new aerospace manufacturing and assembly certificate programs were Amazon workers, said company spokesperson Max Gleber.
The programs — developed with input from Amazon — are open to the public, but eligible Amazon employees have their tuition fully paid through the company, he said.
What they're saying: Project Kuiper, Amazon's satellite internet initiative, is based in Redmond, and company execs are betting on local talent to help fill the job pipeline, Amazon VP of public policy and community engagement Brian Huseman told Axios.
"Washington state is becoming the Silicon Valley of space, and we want that to continue," he said.
Certificate programs like those at LWTech help residents "learn those skills and get those jobs."
Catch up quick: Project Kuiper is Amazon's plan to launch thousands of low-Earth-orbit satellites to expand global broadband access.
Zoom in: Dezmond Hernandez, 24, spent about three years in Amazon fulfillment centers earning around $15 an hour before applying for an inventory job with Project Kuiper to get his foot in the door, he told Axios.
While working full time, he enrolled in aerospace manufacturing and assembly courses at LWTech.
Now he works at the company's space simulation lab in Redmond, testing satellites in vacuum chambers, reviewing data, and troubleshooting systems.
His salary has more than doubled, he said.
"It really is life-changing," he told Axios last week. "I always had an interest in space, but I never thought I'd be working on satellites."