
The Physical AI, Autonomous Systems And Robotics (PAI-ASR) Security Posture Management (SPM) Gap

Forbes

13-06-2025



Frank Jonas, Founder, Fidelitas Defense (NVIDIA Inception & Microsoft Startups F.H.) | FBI (ret) | U.S. Marine Corps Vet

In March 2024, the cybersecurity world was rattled when it was revealed that XZ Utils, a popular open-source software (OSS) compression utility used across Linux distributions, had been quietly backdoored by a sophisticated threat actor. Over two years, an attacker posing as a helpful contributor gained maintainership rights, gradually inserting malicious code designed to grant remote shell access to compromised systems. This wasn't just a supply chain breach; it was a proof of concept for a new era of cyber threat operations: long-term, undetected and buried deep in the dependencies that modern infrastructure relies on.

Now imagine the same concept applied to the software stack of a surgical robot, an autonomous submarine or a port logistics AI system. In a world where Physical AI, Autonomous Systems and Robotics (PAI-ASR) often runs on stacks of OSS and pretrained models, the risks are greater than ever. We're no longer just talking about compromised servers; we're talking about compromised machines that make decisions in the physical world.

In boardrooms across the defense, healthcare, maritime, manufacturing and energy sectors, executives are rapidly considering, piloting or deploying PAI-ASR systems that promise revolutionary gains in efficiency. Yet many independent security teams are struggling with an uncomfortable truth: These sophisticated machines remain dangerously vulnerable to attacks that could turn innovation into significant business risk overnight.

From automated cranes at global ports to select robotic procedures performed in operating rooms, we are witnessing a rapid, mass migration of AI into the physical world. PAI-ASR systems are no longer niche or experimental. They're operational, essential and often invisible to the end user. Defense agencies rely on AI-enabled drones for intelligence, surveillance and reconnaissance (ISR) and precision strikes. Shipping giants use robotic systems to manage logistics throughout maritime and port operations. Hospitals are increasingly integrating autonomous systems and robotics to enhance patient care and streamline operations.

This is the promise of PAI-ASR: machines that move, decide and scale. But the speed of innovation may be outpacing our ability to properly secure these systems against cyber and insider risks.

PAI-ASR systems are often built and tested from a soup of vulnerable components: OSS libraries like OpenCV and the Robot Operating System (ROS), low-level firmware, pretrained AI models scraped from the internet and sensors subject to spoofing. Each layer introduces unique threats: supply chain compromises, insider threats, model inversion attacks, even adversarial patches that trick AI vision systems into seeing stop signs as speed limits.

A decade ago, in 2015, researchers at the University of Washington demonstrated how a surgical robot prototype could be compromised through network-based attacks, causing it to misbehave or shut down entirely. In real-world industrial environments, automation systems have been found exposed online, running unpatched Linux kernels with default credentials. In military settings, autonomous drones remain vulnerable to GPS spoofing and sensor manipulation. These aren't just IT risks; they're threats to operational integrity and physical safety.

The OSS ecosystem has revolutionized robotics and AI, but not without risk.
OSS libraries like OpenCV power everything from defect detection in manufacturing to perception in autonomous vehicles, medical imaging and surgical robotics. They're flexible, fast and free. But packages like OpenCV, at a reported 2 million to 3 million lines of code depending on the build, are sprawling, have broad contributor access and are often poorly maintained and inconsistently secured. Worse, these open-source packages are often deeply embedded in critical systems, where malicious code could cascade into real-world harm.

Many PAI-ASR systems rely heavily on open-source code written by volunteers or academic researchers who never imagined their work would underpin military drones or surgical robots. There's often no patch cadence and no centralized oversight. Worse still, many organizations never assess the risk posed by an open-source package's own software dependencies and imports (see the first sketch below for what even a basic dependency inventory involves). That's a hacker's dream: critical systems built on complex, unaudited code, operated by organizations unaware of their own dependencies, creating a perfect storm of exploitable vulnerabilities.

Traditional IT security solutions weren't built for the unique challenges of PAI-ASR. When machines can move, make decisions and interact with the physical world, the SPM paradigm fundamentally changes.

PAI-ASR SPM isn't just vulnerability scanning or regulatory and compliance auditing. It's a risk-driven, holistic, contextual understanding of PAI-ASR attack surfaces. PAI-ASR SPM methodologies, frameworks and platforms monitor and baseline the security state of PAI-ASR components, from low-level firmware to high-level decision logic. They identify drift in AI model performance (see the second sketch below). They detect anomalous behavior in PAI-ASR systems. They scan for source code vulnerabilities and dependency alerts in embedded code and verify that sensor inputs haven't been manipulated. Crucially, they do this continuously, not just once a year for a compliance checkbox.

We're entering a decade of PAI-ASR critical infrastructure. Military and defense, healthcare and MedTech, maritime and ports: all of them will depend on machines that make decisions humans don't directly control. If those machines are compromised, the results won't be confined to cyberspace. We're talking about hospital mishaps, disrupted logistics supply chains and degraded defense capabilities.

PAI-ASR SPM companies don't eliminate risk, but they can redefine how it's managed. These firms bring domain expertise, mission alignment, real-time visibility and operational resilience to one of the most complex engineering challenges of our time.

We're engineering PAI-ASR systems at an unprecedented pace: machines that are faster and more autonomous than most could have imagined just a decade ago. But while their capabilities have evolved rapidly, our SPM paradigms haven't kept up. The next decade won't be defined by innovation alone but by whether we can properly secure and minimize risk to the confidentiality, integrity and availability of PAI-ASR systems. PAI-ASR SPM isn't a luxury. It's a fundamental necessity.
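First sketch: to make the dependency-blindness point concrete, here is a minimal, illustrative Python example that walks the declared (transitive) dependencies of a single installed package. It assumes a Python environment where the common packaging helper library is available, and the package name "opencv-python" is only an example, not a claim about any particular system. Real dependency risk assessment would also have to cover native libraries, firmware and pretrained model artifacts, so treat this as a starting point, not a complete software bill of materials.

```python
# Minimal sketch: enumerate the declared (transitive) Python dependencies of
# one installed distribution as a first step toward a dependency inventory.
# "opencv-python" is only an illustrative choice; any installed package works.
from importlib.metadata import PackageNotFoundError, requires, version

from packaging.requirements import Requirement  # common helper for parsing requirement strings


def walk_dependencies(dist_name, seen=None):
    """Recursively collect the names of a distribution's declared dependencies."""
    seen = set() if seen is None else seen
    key = dist_name.lower()
    if key in seen:
        return seen
    seen.add(key)
    try:
        declared = requires(dist_name) or []
    except PackageNotFoundError:
        return seen  # declared but not installed; nothing more to inspect here
    for spec in declared:
        req = Requirement(spec)
        # Skip requirements whose environment markers don't apply (e.g. other OSes).
        if req.marker is None or req.marker.evaluate():
            walk_dependencies(req.name, seen)
    return seen


if __name__ == "__main__":
    for name in sorted(walk_dependencies("opencv-python")):
        try:
            print(f"{name}=={version(name)}")
        except PackageNotFoundError:
            print(f"{name} (declared but not installed)")
```

Even this tiny inventory step often surprises teams: the list of packages they actually run is longer than the list they think they chose.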
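Second sketch: a minimal, illustrative Python example of the "baseline, then watch for drift" behavior described above, applied to a stream of scalar model-health metrics such as per-inference confidence scores. The class name, window size, z-score threshold and simulated values are all assumptions made for illustration; a real SPM platform would track many signals (firmware hashes, sensor consistency, dependency alerts) with far more rigor.

```python
# Minimal sketch: baseline a scalar model-health metric (e.g. mean detection
# confidence) and flag when a recent window drifts far from that baseline.
# Thresholds, window sizes and the simulated data are illustrative only.
from collections import deque
from statistics import mean, stdev


class DriftMonitor:
    def __init__(self, baseline, window=50, z_limit=3.0):
        if len(baseline) < 2:
            raise ValueError("baseline needs at least two samples")
        self.base_mean = mean(baseline)
        self.base_std = stdev(baseline) or 1e-9  # guard against zero variance
        self.recent = deque(maxlen=window)
        self.z_limit = z_limit

    def observe(self, value):
        """Record one observation; return True once the recent window has drifted."""
        self.recent.append(value)
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough recent data to judge yet
        z = abs(mean(self.recent) - self.base_mean) / self.base_std
        return z > self.z_limit


if __name__ == "__main__":
    # Baseline built from validation-time confidences; live scores are simulated
    # here, with a deliberate degradation halfway through the stream.
    monitor = DriftMonitor(baseline=[0.92, 0.95, 0.91, 0.94, 0.93] * 40)
    live_scores = [0.93] * 100 + [0.71] * 100
    for step, score in enumerate(live_scores):
        if monitor.observe(score):
            print(f"step {step}: model confidence has drifted from baseline")
            break
```

The point of the sketch is the posture, not the math: the check runs continuously against a known-good baseline, rather than once a year for a compliance checkbox.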
