
Silverfort's Launch Signals The Start Of Agentic AI Security Arms Race

Forbes

3 days ago



Agentic AI security is the next enterprise arms race. We're coming perilously close to having to either rename the HR department to Human-AI Resources or give the CTO full custody over tomorrow's workforce. Either way, one thing is clear: the AI agents have arrived, and they're already reshaping work as we know it. What began with Devin in early 2024 has now snowballed into Salesforce's Agentforce, the rise of LangChain-based custom workflows, and enterprise-grade deployments like PwC's AgentOS. Agentic AI, autonomous or semi-autonomous AI systems acting on behalf of a user, is rapidly becoming the tip of the spear of AI adoption, and one can only imagine how quaintly outdated our views from June 2025 will look within just a year's time.

While the agentic AI curve is rising fast, one question lying right beneath the surface threatens to drag it all down: how do we manage, govern, and secure these agents at scale? If the last tech wave brought SaaS sprawl and death by a thousand point solutions, this one threatens us with a future of agentic anarchy unless we play our cards right. As a result, the defining enterprise challenge staring us in the face is this: what does workforce security look like when your employees don't sleep, learn faster than humans, and aren't even human?

This shift has opened the door to a new kind of security frontier that involves protecting not just human employees, but AI agents acting autonomously across sensitive systems. The category of agentic security is emerging right in front of our eyes, and as Silverfort's recent product launch suggests, enterprises are increasingly focused on managing identity, access, and accountability for non-human actors. Paradigm shifts don't fit into quarterly roadmaps, even if your friendly neighborhood management consulting partner might insist that they do. Instead, they unravel and reweave entire assumptions about how businesses run. That's exactly what's happening with AI agents.
Far from being just another robotic process automation tool, AI agents challenge the very structure of organizational workflows through the sheer breadth they bring to the mix. Where algorithms run on rails, AI agents respond to prompts, take initiative, interface with APIs, make judgment calls, and, increasingly, work alongside or instead of human teams just as a human colleague would. The earliest use cases have shown up where human labor is most strained and speed is a competitive edge: support desks, sales pipelines, email inboxes, and even the marketing office, where agentic marketplaces like Enso offer an entire team's worth of agentic replacements. The paradigm shift is reverberating deeper into the core of the organization as well, and enterprises are now experimenting with AI agents in finance, procurement, logistics, legal, and IT with a sense of urgency not seen since the dot-com boom.

Despite the enthusiasm, AI adoption has hit a drag. Security concerns are a primary reason why enterprises aren't going as far and as fast as they otherwise would. 'Organizations are trying to adopt AI rapidly due to its huge business potential, and expect their CIOs and CISOs to figure out in parallel how to keep it secure and prevent it from causing damage,' explains Mark Karyna, Founding Partner at Acrew Capital. Where the seniors in charge of security have concerns, the agentic AI industry has adoption roadblocks. Karyna continues, noting how 'Organizations are still not sure what responsible adoption looks like. MCP is a good example, it makes AI implementations better by simplifying how AI interacts with corporate systems, but it has security gaps and often gives AI agents too much access, which can be dangerous.' Companies don't need a lecture on the theoretical risks or on the importance of guardrails. They are desperate for practical guardrails and solutions. And right now, most don't have them, which is spurring solutions like Silverfort's to fill in the gaps.
'We're investing heavily into building this dedicated security layer for AI agents because this is where our customers are feeling the most pain,' said Hed Kovetz, CEO and co-founder of Silverfort. The company recently announced its AI Agent Identity Security product as a direct response to that client pull. 'Our clients have embraced the promise of AI, but they're stuck without the controls to deploy it safely. Identity and access management (IAM) tools weren't designed for autonomous actors who take action at machine speed. This is the frontier now, and our clients are pushing us there,' Kovetz notes.

Silverfort's bet joins many others in arguing that the greatest value in the AI agent wave will come from giving enterprises the confidence to actually use them. This is why something like a security control layer becomes all but inevitable in the grand scheme of things. 'We see this not just as a feature request, but as a foundational enabler,' Kovetz continued. 'If you solve this problem, if you build trust at the identity level, then everything else accelerates. This is the unlock that turns AI agents from pilots into production systems, and from productivity boosters into strategic infrastructure.' In other words, the biggest leap in AI enablement might come not from the labs, but from the security stack. And that makes managing the agentic workforce not just a technical challenge, but a leadership one.

The threat landscape around AI agents is no longer the playground of malicious actors alone. Beyond the red team, agents acting with good intentions but operating beyond their intended scope, moving too fast, or misinterpreting vague instructions have become entirely new threat vectors. As Aaron Shilts, CEO of NetSPI, puts it: 'The attack surface has multiplied with the advent of Agentic AI and every AI agent with access to internal systems becomes a new entry point.'
'It's like handing out admin credentials to enthusiastic interns who never sleep, don't ask questions, and can spin up a thousand API calls before you even notice. That's a red team's dream,' Shilts continues. To make things worse, agentic adoption has been as much a bottom-up process as a top-down one, with savvy employees using tools like AutoGPT and LangGraph to solve real problems. BYO-Agent, if you will. But this means CIOs and CISOs are often unaware of what AI is running inside their perimeter until something breaks. In many ways, the threat is now internal much more than it is external.

This visibility gap is a gift to attackers. 'Eventually, some of your users will get compromised, and somebody will get those credentials,' Kovetz warns. 'And if those credentials belong to an agent with privileged access, you have a serious problem.' What makes it worse is that most IAM systems aren't built to distinguish between static scripts and dynamic agents. The AI agents of 2025 don't simply boot up and run a task at 8:00 a.m. each Monday. Instead, they request additional data sources, escalate access when blocked, and route outputs based on context, all increasingly autonomously. They look and act like humans, but they operate on fundamentally different scales, giving rise to a different set of problems. Traditional IT governance moves in days or hours, while AI agents act in milliseconds. That mismatch is a liability unless it is matched with real-time monitoring.

This is why players like LangChain have moved toward observability platforms like LangSmith, and why Silverfort is betting on dynamic, identity-tied permissions that adapt in real time. 'We're well beyond dealing with simply automation scripts anymore,' Kovetz explains. 'These agents behave in ways that resemble humans, but they act in milliseconds and often make decisions on the fly. And that requires an entirely new level of runtime control.' The industry is, ironically, using automation to secure automation.
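To make the idea concrete, here is a minimal sketch of what identity-tied, least-privilege runtime control for an agent could look like. This is not Silverfort's actual implementation, and the class names, fields, and limits are invented for illustration; it simply shows the pattern the article describes: every agent carries an identity bound to a human owner, is denied by default outside an explicit scope, and hits a hard rate ceiling even when acting at machine speed.

```python
# Illustrative sketch only, not any vendor's real product.
# Pattern: deny-by-default authorization tied to an agent identity,
# with an explicit action scope (least privilege) and a rate guardrail.
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    agent_id: str
    owner: str                        # the human identity the agent is tied to
    allowed_actions: set = field(default_factory=set)
    max_calls_per_minute: int = 60    # runtime guardrail for machine-speed actors
    calls_this_minute: int = 0

def authorize(agent: AgentIdentity, action: str) -> bool:
    """Allow an action only if it is inside the agent's explicit scope
    and under its rate ceiling; everything else is denied by default."""
    if agent.calls_this_minute >= agent.max_calls_per_minute:
        return False                  # guardrail: cap bursts before they cascade
    if action not in agent.allowed_actions:
        return False                  # least privilege: no implicit escalation
    agent.calls_this_minute += 1
    return True

# Hypothetical support-desk agent scoped to two read/draft actions.
support_bot = AgentIdentity(
    agent_id="support-bot-1",
    owner="alice@example.com",
    allowed_actions={"read_ticket", "draft_reply"},
)
print(authorize(support_bot, "read_ticket"))     # within scope -> True
print(authorize(support_bot, "delete_account"))  # out of scope -> False
```

A real deployment would add auditing, dynamic policy updates, and credential rotation, but the core stance is the same: the agent can do only what its identity explicitly permits.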
AI is both the problem and the solution. But even that is just the start. Even after we've figured out everything from runtime control to least-privilege access and dynamic policy enforcement, we'll still have a host of challenges ahead of us, not least how to coordinate agents' work alongside their human collaborators. Amidst all this uncertainty, it's tempting to frame AI agents as a risk to be managed. But that misses the point. They are fundamentally a capability to be unlocked. And the companies that learn to manage them well will outpace those that don't. Here are three truths to ground your strategy:

1. AI agents are here, and they are powerful. What you're seeing are no longer demos. From support desks to DevOps pipelines, agents are doing real work and replacing real workflows. Enterprises must move past experimentation and prepare for scale, and that includes recognizing their power to wreak havoc as well as push outcomes.

2. Agents need new management paradigms. They blur the lines between software, user, and employee. Managing them like APIs or treating them like junior staff won't cut it. They have an identity and require visibility, role definition, ownership mapping, and policy-based constraints.

3. Security is the unlock, not the blocker. Agent-based automation won't go mainstream until organizations feel confident they can control, audit, and limit these systems. Tying agents to human identities, setting runtime guardrails, and enforcing least privilege is the shape of things to come.

The companies that succeed won't be the ones who build the flashiest bots. They'll be the ones who manage their agentic workforce well.
