Latest news with #AgenticAI


Time of India
12 hours ago
- Business
- Time of India
AI that acts, not just answers: How Agentic AI is redefining the future of work
Sarah, an HR director at a mid-size tech company, used to spend her Monday mornings drowning in resumes. Hundreds of applications for a single software engineer position would pile up over the weekend, each requiring careful review. Today, she walks into the office to find that her AI assistant has already screened, ranked, and even conducted preliminary assessments of candidates. But rather than feeling replaced, Sarah feels liberated. She can now spend her time on what she loves most: connecting with people and building the company culture.

This isn't a distant future scenario. It's happening right now, as Agentic AI transforms how we work, how we hire, and how we think about the very nature of employment itself.

What is Agentic AI? Understanding the technology that's changing everything

Imagine having an incredibly capable colleague who never sleeps, never gets overwhelmed, and can juggle dozens of complex tasks simultaneously. That's essentially what Agentic AI brings to the workplace. Unlike the AI tools you might use to write emails or create presentations, Agentic AI systems don't wait for your instructions; they think ahead, make decisions, and take action to solve problems on their own.

"Agentic AI is, at the core, the autonomous nature of agents that can perform tasks, autonomously, semi-autonomously, mimicking human-like behaviour to enhance the workflow," explains Anthony Abbatiello, partner and workforce transformation leader at PwC. But here's what makes it truly revolutionary: these systems learn from every interaction, becoming more effective partners over time.

Think of it as the difference between a calculator and a skilled accountant. A calculator waits for you to input numbers, while an accountant anticipates your needs, identifies potential issues, and suggests solutions before you even realise there's a problem.

Why Agentic AI isn't just 'another AI tool'

Many people confuse Agentic AI with Generative AI, the technology behind ChatGPT and similar tools. It's an understandable mistake, but the difference is profound and worth understanding if you want to prepare for what's coming.

Generative AI is like having a brilliant writer on your team. Ask it to create a job description, draft an email, or summarise a report, and it delivers exactly what you requested. It's reactive, responsive, and incredibly useful for content creation.

Agentic AI is like having a proactive business partner. It doesn't wait for assignments. Instead, it observes patterns, identifies opportunities, and takes initiative. While Generative AI helps you create better content, Agentic AI helps you make better decisions and automate entire workflows.

Picture this: your company's turnover rate in the sales department suddenly spikes. Generative AI could help you write a survey to understand why people are leaving. Agentic AI would automatically detect the pattern, analyse exit interview data, identify the root causes, and even suggest specific interventions, all before you realise there's a problem. A code sketch at the end of this section makes the contrast concrete.

Agentic AI vs AI Agents: Clarifying the terminology

The terms 'Agentic AI' and 'AI Agents' are often used interchangeably, but there's a subtle distinction. AI Agents are the individual software entities that perform specific tasks, while Agentic AI refers to the broader technology and approach that enables these agents to operate autonomously. Think of AI Agents as the workforce, and Agentic AI as the intelligence system that empowers them to work independently.
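To make the reactive-versus-proactive contrast concrete, here is a minimal Python sketch. All of the helper names (llm_complete, fetch_turnover_rate, fetch_exit_interviews, notify_hr) are hypothetical placeholders standing in for an LLM API and an HR system; this illustrates the pattern, not any vendor's implementation.

```python
import time

BASELINE_TURNOVER = 0.05  # assumed monthly baseline for the sales team

def llm_complete(prompt: str) -> str:
    """Stand-in for a call to any LLM completion API."""
    raise NotImplementedError("wire up your model provider here")

def fetch_turnover_rate(team: str) -> float:
    """Stand-in for a query against an HR system."""
    raise NotImplementedError

def fetch_exit_interviews(team: str) -> str:
    raise NotImplementedError

def notify_hr(causes: str, plan: str) -> None:
    raise NotImplementedError

# Generative AI is reactive: it produces content only when asked.
def draft_exit_survey() -> str:
    return llm_complete("Draft a survey asking sales staff why they might leave.")

# Agentic AI is proactive: it observes, reasons, and initiates action.
def turnover_watch_agent() -> None:
    while True:
        rate = fetch_turnover_rate("sales")            # observe
        if rate > 2 * BASELINE_TURNOVER:               # detect the spike
            notes = fetch_exit_interviews("sales")     # gather evidence
            causes = llm_complete(f"Identify root causes of attrition in: {notes}")
            plan = llm_complete(f"Suggest specific interventions for: {causes}")
            notify_hr(causes, plan)                    # act before anyone asks
        time.sleep(24 * 3600)                          # re-check daily
```

The generative function runs only when a person calls it; the agent loop runs continuously and decides for itself when to act.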
The human stories behind HR transformation: How AI agents are helping

AI agents aren't just streamlining workflows; they're reshaping the HR experience. Here are a few examples of how that shift is playing out on the ground.

#Story 1: Recruitment: From resume sorting to relationship building

Meet Ahmed, a talent acquisition specialist who used to spend 70% of his time on administrative tasks. Sorting resumes, scheduling interviews, sending rejection emails: it was necessary work, but it wasn't why he got into HR. He wanted to discover talent, build relationships, and help people find their dream jobs.

Today, Ahmed works alongside AI agents that handle the initial screening process. But here's what's interesting: the technology doesn't just make his job easier; it makes him better at the human parts of his role. With routine tasks automated, Ahmed now spends his time on strategic conversations with hiring managers, creating compelling candidate experiences, and building the company's employer brand.

The numbers tell the story: organisations using Agentic AI in recruitment report 40-60% improvements in process efficiency, but more importantly, they're seeing higher candidate satisfaction scores and better long-term retention rates.

#Story 2: Skills development: Personal AI career coaches

Laila, a marketing manager at a Fortune 500 company, received an unexpected message last month: her company's AI system suggested she take a course in data analytics. Not because she was failing at her job, but because the system had analysed industry trends and predicted that marketing roles would increasingly require analytical skills over the next two years.

This isn't Big Brother watching; it's more like having a career coach who never sleeps and has access to every job market trend in real time. The AI doesn't just identify skill gaps; it creates personalised learning paths, connects employees with mentors, and even suggests internal projects where they can practice new skills.

The result? Laila now leads her company's marketing analytics initiative, a role that didn't exist six months ago but emerged from the intersection of her existing expertise and the skills the AI helped her develop.

The real-world impact: Humans and Agentic AI as partners

The future of work isn't about humans versus machines; it's about humans with machines. This partnership is creating new possibilities that neither could achieve alone.

Consider McKinsey's implementation of Agentic AI, which offers a glimpse into how this technology works in practice. Rather than deploying a single AI system, they created a network of specialised agents, each with specific expertise. One agent specialises in data cleaning and candidate identification, sifting through vast databases to find potential matches. Another focuses on scoring and ranking, using sophisticated algorithms to assess candidate fit. A third handles scheduling and communication, managing the complex logistics of coordinating interviews across multiple time zones. (A code sketch of this division of labour appears at the end of this article.)

The human recruiters? They're doing what humans do best: building relationships, assessing cultural fit, and making the final decisions that require intuition, empathy, and strategic thinking. The technology amplified their capabilities rather than replacing them.

Navigating the challenges: Keeping humanity at the centre

As powerful as Agentic AI is, it brings challenges that require thoughtful, human-centred solutions. The risk isn't just technological; it's deeply human.
The bias challenge: AI systems can perpetuate and amplify human biases, potentially making discrimination more systematic and harder to detect. This is why companies like Unilever have invested heavily in bias detection systems and diverse training data, ensuring their AI agents promote fairness rather than undermine trust.

The trust factor: Employees need to understand and trust the AI systems they work with. This requires transparency, training, and ongoing communication about how these systems make decisions and what role humans play.

The skills evolution: As AI handles more routine tasks, humans need to develop uniquely human skills: creativity, emotional intelligence, complex problem-solving, and ethical reasoning. The most successful organisations are investing heavily in helping their people develop these capabilities.

What does this mean for you?

Whether you're an HR professional, a manager, or someone planning your career, the rise of Agentic AI has practical implications:

For HR professionals: Your role is evolving from administrator to strategist. The routine tasks that once consumed your time are becoming automated, freeing you to focus on culture building, strategic planning, and complex problem-solving.

For managers: You're gaining AI-powered insights that can help you make better decisions about team composition, skill development, and performance management. But the human skills of coaching, motivation, and relationship building become more important than ever.

For individual contributors: The most valuable employees will be those who can work effectively with AI systems while bringing uniquely human capabilities to their roles. This means developing skills in creativity, critical thinking, and emotional intelligence.

The data behind the transformation

The numbers reveal the scope of this transformation:
- According to Salesforce research, 73% of HR leaders expect Agentic AI to significantly impact their function within the next two years
- Early adopters report 30% reductions in time-to-hire for critical positions
- Skills-based hiring, enabled by AI insights, is becoming the preferred approach for 65% of forward-thinking organisations
- Companies using AI-human partnerships report 25% higher employee engagement scores

But perhaps the most telling statistic is this: 89% of employees working with Agentic AI systems report feeling more fulfilled in their roles, not less. They're not competing with machines; they're collaborating with them to achieve things neither could accomplish alone.

Looking forward: A more human future of work

As we stand at the threshold of this transformation, it's worth remembering that the goal isn't to replace human judgment, creativity, and connection; it's to amplify these uniquely human capabilities. Agentic AI handles the routine so humans can focus on the remarkable.

The organisations that succeed in this new era will be those that remember a fundamental truth: technology serves people, not the other way around. They'll use Agentic AI to create more meaningful work, stronger relationships, and better outcomes for everyone involved.

The future of work isn't about humans versus AI; it's about humans and AI working together to create something better than either could achieve alone. And that future is arriving faster than most people realise.

Sarah, the HR director we met at the beginning, puts it best: "AI didn't replace me, it freed me to be more human in my work. I spend less time on spreadsheets and more time with people. That's exactly what I hoped technology would do."
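To make the McKinsey-style network of specialised agents described above concrete, here is a minimal Python sketch of a recruitment pipeline split across narrow agents. Every class name and the toy scoring heuristic are hypothetical illustrations of the pattern, not McKinsey's or any vendor's actual system.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    resume: str
    score: float = 0.0

class SourcingAgent:
    """Cleans raw applications and identifies plausible candidates."""
    def run(self, raw_applications: list[dict]) -> list[Candidate]:
        return [Candidate(a["name"], a["resume"])
                for a in raw_applications if a.get("resume")]

class ScoringAgent:
    """Scores and ranks candidate fit; a real system would call an LLM here."""
    def run(self, candidates: list[Candidate]) -> list[Candidate]:
        for c in candidates:
            c.score = min(len(c.resume) / 1000, 1.0)  # toy heuristic stand-in
        return sorted(candidates, key=lambda c: c.score, reverse=True)

class SchedulingAgent:
    """Handles interview logistics for the shortlist."""
    def run(self, shortlist: list[Candidate]) -> None:
        for c in shortlist:
            print(f"Proposing interview slots for {c.name}")

def pipeline(raw_applications: list[dict], human_review) -> None:
    candidates = SourcingAgent().run(raw_applications)
    ranked = ScoringAgent().run(candidates)
    shortlist = human_review(ranked[:10])   # the human keeps the final call
    SchedulingAgent().run(shortlist)
```

The design point mirrors the article: each agent owns one narrow task, while the human_review step keeps the final hiring decision with a person.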
Yahoo
a day ago
- Business
- Yahoo
AWS' Klein on AI Trends & Business Outlook
Olivier Klein, Chief APAC Technologist for AWS, discusses his outlook on how trends like Agentic AI are changing model structures and the application of artificial intelligence for businesses in the region. He speaks with Annabelle Droulers on the sidelines of the SuperAI Conference in Singapore on "The China Show".


Time of India
2 days ago
- Business
- Time of India
Agentic AI: Your digital clone is officially ready to elevate your customer experience!
Q1> We've all heard about Generative AI. But what exactly is Agentic AI, and why is it considered the next leap forward?

That's the question of the hour. Generative AI has disrupted the technology and business landscape through rapid automation, faster query responses and optimized decision-making across various industries. Consider a large language model (LLM), which is like condensing the knowledge of the entire Internet into a single computer in the form of a neural network. The digitized human knowledge that the Internet brings into that machine is wrapped in a sophisticated reasoning engine that can also be programmable. Calling an LLM once to do a task is powerful, but that's really just scratching the surface.

To understand Agentic AI, let's first understand 'agents'. Agents are a layer around LLMs that helps to fully harness their potential. What is truly exciting is to witness how an agent emulates a human role, cloning a large and complex task with multiple LLM calls and coordinated reasoning. The agent treats the LLM like a human consultant, invoked when needed in the flow, and maxes out the automation by not needing to loop in a human unless absolutely necessary.

The Role of Agentic AI in Recruiting

Take the example of a Recruiter Agent. It can shortlist resumes, write and send emails, track responses, and even conduct initial virtual interviews using LLMs. What you're seeing here is not just task-level automation. It is a system that is cloning the core functions of a real human recruiter, end to end.

This is why the agent market is exploding: we now have access to a panel of expert minds, all simulated through LLMs. You can consult one model, verify with another, and repeat this as often as needed. What's unique is the ability to do this instantly, at scale, and without fatigue. It's like auto-consulting all the experts in a domain, iterating until the system truly gets stuck and needs human input. That's what makes Agentic AI such a leap. It is not just smart responses. It is goal-driven autonomy, powered by an LLM, but steered by something that knows how to work.

Q2> So is Agentic AI a breakthrough like LLMs or something different? Think of Agentic AI as a self-driving system!

Actually, no. Agentic AI is not a breakthrough like the advances that made large language models possible. Rather, it is a programming approach, a powerful new way of putting those advances to work.

Think of it this way. LLMs are like a powerful new car engine: they deliver performance, efficiency, and the potential for incredible capabilities. Agentic AI is like a self-driving system around that engine, something that knows how to navigate, follow rules, and reach its destination with minimal help. The engine gives you the power, but the real impact comes from how that power is used.

Here is another way to look at it. Large Language Models (LLMs) are like a Global Positioning System (GPS): they provide accurate coordinates of where you are in the landscape of knowledge, context, or language. Agentic AI is the smart travel assistant that uses this GPS to plan the entire journey, select the best route, handle delays, reschedule meetings, and keep things moving without supervision. It's not just about knowing your position; it's about reaching your destination.

To put it simply, LLMs are the breakthrough. Agentic AI is how we use them: to model entire workflows, automate decisions, and let machines act independently unless human input is truly needed. That is what makes Agentic AI so exciting.
It doesn't create new intelligence from scratch. It takes what already exists and wraps it in structure, autonomy, and purpose, which makes it far more useful in the real world.

Q3> In the context of Agentic AI, what are some real-world limitations that today's LLMs still struggle with, especially when deployed in open-ended environments? Is it smart enough to interpret emojis and slang?

One underlying weakness is that AI systems often struggle when they encounter input that differs from the kind they were trained on. This is known as an out-of-distribution problem, or OOD for short. Consider the task of classifying emotion in social media posts. The goal is to determine whether a social media post expresses joy, sadness, anger, or something else. An AI system may work well when the posts are written in clear English, because that is what it was mostly trained on. But in the real world, people use mixed languages, slang, emojis, and informal phrasing. When that happens, the AI often gets confused and may classify the emotion incorrectly, even though a human would easily understand the feeling behind the post.

Poor performance on out-of-distribution inputs is a general weakness of LLMs. It becomes especially important in the context of Agentic AI, where agents are expected to act autonomously. A misstep in understanding at the very first stage can lead the agent down the wrong path, with no human in the loop to intervene.

And OOD is just one part of the picture. LLMs can also hallucinate facts, lose track of context across steps, or misinterpret vague instructions. In an agentic system, where actions depend on previous outputs, these small issues can snowball. A hallucinated answer might cascade into a flawed decision, while a missed nuance might set off an incorrect sequence of steps. This makes reliability just as important as intelligence when designing agentic systems for the real world.

Q4> As LLMs become more powerful and are increasingly wrapped in autonomous agents, how should we think about governance? What risks does this pose, and how can we manage them? How do we manage societal biases or biased data?

LLMs already come with important governance concerns. They may occasionally generate incorrect information, reflect societal biases, generate inappropriate or copyrighted content, or produce answers that are misleading or unfair. But when these LLMs are placed inside autonomous agents that act over multiple steps, remember context, and initiate actions, those risks become more serious.

The challenge is that agents don't just give you a single answer and stop. They take initiative. They make decisions, write emails, trigger follow-up actions, and even interact with people or systems. So a small error or bias from the LLM can now spread across an entire chain of actions. This raises the stakes for governance in Agentic AI.

Good governance starts with knowing what to control and how. Technology offers many tools to help here. Prompting techniques like multi-shot examples can improve model consistency. Methods such as chain-of-thought reasoning can help expose the logic behind answers. You can insert verification steps, where one agent reviews the work of another. Content filters can screen LLM responses for hate speech, violence, sexual content, profanity, or politically sensitive material. Prompt shields can scan incoming queries to detect jailbreaking attempts or other manipulations that look harmless on the surface but try to bypass safeguards.
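Here is a minimal sketch of how two of these controls compose: a verification step (one agent reviewing another's work) plus a simple content filter, with human escalation when the checks fail. The llm_complete function, the blocklist, and the escalation hook are hypothetical placeholders for illustration, not any particular vendor's guardrail API.

```python
def llm_complete(prompt: str) -> str:
    """Stand-in for a call to any LLM completion API."""
    raise NotImplementedError("wire up your model provider here")

BLOCKLIST = ("hate", "violence")  # toy stand-in for a real content filter

def content_filter(text: str) -> bool:
    """Screens a response against disallowed content categories."""
    return not any(term in text.lower() for term in BLOCKLIST)

def guarded_step(task: str, escalate_to_human) -> str:
    draft = llm_complete(f"Complete this task: {task}")

    # Verification step: a second agent reviews the first agent's work.
    verdict = llm_complete(
        f"Task: {task}\nDraft answer: {draft}\n"
        "Reply APPROVE if the draft is accurate and appropriate, else REJECT."
    )

    if "APPROVE" in verdict and content_filter(draft):
        return draft
    # The agent loops in a human only when the automated checks fail.
    return escalate_to_human(task, draft, verdict)
```

The pattern matches the answer above: automation proceeds unattended while the checks pass, and a human is pulled in only at the point where the system gets stuck or a guardrail trips.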
Bias is a major concern too, and there are several ways to tackle it. You can adjust prompts to reduce bias in the model's responses by encouraging balanced answers, avoiding loaded phrasing, or steering the tone. You can fine-tune models on curated, balanced datasets. You can even use additional layers that flag or reword responses that seem skewed or unfair.

At the end of the day, governance is what helps us move from experimental AI to trusted AI. With Agentic AI, you are no longer working with a single response; you're coordinating digital actors that operate independently. That's why governance isn't just a box to check. It's the core that ensures these systems stay safe, fair, and under control, even as they grow more capable.

Q5> Who is responsible when something goes wrong — the AI model, the app built on it, or the organization that uses it?

That is a timely and important question. The answer is that responsibility is shared.

The large language model, or LLM, is like the engine. It generates responses based on its training data and architecture. If there is a harmful or biased pattern in the model itself, then yes, part of the responsibility lies with the model creator. This is why developers of foundational models apply techniques like prompt calibration, fine-tuning with curated datasets, or post-processing logic to reduce bias in the output. The goal is to improve the model's overall behaviour, not just its response to specific queries.

But once you build something more capable, like an agent or a generative application that wraps around the LLM, the system becomes more than just a model. Now you have decision logic, memory, workflows, external tools, and other components. If a bad outcome occurs because of how the agent used the model's output, for instance by relying on a flawed response or skipping a verification step, then the responsibility also lies with the application developer. This includes decisions about how the agent was designed, how much autonomy it was given, and whether proper safeguards were in place.

Finally, there is the organization that deploys the system. They decide what the system is used for, what kind of guardrails to apply, and how much oversight is exercised. They are the ones putting the technology into a real-world context. If the use case involves sensitive domains like healthcare or finance and governance is weak or missing, that becomes part of the accountability chain as well.

Legally, this is an evolving space. Most emerging regulations place growing responsibility on the deploying organization, especially when harm is caused or rights are violated. Just as in data privacy, you cannot escape liability by saying "the model did it." If you are using AI to make decisions that affect people, you are expected to take reasonable steps to ensure those decisions are fair, safe, and explainable.

Ultimately, responsibility is shared across three actors: the model creator, the application builder, and the deploying enterprise. Governance is not just about preventing failure; it's also about ensuring that when something does go wrong, there is clarity on who is accountable and what could have been done differently. As we prepare for Agentic AI, let's prioritize governance, security, unbiased inputs and, most importantly, the responsibility of building responsible models that support agility and growth.

The author is Sreekrishnan Venkiteswaran, Chief Technology Officer and Kyndryl Fellow at Kyndryl.

This article is a part of ETCIO's Brand Connect Initiative.


Forbes
2 days ago
- Business
- Forbes
Silverfort's Launch Signals The Start Of Agentic AI Security Arms Race
Agentic AI security is the next enterprise arms race. We're coming perilously close to having to either rename the HR department to Human-AI Resources or to give the CTO full custody over tomorrow's workforce. Either way, one thing is clear: the AI agents have arrived, and they're already reshaping work as we know it.

What began with Devin in early 2024 has now snowballed into Salesforce's Agentforce, the rise of LangChain-based custom workflows, and enterprise-grade deployments like PwC's AgentOS. Agentic AI, autonomous or semi-autonomous AI systems acting on behalf of a user, is rapidly becoming the tip of the spear of AI adoption, and one can only imagine how quaintly outdated our views from June 2025 will look within just a year's time.

While the Agentic AI curve is rising fast, one question lying right beneath the surface threatens to drag it all down: how do we manage, govern, and secure these agents at scale? If the last tech wave brought SaaS sprawl and death by a thousand point solutions, this one threatens a future of agentic anarchy unless we play our cards right. As a result, the defining enterprise challenge staring us in the face is this: what does workforce security look like when your employees don't sleep, learn faster than humans, and aren't even human?

This shift has opened the door to a new kind of security frontier that involves protecting not just human employees, but AI agents acting autonomously across sensitive systems. We're seeing the category of agentic security emerge right in front of our eyes, and as Silverfort's recent product launch suggests, enterprises are increasingly focused on managing identity, access, and accountability for non-human actors.

Paradigm shifts don't fit into quarterly roadmaps, even if your friendly neighborhood management consulting partner might insist upon it. Instead, they unravel and reweave entire assumptions about how businesses run. That's exactly what's happening with AI agents. Far from being just another robotic process automation tool, AI agents challenge the very structure of organizational workflows through the sheer breadth they bring to the mix. Where algorithms work on rails, AI agents respond to prompts, take initiative, interface with APIs, make judgment calls, and increasingly, work alongside or instead of human teams just like a human colleague would.

The earliest use cases have shown up where human labor is most strained and speed is a competitive edge: support desks, sales pipelines, email inboxes, and even the marketing office, where agentic marketplaces like Enso offer an entire team's worth of agentic replacements. The paradigm shift is reverberating deeper into the core of the organization as well, and enterprises are now experimenting with AI agents in finance, procurement, logistics, legal, and IT with a sense of urgency not seen since the dot-com boom.

Despite the enthusiasm, AI adoption has hit a drag. Security concerns are a primary reason why enterprises aren't going as far and as fast as they otherwise would. 'Organizations are trying to adopt AI rapidly due to its huge business potential, and expect their CIOs and CISOs to figure out in parallel how to keep it secure and prevent it from causing damage,' explains Mark Karyna, Founding Partner at Acrew Capital. Where the seniors in charge of security have concerns, the Agentic AI industry has adoption roadblocks. Karyna continues, noting how 'Organizations are still not sure what responsible adoption looks like.
MCP is a good example, it makes AI implementations better by simplifying how AI interacts with corporate systems, but it has security gaps and often gives AI agents too much access, which can be dangerous.'

Companies don't need a lecture on the theoretical risks or on the importance of guardrails. Instead, they are desperate for practical guardrails and solutions. And right now, most don't have them, which is spurring solutions like Silverfort's to fill in the gaps.

'We're investing heavily into building this dedicated security layer for AI agents because this is where our customers are feeling the most pain,' said Hed Kovetz, CEO and co-founder of Silverfort. The company recently announced its AI Agent Identity Security product as a direct response to the client pull. 'Our clients have embraced the promise of AI, but they're stuck without the controls to deploy it safely. Identity and access management (IAM) tools weren't designed for autonomous actors who take action at machine speed. This is the frontier now, and our clients are pushing us there,' Kovetz notes.

Silverfort's bet joins those of many others who are arguing that the greatest value in the AI agent wave will come from giving enterprises the confidence to actually use them. This is why something like the security control layer becomes all but inevitable in the grand scheme of things. 'We see this not just as a feature request, but as a foundational enabler,' Kovetz continued. 'If you solve this problem, if you build trust at the identity level, then everything else accelerates. This is the unlock that turns AI agents from pilots into production systems, and from productivity boosters into strategic infrastructure.'

In other words, the biggest leap in AI enablement might come not from the labs, but from the security stack. And that makes managing the agentic workforce not just a technical challenge, but a leadership one.

The threat landscape around AI agents is no longer the playground of malicious actors alone. In addition to the red team, we now see well-intentioned agents that operate beyond their intended scope, move too fast, or misinterpret vague instructions, becoming entirely new threat vectors. As Aaron Shilts, CEO of NetSPI, puts it: 'The attack surface has multiplied with the advent of Agentic AI and every AI agent with access to internal systems becomes a new entry point.'

'It's like handing out admin credentials to enthusiastic interns who never sleep, don't ask questions, and can spin up a thousand API calls before you even notice. That's a red team's dream,' Shilts continues.

To make things worse, agentic adoption has been as much of a bottom-up process as a top-down one, with savvy employees using tools like AutoGPT and LangGraph to solve real problems. BYO-Agent, if you will. But this means CIOs and CISOs are often unaware of what AI is running inside their perimeter until something breaks. In many ways, the threat is now internal much more than it is external.

This visibility gap is a gift to attackers. 'Eventually, some of your users will get compromised, and somebody will get those credentials,' Kovetz warns. 'And if those credentials belong to an agent with privileged access, you have a serious problem.' What makes it worse is that most IAM systems aren't built to distinguish between static scripts and dynamic agents. The AI agents of 2025 don't simply boot up and run a task at 8:00 a.m. each Monday.
Instead, they request additional data sources, escalate access when blocked, and route outputs based on context, with increasing autonomy. They look and act like humans, but they operate on fundamentally different scales, giving rise to a different set of problems. Traditional IT governance moves in days or hours, whereas AI agents act in milliseconds. This mismatch is a liability if not matched with real-time monitoring. This is why players like LangChain have moved toward observability platforms like LangSmith, and why Silverfort is betting on dynamic, identity-tied permissions that adapt in real time.

'We're well beyond dealing with simply automation scripts anymore,' Kovetz explains. 'These agents behave in ways that resemble humans, but they act in milliseconds and often make decisions on the fly. And that requires an entirely new level of runtime control.'

The industry is, ironically, using automation to secure automation. AI is both the problem and the solution. But even that is just the start. Even after we've figured out everything from runtime control to least-privilege access and dynamic policy enforcement, we'll still have a host of challenges ahead of us, not least the question of how to coordinate agents' work alongside their human collaborators.

Amidst all this uncertainty, it's tempting to frame AI agents as a risk to be managed. But that misses the point. They are fundamentally a capability to be unlocked. And the companies that learn to manage them well will outpace those that don't. Here are three truths to ground your strategy:

1. AI agents are here, and they are powerful. What you're seeing are no longer demos. From support desks to DevOps pipelines, agents are doing real work and replacing real workflows. Enterprises must move past experimentation and prepare for scale, and that includes recognizing their power to wreak havoc as well as to push outcomes.

2. Agents need new management paradigms. They blur the lines between software, user, and employee. Managing them like APIs or treating them like junior staff won't cut it. They have an identity and require visibility, role definition, ownership mapping, and policy-based constraints.

3. Security is the unlock, not the blocker. Agent-based automation won't go mainstream until organizations feel confident they can control, audit, and limit these systems. Tying agents to human identities, setting runtime guardrails, and enforcing least privilege is the shape of things to come. (A minimal sketch of that pattern follows below.)

The companies that succeed won't be the ones who build the flashiest bots. They'll be the ones who manage their agentic workforce well.
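As a rough illustration of identity-tied, least-privilege control for agents, here is a minimal Python sketch. The AgentIdentity class, the permission strings, and the audit log are hypothetical illustrations of the pattern, not Silverfort's product or any real IAM API.

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    agent_id: str
    owner: str                      # every agent maps back to a human owner
    allowed_actions: set[str] = field(default_factory=set)

audit_log: list[tuple[str, str, bool]] = []

def authorize(agent: AgentIdentity, action: str) -> bool:
    """Runtime guardrail: deny by default, log every decision."""
    allowed = action in agent.allowed_actions
    audit_log.append((agent.agent_id, action, allowed))
    return allowed

# Usage: a support-desk agent may read and answer tickets, but not touch billing.
support_bot = AgentIdentity(
    agent_id="support-bot-01",
    owner="jane.doe@example.com",
    allowed_actions={"tickets:read", "tickets:reply"},
)

assert authorize(support_bot, "tickets:read")        # permitted
assert not authorize(support_bot, "billing:refund")  # denied and audited
```

Deny-by-default plus a per-decision audit trail addresses the two gaps the article highlights: agents holding more access than they need, and security teams lacking visibility into what agents actually did.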

Yahoo
2 days ago
- Business
- Yahoo
Patra Announces Pratap Sarker as New CEO
Seasoned executive to accelerate innovation and growth and strengthen company's leading position as next-generation platform for insurance operations

EL DORADO HILLS, Calif., June 18, 2025--(BUSINESS WIRE)--Patra, a trusted leader in technology-enabled insurance workflow optimization, announced today the appointment of Pratap Sarker as its chief executive officer. Sarker brings more than 30 years of experience leading and transforming technology-driven services companies. He will also join the company's board of directors. John Simpson, who has served as CEO of Patra since he founded the company in 2005, will assume the role of chairman of the board, where he will work alongside Sarker to guide Patra's long-term vision and strategy.

"I am incredibly honored and excited to join the Patra team as CEO. Under John's 20 years of leadership, Patra has transformed the insurance industry by consistently innovating, leveraging seasoned insurance professionals, and focusing on the customer experience. Patra's AI and automation tools have already brought performance optimization and value to its clients across the industry," said Sarker. "I am committed to building on that foundation as we enhance Patra's best-in-class services with a platform built on Patra's next-generation AI-driven workflow automation tools designed to deliver operational excellence across the insurance value chain. I look forward to working with the talented team at Patra to build upon John's remarkable legacy."

A seasoned technology executive, Sarker has vast experience spanning the financial services, insurance, healthcare and professional services industries. Prior to Patra, Sarker served in executive leadership roles at companies such as Accenture, Infosys, and IBM. Most recently, he was CEO of Greenway Health, where he led a multi-year transformation to modernize the product platform leveraging GenAI and Agentic AI, improve customer satisfaction, and position the company for long-term growth. Previously, he was the president and group CEO of Conduent's $2.5 billion commercial sector business, where he championed strategic transformation across the sector.

"Pratap is an extraordinary leader whose cross-functional and cross-industry knowledge and vast experience leading and transforming technology-driven services companies will be vital assets for Patra's continued growth," said Simpson. "As a people-first leader, Pratap is passionate about aligning teams with purpose, fostering a culture of execution and learning, and delivering value to clients with integrity and impact. I am incredibly pleased to welcome Pratap to the Patra team."

"Pratap joins us at a pivotal time in the insurance industry as brokers and wholesalers look to drive growth and expand margins through the use of high-quality, cost-effective outsourced technology solutions allowing them to best capture the benefits of AI at scale across their businesses," said Mike Vostrizansky, partner at FTV Capital and member of Patra's board. "With his deep technology and business process experience, Pratap is well-positioned to lead Patra and the industry more broadly through what we see as an exciting, yet critical, era of digital transformation. Under John's leadership, Patra has always been an innovator and today is in a great position to accelerate growth, with a strong financial footing and growing client base, as a result of Patra's best-in-class, customer-centric solutions.
We're excited to work with Pratap and the Patra team to build on that success and continue to press the insurance industry into the future."

About Patra

Patra is a leading provider of technology-enabled insurance workflow optimization and AI-powered software solutions. Patra powers insurance processes by optimizing the application of technology with insurance professionals and seasoned process executives, supporting insurance organizations as they sell, deliver, and manage policies and customers through our PatraOne platform. Patra AI, Patra's recently launched suite of advanced AI-powered solutions, powers workflow optimization that allows agencies, MGAs, wholesalers, and carriers to capture the Patra Advantage – profitable growth and organizational value.

Contacts
Simon Davis, Chief Financial Officer and Chief Administrative Officer
sdavis@ (925) 381-9230