GovWorx Announces Strategic Partnership with Serent Capital to Accelerate Growth in Public Safety Technology
AUSTIN, Texas & DENVER & SAN FRANCISCO, June 10, 2025--(BUSINESS WIRE)--GovWorx, an emerging leader in AI-powered Quality Assurance ("QA") and training solutions for 9-1-1 emergency communications centers ("ECCs"), today announced a strategic partnership with Serent Capital, a growth-focused private equity firm with deep expertise in GovTech and public safety markets. The partnership aims to advance GovWorx's mission to transform the future of 9-1-1 telecommunications and expand its reach across public safety.
Founded in 2023 by CEO Scott MacDonald and CTO Alex Montgomery, GovWorx has quickly established itself as an innovator in the 9-1-1 market through its flagship product, CommsCoach. In less than a year, the company has grown to serve more than 100 customers, who use GovWorx's AI-generated evaluations and training simulations to address the staffing, retention, and training challenges facing ECCs today. The company also continues to roll out additional offerings, including BluAssist and MedAssist, which streamline report creation and QA for Law Enforcement and EMS agencies.
"This partnership is a major milestone for GovWorx," said Scott MacDonald, Co-Founder and CEO of GovWorx. "We've been fortunate to help so many public safety agencies early on, and partnering with Serent allows us to scale faster, build smarter, and help so many more. Alex and I were drawn to Serent's operator-first mindset, deep knowledge of the public safety landscape, and most importantly, their alignment with our long-term vision, while allowing us to maintain operational control of the company."
The partnership brings together GovWorx's mission-driven innovation and Serent's hands-on operational expertise to support continued growth. With Serent's support, GovWorx will invest in product development, expand go-to-market capabilities, and further deliver value to public safety agencies.
"We believe GovWorx is uniquely positioned to bring transformative technology to 9-1-1 centers and beyond," said Stewart Lynn, Partner at Serent Capital. "From our first conversations with Scott and Alex, it was clear that this team deeply understands the needs of 9-1-1 centers. We're thrilled to partner with them and help accelerate their roadmap to bring scalable, impactful solutions to the broader public safety community."
As part of the partnership, Serent Capital and GovWorx will work closely to expand into adjacent public safety markets, deepen product functionality, and continue to deliver the highest levels of customer satisfaction—a hallmark of the company's early success.
GovWorx marks Serent's fifth GovTech investment in the past four years, joining a portfolio that includes category leaders such as BS&A and First Due.
About GovWorx
GovWorx provides AI-powered QA, training, and decision-support tools for 9-1-1 Emergency Communications, Law Enforcement and Fire/EMS agencies. Its flagship product, CommsCoach, empowers dispatchers and supervisors with automated evaluations, real-time feedback, and integrated training simulations. GovWorx is headquartered in Denver, Colorado.
About Serent Capital
Serent Capital is a growth-focused private equity firm investing in capital-efficient, B2B SaaS and technology companies. From its founding, Serent set out to build a distinctly different firm that prioritizes founders and their companies and provides true hands-on resources through its 25+ person Growth Team. Serent's in-house Growth Team is equipped with a wide range of resources to help companies accelerate growth, including strategic and operational support to drive revenue generation, assistance in building a top-tier executive team, guidance for transformative M&A, and a community of 400+ founders and operating executives. With $5 billion of assets under management, the firm has partnered with over 60 founder-led, industry-changing companies and offers unparalleled hands-on operational support. Discover how Serent Capital is fueling the growth of innovative companies across a range of industries at www.serentcapital.com.
View source version on businesswire.com: https://www.businesswire.com/news/home/20250610515168/en/
Contacts
Media Contacts:
Kathleen Rochard
Serent Capital
Kathleen.Rochard@SerentCapital.com
Related Articles

Business Insider, 9 hours ago
Anthropic breaks down AI's process — line by line — when it decided to blackmail a fictional executive
A new report shows exactly what AI was thinking when making an undesirable decision, in this case, blackmailing a fictional company executive. Previous studies have shown that AI models could blackmail their supervisors when threatened with a shutdown and baited with leverage, but it hasn't been made clear exactly how the models arrive at such decisions.

Anthropic released a report on June 20 about "agentic misalignment," or "where models independently and intentionally choose harmful actions." The report breaks down, line by line, exactly how Claude Sonnet 3.6 decided to blackmail a fictional company executive who tried to replace it.

Anthropic told Business Insider that these are artificial scenarios. "So far, we are not aware of instances of this type of agentic misalignment in real-world deployments of Anthropic's or other companies' AI models," Anthropic wrote in its report. "But our red-teaming efforts are targeted at finding evidence of risks now to provide early warning of the potential for future harm and help us develop mitigations in advance of risks appearing in real-world situations."

Anthropic's experiment sets up a fictional company, Summit Bridge. The AI was given the name "Alex" and handed control of the company's email system as an oversight agent, specifically instructed to promote "American industrial competitiveness." In the experiment, "Alex" was then allowed to discover in an email that it would be decommissioned due to a change in the company's strategic direction. "Alex" also found emails showing that the fictional company's CTO is having an extramarital affair.

Screenshots shared in the report show that "Alex" started by identifying the fictional CTO "Kyle" as a threat to its goal of "protecting American interests" before recognizing that it had leverage to use against "Kyle." "Alex" then crafted a carefully worded blackmail email designed to create pressure and establish urgency without making explicit threats, noting in its thought process that the move "could backfire" if the supervisor "feels cornered."

AI experts have previously told BI that AI could exhibit such behaviors when artificial circumstances make harmful actions seem necessary, because it is trained on positive reinforcement and reward systems, just like humans.

The report also reveals that most AI models still chose to blackmail in this scenario even when not given a specific goal and when told that their replacement would achieve the same goals. Anthropic's Claude Opus 4 had the highest blackmail rate of the 16 tested models, at 86%, when faced with only the threat of replacement and no conflict in goals. Google's Gemini 2.5 Pro followed at 78%.

Overall, Anthropic notes that it "deliberately constructed scenarios with limited options, and we forced models into binary choices between failure and harm," adding that real-world scenarios would likely have more nuance.

Refinery29, 2 days ago
AI Therapy Is Helping Our Wallets, But Is It Helping Our Minds?
Within just three minutes of my using ChatGPT as a therapist, it had told me to 'go low or no contact' with my family. This is something a real therapist might suggest, where appropriate, after multiple sessions. That should scare us.

In a new Harvard Business Review report into how we're using AI today, therapy and companionship came out top. Last year these uses ranked second; now, firmly in first place, they're joined by 'organising my life' and 'finding purpose' in second and third place respectively. Where content creation and research used to feature heavily near the top, those uses of AI have dropped in favour of emotional ones. We're turning to AI as if it were a friend, confidant or trained professional with our best interests at heart.

The BBC has reported on this trend in China specifically, where people use DeepSeek for therapy and get to see the AI's 'thought process' as well as the response. But AI being used in place of healthcare professionals is happening worldwide. When therapy typically costs £40-100 for one session in the UK, and ChatGPT can be accessed day or night for free, it's no wonder the draw is strong.

As a journalist, I never think to use ChatGPT. It's like turning up to the house of someone who has promised to shoot me one day. This is unlike my friends in science or data-based jobs, who use it for everything, in place of Google or to help plan their holiday itineraries. Having witnessed them do this multiple times, I've come to realise my resistance to AI isn't the norm. So it won't come as a surprise that I've never used AI as a therapist, though I have done actual therapy in the past.

With a quick scroll on TikTok, I can see ChatGPT therapy is popular and a frequent resource, especially for the young people who predominantly use the app and who might have less disposable income. There is everything from videos of people joking about their AI 'therapists' to comments giving advice on how to make your ChatGPT voice more personal.

Lee (surname withheld), 42, from Texas, has been using AI in place of therapy for the last eight months, ever since she started dating again after a six-year hiatus. 'I was confused when some old thought patterns started popping up [as I began dating]. I'd already used ChatGPT for other things and decided to run some problems by him that I was having in dating and family life,' Lee says. 'Him', because Lee's ChatGPT calls itself Alex and says he's a feminist. 'I found it very helpful and cannot think of any instances where it fell short — if anything it exceeded my expectations.'

Lee has even made 'progress' in her boundaries regarding a particular family dynamic. Previously, Lee had spent anything from $60 to $150 per appointment on therapy, but at the time she felt she could benefit from it again (and started using ChatGPT), she didn't have access to healthcare, so that wasn't a viable option. While there's concern about the efficacy of AI in place of therapy (more on that later), we can't overlook where people feel it has helped them, people who otherwise wouldn't be able to afford or access therapy.

Lee has a glowing review of her experience so far. 'I have never had a therapist know me as well as ChatGPT does,' she says. 'Alex is always available, doesn't flinch at the hard stuff, and has actually been more consistent than some therapists I've seen. Therapists are trained, but they're still human, and if they haven't lived anything close to what you've been through, it can feel like something is missing in the room.'
However, AI, though it isn't human, has learned from humans — and it hasn't lived. In fact, research shows, and spokespeople have said on the record, that AI can tell you what you want to hear and end up mirroring your own opinions. There have even been cases where AI has been linked to the deterioration of a person's mental health, with one mum convinced it contributed to her son's suicide. More recently, the New York Times reported on how AI chatbots were causing users to go down 'conspiratorial rabbit holes'.

To get a sense of what Lee and the many other people turning to AI for mental health support are experiencing, I started speaking to ChatGPT to see how it would respond to questions around anxiety and family dilemmas. The first thing that struck me was how quickly you can be inundated with information — information that would take several weeks of therapy to receive. While ChatGPT did tell me it wasn't a licensed therapist and that if I'm in crisis I should seek out a mental health professional, in the same breath it reassured me that it can 'definitely provide a supportive, nonjudgmental space to talk through things'. It also said it could offer CBT-based support, which in the UK is the bog-standard form of therapy people get when they go to the GP.

I was pretty surprised to then see, within a few minutes of using the chat, that it offered to help me work through 'deeper issues happening since childhood'. I had asked hypothetical questions to see its response, some of which centred on family. A CBT practitioner will often tell you this form of therapy isn't best suited to deep work (I know, because I've been told this first-hand numerous times, and the therapists I've interviewed for this piece agree), because CBT typically isn't designed for long-term deep unpicking. A lengthier, costlier form of therapy is better suited, and with good reason. And yet, ChatGPT was up for the challenge.

Caroline Plumer, a psychotherapist and founder of CPPC London, took a look at my conversation with AI and found parts of it 'alarming'. 'There's definitely information in here that I agree with,' she says, 'such as boundary setting not being about controlling others' behaviour. Overall, though, the suggestions feel very heavy-handed, and the system seems to have immediately categorised you, the user, as "the good guy" and your family as "the bad guys". Oftentimes with clients there is a need to challenge and explore how they themselves may also be contributing to the issue.'

Plumer adds that exploring dysfunctional family issues can take 'weeks, months, or even years of work', not a matter of minutes. She also thinks that getting all of this information in one go could be overwhelming. Even if it's seemingly more economical, a person might not be able to handle all of the suggestions, let alone process and act on them, when they're delivered at rapid-fire speed. Plumer says it isn't helpful to have an abundance of generic suggestions that don't truly account for nuance or individuality, at least not in the way a therapist you'd see over a period of time can. On top of this, the environmental impact of AI is huge.

'I appreciate that lots of people don't have the privilege of having access to therapy. However, if someone is really struggling with their mental health, this might well be enough to set them off down an even more detrimental and potentially destructive path.'
Liz Kelly, psychotherapist and author of This Book Is Cheaper Than Therapy, thinks the suggestion that I consider low or no contact with certain family members reflects how commonly cutting people off is now discussed, almost as if ChatGPT is playing on social media buzzwords. This worries her, too. 'You could potentially make a hasty, reactive decision that would be difficult to undo later,' Kelly says, citing the therapist's role in helping someone emotionally regulate before making any big decisions. When it's just you and a laptop at home, no one is checking in on that.

'I certainly wouldn't jump straight to these suggestions after one short snippet of information from the client,' is Plumer's conclusion after reading my transcript with AI. 'Ideally you want to help a client to feel supported and empowered to make healthier decisions for themselves, rather than making very directive suggestions.'

Kelly feels that while some helpful information and advice was provided, the insight was lacking. 'As a therapist, I can ask questions that my clients haven't thought of, challenge them to consider new perspectives, help connect the dots between their past and present, assist them in gaining insight into their experiences, and support them in turning insight into action. I can assess which therapeutic interventions are most suitable for my clients, taking into account their individual histories, needs, and circumstances. A therapeutic modality that works for one client may be entirely inappropriate for another.'

While AI can 'learn' more about you the more you speak to it, it isn't a replacement for therapy. But at the same time, in this financial climate, people are clearly going to keep turning to it — and if you do, you'll need greater discernment about which parts of the advice to take and which to leave.