
Latest news with #experimentation

Why Product Analytics And Experimentation Must Converge

Forbes

5 days ago

  • Business


Dan Rogers, CEO of LaunchDarkly.

For too long, software teams have been forced to choose between knowing what's happening and understanding why. Product analytics tells us where users drop off, but not what would work better. Experimentation lets us test new features, but often with little context about where to start or which users to target. That separation has created blind spots, bottlenecks and bad decisions.

Here's the reality: Observing user behavior without testing hypotheses is passive. Testing ideas without grounding them in real data is reckless. And in today's fast-moving software economy, where AI is reshaping everything from feature behavior to user expectations, neither approach on its own is enough. That's why the smartest companies aren't just experimenting—they're converging experimentation with product analytics to form a single, continuous learning loop.

Product analytics and experimentation were never meant to operate in isolation. Yet in many companies, they still do. Analytics teams study dashboards and funnel reports, trying to extract insights weeks after a release. Meanwhile, product and engineering teams run A/B tests that aren't always informed by behavioral data or, worse, aren't measured rigorously post-launch. It's a disconnected process that leads to slow iteration, guesswork and features that underperform.

This siloed model might have worked a decade ago. It doesn't anymore. In today's environment, where user expectations shift rapidly and AI models behave unpredictably, the only way to build confidently is to create a real-time loop between insight and action. When analytics and experimentation converge, every behavior pattern becomes a hypothesis to test. Every test becomes a data point to analyze. Every decision becomes more grounded, targeted and measurable.

Take a familiar example. Let's say your analytics show users abandoning the checkout flow at the payment stage. Without experimentation, you might guess it's the form layout, rewrite some code and hope conversion improves. But when you unify analytics and experimentation, you can design an experiment with different form layouts, deliver those layouts to specific user segments (like first-time buyers versus returning customers) and track conversion alongside other behavioral signals. In a matter of days, you're not just identifying what's broken—you're discovering how to fix it, who it affects most and what the downstream impact will be.
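The article stays at the conceptual level; as an illustration only (none of this code comes from the piece, and it is not LaunchDarkly's SDK or any vendor's API), the mechanics of such a checkout experiment can be sketched in a few lines: each user is bucketed deterministically into a layout variant, and every conversion event carries the variant and segment so the analytics side can slice results.

```python
import hashlib

# Hypothetical sketch of the checkout experiment described above.
# Not any vendor's SDK; names, events and flag keys are illustrative only.

VARIANTS = ["control_layout", "single_page_form", "wallet_first_form"]

def assign_variant(user_id: str, experiment: str = "checkout-form-test") -> str:
    """Hash the user id with the experiment name so each user gets a
    stable variant across sessions without storing any state."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

def track(event: str, user_id: str, segment: str, variant: str) -> None:
    """Stand-in for the shared analytics pipeline; real code would send
    this event to the same system that tracks overall product usage."""
    print({"event": event, "user": user_id, "segment": segment, "variant": variant})

# Example: a first-time buyer hits checkout and later converts.
user, segment = "user-123", "first_time_buyer"
variant = assign_variant(user)
track("checkout_viewed", user, segment, variant)
track("checkout_completed", user, segment, variant)
```

Deterministic hashing keeps assignment sticky without a lookup table, which is one common way experimentation tools implement bucketing; tagging every event with both segment and variant is what lets conversion be compared between first-time and returning buyers.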
Savage X Fenty (a client of LaunchDarkly) offers an example of how some companies are integrating experimentation into their day-to-day operations. By embedding testing directly into workflows, they've been able to move more quickly and identify useful insights earlier in the process.

This same model is proving critical in AI-powered products, which are inherently unpredictable. With traditional development, teams can test deterministic logic. But with AI, you're managing variables like prompt structure, model drift and real-time learning. Unified experimentation and analytics allow AI teams to iterate on models and parameters in real time while monitoring performance, user satisfaction and potential risks.

It starts with identifying where users struggle. Instead of guessing, teams can use analytics to reveal friction points like areas of drop-off, hesitation or confusion. From there, they form hypotheses rooted in actual behavior, not hunches. Experiments are then crafted to target those behaviors, often delivered to different user segments to see how responses vary. Once experiments are live, results flow directly into the same analytics infrastructure that tracks overall product usage, ensuring teams aren't evaluating changes in a vacuum. Over time, this becomes a habit. Teams observe, test, learn and refine. Not once, but continuously.

Unifying product analytics and experimentation isn't just a more efficient way to work—it fundamentally changes the way teams build. Product managers, engineers and data scientists begin to operate from a shared reality. Instead of siloed reports and speculative ideas, they have a common, evolving source of truth.

This is how modern software development should function. Continuous delivery needs continuous learning. Anything less is leaving value and velocity on the table.

The companies that get this right won't just build faster. They'll build smarter. They'll ship products that are tuned to their users, backed by evidence and constantly improving. They'll foster a culture of curiosity, rigor and resilience. And in a world of constant change, that mindset becomes the true competitive advantage. Because today, the winning teams aren't just the ones who move quickly—they're the ones who learn even faster.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.

Datadog Expands LLM Observability with New Capabilities to Monitor Agentic AI, Accelerate Development and Improve Model Performance

Associated Press

June 10, 2025

  • Business


AI Agent Monitoring, LLM Experiments and AI Agents Console help organizations measure and justify agentic AI investments

New York, New York--(Newsfile Corp. - June 10, 2025) - Datadog, Inc. (NASDAQ: DDOG), the monitoring and security platform for cloud applications, today announced new agentic AI monitoring and experimentation capabilities to give organizations end-to-end visibility, rigorous testing capabilities, and centralized governance of both in-house and third-party AI agents. Presented at DASH, Datadog's annual observability conference, the new capabilities include AI Agent Monitoring, LLM Experiments and AI Agents Console.

The rise of generative AI and autonomous agents is transforming how companies build and deliver software. But with this innovation comes complexity. As companies race to integrate AI into their products and workflows, they face a critical gap: most organizations lack visibility into how their AI systems behave, what agents are doing and whether they are delivering real business value.

Datadog is addressing this gap by bringing observability best practices to the AI stack. Part of Datadog's LLM Observability product, these new capabilities allow companies to monitor agentic systems, run structured LLM experiments, and evaluate usage patterns and the impact of both custom and third-party agents. This enables teams to deploy quickly and safely, accelerate iteration and improvements to their LLM applications, and prove impact.

'A recent study found only 25 percent of AI initiatives are currently delivering on their promised ROI—a troubling stat given the sheer volume of AI projects companies are pursuing globally,' said Yrieix Garnier, VP of Product at Datadog. 'Today's launches aim to help improve that number by providing accountability for companies pushing huge budgets toward AI projects. The addition of AI Agent Monitoring, LLM Experiments and AI Agents Console to our LLM Observability suite gives our customers the tools to understand, optimize and scale their AI investments.'

Now generally available, Datadog's AI Agent Monitoring instantly maps each agent's decision path—inputs, tool invocations, calls to other agents and outputs—in an interactive graph. Engineers can drill down into latency spikes, incorrect tool calls or unexpected behaviors like infinite agent loops, and correlate them with quality, security and cost metrics. This simplifies the debugging of complex, distributed and non-deterministic agent systems, resulting in optimized performance.

'Agents represent the evolution beyond chat assistants, unlocking the potential of generative AI. As we equip these agents with more tools, comprehensive observability is essential to confidently transition use cases into production. Our partnership with Datadog ensures teams have the visibility and insights needed to deploy agentic solutions at scale,' said Timothée Lacroix, Co-founder & CTO at Mistral AI.

In preview, Datadog launched LLM Experiments to test and validate the impact of prompt changes, model swaps or application changes on the performance of LLM applications. The tool works by running and comparing experiments against datasets created from real production traces (input/output pairs) or uploaded by customers. This allows users to quantify improvements in response accuracy, throughput and cost—and guard against regressions.
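The release describes the pattern but not code; purely as a hedged illustration (this is not Datadog's LLM Experiments API), a minimal harness for that pattern might replay a dataset of input/output pairs against two prompt variants and compare accuracy and latency before promoting a change.

```python
import time

# Hypothetical sketch of the experiment pattern described above; it is
# NOT Datadog's LLM Experiments API. A dataset of input/output pairs
# (e.g., captured from production traces) is replayed against two
# variants, and simple quality/latency metrics are compared.

dataset = [
    {"input": "What is 2 + 2?", "expected": "4"},
    {"input": "Capital of France?", "expected": "Paris"},
]

def run_experiment(name, generate, data):
    """Run one variant over the dataset and return summary metrics."""
    correct = 0
    start = time.perf_counter()
    for case in data:
        if generate(case["input"]).strip() == case["expected"]:
            correct += 1
    elapsed = time.perf_counter() - start
    return {"variant": name, "accuracy": correct / len(data), "seconds": elapsed}

# `generate` would wrap a real LLM call; stubs keep the sketch runnable.
baseline = run_experiment("prompt_v1", lambda q: "4" if "2" in q else "Paris", dataset)
candidate = run_experiment("prompt_v2", lambda q: "Paris", dataset)

# Promote the candidate only if it doesn't regress the baseline.
print(baseline)
print(candidate)
```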
'AI agents are quickly graduating from concept to production. Applications powered by Claude 4 are already helping teams handle real-world tasks in many domains, from customer support to software development and R&D,' said Michael Gerstenhaber, VP of Product at Anthropic. 'As these agents take on more responsibility, observability becomes key to ensuring they behave safely, deliver value, and stay aligned with user and business goals. We're very excited about Datadog's new LLM Observability capabilities that provide the visibility needed to scale these systems with confidence.'

Moreover, as organizations embed external AI agents—such as OpenAI's Operator, Salesforce's Agentforce, Anthropic's Claude-powered assistants or IDE copilots—into critical workflows, they need to understand their behavior, how they're being used, and what permissions they have across multiple systems to better optimize their agent deployments. To meet this need, Datadog unveiled AI Agents Console in preview, which allows organizations to establish and maintain visibility into in-house and third-party agent behavior; measure agent usage, impact and ROI; and proactively check for security and compliance risks.

To learn more about Datadog's latest AI Observability capabilities, please visit:

AI Agent Monitoring, LLM Experiments and AI Agents Console were announced during the keynote at DASH, Datadog's annual conference. The replay of the keynote is available here. During DASH, Datadog also announced launches in Applied AI, AI Security, Log Management and released its Internal Developer Portal.

About Datadog

Datadog is the observability and security platform for cloud applications. Our SaaS platform integrates and automates infrastructure monitoring, application performance monitoring, log management, user experience monitoring, cloud security and many other capabilities to provide unified, real-time observability and security for our customers' entire technology stack. Datadog is used by organizations of all sizes and across a wide range of industries to enable digital transformation and cloud migration, drive collaboration among development, operations, security and business teams, accelerate time to market for applications, reduce time to problem resolution, secure applications and infrastructure, understand user behavior and track key business metrics.

Forward-Looking Statements

This press release may include certain 'forward-looking statements' within the meaning of Section 27A of the Securities Act of 1933, as amended, or the Securities Act, and Section 21E of the Securities Exchange Act of 1934, as amended, including statements on the benefits of new products and features. These forward-looking statements reflect our current views about our plans, intentions, expectations, strategies and prospects, which are based on the information currently available to us and on assumptions we have made. Actual results may differ materially from those described in the forward-looking statements and are subject to a variety of assumptions, uncertainties, risks and factors that are beyond our control, including those risks detailed under the caption 'Risk Factors' and elsewhere in our Securities and Exchange Commission filings and reports, including the Annual Report on Form 10-K filed with the Securities and Exchange Commission on May 6, 2025, as well as future filings and reports by us.
Except as required by law, we undertake no duty or obligation to update any forward-looking statements contained in this release as a result of new information, future events, changes in expectations or otherwise.

Contact: Dan Haggerty, [email protected]

To view the source version of this press release, please visit

Survivors of MK-Ultra brainwashing experiments want judge to approve class-action lawsuit

CTV News

June 10, 2025

  • Health


It was called the MK-Ultra project, meant to experiment on mind control using patients as guinea pigs. Lana Dean Ponting remembers her parents having her hospitalized at the Allan Memorial Institute because she was a troublesome teen who often ran away. 'I was drugged up so bad I can't remember half of what they did to me,' explains the woman, who is about to turn 84.

The abuse wasn't just medical. 'I bore a son when I was at the Allan Memorial, and I got pregnant without ever knowing who the father was.' Ponting says she suffered from the debilitating effects of the treatments all her life.

The experiments were sponsored by the CIA, funded by the Canadian government and handled by a McGill University independent researcher named Donald Ewen Cameron between the 1940s and 1960s. It's reported the medical team used electroshocks and experimental drugs, including LSD, on patients.

Ponting and several other survivors and their families were in court Monday as their lawyer seeks authorization for a class-action lawsuit filed in 2019. It's the first step before the case can move ahead. 'I think there is no question no one has ever taken responsibility. No one has ever apologized. There was some modest compensation in 1992 without any admission of liability,' said lawyer Jeff Orenstein, who's taking on the case on behalf of the Consumer Law Group. He said that in the early 1990s, some survivors were offered settlements without anyone taking responsibility for what happened.

The courts already prevented the group from suing the U.S. government; the CIA successfully argued the courts here have no jurisdiction. The other parties, such as the McGill University Health Centre (MUHC) and the Canadian government, argue they can't be sued because the plaintiffs waited too long. 'There are many psychological reasons of blockages that just don't allow people to take action,' Orenstein said, likening it to women who wait decades to denounce sexual aggressors because of fear and stigma.

Julie Tanny remembers how her father, Charles Tanny, was admitted over a neurological pain issue in his face. The doctors thought he had psychiatric issues and began treating him. His daughter says he came out with permanent mental-health damage from which he never recovered. 'He didn't know me or my two siblings. He remembered my mother, but he didn't remember he had children or that he had a business or anything. And he was very detached. That never changed. He never came back to the person he was before,' Tanny said.

It could take a few months for the court to decide if the class action can be authorized. If the case moves forward, the plaintiffs may finally have a shot at the closure that has eluded them for seven decades.

How to make chemistry fun for kids

Times

June 7, 2025

  • Science


Why is breakfast cereal magnetic? How does sand help you to see? Why did that German guy try to turn wee into gold, and what did he find instead? Elements of the Day doesn't just reel off soon-to-be-forgotten facts about the periodic table. It tells the story of chemical elements in our everyday lives by fusing them with the daily, domestic moments that will be familiar to every child. It's a clever formula — and an effective one.

From wake-up to bedtime, we are introduced to many of the scientific miracles that are happening all around us. 'Elements — and all the stuff you can build from them — make every part of your day possible in unexpected and fascinating ways, though most people don't give them much thought. However, once you open your eyes, you'll start seeing lots of these elements in your daily life,' the author Samantha Lewis says as she urges young readers to experiment with their breakfast.

How to Build an AI-Driven Company Culture

Entrepreneur

May 28, 2025

  • Business


A practical guide for business leaders on how to build a company culture that embraces AI through curiosity, experimentation and hands-on learning.

Opinions expressed by Entrepreneur contributors are their own.

In the early 1900s, as the automotive revolution reshaped industries, blacksmiths and carriage-makers struggled to adapt. More than a century later, we face a similar inflection point with AI. Just as horse-drawn carriages gave way to automobiles, entire industries are being redefined by algorithms today. The question isn't whether your company will adopt AI, but how. And the answer hinges on one critical factor: culture.

Related: How to Create a Workplace Culture That Supports Digital Transformation (and Why It's Important)

What does an "AI culture" look like?

Building an AI-driven culture isn't always about buying tools or hiring machine learning scientists. It's about fostering a mindset where experimentation, learning and human-AI collaboration are core to your company's DNA. Here's how to start:

Model curiosity to dispel fear: Leadership must champion AI, but grassroots innovation is what embeds it into real workflows. At CodeSignal, our engineering team doesn't just use AI — they build with it. From leveraging GitHub Copilot for complex refactoring to fine-tuning custom LLM agents for internal tools, AI is part of their daily toolkit. And it's not just engineering. Our marketers, for instance, prototype campaign ideas in Claude and validate messaging variations with Gemini.

The key? Leaders must model curiosity. Share your own AI experiments — and failures — with your team. CodeSignal has a Slack channel dedicated to experimentation with LLMs, where team members share how they've been using AI and what they're learning ("productivity hacks" are a team favorite). I have been studying AI technology and building AI-native products for over a decade, but that doesn't stop me from continuing to learn. I regularly share what I learn, from using the latest LLM models for everything from code writing to email writing to image generation, and debate with my colleagues about how different models perform on complex math challenges. The point is to set an example: incorporating AI into your daily workflow doesn't have to be intimidating; in fact, it can be quite enjoyable. It also reinforces that we're all learning this new technology and figuring out together how best to use it in our work.

Provide access to the right AI tools: Today, tools like ChatGPT and Midjourney are widely available, yet many companies still gatekeep access. That's a big mistake. We give every team member a ChatGPT Teams subscription, with the expectation that they'll play around with it and even create their own GPTs to augment their workflow. In the past year, our employees have created over 50 custom GPTs that help them draft sales emails, gather market insights, extract data, answer HR questions and more.

Make AI literacy a core expectation — then build on it: Giving people access to AI tools is necessary, but it's just the first step. To create a meaningful impact, leaders must pair access to tools with training. CodeSignal does this by asking every team member to complete AI literacy training, where they build skills in using and interacting with LLMs through hands-on practice.
Our team recently finished a "spring training" in generative AI literacy, where everyone at the company (even me!) completed a series of experiential learning courses online and shared our learnings, questions and aha moments in a Slack channel. We boosted motivation for completing the training by setting a goal of 95% participation — rewarded with cool new swag when we met the goal.

Next, we're building on this foundation of AI literacy by running an AI hackathon at our next in-person meetup. There, team members will break into teams based on how they use AI and their depth of knowledge. Some teams will explore using LLMs to draft creative campaigns and set project timelines, for example, while others will build custom GPTs to automate actual parts of their job. The machine learning experts on our team, meanwhile, will work on building innovative new AI applications from the ground up. The goal here is to set the expectation that everyone uses AI, yes — but more than that, to give team members ownership of what they do with it and the freedom to choose which parts of their job can best be complemented by AI.

Related: AI is the Coworker of the Future — 3 Ways Employers Can Get Ready

The stakes have never been higher

For some organizations and teams, adopting AI will be uncomfortable at first. AI tools raise a range of new technical, regulatory and ethical questions. Many employees fear that AI will displace them from their jobs. That discomfort is real — and it deserves our attention. As leaders, our responsibility is to guide our teams through uncertainty with integrity and transparency by showing how embracing AI can help them become even more impactful in their jobs. I do this by modeling AI use in my everyday work and openly sharing what I learn with my team. This gives team members permission to experiment on their own and helps move them from a mindset of fear to curiosity about how AI can be a partner to them in their jobs.

To return to the analogy of the automotive revolution: We're teaching our carriage-makers how to build self-driving cars.

If you're a business leader, ask yourself: Am I modeling what it looks like to learn and take risks? Am I giving my team the tools and training they need to build AI literacy? Am I fostering a culture of exploration and experimentation on my team? The AI revolution is already here, and the future isn't going to wait for companies to catch up. Neither should we.
