
Subscription management platform RevenueCat raises $50 million in Series C funding
By Zaheer Kachwala
(Reuters) - Subscription management platform RevenueCat on Thursday raised $50 million in a Series C funding round led by Bain Capital Ventures, with participation from returning investors Index Ventures, Y Combinator, and Volo Ventures, among others.
The rise of tools that make it easier to build apps has boosted demand for in-app monetization platforms such as RevenueCat, which simplify the management of pricing and subscriptions.
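For a sense of what such a platform abstracts away, here is a minimal sketch of an entitlement check using RevenueCat's React Native SDK (react-native-purchases); the API key and the "pro" entitlement identifier are placeholders, and exact call signatures vary by SDK version.

```typescript
// Minimal sketch of an entitlement check with RevenueCat's React Native SDK.
// The API key and the "pro" entitlement identifier are placeholders.
import Purchases from 'react-native-purchases';

// Configure once at app startup with the app's public SDK key.
Purchases.configure({ apiKey: 'public_sdk_key_placeholder' });

// Returns true if the user's "pro" entitlement is currently active.
// RevenueCat handles receipt validation and cross-store subscription
// state behind this single call.
async function hasProAccess(): Promise<boolean> {
  const customerInfo = await Purchases.getCustomerInfo();
  return customerInfo.entitlements.active['pro'] !== undefined;
}
```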
The San Francisco-based company counts OpenAI as a customer and worked with the AI firm to deploy ChatGPT on mobile following the chatbot's remarkable debut in 2022.
CEO Jacob Eiting told Reuters that 20% of RevenueCat's top 20 apps are AI-based, noting that such apps can charge higher fees and achieve better conversion rates.
The surge in generative artificial intelligence has produced numerous AI startups that need platforms to manage their subscription tiers, as users increasingly turn to conversational chatbots for daily tasks.
With the new funding, RevenueCat plans to expand its workforce and pursue acquisitions.
The company is also making a significant push into the mobile gaming market, developing a virtual currency feature aimed at players who spend readily on in-game purchases.
"We eventually hope to be as important in the game market as we are in the app market," Eiting said.

Related Articles
Yahoo · 26 minutes ago
Intel to outsource marketing to Accenture and AI, resulting in more layoffs
Employees at Intel's marketing division have been informed that many of their roles will be handed over to Accenture, which will use AI to handle tasks traditionally done by Intel staff, reports OregonLive. The decision is part of a company-wide restructuring plan that includes job cuts, automation, and streamlining of execution.

The marketing division has been one of Intel's key strengths since the company began communicating directly with end users through its "Intel Inside" campaign, launched in 1991. However, the company now plans to drastically cut its human-driven marketing efforts, laying off many of its marketing employees in the belief that Accenture's AI will do a better job of connecting Intel with customers. The number of positions affected was not disclosed, but Intel confirmed the changes will significantly alter team structures, with only "lean" teams remaining. Workers will be told by July 11 whether they will remain with the company.

Among other things, the restructuring aims to free internal teams to focus on strategic, creative, and high-value projects rather than routine functions. Intel therefore intends to use Accenture's AI across various aspects of marketing, including information processing, task automation, and personalized communications. Intel has acknowledged the shift to Accenture and said it will not only cut costs but also modernize its capabilities and strengthen its brand. How exactly replacing people with AI will reinforce the brand has not been explained.

"As we announced earlier this year, we are taking steps to become a leaner, faster and more efficient company," a statement by Intel published by OregonLive reads. "As part of this, we are focused on modernizing our digital capabilities to serve our customers better and strengthen our brand. Accenture is a longtime partner and trusted leader in these areas and we look forward to expanding our work together."

In messages to staff published by OregonLive, Intel indicated that part of the restructuring may involve existing employees training Accenture contractors by explaining how Intel's operations work. This knowledge transfer would occur during the transitional phase of the outsourcing plan, although it is unclear how long that phase will take.


Entrepreneur · an hour ago
Engineering the Future with AI
"The biggest challenge isn't just deploying AI, it's embedding it into complex workflows while ensuring compliance, traceability, and IP protection," says K. A. Prabhakaran, Senior Vice President and Chief Technology Officer, Cyient Opinions expressed by Entrepreneur contributors are their own. You're reading Entrepreneur India, an international franchise of Entrepreneur Media. In a rapidly evolving industrial landscape, where the boundaries of design, manufacturing, and aftersales are increasingly blurred by digitalisation, Indian multinational Cyient is placing intelligent engineering at the heart of transformation. Established in 1991 and now employing over 17,000 people globally, Cyient has evolved into a leader in blending deep domain expertise with next-generation technologies like Artificial Intelligence (AI), Generative AI (GenAI), and simulation tools to drive industry-wide innovation. "Our technology portfolio is designed to embed intelligence across the product, plant, and asset lifecycle," shares K. A. Prabhakaran, Senior Vice President and Chief Technology Officer, Cyient. "We are using AI to accelerate design cycles, optimise manufacturing, increase supply chain visibility, and drive predictive maintenance and aftermarket intelligence." From aerospace and railways to energy, healthcare, and telecom, Cyient's technology is delivering measurable outcomes. Their AI- powered tools have shortened product development timelines, improved asset uptime, and enhanced customer service through GenAI- driven diagnostics and contextual assistants. In telecommunications, Cyient automates network planning and fibre deployments, while in healthcare, their CyNet platform aids in precise fetal diagnostics. One notable case is the company's Plant Advisor solution, which has demonstrated a 67 per cent accuracy in recommending efficiency improvements, underscoring the real-world value of AI in operational environments. What keeps Cyient ahead in the game is its strong culture of learning, strategic partnerships, and co-innovation with customers. "We've trained over 5,000 associates in AI, cloud, and platform technologies," says Prabhakaran. "Our Centres of Excellence, especially the GenAI CoE, serve as catalysts for continuous innovation." Cyient's collabration with Microsoft under the 'EnGeneer' initiative is another step forward in transforming engineering lifecycles through AI-led automation. The company also actively engages with analyst communities and customers to align its offerings with evolving market needs. Yet, integrating AI into traditional engineering ecosystems isn't without its hurdles especially in highly regulated sectors like aerospace and healthcare. "The biggest challenge isn't just deploying AI, it's embedding it into complex workflows while ensuring compliance, traceability, and IP protection," explains Prabhakaran. To tackle this, Cyient has established a robust governance framework that includes human- in-the-loop systems, modular deployments for secure data handling, and domain-specific validation gates. "It's this precision and rigour that makes our AI trustworthy and acts as a natural barrier to entry for others," he concludes.


CNBC · an hour ago
Encountered a problematic response from an AI model? More standards and tests are needed, say researchers
As the use of artificial intelligence, both benign and adversarial, increases at breakneck speed, more cases of potentially harmful responses are being uncovered. These include hate speech, copyright infringements and sexual content. The emergence of these undesirable behaviors is compounded by a lack of regulation and insufficient testing of AI models, researchers told CNBC.

Getting machine learning models to behave the way they are intended to is also a tall order, said Javier Rando, a researcher in AI. "The answer, after almost 15 years of research, is, no, we don't know how to do this, and it doesn't look like we are getting better," Rando, who focuses on adversarial machine learning, told CNBC.

However, there are some ways to evaluate risks in AI, such as red teaming. The practice involves individuals testing and probing artificial intelligence systems to uncover and identify potential harms, a modus operandi common in cybersecurity circles. Shayne Longpre, a researcher in AI and policy and lead of the Data Provenance Initiative, noted that there are currently too few people working in red teams.

While AI startups now use first-party evaluators or contracted second parties to test their models, opening the testing to third parties such as regular users, journalists, researchers, and ethical hackers would lead to more robust evaluation, according to a paper published by Longpre and fellow researchers. "Some of the flaws in the systems that people were finding required lawyers, medical doctors to actually vet, actual scientists who are specialized subject matter experts to figure out if this was a flaw or not, because the common person probably couldn't or wouldn't have sufficient expertise," Longpre said.

Adopting standardized 'AI flaw' reports, incentives, and ways to disseminate information on these flaws in AI systems are some of the recommendations put forth in the paper. With this practice having been successfully adopted in other sectors such as software security, "we need that in AI now," Longpre added. Marrying this user-centred practice with governance, policy and other tools would ensure a better understanding of the risks posed by AI tools and their users, said Rando.

Project Moonshot is one such approach, combining technical solutions with policy mechanisms. Launched by Singapore's Infocomm Media Development Authority, Project Moonshot is a large language model evaluation toolkit developed with industry players such as IBM and Boston-based DataRobot. The toolkit integrates benchmarking, red teaming and testing baselines. There is also an evaluation mechanism that allows AI startups to ensure their models can be trusted and do no harm to users, Anup Kumar, head of client engineering for data and AI at IBM Asia Pacific, told CNBC. Evaluation is a continuous process that should be done both before and after the deployment of models, said Kumar, who noted that the response to the toolkit has been mixed: "A lot of startups took this as a platform because it was open source, and they started leveraging that. But I think, you know, we can do a lot more." Moving forward, Project Moonshot aims to include customization for specific industry use cases and to enable multilingual and multicultural red teaming.

Pierre Alquier, Professor of Statistics at the ESSEC Business School, Asia-Pacific, said tech companies are currently rushing to release their latest AI models without proper evaluation. "When a pharmaceutical company designs a new drug, they need months of tests and very serious proof that it is useful and not harmful before they get approved by the government," he noted, adding that a similar process exists in the aviation sector. AI models need to meet a strict set of conditions before they are approved, Alquier added.

A shift away from broad AI tools toward ones designed for more specific tasks would make it easier to anticipate and control their misuse, said Alquier. "LLMs can do too many things, but they are not targeted at tasks that are specific enough," he said. As a result, "the number of possible misuses is too big for the developers to anticipate all of them." Such broad models also make it difficult to define what counts as safe and secure, according to research that Rando was involved in. Tech companies should therefore avoid overclaiming that "their defenses are better than they are," said Rando.