Yum China launches AI assistant for store managers, a groundbreaking advance in its tech-driven growth strategy


'Q-Smart' assistant will provide restaurant managers with broad-based support for day-to-day restaurant operations, allowing them to enhance their focus on delivering excellent customer service
SHANGHAI, June 20, 2025 /PRNewswire/ -- Yum China Holdings, Inc. (NYSE: YUMC and HKEX: 9987, 'Yum China' or the 'Company') today announced the pilot launch of 'Q-Smart', a new hands-free AI-enabled assistant for restaurant managers. Q-Smart helps frontline managers effectively and efficiently manage a wide range of day-to-day tasks, such as labor scheduling, inventory management, and food quality and safety inspection – providing intelligent support for decision-making across a broad spectrum of restaurant operations.
Q-Smart lets managers interact with the system hands-free through wearable devices such as wireless earphones and smartwatches, improving operational efficiency. This differs from traditional restaurant systems, in which employees typically rely on touch screens or PCs that occupy their hands to complete tasks. Using natural language, Q-Smart interacts directly with restaurant managers to help them better manage operational tasks throughout the day.
For example, Q-Smart continuously monitors a restaurant's inventory and compares it against upcoming sales forecasts, reminding managers to make timely ordering and replenishment decisions that help the store optimize inventory use. Q-Smart can understand and respond to managers' voice commands, helping them quickly and accurately conduct hands-free equipment inspections and inventory counts. At the same time, drawing on Yum China's extensive knowledge base, the system can provide real-time support and solutions so that managers can effectively handle urgent operational issues.
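The inventory-and-forecast comparison described above can be illustrated with a minimal sketch. The item names, forecast figures, and safety-stock thresholds below are invented for illustration; Yum China has not published Q-Smart's actual logic, so this is only a generic reorder-point check of the kind such a system might perform.

```python
# Hypothetical reorder-point check: compare on-hand stock with forecast
# demand and flag items that would fall below a safety-stock threshold.
# All names and numbers are invented for illustration.

def reorder_alerts(on_hand, forecast, safety_stock):
    """Flag items whose projected stock (on hand minus forecast demand)
    would fall below the safety-stock threshold, so a manager can reorder."""
    alerts = []
    for item, qty in on_hand.items():
        projected = qty - forecast.get(item, 0)
        if projected < safety_stock.get(item, 0):
            alerts.append((item, projected))
    return alerts

on_hand = {"chicken": 120, "buns": 300, "lettuce": 40}
forecast = {"chicken": 100, "buns": 180, "lettuce": 25}   # next-day demand
safety_stock = {"chicken": 50, "buns": 80, "lettuce": 10}

print(reorder_alerts(on_hand, forecast, safety_stock))  # [('chicken', 20)]
```

A real assistant would layer voice interaction and live sales forecasting on top of a check like this; the sketch only shows the core decision rule.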
Having passed the initial development and testing phase, Q-Smart is now being piloted at select KFC stores. Following this pilot phase, further user feedback will be incorporated, paving the way for a larger-scale rollout in the future.
Leila Zhang, Chief Technology Officer, Yum China, commented: 'Q-Smart is not just an AI tool – it is a potential game-changer for how restaurants can be managed. We believe that Q-Smart will not only help Yum China improve its operational efficiency, but can also serve as an example for the digital transformation and smart development of the catering industry.'
The launch of Q-Smart marks a significant milestone in Yum China's end-to-end digitalization journey over the past decade. KFC China first enabled customers to pay digitally as early as 2015, followed by the launch of KFC's Super App in early 2016. As of March 2025, Yum China's digital loyalty programs (for KFC and Pizza Hut) exceeded 540 million members. Yum China was also one of the earliest companies in China to launch an enterprise cloud platform. Its platform, Yum China Cloud, supports agile iteration of systems and products with high server stability, helping ensure a seamless online user experience.
Yum China began integrating AI-assisted store management and scheduling tools as early as 2019. In 2021, it rolled out its comprehensive AI-powered 'Super Brain' tool, which leverages operational data from Yum China's store network to aid store managers' decision-making. In 2022, the Company introduced handheld Pocket Managers, allowing managers to track operational metrics in real time. Yum China began exploring applications for generative AI (AIGC) in its systems in 2023, and by 2024 the Company had begun integrating AIGC into various business scenarios, including logistics and supply chain, customer service, and various back-office functions.
Q-Smart was officially launched at Yum China's first-ever AI Day event, held on June 20 in Shanghai. The day culminated in the announcement of the winners of the Company's inaugural 'All-Staff Hackathon', an initiative launched in March 2025 to encourage Yum China employees to develop technology-based solutions to business problems and pain points. The competition drew participation from nearly 200 teams across roughly 30 markets nationwide.
At the AI Day opening ceremony, Yum China CEO Joey Wat announced the establishment of a 100 million yuan (US$13.9 million) Frontline Innovation Fund to provide a variety of new resources to further support frontline restaurant employees. The Fund will further bolster the Company's homegrown innovation in technology applications, including making the All-Staff Hackathon a regular annual event.
Wat remarked: 'Yum China has always believed that true innovation must originate from frontline needs and serve frontline scenarios. AI is not only a technical tool to improve efficiency, but also a core partner to stimulate employee creativity.'
Forward-Looking Statements
This press release contains 'forward-looking statements' within the meaning of Section 27A of the Securities Act of 1933 and Section 21E of the Securities Exchange Act of 1934. We intend all forward-looking statements to be covered by the safe harbor provisions of the Private Securities Litigation Reform Act of 1995. Forward-looking statements generally can be identified by the fact that they do not relate strictly to historical or current facts and by the use of forward-looking words such as 'expect,' 'expectation,' 'believe,' 'anticipate,' 'may,' 'could,' 'intend,' 'belief,' 'plan,' 'estimate,' 'target,' 'predict,' 'project,' 'likely,' 'will,' 'continue,' 'should,' 'forecast,' 'outlook,' 'commit' or similar terminology. These statements are based on current estimates and assumptions made by us in light of our experience and perception of historical trends, current conditions and expected future developments, as well as other factors that we believe are appropriate and reasonable under the circumstances, but there can be no assurance that such estimates and assumptions will prove to be correct. Forward-looking statements are not guarantees of performance and are inherently subject to known and unknown risks and uncertainties that are difficult to predict and could cause our actual results or events to differ materially from those indicated by those statements. We cannot assure you that any of our expectations, estimates or assumptions will be achieved. The forward-looking statements included in this press release are only made as of the date of this press release, and we disclaim any obligation to publicly update any forward-looking statement to reflect subsequent events or circumstances, except as required by law. Numerous factors could cause our actual results or events to differ materially from those expressed or implied by forward-looking statements. 
In addition, other risks and uncertainties not presently known to us or that we currently believe to be immaterial could affect the accuracy of any such forward-looking statements. All forward-looking statements should be evaluated with the understanding of their inherent uncertainty. You should consult our filings with the Securities and Exchange Commission (including the information set forth under the captions 'Risk Factors' and 'Management's Discussion and Analysis of Financial Condition and Results of Operations' in our Annual Report on Form 10-K and subsequent Quarterly Reports on Form 10-Q) for additional detail about factors that could affect our financial and other results.
About Yum China Holdings, Inc.
Yum China is the largest restaurant company in China with a mission to make every life taste beautiful. The Company operates over 16,000 restaurants under six brands across over 2,300 cities in China. KFC and Pizza Hut are the leading brands in the quick-service and casual dining restaurant spaces in China, respectively. In addition, Yum China has partnered with Lavazza to develop the Lavazza coffee concept in China. Little Sheep and Huang Ji Huang specialize in Chinese cuisine. Taco Bell offers innovative Mexican-inspired food. Yum China has a world-class, digitalized supply chain, which includes an extensive network of logistics centers nationwide and an in-house supply chain management system. Its strong digital capabilities and loyalty program enable the Company to reach customers faster and serve them better. Yum China is a Fortune 500 company with the vision to be the world's most innovative pioneer in the restaurant industry. For more information, please visit https://ir.yumchina.com/.
View original content: https://www.prnewswire.com/news-releases/yum-china-launches-ai-assistant-for-store-managers-a-groundbreaking-advance-in-its-tech-driven-growth-strategy-302486933.html
SOURCE Yum China Holdings, Inc.


Related Articles

Sword Health Now Valued At $4 Billion, Announces Expansion Into Mental Health Services
Yahoo · 10 minutes ago

Sword Health announced Tuesday that it had raised $40 million in a recent funding round, giving it a $4 billion valuation. Founded in 2015, the healthcare startup has focused on helping people manage chronic pain at home. Using AI tools, the platform connects users with expert clinicians who provide patients with tools for digital physical therapy, pelvic health, and overall mobility health. However, the company says this new round of funding will largely go towards developing a mental health arm of its program, called Mind.

"Today, nearly 1 billion people worldwide live with a mental health condition. Yet care remains fragmented, reactive, and inaccessible," Sword said in the announcement. "Mind redefines mental health care delivery with a proactive, 24/7 model that integrates cutting-edge AI with licensed, Ph.D-level mental health specialists. Together, they provide seamless, contextual, and responsive support any time people need it, not just when they have an appointment."

Sword CEO Virgílio Bento told CNBC that "[Mind is] really a breakthrough in terms of how we address mental health, and this is only possible because we have AI." Users will be equipped with a wearable device called an M-band, which will measure their environmental and physiological signals so that experts can reach out proactively as needed. The program will also offer access to services like traditional talk therapy. Bento told CNBC that a human is "always involved" in patients' care in each of its programs, and that AI is not making any clinical decisions.
For example, if a Sword patient has an anxiety attack, AI will identify it through the wearable and bring it to the attention of a clinician, who can then provide an appropriate care plan. "You have an anxiety issue today, and the way you're going to manage is to talk about it one week from now? That just doesn't work," Bento told CNBC. "Mental health should be always on, where you have a problem now, and you can have immediate help in the moment."

According to Bento, Sword Mind already has a waiting list and is being tested by some of the company's partners, who appreciate its "personalized approach and convenience." "We believe that it is really the future of how mental health is going to be delivered in the future, by us and by other companies," he told CNBC. "AI plays a very important role, but the use of AI — and I think this is very important — needs to be used in a very smart way."

The rest of the cash raised in the funding round, which was led by General Catalyst, will go towards acquisitions, global expansion, and AI development, Sword Health says. © 2025 Benzinga. Benzinga does not provide investment advice. All rights reserved.

Top economist who previously sounded the alarm on tariffs sees a possible scenario where Trump 'outsmarted all of us'
Yahoo · 16 minutes ago

Torsten Sløk, chief economist at Apollo Global Management, laid out a potential scenario where President Donald Trump's tariffs are extended long enough to ease economic uncertainty while also providing a significant bump to federal revenue. That comes as the 90-day pause on Trump's 'reciprocal tariffs' is nearing an end. Businesses and consumers remain in limbo over what will happen next with President Donald Trump's tariffs, but a top economist sees a way to leave them in place and still deliver a 'victory for the world.' In a note on Saturday titled 'Has Trump Outsmarted Everyone on Tariffs?', Apollo Global Management Chief Economist Torsten Sløk laid out a scenario that keeps tariffs well below Trump's most aggressive rates long enough to ease uncertainty and avoid the economic harm that comes with it. 'Maybe the strategy is to maintain 30% tariffs on China and 10% tariffs on all other countries and then give all countries 12 months to lower non-tariff barriers and open up their economies to trade,' he speculated. That comes as the 90-day pause on Trump's 'reciprocal tariffs,' which triggered a massive selloff on global markets in April, is nearing an end early next month. The temporary reprieve was meant to give the U.S. and its trade partners time to negotiate deals. But aside from an agreement with the U.K. and another short-term deal with China to step back from prohibitively high tariffs, few others have been announced. Meanwhile, negotiations are ongoing with other top trading partners. Trump administration officials have been saying for weeks that the U.S. is close to reaching deals. On Saturday, Sløk said extending the deadline one year would give other countries and U.S. businesses more time to adjust to a 'new world with permanently higher tariffs.' An extension would also immediately reduce uncertainty, giving a boost to business planning, employment, and financial markets. 
'This would seem like a victory for the world and yet would produce $400 billion of annual revenue for US taxpayers,' he added. 'Trade partners will be happy with only 10% tariffs and US tax revenue will go up. Maybe the administration has outsmarted all of us.' Sløk's speculation is notable as he previously sounded the alarm on Trump's tariffs. In April, he warned tariffs have the potential to trigger a recession by this summer. Also in April, before the U.S. and China reached a deal to temporarily halt triple-digit tariffs, he said the trade war between the two countries would pummel American small businesses.

More certainty on tariffs would give the Federal Reserve a clearer view on inflation as well. For now, most policymakers are in wait-and-see mode, as tariffs are expected to have stagflationary effects. But a split has emerged. Fed Governor Christopher Waller said Friday that economic data could justify lower interest rates as early as next month, expecting only a one-off impact from tariffs. But San Francisco Fed President Mary Daly also said Friday that a rate cut in the fall looks more appropriate than a cut in July.

Still, Sløk isn't alone in wondering whether Trump's tariffs may not be as harmful to the economy and financial markets as feared. Chris Harvey, Wells Fargo Securities' head of equity strategy, expects tariffs to settle in the 10%-12% range, low enough to have a minimal impact, and sees the S&P 500 soaring to 7,007, making him Wall Street's biggest bull. He added that it's still necessary to make progress on trade and reach deals with big economies like India, Japan and the EU. That way, markets can focus on next year, rather than near-term tariff impacts. 'Then you can start to extrapolate out,' he told CNBC last month. 'Then the market starts looking through things. They start looking through any sort of economic slowdown or weakness, and then we start looking to '26 not at '25.'

Why is AI hallucinating more frequently, and how can we stop it?
Yahoo · an hour ago

The more advanced artificial intelligence (AI) gets, the more it "hallucinates" and provides incorrect or inaccurate information. Research conducted by OpenAI found that its latest and most powerful reasoning models, o3 and o4-mini, hallucinated 33% and 48% of the time, respectively, when tested on OpenAI's PersonQA benchmark. That is more than double the rate of the older o1 model. While o3 delivers more accurate information than its predecessor, the improvement appears to come at the cost of more frequent hallucinations.

This raises a concern over the accuracy and reliability of large language models (LLMs) such as AI chatbots, said Eleanor Watson, an Institute of Electrical and Electronics Engineers (IEEE) member and AI ethics engineer at Singularity University. "When a system outputs fabricated information — such as invented facts, citations or events — with the same fluency and coherence it uses for accurate content, it risks misleading users in subtle and consequential ways," Watson told Live Science. The issue highlights the need for users to carefully assess and supervise the information that LLMs and reasoning models produce, experts say.

The crux of a reasoning model is that it can handle complex tasks by breaking them down into individual components and coming up with solutions to tackle each one. Rather than serving up answers based purely on statistical probability, reasoning models devise strategies to solve a problem, much as humans do. To develop creative, and potentially novel, solutions to problems, AI needs to hallucinate; otherwise it is limited to the rigid data its LLM has ingested.
"It's important to note that hallucination is a feature, not a bug, of AI," Sohrob Kazerounian, an AI researcher at Vectra AI, told Live Science. "To paraphrase a colleague of mine, 'Everything an LLM outputs is a hallucination. It's just that some of those hallucinations are true.' If an AI only generated verbatim outputs that it had seen during training, all of AI would reduce to a massive search problem." "You would only be able to generate computer code that had been written before, find proteins and molecules whose properties had already been studied and described, and answer homework questions that had already previously been asked before. You would not, however, be able to ask the LLM to write the lyrics for a concept album focused on the AI singularity, blending the lyrical stylings of Snoop Dogg and Bob Dylan." In effect, LLMs and the AI systems they power need to hallucinate in order to create, rather than simply serve up existing information. It is similar, conceptually, to the way that humans dream or imagine scenarios when conjuring new ideas. However, AI hallucinations present a problem when it comes to delivering accurate and correct information, especially if users take the information at face value without any checks or oversight. "This is especially problematic in domains where decisions depend on factual precision, like medicine, law or finance," Watson said. "While more advanced models may reduce the frequency of obvious factual mistakes, the issue persists in more subtle forms. Over time, confabulation erodes the perception of AI systems as trustworthy instruments and can produce material harms when unverified content is acted upon." And this problem looks to be exacerbated as AI advances. "As model capabilities improve, errors often become less overt but more difficult to detect," Watson noted. "Fabricated content is increasingly embedded within plausible narratives and coherent reasoning chains. 
This introduces a particular risk: users may be unaware that errors are present and may treat outputs as definitive when they are not. The problem shifts from filtering out crude errors to identifying subtle distortions that may only reveal themselves under close scrutiny."

Kazerounian backed up this viewpoint. "Despite the general belief that the problem of AI hallucination can and will get better over time, it appears that the most recent generation of advanced reasoning models may have actually begun to hallucinate more than their simpler counterparts — and there are no agreed-upon explanations for why this is," he said.

The situation is further complicated because it can be very difficult to ascertain how LLMs come up with their answers; a parallel could be drawn here with how we still don't really know, comprehensively, how a human brain works. In a recent essay, Dario Amodei, the CEO of AI company Anthropic, highlighted a lack of understanding of how AIs come up with answers and information. "When a generative AI system does something, like summarize a financial document, we have no idea, at a specific or precise level, why it makes the choices it does — why it chooses certain words over others, or why it occasionally makes a mistake despite usually being accurate," he wrote.

The problems caused by AI hallucinating inaccurate information are already very real, Kazerounian noted. "There is no universal, verifiable way to get an LLM to correctly answer questions being asked about some corpus of data it has access to," he said. "The examples of non-existent hallucinated references, customer-facing chatbots making up company policy, and so on, are now all too common." Both Kazerounian and Watson told Live Science that, ultimately, AI hallucinations may be difficult to eliminate. But there could be ways to mitigate the issue.
Watson suggested that "retrieval-augmented generation," which grounds a model's outputs in curated external knowledge sources, could help ensure that AI-produced information is anchored by verifiable data. "Another approach involves introducing structure into the model's reasoning. By prompting it to check its own outputs, compare different perspectives, or follow logical steps, scaffolded reasoning frameworks reduce the risk of unconstrained speculation and improve consistency," Watson said, noting that this could be aided by training that shapes a model to prioritize accuracy, and by reinforcement training from human or AI evaluators to encourage an LLM to deliver more disciplined, grounded responses.

"Finally, systems can be designed to recognise their own uncertainty. Rather than defaulting to confident answers, models can be taught to flag when they're unsure or to defer to human judgement when appropriate," Watson added. "While these strategies don't eliminate the risk of confabulation entirely, they offer a practical path forward to make AI outputs more reliable."

Given that AI hallucination may be nearly impossible to eliminate, especially in advanced models, Kazerounian concluded that the information LLMs produce will ultimately need to be treated with the "same skepticism we reserve for human counterparts."
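The two mitigations Watson describes, grounding answers in retrieved sources and flagging uncertainty, can be sketched in a few lines. The mini-corpus, word-overlap scoring, and 0.5 abstention threshold below are invented purely for illustration; production retrieval-augmented systems use embedding-based search and a real LLM call rather than this toy relevance score.

```python
# Toy sketch of retrieval-augmented generation with an abstention rule:
# ground the prompt in retrieved documents, and return None (defer to a
# human) when nothing in the corpus looks relevant enough.

def tokens(text: str) -> set:
    """Lowercase word set with surrounding punctuation stripped."""
    return {w.strip(".,?!").lower() for w in text.split()}

def score(query: str, doc: str) -> float:
    """Crude relevance: fraction of query words that appear in the doc."""
    q, d = tokens(query), tokens(doc)
    return len(q & d) / len(q) if q else 0.0

def build_prompt(query: str, corpus: list, min_score: float = 0.5):
    """Return a source-grounded prompt, or None to signal abstention."""
    hits = [d for d in corpus if score(query, d) >= min_score]
    if not hits:
        return None  # uncertainty signal: don't let the model guess
    sources = "\n".join(f"- {d}" for d in hits)
    return ("Answer ONLY from the sources below; say 'I don't know' if "
            f"they are insufficient.\nSources:\n{sources}\nQuestion: {query}")

corpus = [
    "Refunds are issued within 14 days of purchase.",
    "Support hours are 9am to 5pm on weekdays.",
]
print(build_prompt("When are refunds issued?", corpus))   # grounded prompt
print(build_prompt("What is the CEO's salary?", corpus))  # None -> defer
```

Returning None for the off-corpus question, instead of a confident guess, mirrors Watson's point that systems should defer to human judgement when unsure.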
