Microsoft's AI Coach for Gamers Is Starting Tests Next Month


Yahoo | 05-04-2025

Xbox players will soon be getting an optional AI-powered gaming assistant to help them with game recommendations or to improve their skills, Microsoft revealed on the Official Xbox Podcast on Thursday.
Announced last year, Copilot for Gaming is powered by Microsoft's AI assistant and is meant to help players save time and get more out of their games. That can include lessening the headache of downloading and updating titles, or flagging side quests they might otherwise miss. Copilot for Gaming will first hit mobile in April, and those interested can sign up for early access via the Xbox Insider program. It'll initially act as a second-screen companion via the Xbox mobile app.
"It has to be personalized to you the way that you like to play and it should be able to help you get further in gaming, be your companion, and help connect you with families and communities," said Fatima Kardar, Xbox corporate vice president of Gaming AI on the podcast.
For Kardar, who is fairly new to gaming, Copilot helps her with game recommendations, which is handy for someone not tuned into the latest releases.
Jason Ronald, vice president of next generation at Xbox, added that Copilot can recommend the types of cars to drive in a racing game that better fit his play style, for example. In a demo shown during the podcast, Copilot assisted in Overwatch 2 by recommending which heroes to pick to counter others.
Kardar notes that gaming is the only form of media that can leave people stuck, which is where Copilot can help players get unstuck. At the same time, she doesn't want Copilot for Gaming to be intrusive, meaning the AI will adapt itself to how each player likes to play.
Microsoft deferred to its blog post when asked for comment.
The upcoming test is happening as Microsoft continues going all-in on AI. With the launch of ChatGPT in late 2022, Microsoft made a multibillion-dollar deal with OpenAI. That deal led to the development of Copilot, Microsoft's AI assistant in Windows. We've since seen AI enter all parts of Microsoft's business, from PowerPoint to Azure.
However, at the same time, the video game industry has been hit with layoffs throughout the last few years, including ones at Microsoft. Concerns have been raised about AI slowly replacing software developers. Last month, Microsoft revealed Muse, an AI model for gameplay ideation. Some developers are less keen on embracing it, however, suggesting that the technology is more of a cost-cutting measure than something developers are actually asking for.
Xbox was careful to say that Copilot for Gaming would leave control to the player and any AI assistance would only be additive. The podcast also detailed that Xbox Play Anywhere, a program that allows gamers to pick up their games on either console or PC, has been expanded to include 1,000 titles.
For more on AI in gaming, check out how developers are using the tech or how PlayStation is creating AI-generated characters in games.


Related Articles

Google Confirms Most Gmail Users Must Upgrade All Their Accounts

Forbes | an hour ago

Most accounts need an upgrade, says Google. Republished on June 21 with new advice after a 'record breaking' security alert.

Google has confirmed another attack on Gmail users this week. Yet again, its own infrastructure has been exploited to compromise user accounts. And yet again, it comes with another warning for users to upgrade their accounts — this is now a must.

Earlier this month, I covered Google's warning that most of its users still only use basic password security and are wide open to data breaches and attacks. 'We want to move beyond passwords altogether,' Google said, pushing users to replace them. Passkeys, it says, 'are phishing-resistant and can log you in simply with the method you use to unlock your device (like your fingerprint or face ID) — no password required.' Put simply, this links account security to hardware security, and means there are no passwords to steal or two-factor authentication (2FA) codes to bypass or intercept.

While that is critical for Gmail users, it's actually much wider. Google reached out to me after that article to emphasize that the benefits are more significant for users: adding a passkey to a Google account protects all the services and accounts that can be accessed by that sign-in. Conversely, not doing so leaves all those other accounts at risk. Even if most user accounts were secured by passwords and 2FA codes, there would still be a push to passkeys. And while Google, Microsoft and others make 2FA mandatory, the reality is that there's still a risk that codes can be shared even if they can't be stolen. That was the crux of the latest Gmail attack, tricking users into sharing codes.

Scams and Protections (June 2025)

The raft of headlines around a new 16 billion record data breach should focus minds, even if 'this is not a new data breach, or a breach at all,' says Bleeping Computer. 'The websites involved were not recently compromised to steal these credentials.' Mashable agrees.
'Some commentators were quick to call it the largest password leak in history, and in terms of raw records exposed, that's mostly, technically true. However, these records did not come from a single breach — or even a new breach. Instead, they came from many smaller ones,' with 'the end result more a "greatest hits" rather than a new, noteworthy hack.' Albeit that doesn't change the fact the data is out there.

Kaspersky says 'the journalists haven't provided any evidence of the existence of this database. Therefore, neither Kaspersky's experts nor anyone else has managed to analyze it. Therefore, we cannot say whether yours — or anyone else's — data is in there.'

But, regardless, Google's latest survey still paints a bleak picture. While 60% of U.S. consumers say they 'use strong, unique passwords,' less than 50% enable 2FA. The truth is that the only form of simple 2FA is SMS codes, which are sent quickly without having to exit the app or click or tap. They even autofill and often auto-delete. But SMS is woefully insecure; it's the worst possible 2FA option. And anything else — authenticator apps, physical keys, even trusted device or app sign-ins — is more painful.

Passkeys are the opposite. They're even easier than passwords and SMS 2FA. The credential (which you never see) combines your login ID, password and 2FA into a simple sign-in process authenticated by your device security — ideally biometrics. And because there is no code you can see or copy, you can't share the passkey even if you want to. Even if any of the underlying code is stolen, it only works on your actual device.

Google is right — this is about much more than Gmail, even if those email account attacks generate headline after headline. While there are some misgivings about dominance and data overreach in big tech using its span of control to sign you into multiple services, even those it doesn't own or control, it is more secure.

As Kaspersky suggests, 'let's set skepticism aside. Yes, we don't reliably know what exactly this leak is, or whose data is in it. But that doesn't mean you should do nothing. The first and best recommendation is to change your passwords,' which is an obvious immediate step. But it doesn't solve the problem. 'Use passkeys wherever possible,' Kaspersky also tells users. 'This is the modern passwordless method of logging into accounts, which is already supported by Google, iCloud, Microsoft, Meta and others.'

As Google says, 'when you pair the ease and safety of passkeys with your Google Account, you can then use Sign in with Google to log in to your favorite websites and apps — limiting the number of accounts you have to maintain.'
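The phishing resistance Google describes comes from the credential being bound to both the device and the site it was registered for. A minimal conceptual sketch of that binding, with a plain HMAC standing in for the real WebAuthn public-key ceremony (all names here are hypothetical; real passkeys use asymmetric keys, so the server never holds the device secret):

```python
import hashlib
import hmac
import secrets

# Hypothetical device-bound secret: it never leaves the device, so there
# is nothing for a user to type into (or share with) a phishing page.
device_secret = secrets.token_bytes(32)
registered_origin = "https://accounts.example.com"

def device_sign(challenge: bytes, origin: str) -> bytes:
    # The browser supplies the *actual* page origin; the user cannot
    # override it, so the response is bound to the site being visited.
    return hmac.new(device_secret, challenge + origin.encode(), hashlib.sha256).digest()

def server_verify(challenge: bytes, response: bytes) -> bool:
    # The server only accepts responses bound to the registered origin.
    expected = hmac.new(device_secret, challenge + registered_origin.encode(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = secrets.token_bytes(16)

# Legitimate sign-in: origins match, so verification succeeds.
legit = server_verify(challenge, device_sign(challenge, "https://accounts.example.com"))

# A lookalike phishing site can relay the same challenge, but the browser
# binds the response to the attacker's origin, so the server rejects it.
phished = server_verify(challenge, device_sign(challenge, "https://accounts-examp1e.com"))

print(legit, phished)  # True False
```

That origin binding is why there is no code to intercept or trick a user into sharing: the "secret" is never displayed, and a response minted for the wrong site simply fails verification.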

Using AI bots like ChatGPT could be causing cognitive decline, new study shows

Yahoo | an hour ago

A new pre-print study from the US-based Massachusetts Institute of Technology (MIT) found that using OpenAI's ChatGPT could lead to cognitive decline. Researchers with the MIT Media Lab broke participants into three groups and asked them to write essays using only ChatGPT, a search engine, or no tools at all. Participants' brain activity was recorded with an electroencephalogram (EEG) during the task. Then, the essays were evaluated by both humans and artificial intelligence (AI) tools.

The study showed that the ChatGPT-only group had the lowest neural activation in parts of the brain and had a hard time recalling or recognising their writing. The brain-only group that used no technology was the most engaged, showing both cognitive engagement and memory retention.

The researchers then ran a second session in which the ChatGPT group was asked to do the task without assistance. In that session, those who had used ChatGPT in the first session performed worse than their peers, with writing that was 'biased and superficial'. The study found that repeated GPT use can come with 'cognitive debt' that reduces long-term learning performance and independent thinking. In the long run, people with cognitive debt could be more susceptible to 'diminished critical inquiry, increased vulnerability to manipulation and decreased creativity,' as well as a 'likely decrease' in learning skills.

'When participants reproduce suggestions without evaluating their accuracy or relevance, they not only forfeit ownership of the ideas but also risk internalising shallow or biased perspectives,' the study continued.

The study also found higher rates of satisfaction and brain connectivity in the participants who wrote all essays with just their minds compared to the other groups. Those from the other groups felt less connected to their writing and were not able to provide a quote from their essays when asked to by the researchers. The authors recommend that more studies be done on how any AI tool impacts the brain 'before LLMs are recognised as something that is net positive for humans.'

OpenAI supremo Sam Altman says he 'doesn't know how' he would have taken care of his baby without the help of ChatGPT

Yahoo | 2 hours ago

When you buy through links on our articles, Future and its syndication partners may earn a commission.

For a chap atop one of the most high-profile tech organisations on the planet, OpenAI CEO Sam Altman's propensity, shall we say, to expatiate but not excogitate is, well, remarkable. Sometimes, he really doesn't seem to think before he speaks. The latest example involves his status as a 'new parent,' something which he apparently doesn't consider viable without help from his very own chatbot (via TechCrunch).

'Clearly, people have been able to take care of babies without ChatGPT for a long time,' Altman initially and astutely observes on the official OpenAI podcast, only to concede, 'I don't know how I would've done that.' 'Those first few weeks it was constantly,' he says of his tendency to consult ChatGPT on childcare. Apparently, books, consulting friends and family, even a good old-fashioned Google search would not have occurred to this colossus astride the field of artificial, er, intelligence.

If all that's a touch arch, forgive me. But Altman is in absolute AI evangelism overdrive mode in this interview. 'I spend a lot of time thinking about how my kid will use AI in the future,' he says. 'My kids will never be smarter than AI. But they will grow up vastly more capable than we grew up and able to do things that we cannot imagine, they'll be really good at using AI.'

There are countless immediate and obvious objections to that world view. For sure, people will be better at using AI. But will they themselves be more capable? Maybe most people won't be able to write coherent prose if AI does it for them from day one. Will having AI write everything really make everyone more capable?

Not that this is a major revelation, but this podcast makes it clear just how signed up Altman is to the AI revolution. 'They will look back on this as a very prehistoric time period,' he says of today's children.
That's a slightly odd claim, given 'prehistory' means before human activities and endeavours were recorded for posterity. And, of course, the very existence of the large language models that OpenAI creates relies entirely on the countless gigabytes of pre-AI data on which those LLMs were originally trained.

Indeed, one of the greatest challenges currently facing AI is the notion of chatbot contamination. The idea is that, since the release of ChatGPT into the wild in 2022, the data on which LLMs are now being trained is increasingly polluted with the synthetic output of prior chatbots. As more and more chatbots inject more and more synthetic data into the overall shared pool, subsequent generations of AI models will become ever more polluted and less reliable, eventually leading to a state known as AI model collapse. Indeed, some observers believe this is already happening, as evidenced by the increasing propensity of some of the latest models to hallucinate. Cleaning that problem up is going to be 'prohibitively expensive, probably impossible' by some accounts.

Anyway, if there's an issue with Altman's unfailingly optimistic utterances, it's probably a lack of nuance. Everything before AI is hopeless and clunky, to the point where it's hard to imagine how you'd look after a newborn baby without ChatGPT. Everything after AI is bright and clean and perfect. Of course, anyone who's used a current chatbot for more than a few moments will be very familiar with their immediately obvious limitations, let alone the broader problems they may pose even if issues like hallucination are overcome. At the very least, it would be a lot easier to empathise with the likes of Altman if there was some sense of those challenges to balance his one-sided narrative.

Anywho, fire up the podcast and decide for yourself just what you make of Altman's everything-AI attitudes.
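The model-collapse mechanism described above has a simple statistical analogue: fit a model to data, sample synthetic data from the fit, refit on those samples, and repeat. A toy simulation, with a Gaussian standing in for an LLM (this illustrates the compounding-error mechanism only, not any real training pipeline):

```python
import random
import statistics

def fit(samples):
    # "Train" a toy model: estimate a Gaussian's mean and standard deviation
    return statistics.mean(samples), statistics.stdev(samples)

def generate(model, n, rng):
    # Produce synthetic data by sampling from the fitted model
    mu, sigma = model
    return [rng.gauss(mu, sigma) for _ in range(n)]

rng = random.Random(0)

# Generation 0 trains on "real" data drawn from the true distribution N(0, 1)
model = fit([rng.gauss(0.0, 1.0) for _ in range(10)])
stdevs = [model[1]]

# Every later generation trains only on the previous generation's output
for _ in range(200):
    model = fit(generate(model, 10, rng))
    stdevs.append(model[1])

# Finite-sample estimation error compounds generation after generation, and
# the fitted spread collapses toward zero: the model gradually "forgets"
# the diversity of the original data.
print(f"fitted stdev: generation 0 = {stdevs[0]:.3f}, generation 200 = {stdevs[-1]:.2e}")
```

With tiny samples the effect is dramatic within a couple hundred generations; real LLM training mixes fresh human data back in, which is exactly why the growing pollution of that shared pool worries researchers.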
