PlayStation's Fairgame$ Reportedly Delayed As Studio Head Leaves

Yahoo · 15-05-2025

Sony's slate of live-service PlayStation games continues to crack. The multiplayer heist shooter Fairgame$ has reportedly been delayed after an external test raised concerns, and Jade Raymond, head of the first-party studio making it, has left the company.
Bloomberg reports that the first-party PlayStation 5 game was originally aiming to come out in fall of 2025 but is now coming sometime in 2026. That delay is apparently the result of internal concerns about the game's development following outside testing. Fairgame$ is being made by Haven Studios, a Montreal-based team formed of ex-Google Stadia devs that Sony acquired in 2022. It was being run by Raymond, a veteran of Assassin's Creed and EA Motive. Not anymore.
'Jade Raymond has been an incredible partner and visionary force in founding Haven Studios,' a spokesperson for Sony told Bloomberg in a statement. 'We are deeply grateful for her leadership and contributions, and we wish her all the best in her next chapter.'
Fairgame$ was revealed at a 2023 PlayStation showcase with little detail about what the game would consist of. It was built around class warfare and 'emergent sandbox gameplay,' with shades of Robin Hood meets Payday 2 if it were made by Ubisoft. The studio behind it was formed in 2021, shortly after Google shut down its internal development teams, previously led by Raymond, following the failure of its cloud gaming platform Stadia.
Sony is sticking by Fairgame$ for now and has appointed Marie-Eve Danis and Pierre-François Sapinski as co-studio heads. But the console maker's once-ambitious live service strategy continues to look like a mess following the 2023 cancellation of The Last of Us Online and the unprecedented un-releasing of hero shooter Concord last year. Sony also canned an online God of War spin-off and a co-op shooter at Bend Studio earlier this year.


Related Articles

Exclusive: I tested Viture's next-gen AR glasses, and my eyes couldn't believe what they saw

Tom's Guide

39 minutes ago



Viture is close to revealing all about its next-gen AR glasses, and I got to go hands-on with them. I've been told not to talk about them until the announcement in July, but thanks to a little gift of gab, I can share a little more about my time with them. I've seen the Reddit hype since I gave these specs Best of Show at AWE 2025, so I'll do my best to navigate what I can and can't say to answer some of your questions, and to show why, if you're in the market for a new pair of specs, you should wait just a few more weeks. Because honestly? These are some of the best AR glasses I've ever seen.

Let's get into the main reason most of us buy AR specs: the picture quality. In this area, the likes of Viture and Xreal have been moving forward step by step, offering a bigger screen, a higher-quality picture, and a wider field of view. And with these upcoming specs, you'll see the biggest, brightest, sharpest, most vivid, and widest screen yet. Using Sony's newest micro-OLED tech, I could definitely see an improvement in fidelity over other glasses. There's no confirmation on the resolution, but to my eyes, something is sharper here. Then of course you've got the Viture-style color calibration that brings a genuinely immersive, accurate warmth to every picture with a smooth refresh rate, alongside an HDR-ish inky depth in the darker moments.

As for the fringing around the edges that you can see in some glasses that tout a wide field of view, there isn't any! Whatever size screen you simulate to completely fill out that display space, the corners and sides of it remain crystal clear in your near-peripheral vision.

Plus, shout-out to the new built-in functionalities (which I can't talk about yet, but get hyped for them), and the dynamic tint control across those lenses, creating a near-perfect blindfold across the front. I saw this question a little bit on the subreddits too.
This isn't the first time I've been exposed to a 60-degree field of view. I saw a prototype a while back, and oh my word, the birdbath prisms needed to pull it off were insane. But with the new glass in these Viture specs, the company managed to nail it without adding any immediately noticeable size or weight on the face. They felt comfortable to wear, didn't create any strain on the nose or the tops of my ears, and were easy enough to adjust to get the picture just right. I can envision these being nice to wear on long journeys without any fatigue on the face, while not drawing many double takes on public transport.

And that is as much as I can tell you right now before Viture's big announcement. There's no word on price or a full launch date yet, so you'll have to keep it locked for when the reveal happens and I can share some more details (along with uncensored pictures) of what these specs look like! The Xreal One Pros are the best AR glasses you can buy right now, and if you do get them, you won't be disappointed. However, I have to admit the timing is awkward. Viture's got a chance here, and provided the price is right, these look set to be something special!

Apple needs an AI magic pill, but I'm not desperate for it on macOS

Digital Trends

an hour ago



Over the past few months, all eyes have been fixated on Apple and what the company is going to do with AI. The pressure is palpable and well deserved. Google has demonstrated some really compelling AI tools, especially with Project Astra and Mariner, that turn your phone into something like an all-knowing, ever-present digital companion. The likes of Microsoft, OpenAI, Claude, and even Amazon have shown next-gen AI chops that make Siri feel like an old prototype. But there is a fine distinction between using AI on phones and how it plays out on a computing machine like a MacBook Air.

You don't really talk to an assistant like Siri on a desktop

I often run into scenarios where AI is useful on a phone, like Visual Intelligence, which can make sense of the world around you based on what you see through the camera feed. The Mac doesn't really need it, primarily because it lacks a world-facing camera. And second, you can't ergonomically point the Mac's webcam at an object, especially in a public place, the way you would with a phone in your hand.

The whole 'Apple must do AI better' argument is well suited to mobile devices, and not really to Macs, which rely on a fundamentally different mode of input and output, and a different way of getting work done in apps and software. I've used my fair share of AI-first Copilot+ laptops running Windows, and I feel strongly that Apple's AI efforts don't need an urgent focus on macOS as much as they do on mobile devices, for a few reasons.

The Mac is already well fed

Bloomberg's Mark Gurman, in the latest edition of his Power On newsletter, argued that Perplexity is a nice target for Apple to scoop up an AI lab of its own and get its hands on a ready-made AI stack. Perplexity's answering engine is pretty rewarding, it's not too expensive (by Apple standards), and it works beautifully on iPhones.
Over the past couple of quarters, the company has launched a whole bunch of features: integrations with Telegram and WhatsApp, a Deep Research mode, a reasoning AI model, a shopping hub in partnership with Amazon, media generation and image uploads, and search through audio and video files, among others.

There are just two problems, especially with accessing Perplexity on a Mac. First, it can already do everything it is built for via the Mac app and web dashboard, so a deeper integration with the Mac wouldn't solve many computing problems. Second, ChatGPT is already integrated deeply within Siri and the Apple stack, and it's only a matter of time before both of them step up.

Let's be honest here. Perplexity is a cool product, but not exactly revolutionary in the sense that it could elevate the macOS experience significantly. Enterprise AI is a different beast, but for an average user, every AI tool out there (Gemini, ChatGPT, Copilot, Claude, or Perplexity) exists as its own web tool or app where you truly get the best out of it.

So, what about integrations? Well, they depend on the tools at hand. A huge chunk of the computing market relies on either Microsoft and its Office tools or Google's Workspace products, such as Docs, Drive, Sheets, and more. From Windows to Office, Copilot is now everywhere, and the situation is similar with Gemini and Google software. Millions of Mac users rely on these tools daily, and Apple doesn't offer a viable replacement of its own.

Moreover, there isn't a chance that Google will let Apple's AI penetrate deeper into its Workspace than Gemini, and Microsoft won't do any different with Copilot and Office. Plus, it's hard to imagine an external AI working better in Docs or PowerPoint than Gemini and Copilot, respectively. The space is already tight, but more importantly, well fed. And let's not forget, OpenAI and its GPT stack are very much baked into the heart of macOS.
If Apple wanted to build integrations, OpenAI offers arguably the most advanced AI tech stack out there. Adding any more AI at the system level would only add to the confusion for an average Mac user, without solving any real problems.

The space for an extra AI player on the Mac is tighter for another reason: Apple's Foundation Models framework, which works on-device as well as in a cloud-linked format, but with utmost privacy. Apple says it will allow developers to build a 'personal intelligence system that is integrated deeply into iPhone, iPad, and Mac, and enables powerful capabilities across language, images, actions, and personal context.' In a nutshell, Apple's own foundation models are available to developers so that they can build AI experiences into their apps. The best part? It's free. They're not nearly as powerful as the models from OpenAI or Google, but for getting work done locally, like cross-app workflows and intelligent file search, they should come in handy without any privacy scares.

The productivity question

The M4 MacBook Air is my daily driver these days, and it's a fantastic machine. I use AI tools heavily every day, yet I have never felt macOS to be an AI bottleneck. Every AI tool I rely on is either already integrated within the software of my choice or available as a dedicated app or website.

Still, the whole notion of turning a product into an AI product baffles me. It makes sense for a phone, like the Pixel 9, but not so much for a laptop. I have tested five Copilot+ Windows machines so far, yet the core benefits they offer (snappy performance, instant wake, and long battery life) have little to do with user-facing AI. I was able to use Gemini or Copilot just as well on a regular Windows laptop as I was able to extract their benefits on a Copilot+ machine with a minimum of 45 TOPS of AI capability.
The Mac is no slouch, and all the AI tools in my productivity workflow can be accessed just as easily on macOS as they can on Windows. There are a few exclusive perks, like Windows Recall, but they are not a must-have for the average computer user. And let's not forget that Apple already has the foundations ready, and we are going to see the results next year.

When Apple introduced the M4 MacBook Air, the company focused on its AI chops, but what flew under the radar was Apple's App Intents framework, which integrates effortlessly with Apple Intelligence. In simple terms, any app, whether AI-focused or not, can embrace the benefits of on-device AI processing, such as awareness of on-screen content, in a native macOS environment.

Now, it's valid to criticize Apple for its AI missteps. I am at a stage where I use Gemini everywhere on my iPhone, from the lock screen widgets to the dedicated app, instead of Siri. But that's not the situation with Macs. For my workflow, and for a whole bunch of Mac users out there, no one is gasping for a next-gen Apple AI. What they need is a reliable machine to run the AI of their choice, and even the cheapest Mac can meet those requirements.

Samsung may never charge you for using Galaxy AI on your phone

Yahoo

an hour ago



Ever since the launch of the Galaxy S24 series in 2024, Samsung has bet big on Galaxy AI. With each flagship release, the company has introduced a number of new AI features, including some of our favorites. However, Samsung has consistently stated that Galaxy AI features would remain free only for a limited time. For the Galaxy S24 series, Samsung announced that Galaxy AI would be free until the end of 2025. Many assumed this would change with the Galaxy S25, but the same end-of-2025 window still applies. Samsung hasn't clarified what will happen after that, whether it will start charging for AI features or what the pricing might be. But now, it seems the company might just keep these features free indefinitely.

Well-known Samsung leaker PandaFlash Pro on X revealed that Samsung may offer all of its in-house Galaxy AI features for free permanently. This excludes Gemini-powered features, which are under Google's control, but core Galaxy AI tools like Live Translator, Generative Edits, and Writing Tools will reportedly remain free. While Samsung hasn't officially confirmed this, given the leaker's solid track record, the rumor seems quite credible. There's a chance Samsung could confirm it during the next-generation Galaxy foldable launch on July 9, 2025.

Speaking of the new foldables, the leaker also shared that Samsung's most affordable upcoming foldable, the Galaxy Z Flip 7 FE, will support the same AI features as the pricier Galaxy Z Fold 7 and Galaxy Z Flip 7. It'll also include a six-month Gemini Advanced subscription out of the box, meaning the budget foldable may deliver just as much AI power as its flagship siblings.
