
OnePlus 13s launching on 5 June in India: Expected price, specs, and all you need to know
OnePlus's latest compact flagship, the OnePlus 13s, is launching in India on June 5 with the new OnePlus AI features and the Plus Key. The upcoming smartphone will be powered by the Qualcomm Snapdragon 8 Elite processor and is likely to be a rebranded version of the OnePlus 13T that launched in China last month.
OnePlus 13s has already been confirmed to come in three colourways: Black Velvet, Pink Satin, and Green Silk. The phone gives up the traditional circular camera module seen on the OnePlus 13 and OnePlus 13R in favour of a large rectangular setup that also houses the camera flash.
The alert slider that has long been a staple on OnePlus devices is being replaced with the Plus Key, an iPhone-like customisable button that can trigger various tasks, including switching ring profiles, launching the camera, starting translation, and even recordings. The standout feature of this key, however, will be triggering OnePlus AI Plus Mind, which captures on-screen content and analyses it for future reference.
Apart from the Plus Key, OnePlus is also bringing a new suite of AI features with the OnePlus 13s, which includes AI VoiceScribe, AI Translation, AI Search, AI Reframe, and AI Best Face 2.0. The company says it has also deepened its integration with Gemini to ensure that native apps such as OnePlus Notes and Clock are compatible with Google's AI assistant.
As for the rest of the spec sheet, the OnePlus 13s is likely to share most of its features with the OnePlus 13T, barring the selfie camera, which has been confirmed to be a 32MP autofocus shooter instead of the 16MP unit on the Chinese variant.
If the phone does turn out to be a rebranded OnePlus 13T, it could feature a 6.32-inch display. Leaks suggest that it could be a 1.5K 8T LTPO AMOLED panel with a refresh rate of 120 Hz and a peak brightness of 1600 nits. The device may also support LPDDR5x RAM and UFS 4.0 storage.
Unlike the OnePlus 13, the 13s may feature an optical fingerprint sensor. Furthermore, the phone may have an IP65 water and dust resistance rating, meaning it may not be fully waterproof like its elder sibling.
The phone is expected to come with OxygenOS 15 based on Android 15, much like other OnePlus phones launched this year.
As for optics, the OnePlus 13s could come with a dual camera setup: a 50MP IMX906 primary sensor with OIS and a 50MP 2x telephoto lens.
While the official price of the OnePlus 13s will only be revealed during the company's launch event on 5 June, leaks suggest the phone could be priced around ₹55,000 in India. If that holds true, the OnePlus 13s would sit right between the OnePlus 13R and the OnePlus 13.

Related Articles


Mint
18 minutes ago
Google's Audio Overview can turn those boring documents into engaging podcasts
The New Normal: The world is at an inflexion point. Artificial Intelligence is set to be as massive a revolution as the Internet has been. The option to just stay away from AI will not be available to most people, as all the tech we use takes the AI route. This column series introduces AI to the non-techie in an easy and relatable way, aiming to demystify the technology and help users actually put it to good use in everyday life.

The first time I heard an article I had written being discussed, I sat up and listened in utter surprise. Two people I had never come across before were deep in conversation about what I'd written. This man-and-woman team went through everything, making up a slick podcast. These were AI voices that sounded totally natural and pleasant.

This kind of conversation is generated by a feature called Audio Overview. To experience it immediately, download the Gemini app on your phone. Tap the plus sign at the bottom and navigate to one of your documents. Once uploaded, see the tab on top of it, click, and go make yourself a cup of coffee. By the time you get back with your steaming cup, the Audio Overview should be ready. Click, as indicated, and sit back to listen. The two AI hosts will now talk about your content. And they do so with impressive clarity and skill. It's no gimmick or party trick.

Listening to content can be a great way of absorbing it. Anyone can get tired of reading, since we have to do so much of it each day. As long as you have content in a Word file, plain text, a PDF, or a Google Doc, you can feed it to Gemini to turn it into an Audio Overview. I was putting off going through an 83-page document when I figured I could quickly get the general gist of it with an Audio Overview. At work this can really help productivity. It's also great for just giving your eyes a rest.
If you happen to have a visual impairment, the feature is a relief, as you can get so much more done.

NotebookLM: podcasts from anything

Audio Overview can be even more magical in its original home, Google's NotebookLM. To find it, go to your browser on any device and type NotebookLM in the search bar. Sign in with your Google account and you're in. Add up to 50 items of content, including articles, notes, YouTube videos, presentations and more, to make up a notebook. All of these will be combined into an Audio Overview or a more full-fledged Deep Dive conversation through the Chat and Studio tabs. This does take a few minutes, so find something else to do for a bit. Once the conversation is ready, you can listen in the browser, download it for later, or even share it.

This audio feature gives you more control in NotebookLM than it does in Gemini. NotebookLM does have an app, but it doesn't seem to have all the features. You can select the playback speed, the length of the conversation, and, incredibly, even the language the AI hosts should speak. And yes, Hindi is on the list, making it possible to reach a wider audience with that content. It's easy to imagine the feature being used for training and education, making it much more widely useful.

As if all this weren't impressive enough, here's another way you can control the conversation. In NotebookLM you'll also find a Customise tab for the Deep Dive audio. Here, you can actually describe what you want the hosts to focus on. Request a focus on some selected aspect of the content, or ask to keep the language simple or technical. You have the option of deleting the conversation and re-generating it with fresh instructions. You can easily create a conversation in multiple languages for use with different audiences, or change the difficulty level.
If you visit AI Studio via the browser, you'll see that Google is experimenting with letting users change the accent or style of speaking in a feature called Native Speech Generation. There's been no announcement to that effect, but one can easily see how this could be added to Audio Overview at some point. It works very well and is fascinating to try out.

Join the conversation

Another impressive but experimental feature lets you actually 'join' the podcast by tapping a button. Interrupt the hosts and ask a question, make them change focus, or ask for a comment on your opinion of the subject. This is a little slow, and you'll be left wondering if the hosts heard you at all, but I fully expect it to become more fluid in the future, as Google adds new features quite frequently.

Audio Overview isn't flawless, but the chances of it getting things wrong are minimised because you are the one supplying the content. The feature has worked well enough for Google to have brought it to Search, where it will give you an AI Overview in audio form, being tried out in the US first.

Mala Bhargava is most often described as a 'veteran' writer who has contributed to several publications in India since 1995. Her domain is personal tech, and she writes to simplify and demystify technology for a non-techie audience.


News18
2 hours ago
Google's Gemini Spent 800 Hours Beating Pokémon And Then It Panicked And Failed
Google's newest AI chatbot struggles to stay calm while playing a game designed for children. Artificial intelligence (AI) has come a long way, but even advanced systems can struggle sometimes. According to a report from Google DeepMind, its top AI model, Gemini 2.5 Pro, had a tough time playing the classic video game Pokémon Blue, a game that many kids find easy. The AI reportedly showed signs of confusion and stress during the game.

The results came from a Twitch channel called Gemini_Plays_Pokemon, where an independent engineer named Joel Zhang tested Gemini. Although Gemini is known for its strong reasoning and coding skills, the way it behaved during the game revealed some surprising and unusual reactions.

The DeepMind team reported that Gemini started showing signs of what they called "Agent Panic". In their findings, they explained: "Throughout the playthrough, Gemini 2.5 Pro gets into various situations which cause the model to simulate 'panic'. For example, when the Pokémon in the party's health or power points are low, the model's thoughts repeatedly reiterate the need to heal the party immediately or escape the current dungeon."

This behaviour caught the attention of viewers on Twitch. People watching the live stream reportedly started recognising the moments when the AI seemed to be panicking. DeepMind pointed out: "This behaviour has occurred in enough separate instances that the members of the Twitch chat have actively noticed when it is occurring."

Even though AI doesn't feel stress or emotions like humans do, the way Gemini reacted in tense moments looked very similar to how people respond under pressure: by making quick, sometimes poor or inefficient decisions. In its first full attempt at playing Pokémon Blue, Gemini took a total of 813 hours to complete the game. After Joel Zhang made some adjustments, the AI managed to finish a second run in 406.5 hours.
However, even with those changes, the time it took was still very slow, especially compared to how quickly a child could beat the same game. People on social media didn't hold back from poking fun at the AI's nervous playing style. One viewer commented: "If you read its thoughts while it's reasoning, it seems to panic anytime you slightly change how something is worded." Another user made a joke by combining "LLM" (large language model) with "anxiety", calling it "LLANXIETY".

Interestingly, this news comes just a few weeks after Apple shared a study claiming that most AI models don't actually "reason" in the way people think. According to the study, these models mostly depend on spotting patterns, and they often struggle or fail when the task is changed slightly or made more difficult.


Time of India
2 hours ago
Apple executives held internal talks about buying Perplexity: Report
Apple executives have held internal talks about potentially bidding for artificial intelligence startup Perplexity, Bloomberg News reported on Friday, citing people with knowledge of the matter. The discussions are at an early stage and may not lead to an offer, the report said, adding that the tech behemoth's executives have not discussed a bid with Perplexity's management.

"We have no knowledge of any current or future M&A discussions involving Perplexity," Perplexity said in response to a Reuters request for comment. Apple did not immediately respond to a Reuters request for comment.

Big tech companies are doubling down on investments to enhance AI capabilities and support growing demand for AI-powered services, aiming to maintain competitive leadership in the rapidly evolving tech landscape. Bloomberg News also reported on Friday that Meta Platforms tried to buy Perplexity earlier this year. Meta announced a $14.8 billion investment in Scale AI last week and hired Scale AI CEO Alexandr Wang to lead its new superintelligence unit.

Adrian Perica, Apple's head of mergers and acquisitions, has weighed the idea with services chief Eddy Cue and top AI decision-makers, as per the report. The iPhone maker reportedly plans to integrate AI-driven search capabilities, such as Perplexity AI, into its Safari browser, potentially moving away from its longstanding partnership with Alphabet's Google. Barring Google from paying companies to make it their default search engine is one of the remedies proposed by the U.S. Department of Justice to break up its dominance in online search.

While traditional search engines such as Google still dominate global market share, AI-powered search options including Perplexity and ChatGPT are gaining prominence and seeing rising user adoption, especially among younger generations. Perplexity recently completed a funding round that valued it at $14 billion, Bloomberg News reported. A deal at close to that valuation would be Apple's largest acquisition so far.
The Nvidia-backed startup provides AI search tools that deliver information summaries to users, similar to OpenAI's ChatGPT and Google's Gemini.