iQOO Neo 10 with 7000 mAh battery to be launched on May 26: What to expect

Chinese smartphone brand iQOO is set to launch the Neo 10 in India on May 26. Ahead of the launch, the company has disclosed key features and specifications through a microsite on e-commerce platform Amazon India, including details of the phone's performance, battery, and more.
iQOO Neo 10: What to expect
iQOO has confirmed that the upcoming Neo 10 will be powered by the Qualcomm Snapdragon 8s Gen 4 system-on-chip (SoC), paired with the company's proprietary secondary chip, the 'Q1', which enables gameplay at up to 144 frames per second (fps). The phone will pack a 7,000mAh battery with support for 120W wired charging, and will be offered in Titanium Chrome and Inferno Red colours.
Reportedly, the smartphone will sport a 6.78-inch 1.5K AMOLED display with a 144Hz refresh rate and up to 5,500 nits of peak brightness. The device could be 8.9mm thick.
For imaging, the iQOO Neo 10 is likely to feature a dual rear camera setup with a 50MP Sony LYT-600 primary sensor supported by optical image stabilisation (OIS), alongside an 8MP ultra-wide camera. At the front, it may carry a 16MP sensor.

Related Articles

Will India's AI Action Summit redefine global AI governance?

Economic Times
5 hours ago

After Britain, South Korea and France, it's India's turn to host the next AI Action Summit. GoI has invited public comments until June 30 to shape the summit, which sets the tone for AI governance. India is expected to bring global majority perspectives from the margins to the mainstream and exhibit a unique approach to the delicate balancing acts involved in AI. First, there is the question of whether to regulate, and if so, how. The recent US proposal to ban state AI laws for 10 years is seen by many as pro-innovation. By contrast, the EU's AI Act takes a more precautionary, product-safety approach. China's approach tends to tailor regulation to authoritarian state control. Beyond this dichotomy, India is often seen as capable of offering a third way. The summit presents an opportunity for India to showcase elements of this approach and take on the equally thorny question of how open or closed AI development should be.

On openness, India can push beyond the binary of 'open or closed' approaches to releasing AI base models. Some argue that AI models must be kept under the control of a small number of people. Others argue that base models should be released with no restrictions. India has no interest in a future where a handful of US and Chinese companies hold the key to advanced AI models and can arbitrarily restrict their use. At the same time, however, openness should not be understood in a purely libertarian way where people can do whatever they want with these models. What we need is a truly open approach that enables independent evaluation of how the foundation models work, so that they can be used to innovate without inadvertently importing the latest US political, one-upmanship-driven ideas or Chinese state censorship. Demanding this openness and transparency, followed by independent testing and evaluation, should be a key goal for India with its new AI Safety Institute (ASI).

Additionally, ASI must take the lead in ensuring that systems, particularly in high-impact domains such as public services, are secure and reliable. With its 'Safe and Trusted AI' pillar, the IndiaAI mission is encouraging projects on bias mitigation, privacy enhancement and governance testing (themes that should reflect in the summit's agenda), affirming the stance taken by the EU to push for 'Trustworthy AI'.

It is key here, however, that trustworthiness, privacy and safety are not merely demanded of AI systems but rather achieved through effective governance frameworks. Many of the purported benefits of AI are undermined if the data is not secure, if the system responses are unreliable or biased, and if the public turns against the technology due to high-profile scandals. A telling case is that of the 'Child Benefits Scandal' in the Netherlands, where an opaque and discriminatory system mistakenly flagged thousands of families for benefits-related fraud. In response, the Netherlands is working towards improving AI accountability through human rights impact assessments and public databases of government AI systems. Public trust in AI systems can only be achieved through robust transparency and accountability practices.
Centring global conversations and policy imperatives on open, transparent and rights-protecting AI development reduces uncertainty and offers a level playing field for smaller players, even if it is not enshrined in dedicated legislation but through an ecosystem of institutional oversight via ASI and adaptation of existing laws, as is the preference of the IndiaAI mission. The logic is straightforward: when a tech is built to be rights-respecting and safe, more people trust it, and therefore more people use it, particularly when it can be independently verified. It's a win-win for commerce, rights and the global majority.

Such frameworks are necessary because, without close attention to the impact of AI models, the region risks becoming a testing ground for nascent and half-baked technology that is developed elsewhere. Their absence could result in 'innovation arbitrage,' a term used to refer to the exploitation of regulatory gaps to deploy questionable technology. The harms of AI-driven systems without oversight are well documented, be it opaque, unaccountable data collection practices that give people no real choice, or flawed algorithmic decisions that impact people's education, employment and healthcare.

In demanding openness, transparency, and security, India has an opportunity to work with global majority countries to develop shared approaches and demands. Demanding such inclusion and space for leadership would allow us to leverage our collective expertise to ensure 'access for all', a key goal of GoI. The AI Impact Summit is the moment to bring like-minded countries together and lay out a roadmap for how AI development can be driven in a way that benefits the global majority and allows for individual and regional autonomy, instead of cementing hegemony.

(Disclaimer: The opinions expressed in this column are those of the writer.)

FBI and Canadian cybersecurity agency warn: Chinese hackers attacking telecom services in Canada

Time of India
7 hours ago

Canada's cybersecurity agency, the Canadian Centre for Cyber Security, has issued a warning that Chinese-backed hackers are likely responsible for a recent attack that compromised telecommunications infrastructure in the country. The agency confirmed that three network devices registered to a Canadian company were compromised in these attacks.

In a joint bulletin (as seen by Bloomberg) released this week with the US Federal Bureau of Investigation (FBI), the Canadian Centre for Cyber Security urged Canadian organisations to strengthen their networks against the threat posed by Salt Typhoon, a hacking group with documented links to the Chinese government. The warning emphasises the ongoing risk and the need for immediate action to protect critical infrastructure.

What Canada's cybersecurity agency said about the recent hacking incident

'The Cyber Centre is aware of malicious cyber activities currently targeting Canadian telecommunications companies. The responsible actors are almost certainly PRC state-sponsored actors, specifically Salt Typhoon,' the agency said, referring to the People's Republic of China, reports Bloomberg. The agency also noted that separate investigations showing overlaps with indicators linked to Salt Typhoon indicate the cyber campaign 'is broader than just the telecommunications sector.'

According to the agency, the hackers will 'almost certainly' continue attempting to infiltrate Canadian organisations, particularly telecom providers, over the next two years, the report adds. Beijing has consistently rejected US claims linking it to Salt Typhoon, a group first reported by The Wall Street Journal last year. In January, the US imposed sanctions on a Chinese company for allegedly being 'directly involved' in the cyber intrusions, along with China's Ministry of State Security.
