Android 16 released ahead of schedule, available to select Google Pixel users: Here are all the exciting new features


Express Tribune • 11-06-2025

Android 16 has officially launched, arriving ahead of schedule this year with several innovative features that are set to enhance the user experience.
The new operating system version debuted on select Google Pixel models this week, following a public beta release in January.
Android enthusiasts can expect more devices to receive the update in the coming months.
This year's early June release comes several months ahead of the usual August-September launch window, with Google aiming to make waves in the tech industry and, in particular, to get ahead of Apple's iOS 26 beta.
✨New✨ features that will give you(r device) main character energy.
Learn more about the updates on Android: 👇👇👇 https://t.co/QOTL0xf8L6 — Android (@Android) June 10, 2025
Live Updates and Improved Notifications
Among the highlights of Android 16 is the introduction of Live Updates, a feature akin to Apple's Live Activities.
This allows app developers to incorporate notifications with live progress bars, enabling users to track activities like Uber trips or food deliveries in real time.
While the feature is now available for developers, full functionality, such as expanded notifications on the always-on display, will roll out in a future update.
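For developers, the building blocks are familiar: Live Updates extends Android's existing ongoing, progress-bearing notifications. The Kotlin sketch below is a minimal illustration of that general shape using the long-stable NotificationCompat API, assuming a delivery-tracking use case; the channel id, notification id, icon, and wording are placeholders, and the Android 16-specific Live Updates entry points (not detailed in this article) are left out.

```kotlin
import android.app.NotificationChannel
import android.app.NotificationManager
import android.content.Context
import androidx.core.app.NotificationCompat
import androidx.core.app.NotificationManagerCompat

// Minimal sketch of an ongoing, progress-style notification using the stable
// NotificationCompat API. Channel id, notification id, and icon are placeholders.
// Assumes the POST_NOTIFICATIONS permission has already been granted (Android 13+).
fun showDeliveryProgress(context: Context, percentComplete: Int) {
    val channelId = "delivery_status" // placeholder channel id
    val channel = NotificationChannel(
        channelId,
        "Delivery status",
        NotificationManager.IMPORTANCE_DEFAULT
    )
    context.getSystemService(NotificationManager::class.java)
        .createNotificationChannel(channel)

    val notification = NotificationCompat.Builder(context, channelId)
        .setSmallIcon(android.R.drawable.stat_sys_download) // placeholder icon
        .setContentTitle("Order on the way")
        .setContentText("$percentComplete% of the trip completed")
        .setOngoing(true)        // keep it pinned while the activity is live
        .setOnlyAlertOnce(true)  // don't re-alert on every progress update
        .setProgress(100, percentComplete, false) // max, current, indeterminate
        .build()

    NotificationManagerCompat.from(context).notify(1001, notification)
}
```

Per the article, the richer Live Updates surfaces, such as expanded notifications on the always-on display, will only arrive in a later update.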
Another improvement is the automatic grouping of notifications from a single app, making it easier for users to stay organised without feeling overwhelmed by a barrage of alerts.
Aesthetic Updates and App Changes
Android 16 also brings aesthetic changes with a focus on user interface design.
Google is pushing for apps to adapt to edge-to-edge screens, eliminating the previous option for developers to opt out.
This update lays the groundwork for a future Material 3 Expressive update, which will further refine the visual experience.
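For app developers, going edge to edge ahead of the enforcement is already possible with the androidx.activity helper; below is a minimal Kotlin sketch in which the activity name, layout, and view id are placeholders.

```kotlin
import android.os.Bundle
import android.view.View
import androidx.activity.ComponentActivity
import androidx.activity.enableEdgeToEdge
import androidx.core.view.ViewCompat
import androidx.core.view.WindowInsetsCompat

// Minimal sketch: draw behind the system bars and pad the root view so
// content is not obscured. Activity, layout, and view id are placeholders.
class MainActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        enableEdgeToEdge() // androidx.activity helper: lay out edge to edge
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main) // placeholder layout

        val root = findViewById<View>(R.id.root) // placeholder view id
        ViewCompat.setOnApplyWindowInsetsListener(root) { view, insets ->
            val bars = insets.getInsets(WindowInsetsCompat.Type.systemBars())
            view.setPadding(bars.left, bars.top, bars.right, bars.bottom)
            insets
        }
    }
}
```

Apps that already handle window insets this way should generally need little change once the opt-out disappears.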
Predictive Back and Scam Protection
Android 16 makes Predictive Back the default, building on a feature first introduced in Android 13.
This allows users to preview where the back button will take them, helping to streamline navigation and make the system more intuitive.
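On the developer side, Predictive Back works when an app opts in via its manifest and routes back handling through the AndroidX back dispatcher rather than overriding onBackPressed(); a minimal Kotlin sketch follows, with the activity name and the confirm-before-leaving behaviour as illustrative placeholders.

```kotlin
import android.os.Bundle
import androidx.activity.ComponentActivity
import androidx.activity.OnBackPressedCallback

// Minimal sketch: opt in with android:enableOnBackInvokedCallback="true" in the
// manifest and route back handling through the OnBackPressedDispatcher instead
// of overriding onBackPressed(). The activity name is a placeholder.
class CheckoutActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        onBackPressedDispatcher.addCallback(this, object : OnBackPressedCallback(true) {
            override fun handleOnBackPressed() {
                // Custom back handling, e.g. confirm before leaving the screen.
                // Disable the callback and re-dispatch to get the default behaviour:
                isEnabled = false
                onBackPressedDispatcher.onBackPressed()
            }
        })
    }
}
```

With back handling registered this way, the system can animate a preview of the destination before the gesture is committed.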
Security is another area where Android 16 shines, particularly with its new Advanced Protection features.
This includes a Scam Detection AI tool that warns users about common scams like crypto fraud, technical support deceptions, and toll road scams.
In addition, Android 16 introduces in-call protections to prevent users from accidentally granting malicious access to their devices.
Rolling Out Across Devices
While Android 16 is initially available on Google Pixel devices, the broader rollout to other Android phones will take place over the coming months.
Users can expect these features, which aim to improve both the functionality and security of their smartphones, to become standard on their devices soon.


Related Articles

Adobe Project Indigo: A free-to-use iPhone app for SLR-style photos
Express Tribune • 6 hours ago

Adobe has launched a new computational photography camera app for iPhones, offering users a powerful tool to capture high-quality, natural-looking photos. The app, named Project Indigo, is free to download and currently available for iPhone 12 Pro models and newer, with Adobe recommending optimal use on an iPhone 15 Pro or later.

The app was developed in part by Marc Levoy, a renowned figure in mobile imaging who previously helped transform the Google Pixel camera's capabilities. Now an Adobe Fellow, Levoy worked alongside senior scientist Florian Kainz to build the app under Adobe Labs. The project was announced on Adobe's website through a technical blog. Unlike most Adobe products, Project Indigo does not require users to log into an Adobe account, allowing immediate access to its features.

Indigo leverages computational photography to improve image quality by capturing a burst of photos and combining them to produce a final image with enhanced dynamic range and reduced noise. The app aims to deliver a 'natural, SLR-like' aesthetic and includes full manual controls for focus, ISO, shutter speed, and white balance — features aimed at enthusiasts and professionals alike.

Adobe Labs releases an experimental digital photography app Project Indigo to showcase breakthrough innovations, including reflection removal, which is being published at CVPR this week. Check out this blog: — Adobe Research (@AdobeResearch) June 13, 2025

In the technical blog post, Levoy and Kainz outlined how the app processes images to retain a natural look, explaining key elements of its image pipeline. They noted that Project Indigo is intended as both a standalone tool and a testbed for features that may appear in other Adobe products. Experimental tools under consideration include a reflection removal button, portrait mode enhancements, and eventually, video recording capabilities.

'This is the beginning of a journey for Adobe – towards an integrated mobile camera and editing experience that takes advantage of the latest advances in computational photography and AI,' wrote Levoy and Kainz. The team's vision is to bridge the gap between casual mobile shooters and advanced photographers, offering an app that balances accessibility with powerful photographic control. An Android version is also in development.

For now, Project Indigo marks Adobe's most significant foray into mobile camera software, reflecting the growing importance of AI-driven photography tools in both consumer and professional imaging.

AI and the environment
Express Tribune • 19 hours ago

The writer is an academic and researcher. He is also the author of Development, Poverty, and Power in Pakistan, available from Routledge.

For Gen X people like me, who are getting used to the new world of AI much as we once learnt to use the computer and then the Internet, it is intriguing to see how AI is becoming integrated into our lives. For researchers like me, AI is making it easier to navigate Internet searches and to synthesise relevant literature. Beyond such basic applications, however, this evolving technology is going to start playing an increasingly prominent role in more salient aspects of our lives, ranging from healthcare and education to manufacturing, agriculture, and even warfare.

There are also legitimate reasons to be wary of AI's power. AI is making it much easier to spread disinformation, enable fraud, and cause conflicts to become deadlier. Moreover, AI, like many other technologies we have become so dependent on in our consumerist world, ranging from cars to cell phones, has significant environmental impacts. This heavy ecological footprint of AI is more concerning to me than speculation about AI dominating or replacing humans.

AI has a much larger environmental impact than many of the other innovations we now depend on, due to the exorbitant amount of energy needed to train and operate AI systems, and because of the e-waste produced by the hardware used to run AI. Training and operationalising large language models such as ChatGPT still depends on energy generated from fossil fuels, which leads to more carbon emissions and increased global warming. Each ChatGPT question is estimated to use around 10 times more electricity than a traditional Google search. Producing and disposing of AI hardware also generates a lot of e-waste containing harmful chemicals. Running AI models requires a lot of water too, to cool the data centres which house massive servers, and to cool the thermoelectric or hydroelectric plants which supply electricity for these data centres.

The race to produce AI is also compelling major tech giants to walk back their earlier environmental pledges. Consider, for instance, the case of Google. A few years ago, Google set an ambitious target to address climate change by becoming 'net zero' in emissions, but the company's emissions are now growing because of its bid to become a leader in AI.

As the AI industry continues to grow, its environmental impact will grow too. However, as with the ecological destruction caused by overconsumption of other products, the environmental impacts of AI will not be evenly distributed across regions or socio-economic classes. Nor will the benefits of AI be evenly spread. Higher-income countries are better poised to capture economic value from AI because they already have better digital infrastructure, more AI development resources, and advanced data systems. Better-off households will be able to enjoy the benefits of AI while having more resilience to shield themselves from its adverse impacts. Conversely, the quest to produce more AI may cause exploitation in poorer countries that provide the critical resources needed for AI. This is not a speculative statement, but one based on ground realities. Consider, for instance, the dismal condition of miners, including children, in poor African countries like Congo, who toil to produce the cobalt that powers the batteries used in electric cars and our phones.

AI will require many more of these critical resources, potentially leading to even more exploitation of people and natural environments in resource-rich but poor countries. It is important to improve the energy efficiency of AI models and data centres, and to use renewable energy sources to power them. It is also vital to promote more sustainable mining and manufacturing practices and to improve e-waste management so that fewer harmful chemicals enter the environment. Whether these efforts will receive more attention than profit maximisation in this largely unregulated new domain of human innovation remains to be seen.

Veo 3 set to be integrated with YouTube Shorts over the summer
Express Tribune • a day ago

YouTube CEO Neal Mohan has announced that Google will integrate its latest AI-powered video generation model, Veo 3, into YouTube Shorts later this summer. The tool will allow users to create short-form videos entirely from text prompts, significantly lowering the barrier to content creation.

Mohan introduced the update during the Cannes Lions 2025 event, describing Veo 3 as a means of empowering storytellers and democratising content production. 'The possibilities with AI are limitless,' Mohan said, noting that the feature could help 'anyone with a voice' reach an audience and build a brand.

YouTube is plugging Veo 3 AI videos directly into Shorts — The Verge (@verge) June 18, 2025

Veo 3 marks a major evolution from Google's earlier Dream Screen initiative, which allowed users to generate backgrounds using AI. The new model goes further, producing complete videos — with both visuals and audio — from a few lines of written input. The update comes as more than a quarter of YouTube Partner Programme creators now earn income from Shorts.

However, Veo 3's rollout has prompted debate over how the platform will balance innovation with content quality and creator sustainability. Critics have raised concerns over the increasing prevalence of AI-generated content, which some have dubbed 'AI slop.' These concerns include the potential for misinformation, declining originality, and the rise of deepfakes or low-quality spam that could crowd out human-made videos. In response, YouTube is developing a likeness protection tool in collaboration with Creative Artists Agency (CAA) and content creators. The aim is to safeguard public figures from unauthorised replication via AI.

For advertisers and brands, the integration offers a new way to produce targeted video campaigns without requiring costly production resources. However, the surge in automated content may also make it harder for individual campaigns to stand out. While Veo 3 expands access to video creation, it may also intensify competition for visibility on the platform. Traditional creators may feel squeezed as algorithmic systems prioritise AI-generated Shorts — or may choose to shift their efforts elsewhere.

YouTube and Google Labs continue to refine the feature as part of the broader Gemini AI ecosystem, which includes experimental tools like Search Live and Gemini Live. Veo 3's inclusion in Shorts signals a wider push to bring generative AI into the hands of mainstream users. The company has yet to confirm a global release date beyond its initial US rollout.
