
Apple WWDC 2025 Confirmed For June: Here's The Full Event Schedule
The Apple WWDC 2025 keynote will cover the new iOS 19 version, Apple AI features and possibly some surprise updates.
Apple WWDC 2025 is next on the tech calendar this year, and the company has a big lineup scheduled for the developer conference next month. The WWDC 2025 dates have already been confirmed: the event starts on June 9 with a keynote session featuring Tim Cook.
This is the event where Apple announces the new iOS, iPadOS and macOS versions each year and demonstrates their features to the world. WWDC 2025 is unlikely to be any different, barring a few surprises if Apple has them in store.
The usual WWDC entries have already been detailed, and Apple is now sharing more in-depth information about the content planned for this year's event.
The company will hold a Platforms State of the Union session after the main WWDC 2025 keynote, where developers will get a deep dive into the new features and upgrades coming to its platforms. Apple has confirmed more than 100 sessions with its experts, offering developers further insight into the new versions.
Apple is also offering special access to Apple Developer Program and Apple Developer Enterprise Program members, who can reach out to Apple experts directly and even book one-on-one sessions on topics such as Apple AI, Swift and more.
Apple WWDC 2025 will take place from June 9 to June 13, 2025. The company has confirmed that this year's edition will be livestreamed online, while a select group of people will get the opportunity to attend the event in person at Apple Park in California. The stream will be available on Apple's official YouTube channel and its Events page.
macOS, iPadOS and watchOS will be equally important, and their feature upgrades need to be in line with the iPhone additions. Apple AI is likely to grab the headlines, but the revamped Siri AI is not expected to be part of the lineup this year as it struggles internally to compete with Google and OpenAI.
First Published:
May 22, 2025, 10:45 IST

Related Articles


Indian Express
Google used YouTube's video library to train its most powerful AI tool yet: Report
Google used thousands of YouTube videos to train its latest Gemini and Veo 3 models, even as most creators remain unaware that their content is being used for AI training. Veo 3 is the tech giant's most advanced AI video generation model, unveiled at this year's I/O developer conference; it is capable of generating realistic, cinematic-level videos complete with sound and even dialogue. Google leveraged a subset of YouTube's catalogue of 20 billion videos to train these cutting-edge AI tools, according to a report by CNBC.

While it is unclear which of the 20 billion videos on YouTube were used for AI training, Google said that it honours agreements with creators and media companies. 'We've always used YouTube content to make our products better, and this hasn't changed with the advent of AI. We also recognize the need for guardrails, which is why we've invested in robust protections that allow creators to protect their image and likeness in the AI era — something we're committed to continuing,' a company spokesperson was quoted as saying.

Creators have the option to block companies like Amazon, Nvidia and Apple from using their content for AI training, but they do not have the choice to opt out when it comes to Google. While YouTube has previously shared all of this information, many creators and media organisations are yet to fully understand that Google is allowed to train its AI models on YouTube's video library. YouTube's Terms of Service state that 'by providing Content to the Service, you grant to YouTube a worldwide, non-exclusive, royalty-free, sublicensable and transferable license to use that Content.' YouTube content could be used to 'improve the product experience … including through machine learning and AI applications,' the company said in a blog post published in September 2024.

Independent creators have raised concerns that their content is being used to train AI models that could eventually compete with or replace them, and that they are neither credited nor compensated for their contributions.

Last week, The Walt Disney Company and Comcast's Universal said that they have filed a copyright lawsuit against Midjourney, accusing the AI image generator of unlawfully copying and distributing their most iconic characters. Describing the tool as a 'bottomless pit of plagiarism,' the studios alleged that Midjourney recreated and monetised copyrighted figures without permission. Days later, the AI research lab rolled out its first-ever text-to-video generation model, called V1. According to Midjourney, V1 converts images into five-second AI-generated video clips; users can upload their own images or animate an image generated by Midjourney.


India.com
Most expensive iPhone is made for just Rs 42000 but Apple sells it for Rs 1.32 lakh due to...
iPhone price in India

New Delhi: American tech giant Apple sells its iPhones at premium prices, but the actual manufacturing cost of these devices is significantly lower. Last year, the most expensive model in the iPhone 16 series was the iPhone 16 Pro Max, a phone that sells for well over a lakh of rupees. So how much does it actually cost to make? Shortly after the phone was launched last year, a report was released revealing details about its manufacturing cost, and the numbers explain why Apple charges customers more than double the build cost.

Manufacturing Cost of iPhone 16 Pro Max

The Bill of Materials (BOM) cost of the iPhone 16 Pro Max is USD 485 (approximately Rs 41,992, or about Rs 42,000), according to market research firm TD Cowen. The report also stated that this is slightly higher than the BOM of the iPhone 15 Pro Max, which was USD 453 (around Rs 39,222).

Why does a phone made for Rs 42,000 sell for over a lakh?

It is important to note that the BOM only covers the cost of raw materials and assembly. The final retail price also factors in expenses such as software development, marketing and logistics, which significantly increase the overall cost. Currently, the 256GB variant of the iPhone 16 Pro Max is being sold on Flipkart for Rs 1,32,900.

Check Key Details Here:

The higher BOM of the iPhone 16 Pro Max compared to the iPhone 15 Pro Max is due to the upgraded hardware components used in the handset. The display and the rear camera system are the two most expensive parts, costing around Rs 6,700; in comparison, these parts in the iPhone 15 Pro Max cost Rs 6,300 and Rs 5,900 respectively. The new LPDDR5X RAM has also added to the total cost, with the RAM in the iPhone 16 Pro Max priced at Rs 1,400, whereas the older LPDDR5 RAM in the iPhone 15 Pro Max cost only Rs 1,000. The A18 Pro chipset and storage in the iPhone 16 Pro Max cost Rs 3,400 and Rs 1,900 respectively. Even after accounting for logistics and software development, Apple maintains a healthy gross margin and earns a significant profit on every iPhone 16 Pro Max sold.
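To put the cited figures side by side, here is a minimal arithmetic sketch in Python using the BOM and retail prices quoted above. It is illustrative only: the gap between BOM and retail price also covers R&D, software, marketing, logistics, taxes and retailer margins, not pure profit.

```python
# Rough markup calculation from the figures cited above (illustrative only).
# The BOM covers raw materials and assembly; it excludes R&D, software,
# marketing, logistics, taxes and retailer margins.

bom_usd = 485            # TD Cowen BOM estimate for the iPhone 16 Pro Max
bom_inr = 42_000         # approximate rupee equivalent cited in the report
retail_inr = 132_900     # Flipkart price for the 256GB variant

markup_inr = retail_inr - bom_inr
markup_pct = markup_inr / retail_inr * 100

print(f"Retail price minus BOM: Rs {markup_inr:,}")
print(f"Share of retail price not explained by BOM: {markup_pct:.1f}%")
# -> Retail price minus BOM: Rs 90,900
# -> Share of retail price not explained by BOM: 68.4%
```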


Mint
AI meets adult content: THIS platform is a 'lovechild between OnlyFans and OpenAI'
Ever since OpenAI introduced the general public to the many possibilities of artificial intelligence (AI), developers have been experimenting with ways the technology can change the user experience. In one such experiment, a start-up with over 2,00,000 users in the United States has merged the endlessness of AI and celebrity fame with the 'spicy fantasies' of OnlyFans users.

OhChat, a platform its creator described as the 'lovechild between OnlyFans and OpenAI', uses artificial intelligence to build lifelike digital duplicates of public figures. These AI avatars of adult content celebrities don't eat, sleep or breathe, but 'remember you, desire you and never log off'.

In an interview with CNN, OhChat CEO Nic Young said the platform goes a step further than services such as OnlyFans, where users pay to access adult content from creators. Once activated, the avatars run autonomously, offering 'infinite personalised content' for subscribers. OhChat 'is an incredibly powerful tool, and tools can be used however the human behind it wants to be used,' he said. 'We could use this in a really scary way, but we're using it in a really, I think, good, exciting way.'

Young told CNN that OhChat works on a tiered subscription model: a user pays $4.99 (₹430) per month for unlimited texts on demand, $9.99 (₹865) for capped access to voice notes and images, or $29.99 (₹2,600) for unlimited VIP interaction. According to Young, creators receive an 80 per cent cut of the revenue their AI avatar generates, and OhChat keeps the remaining 20 per cent. 'You have literally unlimited passive income without having to do anything again,' Young told CNN.

Since launching OhChat in October 2024, the company has signed 20 creators, including 'Baywatch' actress Carmen Electra and former British glamour model Katie Price, known as Jordan. Some of the creators are already earning thousands of dollars per month, Young said.

Young said that to build a digital twin, OhChat asks its creators to submit 30 images of themselves and speak to a bot for 30 minutes. The platform can then generate the digital replica 'within hours' using Meta's large language model. The AI avatar of Jordan, for example, is trained to mimic her voice, appearance and mannerisms. It can 'sext' users, send voice notes and images, and provide on-demand intimacy at scale, all without her lifting a finger.

The platform ranks AI avatars on an internal scale of intensity and explicitness of their interactions, and creators decide which level their avatar will operate at.
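As a quick illustration of the tiered pricing and the 80/20 revenue split described above, the short Python sketch below estimates a creator's monthly payout. Only the tier prices and the revenue share come from the article; the subscriber counts are hypothetical, purely for illustration.

```python
# Hypothetical monthly payout under OhChat's reported pricing and 80/20 split.
# Tier prices and the revenue share are taken from the article; subscriber
# counts are made-up assumptions for the sake of the example.

TIER_PRICES_USD = {"texts": 4.99, "voice_and_images": 9.99, "vip": 29.99}
CREATOR_SHARE = 0.80  # creators reportedly keep 80%, OhChat keeps 20%

subscribers = {"texts": 500, "voice_and_images": 120, "vip": 30}  # hypothetical

gross = sum(TIER_PRICES_USD[tier] * count for tier, count in subscribers.items())
creator_payout = gross * CREATOR_SHARE

print(f"Gross subscription revenue: ${gross:,.2f}/month")
print(f"Creator payout (80%): ${creator_payout:,.2f}/month")
# -> Gross subscription revenue: $4,593.50/month
# -> Creator payout (80%): $3,674.80/month
```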