
OpenAI removes mentions of Jony Ive's startup 'io' amid trademark dispute; says 'We don't agree with…'
Sam Altman-led OpenAI has removed all references to 'io,' the hardware startup co-founded by former Apple design chief Jony Ive, from its website and social media. The move comes shortly after OpenAI announced a $6.5 billion deal to acquire the startup and build dedicated AI hardware. Sharing the news on X (formerly Twitter) with a link to the announcement blog post, the company said: 'This page is temporarily down due to a court order following a trademark complaint from iyO about our use of the name "io." We don't agree with the complaint and are reviewing our options.'
Following the removal, the original blog post and a nine-minute video featuring Jony Ive and OpenAI CEO Sam Altman are no longer available online. In the deleted post, Altman and Ive had stated: 'The io team, focused on developing products that inspire, empower and enable, will now merge with OpenAI to work more intimately with the research, engineering and product teams in San Francisco.'
OpenAI has not commented further on the status of the trademark dispute or on when the content might be restored. But in a statement to The Verge, OpenAI confirmed that the deal is still in place.
On May 21, 2025, OpenAI formally announced it would acquire io, a relatively new AI devices company founded by Jony Ive, the former Chief Design Officer of Apple. The acquisition is valued at $6.4 billion, paid entirely in equity. Importantly, this amount includes OpenAI's earlier investment in io, effectively consolidating its prior financial and strategic interest into full ownership. This deal represents OpenAI's largest acquisition to date, dwarfing previous deals such as the $3 billion acquisition of coding assistant platform Windsurf and the purchase of Rockset, a real-time analytics startup.
Related Articles


Mint | 31 minutes ago
Colleagues or overlords? The debate over AI bots has been raging but needn't
There's the Terminator school of perceiving artificial intelligence (AI) risks, in which we'll all be killed by our robot overlords. And then there's one where, if not friends exactly, the machines serve as valued colleagues. A Japanese tech researcher is arguing that our global AI safety approach hinges on reframing efforts to achieve this benign partnership.

In 2023, as the world was shaken by the release of ChatGPT, a pair of successive warnings came from Silicon Valley of existential threats from powerful AI tools. Elon Musk led a group of experts and industry executives in calling for a six-month pause in developing advanced systems until we figured out how to manage the risks. Then hundreds of AI leaders, including Sam Altman of OpenAI and Demis Hassabis of Alphabet's DeepMind, sent shockwaves with a statement that warned: 'Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.'

Despite all the attention paid to the potentially catastrophic dangers, the years since have been marked by AI 'accelerationists' largely drowning out AI doomers. Companies and countries have raced to be the first to achieve superhuman AI, brushing off the early calls to prioritise safety. And it has all left the public very confused.

But maybe we've been viewing this all wrong. Hiroshi Yamakawa, a prominent AI scholar from the University of Tokyo who has spent the past three decades studying the technology, is now arguing that the most promising route to a sustainable future is to let humans and AIs 'live in symbiosis and flourish together, protecting each other's well-being and averting catastrophic risks.'
Yamakawa hit a nerve because, while he recognizes the threats flagged in 2023, he argues for a working path toward coexistence with super-intelligent machines, especially at a time when nobody is halting development for fear of falling behind. In other words, if we can't stop AI from becoming smarter than us, we're better off joining it as an equal partner. 'Equality' is the sensitive part: humans want to keep believing they are superior to machines, not equal to them.

His statement has generated a lot of buzz in Japanese academic circles, attracting dozens of signatories so far, including some influential AI safety researchers overseas. In an interview with Nikkei Asia, he argued that cultural differences make Asia more likely to see machines as peers rather than adversaries. While the United States has produced AI-inspired characters like the Terminator from the eponymous Hollywood movie, the Japanese have envisioned friendlier companions like Astro Boy or Doraemon, he told the news outlet. Beyond pop culture, there's some truth to this cultural embrace: at just 25%, Japanese respondents were the least likely to say products using AI make them nervous, according to a global Ipsos survey last June, compared with 64% of Americans.

It's likely his comments will fall on deaf ears, though, like so many other AI risk warnings. Development has its own momentum. And whether the machines will ever reach a point where they could cause 'civilization extinction' remains an extremely heated debate. It's fair to say that some of the industry's focus on far-off, science-fiction scenarios is meant to distract from the more immediate harm the technology could bring, whether that's job displacement, allegations of copyright infringement or reneging on climate change goals. Still, Yamakawa's proposal is a timely revival of an AI safety debate that has languished in recent years.
These discussions can't rely on eyebrow-raising warnings alone in the absence of governance. With the exception of Europe, most jurisdictions have focused on loosening regulations in the hope of not falling behind. Policymakers can't afford to turn a blind eye until it's too late.

It also shows the need for more safety research beyond the companies trying to create and sell these products, as the social-media era demonstrated: those platforms were obviously less incentivized to share their findings with the public. Governments and universities must prioritise independent analysis of large-scale AI risks.

Meanwhile, as the global tech industry races to create computer systems that are smarter than humans, it remains to be seen whether we'll ever get there. But setting godlike AI as the goalpost has created a lot of counter-productive fear-mongering. There might be merit in seeing these machines as colleagues, not overlords.

©Bloomberg. The author is a Bloomberg Opinion columnist covering Asia tech.


Mint | 32 minutes ago
Zuckerberg leads AI recruitment blitz armed with $100 million pay packages
By Meghan Bobrowsky, Berber Jin and Ben Cohen, The Wall Street Journal

In a bid to address an AI crisis at his company, the Meta CEO has gotten personally involved in recruiting top talent.

The smartest AI researchers and engineers have spent the past few months getting hit up by one of the richest men in the world. Mark Zuckerberg is spending his days firing off emails and WhatsApp messages to the sharpest minds in artificial intelligence in a frenzied effort to play catch-up. He has personally reached out to hundreds of researchers, scientists, infrastructure engineers, product stars and entrepreneurs to try to get them to join a new Superintelligence lab he's putting together.

Some of the people who have received the messages were so surprised they didn't believe it was really Zuckerberg. One person assumed it was a hoax and didn't respond for several days. And Meta's chief executive isn't just sending them cold emails. Zuckerberg is also offering hundreds of millions of dollars, sums that would make them some of the most expensive hires the tech industry has ever seen. In at least one case, he discussed buying a startup outright.

While the financial incentives have been mouthwatering, some potential candidates have been hesitant to join Meta Platforms' efforts because of the challenges its AI work has faced this year, as well as a series of restructures that have left prospects uncertain about who is in charge of what, people familiar with their views said. Meta's struggles to develop cutting-edge artificial-intelligence technology reached a head in April, when critics accused the company of gaming a leaderboard to make a recently released AI model look better than it was.
They also delayed the unveiling of a new flagship AI model, raising questions about the company's ability to keep advancing quickly in an industrywide AI arms race.

To remedy Meta's AI malaise, Zuckerberg has become the company's recruiter-in-chief. He has tried to recruit OpenAI co-founder John Schulman and Bill Peebles, the co-creator of OpenAI's Sora video generator, according to people familiar with the matter. Neither of them has joined. Zuckerberg has also tried to recruit OpenAI co-founder Ilya Sutskever, according to people familiar with the matter. Meta invested in Sutskever's new AI startup, Safe Superintelligence, earlier this year, and is in talks to hire Daniel Gross, SSI's CEO, and Nat Friedman, a former GitHub CEO and Microsoft executive. As part of those discussions, Meta is offering to buy out portions of their venture fund. At Meta, the two would help develop new AI products. Zuckerberg also held discussions with Perplexity and offered to buy the AI search startup, according to people familiar with the matter. The Information and Bloomberg previously reported details of Zuckerberg's efforts.

Zuckerberg has offered $100 million packages to some people. He doled out $14 billion for a stake in AI startup Scale and its CEO Alexandr Wang, who is slated to run the new AI team Zuckerberg is assembling. At that price, he essentially made the 28-year-old one of the most lucrative hires in history. Beyond the Scale deal and a few other hires, it is unclear how successful his efforts will ultimately be. OpenAI CEO Sam Altman says his best people remain at his company. OpenAI has given counteroffers to people Meta has tried to poach, promising them more money and scope in their jobs if they stay, according to a person familiar with the matter. Meanwhile, Altman has been on a spending spree of his own, paying billions for former Apple designer Jony Ive's startup.
For those who have turned him down, Zuckerberg's stated vision for his new AI superteam was also a concern. He has tasked the team, which will consist of about 50 people, with achieving tremendous advances in AI models, including reaching a point of 'superintelligence.' Some found the concept vague, or lacking a specific execution plan beyond the hiring blitz, the people said. Potential hires and Meta employees working on its AI teams also pointed to a major point of tension: Meta's chief AI scientist is a skeptic of the fundamental approach that his company and others are taking to advance AI technology. Yann LeCun, whom Zuckerberg convinced in 2013 to run a newly created AI research division using the same personal tactics, doesn't believe that the large language models the company and others in the industry are building will get the world to artificial intelligence that is smarter than human beings.

For Zuckerberg, the turning point came this past spring. After the model release fell flat, he sprang into action. These days, people inside the company say they have never seen him so focused on recruiting. Zuckerberg is in a WhatsApp chat called 'Recruiting Party 🎉' with Ruta Singh, a Meta executive in charge of recruiting, and Janelle Gale, the company's head of people. He is also in the weeds of wonky AI research papers, digging into the tech and trying to find out who is actually building it. He believes there is a flywheel effect to recruiting: by talking to the smartest person he can find, he'll be introduced to the smartest people in their networks. When the chat finds people worth targeting, Zuckerberg wants to know their preferred method of communication, and he gets their attention by sending the first messages himself.
Zuckerberg has taken recruiting into his own hands, according to a person familiar with his approach, because he recognizes that it is where he can personally have the most leverage inside the company he founded: an email from him is a more powerful weapon than outreach from a faceless headhunter. Once the researchers actually believe the person emailing them really is the CEO of Meta, Zuckerberg often hosts them for meals at his homes in Palo Alto, Calif., and Lake Tahoe. He likes to remain involved in every step of the recruiting process, right down to planning their desk locations. He is also telling researchers they won't have to worry about computing power or funding at Meta, since their work will be supported by hundreds of billions of dollars in advertising revenue and the company's plentiful access to the most powerful chips.

But it remains unclear whether this strategy of combining a personal touch with piles of money will pay off for Meta. In recent days, one of Zuckerberg's rivals publicly derided his offers. 'At least so far,' Altman said, 'none of our best people have decided to take them up on that.'


India Today | 40 minutes ago
Reddit eyes Sam Altman's Orb scanner to verify humans and crack down on AI bots
Reddit may soon ask users to prove they're human – by scanning their eyes. According to an exclusive report by Semafor, the social media platform is in discussions to adopt World ID, a controversial digital identity system co-founded by OpenAI CEO Sam Altman. The goal? To fight the rising number of AI bots on Reddit, which have become harder to detect as tools like ChatGPT grow more sophisticated.

World ID is part of Altman's broader project, World, formerly known as Worldcoin. At the centre of the system is the Orb, a shiny, spherical device that scans a person's iris to create a unique, anonymous digital identity. This 'proof of humanness' can then be used across various apps and platforms, potentially including Reddit.

The idea is to offer Reddit users a way to confirm they're real people without giving up personal information. Sources told Semafor that Reddit also plans to include 'many' other verification options besides World ID, though details are unclear. If adopted, World ID could allow Reddit users to log in anonymously but with added credibility, keeping bots and fake accounts at bay. The platform, known for its pseudonymous culture and user-led communities, is facing increasing pressure to ensure authenticity while respecting privacy.

The eye-scanning Orb itself arrived in the UK earlier this month, after launching in six US cities earlier this year. Orbs have now begun appearing in high street shops and malls in London. Rollouts in Manchester, Birmingham, Cardiff, Belfast and Glasgow are expected in the coming months, according to Bloomberg. The company behind the tech, Tools for Humanity, also plans to install self-service Orbs in select retail locations, much like standalone kiosks.

Here is how it works: you look into the Orb, which scans your iris and face. It then generates an encrypted code that becomes your World ID. The original images are immediately deleted, and the digital ID is stored locally on your phone.
As an added incentive, users receive some of World's own cryptocurrency, Worldcoin (WLD), for signing up.

However, World ID has drawn criticism globally over privacy concerns. Regulators in Germany, Argentina and Kenya have opened investigations. Spain and Hong Kong have outright banned the technology, and South Korea recently fined the company over $800,000 for violating privacy rules. Tools for Humanity maintains that it does not store any biometric data, and that all iris scans are encrypted and anonymised. Ludwig has argued that World ID is safer and more privacy-conscious than other systems, such as India's Aadhaar, which has suffered multiple data breaches over the years.

At present, there are around 1,500 Orb scanners in use globally, with plans to distribute 12,000 more in the next year. As Reddit considers tapping into this controversial system to clean up its platform, it joins a growing list of companies struggling to answer one urgent question: how do we stay human in the age of AI?