
Quantum computing's Achilles' heel: Tech giants are tackling an error crisis
At the turn of this year, the course became clear. First, quantum computing chips made a generational leap, with Microsoft and Google taking different approaches to extracting the desired performance. Now, Microsoft has pushed the envelope further, developing error-correction codes applicable to many types of qubits. So has IBM, signalling a broad industry push towards the same goal.
Microsoft notes that the present generation of quantum computers, which compute using qubits, often runs into errors that the machines cannot resolve on their own. 'Reliable quantum computing requires progress across the full stack, from error correction to hardware. With new 4D codes reducing error rates 1,000x, and our co-designed quantum system with Atom Computing, we're bringing utility-scale quantum closer than ever,' says Satya Nadella, Microsoft Chairman and CEO.
Atom Computing builds scalable quantum computers.
A quantum computer packs orders of magnitude more computing power than traditional, familiar computers, which is what lets it take on complex problems. Traditional computers store information in bits, each of which is either a 0 or a 1. Quantum computing is built around qubits, which can be both at the same time (a bit like Schrödinger's cat).
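To make the bit/qubit contrast concrete, here is a minimal sketch in Python (assuming only numpy is available) that models a single qubit as a two-amplitude state vector and applies a Hadamard gate to put it into superposition. It illustrates the idea above; it is not how any of the chips discussed here are actually programmed.

```python
import numpy as np

# A classical bit is either 0 or 1. A qubit is a 2-component complex
# vector of amplitudes over the basis states |0> and |1>.
ket0 = np.array([1.0, 0.0], dtype=complex)  # definitely 0
ket1 = np.array([0.0, 1.0], dtype=complex)  # definitely 1

# The Hadamard gate puts a qubit into an equal superposition of 0 and 1.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
superposed = H @ ket0  # amplitudes (1/sqrt(2), 1/sqrt(2))

# Measurement probabilities are the squared magnitudes of the amplitudes.
probs = np.abs(superposed) ** 2
print(probs)  # [0.5 0.5] -- equally likely to read 0 or 1
```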
Quantum computers are not designed to replace traditional computers, at least for work and home use. One could point to the 2024 movie AfrAId and Netflix's 2023 movie Heart Of Stone as having foretold quantum's prowess.
Microsoft's four-dimensional geometric codes require fewer physical qubits for computation, can check for errors faster, and have reportedly delivered a 1,000-fold reduction in error rates. The hope is that this framework for error detection and correction can adapt to various types of qubits, making the technology more versatile and practical for real-world applications.
The significance of Microsoft's approach cannot be overstated. Traditional quantum error correction methods have struggled to strike a delicate balance: protecting quantum information while maintaining the very properties that make quantum computing powerful.
Microsoft isn't the only tech giant tackling errors in quantum computing.
This month, IBM detailed a roadmap for IBM Quantum Starling, which it says will be the world's first large-scale fault-tolerant quantum computer. It is expected to be delivered by 2029 as part of IBM's new Quantum Data Center.
'Our expertise across mathematics, physics, and engineering is paving the way for a large-scale, fault-tolerant quantum computer — one that will solve real-world challenges and unlock immense possibilities for business,' says Arvind Krishna, Chairman and CEO of IBM.
Quantum computing stands at a critical juncture. Qubits are extremely sensitive to their environment. The smallest of disturbances, ranging from electromagnetic interference to temperature fluctuations, can cause them to 'decohere': they lose their quantum properties and essentially become classical bits. At that stage, quantum computations produce errors.
The challenge is both technical and mathematical. Quantum states cannot be copied like data on a classical computer (a result known as the no-cloning theorem), so errors must be detected without directly reading the fragile quantum information, which makes quantum error correction vastly more complex.
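As a hedged illustration of why this is so hard, the following Python sketch simulates measuring an unknown qubit: a single measurement returns one random bit and destroys the superposition, so the underlying amplitudes can never be read out and backed up from one copy. The specific state and the random seed are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# An unknown qubit state: amplitudes alpha, beta with |alpha|^2 + |beta|^2 = 1.
alpha, beta = np.sqrt(0.3), np.sqrt(0.7)

def measure(alpha, beta):
    """Simulate one projective measurement: returns a single classical bit
    and the collapsed post-measurement state. The superposition is destroyed."""
    outcome = rng.random() < abs(beta) ** 2  # True -> read a 1
    return (1, (0.0, 1.0)) if outcome else (0, (1.0, 0.0))

bit, collapsed = measure(alpha, beta)
print(bit, collapsed)  # one random bit; alpha and beta are now unrecoverable

# Only statistics over many *fresh copies* reveal the amplitudes, and the
# no-cloning theorem forbids manufacturing those copies from a single qubit.
samples = [measure(alpha, beta)[0] for _ in range(10_000)]
print(sum(samples) / len(samples))  # ~0.7, an estimate of |beta|^2
```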
Microsoft is assessing this development with a sense of caution.
'We are in the early stages of reliable quantum computing, and the impact that this technology will have is just beginning to be realised. Practical applications will start to be revealed as researchers in various industries adopt a co-design approach to explore interactions between quantum architectures, algorithms, and applications,' explains Krysta Svore, Technical Fellow, Advanced Quantum Development at Microsoft Quantum.
Earlier in the year, Microsoft's quantum computing aspirations saw significant forward movement with the Majorana 1 chip, a first-of-its-kind scalable chip with a versatile architecture that could potentially fit a million qubits. It currently holds eight topological qubits.
Majorana 1 sits alongside Google's Willow chip, IBM's Quantum Heron and the Zuchongzhi 3.0, developed by Chinese scientists late last year. Error correction was a focus area then too: Microsoft created what is essentially a new state of matter, called a topological superconductor, which is more stable and error-resistant.
Google, too, believes it has cracked the code for error correction and is building a machine it expects to be ready by 2029. Crucial to its approach are the Willow chip and the balance between logical qubits and physical qubits.
Physical qubits are the actual quantum bits built into the hardware: the individual atoms, photons, or superconducting circuits that store quantum information. Logical qubits, by contrast, are error-corrected qubits created by combining multiple physical qubits using sophisticated error correction codes. Think of them as 'virtual' qubits.
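To convey the redundancy idea behind logical qubits, here is a deliberately simplified Python sketch of a classical three-bit repetition code with majority-vote decoding. Real quantum codes, including the surface code and Microsoft's 4D codes, instead detect errors through parity 'syndrome' measurements that never read the data qubits directly, so treat this purely as an analogy.

```python
import random

def encode(logical_bit):
    """Encode one logical bit into three physical bits (repetition code)."""
    return [logical_bit] * 3

def noisy_channel(bits, p_flip=0.1):
    """Each physical bit flips independently with probability p_flip."""
    return [b ^ (random.random() < p_flip) for b in bits]

def decode(bits):
    """Majority vote: corrects any single bit-flip error."""
    return int(sum(bits) >= 2)

random.seed(42)
trials = 100_000
failures = sum(decode(noisy_channel(encode(1))) != 1 for _ in range(trials))
# With p = 0.1 per physical bit, the logical bit fails only when two or
# more bits flip: roughly 3p^2, about 2.8%, versus 10% unprotected.
print(failures / trials)
```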
Google's research points to the 'quantum error correction threshold', the tipping point at which this dynamic reverses: once physical error rates fall below the threshold, adding more physical qubits to a logical qubit suppresses errors rather than compounding them, making the logical qubit more reliable than the physical qubits it is built from.
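Here is a rough, back-of-the-envelope Python sketch of what crossing the threshold means. The scaling rule used, with logical error falling as roughly (p/p_th) raised to the power (d+1)/2 for a distance-d surface code, is a standard approximation in the error-correction literature; the 1% threshold and the sample error rates are illustrative assumptions, not Google's published figures.

```python
def logical_error_rate(p_physical, p_threshold, distance):
    """Rough surface-code scaling: the logical error rate shrinks
    (or grows) as (p/p_th)^((d+1)/2) with code distance d."""
    return (p_physical / p_threshold) ** ((distance + 1) // 2)

P_THRESHOLD = 0.01  # illustrative ~1% threshold (assumption)

for p in (0.005, 0.02):  # one physical error rate below threshold, one above
    summary = ", ".join(
        f"d={d}: {logical_error_rate(p, P_THRESHOLD, d):.3g}" for d in (3, 5, 7)
    )
    print(f"p={p}: {summary}")
# Below threshold (p=0.005), every step up in distance suppresses errors
# further; above it (p=0.02), adding more qubits makes the logical qubit worse.
```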
There are similarities in Google's and IBM's approaches to this balance.
Central to IBM's approach is its creation of a quantum error-correcting code that it claims is about 10 times more efficient than prior methods. This efficiency gain proves crucial, at least in tests, because traditional error correction methods require hundreds or thousands of physical qubits to create a single reliable logical qubit, making large-scale quantum computers prohibitively complex.
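A small worked example in Python of why a 10x efficiency gain matters. The surface-code overhead formula below (2d^2 - 1 physical qubits per logical qubit at code distance d) is the textbook count; modelling IBM's code as a flat 10x reduction, and the target machine size and code distance, are assumptions made purely for illustration.

```python
def surface_code_overhead(distance):
    """Textbook surface code: d^2 data qubits plus (d^2 - 1) ancilla qubits."""
    return 2 * distance**2 - 1

LOGICAL_QUBITS = 200  # a utility-scale machine, chosen for illustration
DISTANCE = 25         # a plausible distance for deep circuits (assumption)

baseline = LOGICAL_QUBITS * surface_code_overhead(DISTANCE)
efficient = baseline // 10  # assuming a flat 10x efficiency gain

print(f"surface code: {baseline:,} physical qubits")  # 249,800
print(f"10x-efficient code: {efficient:,}")           # 24,980
```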
For all the potential quantum computing professes, in delivering real-world solutions for drug discovery, cybersecurity, materials science and financial risk analysis among other fields, it finds itself precariously perched at a pivotal moment. Error correction is essential both for the technology simply to work as it should and to keep operational costs down.
IBM's modular scalability, Google's systematic threshold-crossing methodology and Microsoft's new 4D code architecture differ in approach, but each company believes it is closing in on a workable solution. As practical quantum computing creeps ever closer, the years ahead will reveal how successful each has been.
Related Articles


Time of India, 6 hours ago
This Microsoft feature is accidentally 'blocking' Google Chrome on Windows
Microsoft's Family Safety tool is reportedly preventing Google Chrome from opening on some Windows devices. According to a report by The Verge, the issue was first noticed on June 3, and since then, more users have complained about it. It is affecting those who have enabled Family Safety on their devices, causing Chrome either to close immediately or to fail to launch at all. Other web browsers, such as Firefox and Opera, are not affected.

What is Microsoft's Family Safety feature
The Family Safety feature is commonly used by schools and parents through Microsoft 365 subscriptions to limit online access for children. The bug, as per the report, has now been active for over two weeks, with no update or resolution from Microsoft at the time of publication.

Google Chrome acknowledges the issue
The Verge report quotes Chrome support manager Ellen T, who said: 'Our team has investigated these reports and determined the cause of this behavior. For some users, Chrome is unable to run when Microsoft Family Safety is enabled.' While Chrome has acknowledged the issue, Microsoft is yet to issue a public statement or a timeline for a fix. 'We've not heard anything from Microsoft about a fix being rolled out,' a Chromium engineer wrote in a bug report dated June 10. 'They have provided guidance to users who contact them about how to get Chrome working again, but I wouldn't think that would have a large effect.'

Some users have found that renaming the Chrome executable file allows the browser to function. Disabling the 'filter inappropriate websites' option in Family Safety also resolves the issue, but removes content restrictions for children. While the issue is believed to be accidental, Microsoft has previously faced criticism for trying to steer users away from Chrome and toward its own Edge browser, using popups, misleading messages, and in some cases, altered search results.


The Hindu, 12 hours ago
OpenAI and Microsoft: A partnership under strain
A 'head over heels' relationship between a tech titan and an AI startup that began over six years ago is turning sour. Microsoft and OpenAI's pact powered the startup's artificial intelligence engine to build generative pre-trained models and de-aged the software maker for the AI era. Now, after cumulative investments swelling to $13 billion, the couple is caught between mutual reliance and burgeoning autonomy. This recalibration carries weighty implications for both firms.

Recent reports suggest Microsoft is prepared to halt discussions over the future contours of its OpenAI alliance if disagreements on critical terms, like Microsoft's future equity stake, persist. The Windows software maker would then rely on its existing commercial contract, ensuring access to OpenAI's technology until 2030. This marks a potential inflexion point in a relationship that saw Microsoft's capital and cloud infrastructure propel OpenAI to the vanguard of AI.

Heart of the matter
At the heart of the current negotiations are fundamental differences in strategic outlook. OpenAI has been overtly seeking to lessen its dependency on Microsoft for cloud computing, a move underscored by new partnerships. Notably, OpenAI finalised a deal in May to use Google Cloud's infrastructure, a significant step to diversify its computing resources beyond Microsoft's Azure, its current exclusive provider. It has also partnered with CoreWeave and is exploring arrangements with Oracle as part of Project Stargate to further expand its compute capacity. Such diversification provides OpenAI with technical alternatives and, presumably, greater negotiating leverage.

The shifting personal ties between the firms' leaders, Satya Nadella of Microsoft and Sam Altman of OpenAI, mirror these corporate recalibrations. Once in near-constant communication, with Mr. Nadella reportedly texting Mr. Altman five or six times a day, their interactions have become more formalised, primarily consisting of scheduled weekly calls, per news reports. This devolution from spontaneous chats to structured exchange began after Mr. Altman's brief ousting from OpenAI in late 2023, an event that led Mr. Nadella to rearchitect his company's AI future. While Mr. Nadella backed Mr. Altman, the Microsoft CEO also made his controversial decision to bring DeepMind co-founder Mustafa Suleyman on board. At that point, Mr. Suleyman was running Inflection AI, and as part of the deal, the entire team at Inflection AI joined the software maker.

Despite these undercurrents, public pronouncements remained diligently choreographed. Earlier this year, Mr. Altman posted a picture with Mr. Nadella on X, announcing the next phase of their partnership to be 'much better' than anyone is ready for. Mr. Nadella echoed the optimistic sentiment. Such displays were aimed at reassuring investors amidst intricate private negotiations, mounting competition from other AI players, and increasing regulatory scrutiny globally.

A pivotal point
A pivotal point of disagreement between the duo is OpenAI's corporate structure. In May, OpenAI announced it would restructure into a Public Benefit Corporation (PBC) while keeping its non-profit parent in control, retaining the authority to appoint board members. This was a significant shift from earlier considerations of a more conventional for-profit transition that might have diluted the non-profit's oversight and authority.
The move, which drew criticism from early OpenAI investor and Tesla CEO Elon Musk, was aimed at better aligning the company's operational structure with its stated mission of developing AI for humanity's benefit, while still attracting substantial investment. This restructuring requires Microsoft's assent as a key stakeholder, the tech giant having provided billions of dollars in funding. Microsoft is said to be negotiating the size of its own potential stake in the new PBC, with discussions reportedly ranging from 20% to 49%. Failure to finalise the restructuring by year-end could jeopardise funding from other investors, including a significant investment from SoftBank.

Broader AI strategy
Microsoft, for its part, is not standing still. Its AI strategy is visibly broadening beyond its OpenAI relationship. At its Build 2025 conference, Microsoft showcased integrations of models from Anthropic and Musk's xAI, signalling a move towards a more diversified AI portfolio. The company is also developing its own smaller, in-house models, like Phi-4, to reduce costs and reliance on any single provider for its Copilot services. This reflects a growing confidence in its proprietary capabilities and a desire to offer a wider range of AI tools on its Azure platform. Indeed, Microsoft's ability to leverage its existing agreement with OpenAI until 2030 offers it strategic latitude.

But the evolving Microsoft-OpenAI dynamic unfolds against a fiercely competitive AI landscape. Both entities are balancing the fruits of their collaboration against the imperatives of strategic independence and market differentiation. Microsoft's potential willingness to pause talks and OpenAI's multi-cloud strategy both signal a relationship under strain. The denouement of these negotiations will not only chart the future courses of the two firms but also set significant precedents for partnerships, governance, and commercialisation in the rapidly maturing AI domain. The relationship, once a lodestar for AI collaboration, now offers a salient lesson in managing the intricate dance of shared ambition and diverging paths in an industry perpetually remaking itself.

Hindustan Times, 12 hours ago
5 smartphone myths you shouldn't fall for in 2025
Smartphones play a crucial role in our daily lives, with over 7 billion users worldwide relying on these devices for communication, work, entertainment and creating content to make money. Despite rapid innovation and aggressive marketing, many buyers fall for common myths that can lead to poor purchasing decisions and wasted money. Understanding the facts behind these myths can help consumers make smarter choices. Here are five widespread smartphone misconceptions you should know before your next purchase.

RAM in smartphones acts as short-term memory for running apps and processes. While the amount of RAM in phones has grown, from 6GB to as high as 16GB in some models, having more RAM does not automatically make a phone faster. Instead, RAM allows the phone to keep more apps active simultaneously without reloading them from slower storage. The speed of the RAM and the efficiency of the phone's processor play a bigger role in overall performance. A phone with a powerful processor and optimised software will often outperform one with excessive RAM but weaker hardware. Therefore, prioritising processor quality and software efficiency is more important than simply choosing the phone with the highest RAM.

High-end processors like the Snapdragon 8 Elite or Apple's A18 Pro often receive attention for their raw power and gaming capabilities. However, most users will find mid-range processors such as the Snapdragon 7 series or MediaTek Dimensity 8000 series to be more than sufficient for daily tasks like browsing, messaging, and streaming. Real-world performance depends heavily on software optimisation rather than just processor speed. Phones with mid-range chips and well-tuned software can provide smooth user experiences and better battery management. While flagship processors offer advanced AI and camera functions, these features rarely impact the average user's daily routine. Instead, focusing on how well the phone performs in everyday tasks and receives updates should guide your choice.

Many buyers believe that a higher megapixel count or multiple cameras automatically produce superior photos. In reality, photo quality depends on sensor size, lens quality, and image processing algorithms more than on the number of lenses or megapixels. Phones may include extra cameras like macro or depth sensors that serve little practical use and primarily boost marketing appeal. A well-executed dual-camera system often delivers better results than cluttered multi-camera setups on budget phones. Also, very high megapixel counts, such as 200MP, use pixel binning to combine pixels for better low-light shots, which results in photos that are smaller in resolution than the sensor suggests: a 200MP sensor that bins 16 pixels into one outputs roughly 12.5MP images. Exceptional models like the Samsung Galaxy S25 Ultra use high-resolution sensors to enable features such as detailed cropping and 8K video, but these benefits depend on good overall camera design, not megapixels alone.

Specifications like high refresh rate displays, fast charging, and many camera lenses look impressive, but don't always translate into a smooth user experience. Software optimisation, battery management, thermal control, and regular software updates contribute significantly to how well a mobile phone performs day-to-day.
A device with top-end specs but poor software tuning can feel slower and less reliable than a mid-range phone with efficient software. When choosing a phone, pay close attention to real-world reviews and the manufacturer's update policies rather than just the spec sheet.

Specs and features often dominate buying decisions, but a critical aspect lies beyond the initial unboxing: after-sales support. Even the most advanced smartphones can develop faults or require repairs. Unfortunately, some brands have inconsistent warranty policies or limited service infrastructure. Users might face long waits for repairs, unavailability of parts, or out-of-pocket expenses for manufacturer faults. Issues like screen defects or battery problems can turn an otherwise good phone into a costly burden if support is lacking. Researching a brand's reputation and the quality of its customer service is essential for a smooth ownership experience.

Smartphone buyers should look beyond marketing claims and focus on the real factors that affect their usage experience. More RAM doesn't mean faster speed, flagship processors are not necessary for most, megapixels aren't the sole measure of photo quality, and after-sales support matters as much as hardware specs. Consumers can avoid overspending and disappointment by understanding these truths and selecting devices that truly fit their needs. The key is to balance hardware capability, software optimisation, and reliable support rather than chasing every flashy spec on the market.