
IBM plans to launch Starling quantum computer by 2029; it will detect and fix its own errors without crashing
IBM has unveiled a new vision to create the world's first large-scale, fault-tolerant quantum computer. The company aims to deliver the system, which it calls "IBM Quantum Starling", in 2029. The project, to be housed within a newly constructed IBM Quantum Data Centre in Poughkeepsie, New York, promises to revolutionise the capabilities of quantum computing far beyond today's existing technologies.

The Starling quantum computer is expected to execute 20,000 times more operations than current quantum machines, reaching levels of computational complexity previously thought unattainable. According to IBM, representing the full computational state of Starling would require memory equivalent to more than a quindecillion of today's most powerful supercomputers. With this leap, researchers and businesses will be able to explore the full spectrum of quantum states, gaining insights far beyond what current quantum devices can deliver.

'IBM is charting the next frontier in quantum computing,' said Arvind Krishna, IBM's Chairman and CEO. 'Our expertise across mathematics, physics, and engineering is paving the way for a large-scale, fault-tolerant quantum computer — one that will solve real-world challenges and unlock immense possibilities for business.'

Fault-tolerant quantum systems are viewed as the gateway to practical applications across sectors such as pharmaceuticals, materials science, chemistry, and optimisation. With hundreds or even thousands of logical qubits, these machines could potentially perform hundreds of millions, or even billions, of operations with unprecedented accuracy and efficiency. The Starling system aims to achieve 100 million quantum operations using 200 logical qubits.
It will serve as the foundation for IBM's subsequent system, Quantum Blue Jay, which aspires to handle one billion quantum operations across 2,000 logical qubits.

Unlike conventional qubits, logical qubits rely on multiple physical qubits operating together to store quantum information while continuously correcting for errors. Error correction is critical, as it allows the system to perform sustained computations without faults. The more physical qubits involved, the more reliable the logical qubit becomes, enabling extended quantum operations that were previously impossible.

Until now, scaling up quantum systems has been hampered by the impracticality of managing the sheer number of physical qubits required. Previous error-correcting methods demanded excessive hardware and infrastructure, limiting real-world applications to only small-scale experiments.

IBM's approach is grounded in a new architecture based on quantum low-density parity check (qLDPC) codes, which the company detailed in two newly published technical papers. This innovative error-correcting code, which gained recognition in Nature, reduces the number of physical qubits needed for error correction by around 90 per cent compared to traditional methods, making large-scale systems far more feasible. The first paper outlines how qLDPC codes will enable the system to process instructions efficiently and perform quantum operations with considerably less overhead. The second describes real-time decoding techniques, which allow conventional computing resources to swiftly identify and correct errors during quantum operations.

IBM's roadmap

IBM's updated Quantum Roadmap lays out a series of milestones leading up to Starling. In 2025, the IBM Quantum Loon processor will begin testing architectural components such as 'C-couplers' for long-distance qubit connections. In 2026, Quantum Kookaburra will mark the company's first modular processor capable of both storing and processing encoded information.
By 2027, the Quantum Cockatoo system will connect multiple Kookaburra modules via 'L-couplers,' enabling scalable quantum systems that avoid the impracticality of massive, monolithic chips.
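The logical-qubit idea described above can be illustrated with a deliberately simplified classical analogue: a three-bit repetition code, where one 'logical' bit is spread across three 'physical' bits and recovered by majority vote. This is a toy sketch for intuition only, not IBM's qLDPC scheme (which protects quantum states, which cannot simply be copied):

```python
import random

def encode(bit: int) -> list[int]:
    """Store one logical bit redundantly across three physical bits."""
    return [bit, bit, bit]

def apply_noise(bits: list[int], p: float) -> list[int]:
    """Flip each physical bit independently with probability p."""
    return [b ^ (random.random() < p) for b in bits]

def decode(bits: list[int]) -> int:
    """Majority vote recovers the logical bit if at most one physical bit flipped."""
    return int(sum(bits) >= 2)

random.seed(0)
p, trials = 0.05, 100_000
failures = sum(decode(apply_noise(encode(0), p)) != 0 for _ in range(trials))
print(failures / trials)  # roughly 3p^2 - 2p^3, far below the raw p = 0.05
```

Redundancy plus a correction step turns three unreliable bits into one much more reliable bit, which is the same trade that logical qubits make with physical qubits, at far greater cost and subtlety.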

Related Articles


Hindustan Times
a day ago
Quantum computing's Achilles' heel: Tech giants are tackling an error crisis
At the turn of this year, the course became clear. First, quantum computing chips made a generational leap, albeit with Microsoft and Google taking different approaches to generating the desired performance. Now, Microsoft has pushed the envelope further, having developed error-correction codes that are applicable to many types of qubits. So has IBM, signifying broad efforts towards the same result. The company insists the present generation of quantum computers, which use qubits, often runs into errors it cannot resolve on its own.

'Reliable quantum computing requires progress across the full stack, from error correction to hardware. With new 4D codes reducing error rates 1,000x, and our co-designed quantum system with Atom Computing, we're bringing utility-scale quantum closer than ever,' says Satya Nadella, Microsoft Chairman and CEO. Atom Computing builds scalable quantum computers.

A quantum computer packs magnitudes more computing power than traditional, familiar computers, allowing it to solve complex problems. To compute, traditional computers store information in bits (that is, 0s and 1s). Quantum computing is built around qubits, which can hold both values at the same time (a bit like Schrödinger's cat). Quantum computers are not designed to replace traditional computers, at least for work and home uses. One could point to the 2024 movie AfrAId, and Netflix's 2023 movie Heart Of Stone, as having foretold quantum's prowess.

Microsoft's four-dimensional geometric codes require fewer physical qubits for compute, can check for errors faster, and have reportedly returned a 1,000-fold reduction in error rates. There is hope that this framework of error detection and correction, which can adapt to various types of qubits, will make the technology more versatile and practical for real-world applications. The significance of Microsoft's approach cannot be overstated.
Traditional quantum error correction methods have struggled with a delicate balance: protecting quantum information while maintaining the very properties that make quantum computing powerful.

Microsoft is not the only tech giant tackling errors in quantum computing. IBM, this month, detailed a roadmap for the IBM Quantum Starling, which it says will be the world's first large-scale fault-tolerant quantum computer. It is expected to be delivered by 2029, as part of IBM's new Quantum Data Center. 'Our expertise across mathematics, physics, and engineering is paving the way for a large-scale, fault-tolerant quantum computer — one that will solve real-world challenges and unlock immense possibilities for business,' says Arvind Krishna, Chairman and CEO of IBM.

Quantum computing stands at a critical juncture. Qubits are extremely sensitive to their environment. The smallest of disturbances, ranging from electromagnetic interference to temperature fluctuations, can cause them to 'decohere': they lose their quantum properties and essentially become classical bits. At that stage, quantum computations produce errors. The challenge is both technical and mathematical. Since quantum states cannot be copied like data on a classical computer, quantum error correction becomes exponentially more complex.

Microsoft is assessing this development with a sense of caution. 'We are in the early stages of reliable quantum computing, and the impact that this technology will have is just beginning to be realised. Practical applications will start to be revealed as researchers in various industries adopt a co-design approach to explore interactions between quantum architectures, algorithms, and applications,' explains Krysta Svore, Technical Fellow, Advanced Quantum Development at Microsoft Quantum.
Earlier in the year, Microsoft's quantum computing aspirations saw significant forward movement with the Majorana 1 chip, a first-of-its-kind scalable chip with a versatile architecture that can potentially fit a million qubits. It currently holds 8 topological qubits. Majorana 1 sits alongside Google's Willow chip, IBM's Quantum Heron, as well as the Zuchongzhi 3.0, developed by Chinese scientists late last year. Error correction was a focus area then too: Microsoft created what is essentially a new state of matter, called a topological superconductor, that is more stable and error-resistant.

Google too believes it has cracked the code for error correction and is building a machine that it expects will be ready by 2029. Crucial to its approach is the Willow chip, and the balance between logical qubits and physical qubits. Physical qubits are the actual quantum bits built into the hardware: the individual atoms, photons, or superconducting circuits that store quantum information. Logical qubits, by contrast, are error-corrected qubits created by combining multiple physical qubits with sophisticated error correction codes. Think of them as 'virtual' qubits. Google's research points to the 'quantum error correction threshold' as the tipping point at which adding more physical qubits makes a logical qubit more reliable rather than noisier.

There are similarities in Google's and IBM's approaches regarding this balance. Central to IBM's approach is its creation of a quantum error-correcting code that it claims is about 10 times more efficient than prior methods. This efficiency gain proves crucial, at least in tests, because traditional error correction methods require hundreds or thousands of physical qubits to create a single reliable logical qubit, making large-scale quantum computers prohibitively complex.
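The threshold idea attributed to Google above can be made concrete with a classical toy model (not the quantum code Willow actually runs): for a three-bit majority vote where each bit flips with probability p, the logical error rate is 3p²(1−p) + p³, which beats the raw rate p only when p sits below the toy threshold of 0.5.

```python
def logical_error_rate(p: float) -> float:
    """Chance a three-bit majority vote fails when each bit
    flips independently with probability p (two or more flips)."""
    return 3 * p**2 * (1 - p) + p**3

# Below the toy threshold (p = 0.5) encoding helps; above it, it hurts.
for p in (0.01, 0.10, 0.40, 0.60):
    print(f"p={p:.2f}  logical={logical_error_rate(p):.4f}  helps={logical_error_rate(p) < p}")
```

Real quantum thresholds are far lower than 0.5, but the shape of the argument is the same: once hardware error rates drop below the code's threshold, adding physical qubits suppresses logical errors instead of amplifying them.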
For all the potential quantum computing holds for delivering real-world solutions in areas including drug discovery, cybersecurity, materials science and financial risk analysis, it finds itself precariously perched at this pivotal moment. Error correction is essential both for it simply to work as it should and to keep operational costs down. IBM's modular scalability, Google's systematic threshold-crossing methodology and Microsoft's new 4D code architecture differ in approach, but each company believes it is closing in on a workable solution. As quantum creeps ever closer, the years ahead will tell how far each succeeds.


Time of India
a day ago
India moves to benchmark iron ore prices to global index for greater transparency
India has proposed benchmarking the price of domestically produced iron ore to S&P Global Platts, or other such global publication gauges. Currently, the Indian Bureau of Mines (IBM) announces the average sale price (ASP) of iron ore in the country based on self-declarations by mining companies. This is then used to calculate royalty and District Mineral Fund (DMF) disbursals payable to states.

Moving away from the self-declaration regime, a Mines Ministry notification said the IBM shall compute the daily price of iron ore fines (60 to below 62 per cent Fe grade) in Indian rupees based on prices published daily by S&P Global Platts or other reputed publications for iron ore of the same grade. Self-declared prices by mining companies left the possibility of understated revenues, which also meant lower royalty payable to the states.

The proposal brings transparency to iron ore pricing, limiting room for mining companies to declare lower prices for higher grades. It also brings uniformity in iron ore prices, as earlier prices would vary across states. States use prices declared by these companies to calculate their revenue, as the law stipulates that mining entities pay 15 per cent of the iron ore sale revenue as royalty to the states. Further, 2 per cent of the royalty is paid as DMF, used to fund development activities in mining-affected areas.
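As a quick worked example of the royalty arithmetic described above (the 15 per cent royalty and 2 per cent DMF share are from the article; the sale figure is purely illustrative):

```python
def royalty_and_dmf(sale_revenue_inr: float) -> tuple[float, float]:
    """Royalty owed to the state (15% of sale revenue) and the DMF
    contribution (2% of that royalty), per the rates in the article."""
    royalty = sale_revenue_inr * 15 / 100
    dmf = royalty * 2 / 100
    return royalty, dmf

# Illustrative: Rs 1,00,000 of iron ore sales
royalty, dmf = royalty_and_dmf(100_000)
print(royalty, dmf)  # 15000.0 300.0
```

Because royalty scales directly with the declared sale price, an understated ASP proportionally shrinks both the state's royalty and the DMF, which is why benchmarking to a published index matters.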


Time of India
2 days ago
IBM study: Indian CEOs double down on AI investments to drive long-term innovation
74% of surveyed CEOs say more budget flexibility is needed to capitalize on digital opportunities that drive long-term growth and innovation. 64% of surveyed CEOs strongly agree that their organization is realizing value from GenAI investments beyond cost reduction.

A new global study by the IBM (NYSE: IBM) Institute for Business Value suggests that surveyed Indian CEOs are open to investing in digital opportunities that drive long-term growth and innovation but need more budget flexibility to do so. They also cite lack of expertise and knowledge as a top barrier to innovation in their organization. The study also points to Indian CEOs investing in AI with purpose and having clear metrics to measure innovation ROI.

The annual IBM CEO study, which surveyed 2,000 CEOs globally, revealed that executive respondents expect the growth rate of AI investments to more than double in the next two years. In India, 51% of surveyed CEOs confirm they are actively adopting AI agents today and preparing to implement them at scale.

According to the findings, 58% of surveyed CEOs in India identify integrated enterprise-wide data architecture as critical for cross-functional collaboration, and 71% view their organization's proprietary data as key to unlocking the value of generative AI. However, the research indicates organizations may be struggling to cultivate an effective data environment: 53% of respondents acknowledge that the pace of recent investments has left their organization with disconnected, piecemeal technology.

'Indian CEOs are at the forefront of a massive transformation fuelled by technological advancements like generative AI and agentic AI. It is no longer if they should adopt AI but where it can deliver the strongest competitive edge and accelerated growth,' said Sandip Patel, Managing Director, IBM India & South Asia. 'To lead in this era, CEOs must see disruption as opportunity, focusing on tangible business outcomes while navigating constant change.
At IBM, we're helping Indian enterprises scale AI responsibly and drive seamless AI adoption for long-term growth,' he added.

Highlights for India from the IBM CEO Study include:

Less Than a Third of AI Initiatives Met ROI Expectations, But Indian CEOs Stay Committed
- Surveyed CEOs report that only 25% of AI initiatives have delivered expected ROI over the last few years, and only 15% have scaled enterprise-wide.
- To accelerate progress, 62% of CEO respondents say their organization is leaning into AI use cases based on ROI, with 66% reporting that their organization has clear metrics to measure innovation ROI effectively.
- 64% of CEO respondents say their organization is realizing value from generative AI investments beyond cost reduction.
- 69% of CEOs surveyed acknowledge that the risk of falling behind drives investment in some technologies before they have a clear understanding of the value they bring to the organization, but only 39% say it's better to be 'fast and wrong' than 'right and slow' when it comes to technology adoption.
- 44% of surveyed CEOs admit their organization struggles to balance funding for existing operations and investment in innovation when unexpected change occurs, and 74% say more budget flexibility is needed to capitalize on digital opportunities that drive long-term growth and innovation.
- By 2027, 84% of surveyed CEOs expect their investments in scaled AI efficiency and cost savings to have returned a positive ROI, while 78% expect to see a positive return from their investments in scaled AI growth and expansion.
Indian CEOs Prioritize Strategic Leadership and AI Talent to Unlock Future Growth
- 67% of CEO respondents say their organization's success is directly tied to maintaining a broad group of leaders with a deep understanding of strategy and the authority to make critical decisions.
- 61% of CEOs surveyed say that differentiation depends on having the right expertise in the right positions.
- Respondents cite lack of clear innovation strategy, aversion to risk and disruption, and lack of expertise and knowledge as top barriers to innovation in their organization.
- 68% of CEOs say their organization will use automation to address skill gaps.
- 54% of CEO respondents say they are hiring for roles related to AI that did not exist a year ago.