Latest news with #ArvindKrishna

Yahoo
16 hours ago
- Business
- Yahoo
BofA outlines the bull and bear arguments surrounding IBM shares
Shares in International Business Machines (NYSE:IBM) have surged so far this year, spurred on by hopes around the software group's artificial intelligence ambitions. IBM has said that it now has a "book of business" for its ChatGPT-like generative AI that is worth $6 billion, while CEO Arvind Krishna has said that customer interest in using different AI models would likely fuel demand in the future.

The company has also specialized in developing tools that allow clients to build out their own AI-enhanced agents. Speaking to Reuters in May, Krishna suggested that, using IBM's Granite suite of AI models along with alternatives from Mistral and Facebook-owner Meta Platforms (NASDAQ:META), these agents could be constructed in mere minutes. These capabilities will lead to an acceleration in the growth of its AI operations, Krishna said at the time.

The comments came after IBM announced in April that it would invest some $150 billion in the United States, where it has long had a presence as a manufacturer of mainframe computers. Krishna noted that quantum computers -- a new type of computer that harnesses quantum mechanics to carry out tasks -- will also be made in the country. "There's going to be a very healthy market that behooves us to invest and lean in," Krishna told Reuters.

Yet, even as optimism surrounds IBM's AI ambitions, a murky economic outlook clouded its most recent earnings. Faced with the looming threat of sweeping U.S. tariffs, analysts have warned that many companies may be reining in spending, potentially weighing on IBM's key consulting arm. A push by U.S. President Donald Trump's administration to slash government spending has also led to the shelving of 15 federal contracts at IBM that accounted for $100 million in business. Revenue from the consulting segment slipped by 2% in the most recent quarter, although IBM backed its 2025 target of at least 5% top-line growth on a constant-currency basis.
Writing in a note to clients, analysts at BofA led by Wamsi Mohan said that IBM shares, despite trading at all-time highs, are "interesting due to the transformational initiatives undertaken by management."

"IBM underwent a significant transformation over the last five years by shifting their software segment towards strategic M&A investments, shedding lower growth/high cost businesses, and rebalancing their portfolio towards cloud and AI trends," the brokerage wrote.

However, they flagged that less rosy assessments of the stock have highlighted that IBM is "structurally under-owned and underweight." "This disconnect stems from the underperformance from 2010-2019 as revenues, margins and free cash flow were under pressure. While the turnaround [from 2020-2025] is acknowledged by bears, the valuation relative to growth profile remains a hurdle for many," the analysts said.

Weighing these arguments, the BofA strategists lifted their price target for the stock to $320 from $290 and reiterated their "buy" rating.

Miami Herald
a day ago
- Business
- Miami Herald
Analysts revamp IBM stock price target after AI-fueled new high
For investors looking beyond popular AI software names like Palantir, International Business Machines Corp. (IBM) is quietly becoming a stock to watch. On June 18, IBM stock closed at a record $283.21. Year to date, the stock is up more than 27%.

The 114-year-old company, once best known for mainframes and hardware, has spent the past few years transforming itself into a modern software and consulting business. It has focused on strategic acquisitions, exited low-growth and high-cost units, and rebalanced its business toward artificial intelligence, hybrid cloud, and enterprise automation. Today, IBM sells software tools that help businesses build and manage AI systems, including WatsonX, its platform for creating and training AI models. It also owns Red Hat, the open-source software giant it acquired in 2019, which plays a key role in its hybrid cloud strategy. Many of IBM's clients are large enterprises and government agencies looking to streamline operations through AI and cloud technology.

IBM stock fell 6.6% on April 24 after its first-quarter earnings report but quickly recovered in the weeks that followed. IBM reported revenue of $14.5 billion for the first quarter, up 1% from the prior year and slightly above analyst expectations. Adjusted earnings per share came in at $1.60, down 5% year over year. The tech giant warned of an "uncertain" operating environment, but reaffirmed its full-year outlook for 5% revenue growth and $13.5 billion in free cash flow, key for both dividends and possible future acquisitions.

"While no one is immune to uncertainty, we enter this environment from a position of relative strength and resiliency," IBM CEO Arvind Krishna said during the April earnings call. "Our clients run the world's most essential processes. Our diversity across businesses, geographies, industries, and large enterprise clients position us well to navigate the current climate."

Bank of America analyst Wamsi Mohan has raised his price target for IBM to $320 from $290, reiterating a buy rating on the shares, according to a research note on June 18. Despite trading at all-time highs, Mohan believes that IBM "remains interesting due to the transformational initiatives undertaken by management, positioning for growth in Gen AI, Agentic AI (and eventually quantum), and strong FCF driven by internal productivity initiatives." The analyst continues to view IBM as "a defensive investment" with improving revenue growth. That growth could generate more cash flow that could be reinvested in further mergers and acquisitions (M&A). Meanwhile, BofA sees the potential for IBM's revenues to accelerate. "In our opinion, the Mainframe has increased in relevance and drives everything from AI, increased software attach and higher quality MIPS on transaction processing, all of which support higher (accelerating) growth in the future," Mohan wrote.

Wedbush is also bullish on IBM shares, raising its price target from $300 to $325 with an outperform rating. IBM remains "one of our top software names to own as the AI Revolution accelerates over the coming years," Wedbush analyst Daniel Ives wrote in a research report published on June 20. "While the stock has had a great run so far in 2025, we believe IBM is still under-owned and in the early stages of a renaissance of growth with AI the key driver."

The average price target on IBM shares from 14 analysts tracked by TipRanks is $267.54.


India Today
12-06-2025
- Business
- India Today
IBM plans to launch Starling quantum computer by 2029 that can detect and fix its own errors without crashing
IBM has unveiled a new vision to create the world's first large-scale, fault-tolerant quantum computer. The company aims to deliver the system, which it calls IBM Quantum Starling, in 2029. The project, to be housed within a newly constructed IBM Quantum Data Centre in Poughkeepsie, New York, promises to advance the capabilities of quantum computing far beyond today's existing technologies.

The Starling quantum computer is expected to execute 20,000 times more operations than current quantum machines, reaching levels of computational complexity previously thought unattainable. According to IBM, representing the full computational state of Starling would require memory equivalent to more than a quindecillion of today's most powerful supercomputers. With this leap, researchers and businesses will be able to explore the full spectrum of quantum states, offering insights far beyond what current quantum devices can provide.

Starling quantum computer

'IBM is charting the next frontier in quantum computing,' said Arvind Krishna, IBM's Chairman and CEO. 'Our expertise across mathematics, physics, and engineering is paving the way for a large-scale, fault-tolerant quantum computer — one that will solve real-world challenges and unlock immense possibilities for business.'

Fault-tolerant quantum systems are viewed as the gateway to practical applications across sectors such as pharmaceuticals, materials science, chemistry, and optimisation. With hundreds or even thousands of logical qubits, these machines could potentially perform hundreds of millions, or even billions, of operations with unprecedented accuracy. The Starling system aims to achieve 100 million quantum operations using 200 logical qubits. It will serve as the foundation for IBM's subsequent system, Quantum Blue Jay, which aspires to handle one billion quantum operations across 2,000 logical qubits.

Unlike conventional qubits, logical qubits rely on multiple physical qubits operating together to store quantum information while continuously correcting for errors. Error correction is critical, as it allows the system to perform sustained computations without faults. The more physical qubits involved, the more reliable the logical qubit becomes, enabling extended quantum operations that were previously impossible.

Until now, scaling up quantum systems has been hampered by the impracticality of managing the sheer number of physical qubits required. Previous error-correcting methods demanded excessive hardware and infrastructure, limiting real-world applications to small-scale experiments.

IBM's approach is grounded in a new architecture based on quantum low-density parity check (qLDPC) codes, which the company detailed in two newly published technical papers. This innovative error-correcting code, which gained recognition in Nature, reduces the number of physical qubits needed for error correction by around 90 per cent compared to traditional methods, making large-scale systems far more practical.

The first paper outlines how qLDPC codes will enable the system to process instructions efficiently and perform quantum operations with considerably less overhead. The second describes real-time decoding techniques, which allow conventional computing resources to swiftly identify and correct errors during quantum computation.

Quantum roadmap

IBM's updated Quantum Roadmap lays out a series of milestones leading up to Starling. In 2025, the IBM Quantum Loon processor will begin testing architectural components such as 'C-couplers' for long-distance qubit connections. In 2026, Quantum Kookaburra will mark the company's first modular processor capable of both storing and processing encoded information. By 2027, the Quantum Cockatoo system will connect multiple Kookaburra modules via 'L-couplers,' enabling scalable quantum systems that avoid the impracticality of massive, monolithic chips.
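As a rough illustration of what the roughly 90 per cent reduction in error-correction overhead mentioned above means in qubit counts, here is a minimal sketch. The per-logical-qubit figure of 1,000 physical qubits is an invented assumption for illustration, not an IBM-published number:

```python
# Toy overhead comparison (all figures hypothetical, for illustration only):
# if a traditional error-correcting code needed ~1,000 physical qubits per
# logical qubit, a 90% overhead reduction leaves ~100 per logical qubit.

def physical_qubits_needed(logical_qubits, per_logical, reduction=0.0):
    """Physical qubits required after a fractional reduction in overhead."""
    return round(logical_qubits * per_logical * (1 - reduction))

# Starling-scale example: 200 logical qubits.
baseline = physical_qubits_needed(200, per_logical=1000)
with_qldpc = physical_qubits_needed(200, per_logical=1000, reduction=0.9)
print(f"baseline: {baseline:,} physical qubits")   # 200,000 under these assumptions
print(f"with qLDPC: {with_qldpc:,} physical qubits")  # 20,000 under these assumptions
```

The point of the sketch is only the ratio: a 90 per cent cut in overhead shrinks the machine an order of magnitude for the same number of logical qubits.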


Tahawul Tech
11-06-2025
- Business
- Tahawul Tech
'IBM is charting the next frontier in quantum computing, one that will solve real-world challenges.' – Arvind Krishna, IBM CEO
IBM has outlined its plans to build the world's first large-scale fault-tolerant quantum computer, which will ultimately pave the way for practical and scalable quantum computing. Due for delivery by 2029, IBM Quantum Starling will be built in a new IBM Quantum Data Center in Poughkeepsie, New York, and is expected to perform 20,000 times more operations than today's quantum computers. To represent the computational state of an IBM Starling would require the memory of more than a quindecillion (10^48) of the world's most powerful supercomputers. With Starling, users will be able to fully explore the complexity of its quantum states, which are beyond the limited properties able to be accessed by current quantum computers.

IBM, which already operates a large, global fleet of quantum computers, is releasing a new Quantum Roadmap that outlines its plans to build out a practical, fault-tolerant quantum computer.

'IBM is charting the next frontier in quantum computing,' said Arvind Krishna, Chairman and CEO, IBM. 'Our expertise across mathematics, physics, and engineering is paving the way for a large-scale, fault-tolerant quantum computer — one that will solve real-world challenges and unlock immense possibilities for business.'

A large-scale, fault-tolerant quantum computer with hundreds or thousands of logical qubits could run hundreds of millions to billions of operations, which could accelerate time and cost efficiencies in fields such as drug development, materials discovery, chemistry, and optimization. Starling will be able to access the computational power required for these problems by running 100 million quantum operations using 200 logical qubits. It will be the foundation for IBM Quantum Blue Jay, which will be capable of executing 1 billion quantum operations over 2,000 logical qubits.

A logical qubit is a unit of an error-corrected quantum computer tasked with storing one qubit's worth of quantum information.
It is made from multiple physical qubits working together to store this information and monitor each other for errors. Like classical computers, quantum computers need to be error corrected to run large workloads without faults. To do so, clusters of physical qubits are used to create a smaller number of logical qubits with lower error rates than the underlying physical qubits. Logical qubit error rates are suppressed exponentially with the size of the cluster, enabling them to run greater numbers of operations. Creating increasing numbers of logical qubits capable of executing quantum circuits, with as few physical qubits as possible, is critical to quantum computing at scale. Until today, a clear path to building such a fault-tolerant system without unrealistic engineering overhead has not been published.

The Path to Large-Scale Fault Tolerance

The success of executing an efficient fault-tolerant architecture is dependent on the choice of its error-correcting code, and how the system is designed and built to enable this code to scale. Alternative and previous gold-standard error-correcting codes present fundamental engineering challenges. To scale, they would require an unfeasible number of physical qubits to create enough logical qubits to perform complex operations – necessitating impractical amounts of infrastructure and control electronics. This renders them unlikely to be implemented beyond small-scale experiments and devices.

A practical, large-scale, fault-tolerant quantum computer requires an architecture that is:

- Fault-tolerant to suppress enough errors for useful algorithms to succeed.
- Able to prepare and measure logical qubits through computation.
- Capable of applying universal instructions to these logical qubits.
- Able to decode measurements from logical qubits in real time and alter subsequent instructions.
- Modular to scale to hundreds or thousands of logical qubits to run more complex algorithms.
- Efficient enough to execute meaningful algorithms with realistic physical resources, such as energy and infrastructure.

Today, IBM is introducing two new technical papers that detail how it will meet the above criteria to build a large-scale, fault-tolerant architecture. The first paper unveils how such a system will process instructions and run operations effectively with qLDPC codes. This work builds on a groundbreaking approach to error correction featured on the cover of Nature that introduced quantum low-density parity check (qLDPC) codes. This code drastically reduces the number of physical qubits needed for error correction and cuts required overhead by approximately 90 percent, compared to other leading codes. Additionally, it lays out the resources required to reliably run large-scale quantum programs to prove the efficiency of such an architecture over others. The second paper describes how to efficiently decode the information from the physical qubits and charts a path to identify and correct errors in real time with conventional computing resources.

From Roadmap to Reality

The new IBM Quantum Roadmap outlines the key technology milestones that will demonstrate and execute the criteria for fault tolerance. Each new processor in the roadmap addresses specific challenges to build quantum systems that are modular, scalable, and error-corrected:

- IBM Quantum Loon, expected in 2025, is designed to test architecture components for the qLDPC code, including 'C-couplers' that connect qubits over longer distances within the same chip.
- IBM Quantum Kookaburra, expected in 2026, will be IBM's first modular processor designed to store and process encoded information. It will combine quantum memory with logic operations — the basic building block for scaling fault-tolerant systems beyond a single chip.
- IBM Quantum Cockatoo, expected in 2027, will entangle two Kookaburra modules using 'L-couplers.' This architecture will link quantum chips together like nodes in a larger system, avoiding the need to build impractically large chips.

Together, these advancements are being designed to culminate in Starling in 2029.
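The exponential suppression of logical error rates with cluster size described above can be illustrated with a toy model. The formula, threshold value, and prefactor below are generic textbook-style assumptions about error-corrected codes, not IBM's published figures:

```python
# Toy model (illustrative assumptions, not IBM's actual parameters):
# logical error rates in error-corrected codes are often modeled as
#   p_logical ≈ A * (p / p_th) ** ((d + 1) / 2)
# where p is the physical error rate, p_th the code's threshold, and d the
# code distance (which grows with the number of physical qubits per cluster).

def logical_error_rate(p_physical, p_threshold=0.01, distance=3, prefactor=0.1):
    """Logical error rate under an exponential-suppression toy model."""
    return prefactor * (p_physical / p_threshold) ** ((distance + 1) / 2)

# Below threshold (p < p_th), each increase in distance multiplies the
# suppression: with p/p_th = 0.1, every +2 in distance cuts the rate 10x.
for d in (3, 5, 7, 9):
    rate = logical_error_rate(p_physical=0.001, distance=d)
    print(f"distance {d}: logical error rate ~ {rate:.1e}")
```

The design point this sketch captures is why cluster size matters: operating below the code's threshold, adding physical qubits buys exponentially lower logical error rates, which is what lets a machine run 100 million operations without a fault.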


Channel Post MEA
11-06-2025
- Business
- Channel Post MEA
IBM Plans World's First Fault-Tolerant Quantum Computer By 2029