
IBM and NVIDIA Expand Collaboration to Accelerate AI at Scale
IBM has announced new collaborations with NVIDIA, including planned integrations based on the NVIDIA AI Data Platform reference design to help enterprises put their data to work more effectively as they build, scale and manage generative AI workloads and agentic AI applications. As part of today's news, IBM plans to launch a content-aware storage capability for its hybrid cloud infrastructure offering, IBM Fusion; intends to expand its watsonx integrations; and is introducing new IBM Consulting capabilities with NVIDIA to help drive AI innovation across the enterprise.
A 2024 IBM report found that more than three in four executives surveyed (77 percent) say generative AI is market-ready, up from just 36 percent in 2023. With this push to put AI into production comes an increased need for compute and data-intensive technologies. The collaboration between IBM and NVIDIA will enable IBM to provide hybrid AI solutions that take advantage of open technologies and platforms while also supporting data management, performance, security, and governance.
Leveraging the NVIDIA AI Data Platform reference architecture, these new solutions are the latest in the IBM and NVIDIA collaboration to build enterprise infrastructure for AI:

Augmenting Unstructured Data Processing for AI Performance: With IBM's new content-aware storage (CAS) capability, enterprises will be able to extract the meaning hidden in their rapidly growing volumes of unstructured data for inferencing, without compromising trust and safety, to responsibly scale and enhance AI applications like retrieval-augmented generation (RAG) and AI reasoning. IBM Storage Scale will respond to queries using the extracted and augmented data, speeding up communications between GPUs and storage with NVIDIA BlueField-3 DPUs and NVIDIA Spectrum-X networking. The multimodal document data extraction workflow will also leverage NVIDIA NeMo Retriever microservices, built with NVIDIA NIM. CAS will be embedded in the next update of IBM Fusion, planned for the second quarter of this year.

Enabling More Accessible AI: IBM plans to integrate its watsonx offerings with NVIDIA NIM as part of a larger effort to provide access to leading AI models across multiple cloud environments. This will allow organizations to use watsonx.ai, IBM's enterprise-grade AI platform and developer studio, to develop and deploy AI models into their applications of choice while utilizing externally hosted models. IBM's watsonx.governance also allows enterprises to implement robust monitoring and governance of NVIDIA NIM microservices across any hosting environment. This type of interoperability is increasingly essential as organizations adopt agentic AI and other advanced applications that require AI model integration. (A minimal sketch of this integration pattern follows this list.)

Increasing Support for Compute-Intensive Workloads: With more enterprises embracing generative AI and high-performance computing (HPC), IBM Cloud has expanded its NVIDIA accelerated computing portfolio with the availability of NVIDIA H200 instances on IBM Cloud. With their large memory capacity and high bandwidth, NVIDIA H200 Tensor Core GPU instances are engineered to meet the demands of modern AI workloads and larger foundation models.

Transforming Processes with Agentic AI and NVIDIA: IBM Consulting is introducing AI Integration Services to help clients transform and govern end-to-end business processes with agentic AI using NVIDIA Blueprints, including industry-specific workflows that require agentic AI at the edge. Example use cases include autonomous inspection and maintenance in manufacturing and proactive video data analysis and anomaly response in the energy industry.

Optimizing Compute-Intensive AI Workloads Across Hybrid Cloud Environments: IBM Consulting helps clients build, modernize and manage compute-intensive AI workloads across hybrid cloud environments, leveraging Red Hat OpenShift and NVIDIA AI. This includes technologies like NVIDIA AI Foundry, NVIDIA NeMo, NVIDIA AI Enterprise, NVIDIA Blueprints, and NVIDIA Clara to accelerate high-compute, complex tasks while managing AI governance, data security and compliance requirements.
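To make the integration pattern concrete: NIM microservices expose an OpenAI-compatible HTTP API, so an application can send retrieval-augmented prompts to a hosted model with a few lines of client code. The sketch below is illustrative only, assuming a locally deployed NIM endpoint; the base URL, model name and the retrieve_context helper are placeholders, not part of the announced IBM Fusion, watsonx.ai or watsonx.governance interfaces.

```python
# Illustrative sketch: retrieval-augmented query against a NIM microservice.
# Assumes a NIM endpoint exposing the OpenAI-compatible API; the URL, model name,
# and the retrieval step are placeholders, not the announced IBM offerings.
from openai import OpenAI

# Hypothetical locally hosted NIM endpoint (OpenAI-compatible).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")


def retrieve_context(question: str) -> str:
    """Placeholder for retrieval from an index built over extracted documents.
    In the announced design this extraction would come from content-aware storage;
    here it simply returns a canned snippet."""
    return "Quarterly report excerpt: storage throughput improved 2x year over year."


def answer(question: str) -> str:
    context = retrieve_context(question)
    response = client.chat.completions.create(
        model="meta/llama-3.1-8b-instruct",  # example NIM model name; substitute your deployment
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
        temperature=0.2,
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(answer("How did storage throughput change year over year?"))
```

Because the endpoint is OpenAI-compatible, the same client code works whether the model is hosted on-premises, on IBM Cloud, or elsewhere, which is the interoperability point the announcement emphasizes.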
'IBM is focused on helping enterprises build and deploy effective AI models and scale with speed,' said Hillery Hunter, CTO and General Manager of Innovation, IBM Infrastructure. 'Together, IBM and NVIDIA are collaborating to create and offer the solutions, services and technology to unlock, accelerate, and protect data – ultimately helping clients overcome AI's hidden costs and technical hurdles to monetize AI and drive real business outcomes.'
'AI agents need to rapidly access, fetch and process data at scale, and today, these steps occur in separate silos,' said Rob Davis, vice president, Storage Networking Technology, NVIDIA. 'The integration of IBM's content-aware storage with NVIDIA AI orchestrates data and compute across an optimized network fabric to overcome silos with an intelligent, scalable system that drives near real-time inference for responsive AI reasoning.'
Related Articles


Zawya - 3 days ago
Musk's X to offer investment, trading in 'super app' push, FT reports
X CEO Linda Yaccarino has said users will soon be able to make investments or trades on the social media platform, the Financial Times reported on Thursday, a move to support billionaire owner Elon Musk's vision to create an "everything app."

In an interview with the publication, Yaccarino said the company was exploring the introduction of an X credit or debit card, which could come as soon as this year. Musk, who in April 2022 clinched a $44 billion deal to buy Twitter and later rebranded it as X, has signaled plans to model it as a "super app," similar to China's WeChat. The social media platform did not immediately respond to a Reuters request for comment.

"2025 X will connect you in ways never thought possible. X TV, X Money, Grok and more," Yaccarino wrote in a post in December last year. Payments giant Visa and X partnered to offer direct payment solutions to customers of the social media app, a person familiar with the matter said earlier this year.

A super app, or what Musk refers to as an "everything app," has been described as the Swiss army knife of mobile apps, offering a suite of services such as messaging, social networking, payments and e-commerce shopping.

X hired NBCUniversal advertising chief Yaccarino as CEO in 2023 amid an advertiser exodus from the platform, as brands worried that their ads could appear next to inappropriate content. Yaccarino said that 96% of X's ad clients prior to the acquisition had now come back to the platform, the Financial Times report said. The company is poised for its first year of ad revenue growth since its acquisition by Musk, according to data from research firm Emarketer in March. X had filed a lawsuit in federal court in Texas against the World Federation of Advertisers, accusing it of unlawfully conspiring to boycott the site and causing it to lose revenue.

(Reporting by Jaspreet Singh in Bengaluru; Editing by Andrea Ricci)


Tahawul Tech - 4 days ago
SandboxAQ improves drug discovery with data creation
SandboxAQ, an artificial intelligence startup, recently released a wealth of data in hopes it will speed up the discovery of new medical treatments. The goal is to help scientists predict whether a drug will bind to its target in the human body. But while the data is backed up by real-world scientific experiments, it did not come from a lab.

Instead, SandboxAQ, which has raised nearly $1 billion in venture capital, generated the data using Nvidia's chips and will feed it back into AI models that it hopes scientists can use to rapidly predict whether a small-molecule pharmaceutical will bind to the protein that researchers are targeting, a key question that must be answered before a drug candidate can move forward. For example, if a drug is meant to inhibit a biological process like the progression of a disease, scientists can use the tool to predict whether the drug molecule is likely to bind to the proteins involved in that process.

The approach is an emerging field that combines traditional scientific computing techniques with advancements in AI. In many fields, scientists have long had equations that can precisely predict how atoms combine into molecules. But even for relatively small three-dimensional pharmaceutical molecules, the potential combinations become far too vast to calculate manually, even with today's fastest computers.

So SandboxAQ's approach was to use existing experimental data to calculate about 5.2 million new, 'synthetic' three-dimensional molecules – molecules that haven't been observed in the real world, but were calculated with equations based on real-world data. That synthetic data, which SandboxAQ is releasing publicly, can be used to train AI models that can predict whether a new drug molecule is likely to stick to the protein researchers are targeting in a fraction of the time it would take to calculate it manually, while retaining accuracy. SandboxAQ will charge money for its own AI models developed with the data, which it hopes will get results that rival running lab experiments, but virtually.

'This is a long-standing problem in biology that we've all, as an industry, been trying to solve for', said Nadia Harhen, general manager of AI simulation at SandboxAQ. 'All of these computationally generated structures are tagged to a ground-truth experimental data, and so when you pick this data set and you train models, you can actually use the synthetic data in a way that's never been done before'.

Source: Reuters
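As a rough illustration of the training pattern described above (not SandboxAQ's actual pipeline, featurization, or data), the sketch below fits a binding classifier on a large synthetic training set and then checks it against a small experimentally measured hold-out. All features and labels here are random placeholders.

```python
# Toy sketch of the pattern described: train on physics-derived synthetic data,
# validate against experimentally measured ground truth. Features and labels here
# are random placeholders, not SandboxAQ's data or models.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic training set: descriptors computed from simulated molecule-protein pairs.
X_synthetic = rng.normal(size=(5000, 16))
y_synthetic = (X_synthetic[:, 0] + 0.5 * X_synthetic[:, 1] > 0).astype(int)  # 1 = "binds"

# Small experimental hold-out: the lab-measured ground truth the synthetic data is anchored to.
X_experimental = rng.normal(size=(200, 16))
y_experimental = (X_experimental[:, 0] + 0.5 * X_experimental[:, 1] > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_synthetic, y_synthetic)

print("accuracy on experimental hold-out:",
      accuracy_score(y_experimental, model.predict(X_experimental)))
```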


Web Release - 4 days ago
VAST Data Powers Smarter, Evolving AI Agents with NVIDIA Data Flywheel
VAST Data, the AI Operating System company, announced today that it is delivering a complete data and compute platform that enables AI intelligence to continuously evolve. The VAST AI OS, combined with NVIDIA AI Enterprise, which includes NeMo microservices that power a data flywheel for continuous model improvement, creates a unified environment where AI pipelines can constantly learn, adapt, and improve. This reference workflow provides a self-optimizing foundation for scalable AI, laying the groundwork for billions of intelligent agents to fine-tune and evolve from their data and experiences. (A vendor-neutral sketch of the data-flywheel loop appears at the end of this article.)

This solution provides enterprises with a converged software platform for data management, database services, and AI compute orchestration. Additionally, the VAST AI OS AgentEngine uniquely shares feedback by providing the critical capability to map the intricate web of agent-data interactions through production logs. This granular traceability allows the flywheel to dissect these multi-step interactions, accurately identifying which specific elements require adjustment to enhance outcomes, accelerating model performance and accuracy at scale.

'AI-powered businesses need thinking machines designed for a future where billions of AI agents learn from their own experiences, fine-tune in real time, and create new possibilities through collaboration,' said Jeff Denworth, Co-Founder of VAST Data. 'By unifying NVIDIA's AI software and hardware technologies within the core of the VAST AI Operating System, we are giving customers the foundation to operationalize continuous improvements in AI intelligence at scale, with the security, governance, and service delivery tools required to manage these intelligent agents and the data they rely upon.'

As AI moves from isolated projects to always-on infrastructure, businesses need systems that evolve in real time with every data point while addressing new security and governance challenges around fine-tuned models, agent interactions, and decentralized data pipelines. This collaboration makes it possible to run continuous, automated AI pipelines – from ingestion to inference to retraining – all managed within the VAST AI Operating System.

Among the first to embrace this strategy is CACEIS, one of Europe's largest asset servicing firms. In collaboration with VAST and NVIDIA, CACEIS is exploring a real-time AI platform concept designed to securely capture, transcribe, and analyze 100% of client meetings. The vision is for the system to instantly generate meeting minutes, surface actionable insights, and deliver anonymized trend data – all seamlessly integrated into their sovereign CRM. With an end-to-end security model at its foundation, the platform is being designed to safeguard client privacy and data integrity while continuously evolving through every interaction.

'AI will be a game-changer, highlighting trends in current needs by analysing meeting reports so we can better serve clients,' said Arnaud Misset, Chief Digital Officer, CACEIS.

Using VAST's AgentEngine, which leverages the NVIDIA AI-Q Blueprint, CACEIS is developing a platform proof of concept that would enable AI agents to assist relationship managers in real time and help uncover new business opportunities. Built with NVIDIA NeMo microservices and the NVIDIA data flywheel blueprint, the envisioned CACEIS AI factory would continuously capture data and insights from every customer interaction. These feedback loops are intended to drive ongoing model refinement and training, allowing the system to improve and adapt with each meeting. As AI agents learn from one another and from human counterparts, this concept sets the stage for new ideas, collective intelligence, and enterprise-wide knowledge sharing to take shape. This capability was showcased during a presentation by NVIDIA's Kari Briski at GTC Europe in Paris. Watch the demo replay here.

This collaboration signals a broader shift in enterprise AI, from one-size-fits-all models to dynamic ecosystems of intelligent agents that continuously fine-tune, collaborate, and generate new ideas from their own data and interactions. Managing these agents at scale requires fine-grained security, governance, and access controls to ensure they operate safely and within defined boundaries. It also demands scalable, dynamic infrastructure capable of handling the varied and unpredictable demands of AI agents as they interact with diverse data tools and services. The VAST AI Operating System provides this real-time data infrastructure, along with compute orchestration, QoS tools that enforce fairness as different agents run within the environment, and a security framework to refine intelligence and operationalize AI innovation in a secure, scalable, and adaptive system.

'Data flywheels leverage each interaction with an AI agent to continuously improve system intelligence and value,' said Kari Briski, Vice President of Generative AI Software at NVIDIA. 'CACEIS is an exemplary pioneer with a vision of building an agentic AI data flywheel with NVIDIA and VAST to supercharge productivity for financial services in Europe.'

Additional Resources:
- VAST + NVIDIA
- DEMO: AI Agents Unlocked: CACEIS Redefines Client Conversations With VAST Data and NVIDIA
- NVIDIA BLOG: Chat with Your Enterprise Data: Open-Source AI-Q NVIDIA Blueprint Puts Institutional Knowledge at Your Fingertips
- NVIDIA BLOG: Sovereign AI Agents Think Local, Act Global With NVIDIA AI Factories
- NVIDIA BLOG: Build Efficient AI Agents Through Model Distillation With NVIDIA's Data Flywheel Blueprint
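At its core, the data flywheel described above is a loop: log each agent interaction, score it against feedback, and route weak examples back into fine-tuning data. The vendor-neutral sketch below illustrates that loop in plain Python; the class and method names are assumptions for illustration and are not the VAST AgentEngine or NVIDIA NeMo microservices APIs.

```python
# Vendor-neutral sketch of a "data flywheel" loop: log agent interactions,
# score them against feedback, and queue weak examples for fine-tuning.
# Class and method names are illustrative; they are not the VAST AgentEngine
# or NVIDIA NeMo microservices APIs.
from dataclasses import dataclass, field


@dataclass
class Interaction:
    prompt: str
    response: str
    feedback_score: float  # e.g. 0.0-1.0 from user feedback or an evaluator model


@dataclass
class Flywheel:
    quality_threshold: float = 0.7
    fine_tune_queue: list = field(default_factory=list)

    def ingest(self, interaction: Interaction) -> None:
        # Interactions below the threshold become candidate training examples
        # (after review), so the model improves where it currently falls short.
        if interaction.feedback_score < self.quality_threshold:
            self.fine_tune_queue.append(
                {"prompt": interaction.prompt, "last_response": interaction.response}
            )

    def export_training_batch(self) -> list:
        # Drain the queue into a batch for the next fine-tuning run.
        batch, self.fine_tune_queue = self.fine_tune_queue, []
        return batch


# Usage: each production call is logged, and the queue is periodically drained
# into a fine-tuning job, closing the loop the article describes.
fw = Flywheel()
fw.ingest(Interaction("Summarize today's client meeting", "(weak draft summary)", feedback_score=0.4))
fw.ingest(Interaction("List action items", "(good answer)", feedback_score=0.9))
print(len(fw.export_training_batch()))  # -> 1 candidate for the next fine-tuning run
```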