Latest news with #NVIDIAEnterpriseAIFactory


Channel Post MEA
6 days ago
- Business
- Channel Post MEA
Dataiku And NVIDIA Unveil FSI Blueprint For Agentic AI Systems In Financial Services
Dataiku has announced a new FSI Blueprint for deploying agentic AI systems in financial services. The blueprint is designed to help banking and insurance institutions create, connect, and control intelligent AI agents at scale, with the governance, performance, and flexibility required for production in these highly regulated industries. The announcement builds on Dataiku's integration into the NVIDIA Enterprise AI Factory validated design, which helps enterprises accelerate the development and deployment of secure, scalable AI infrastructure.

"AI agents represent the next major shift in enterprise productivity, and banks are among the earliest adopters," said Malcolm deMayo, Vice President of Global Financial Services at NVIDIA. "This new bank blueprint from Dataiku, accelerated by NVIDIA, combines reusable components that enable banks to automate thousands of repetitive manual tasks. This allows institutions to deploy intelligent systems that can adapt to complex workflows and evolve responsibly over time—all while meeting regulatory and compliance requirements through central governance."

The FSI Blueprint combines Dataiku's Universal AI Platform and LLM Mesh with NVIDIA NIM microservices, NVIDIA NeMo, and GPU-accelerated infrastructure. It leverages AI agents powered by NVIDIA to provide financial institutions with a secure, modular foundation for building agentic AI solutions across use cases such as fraud detection, customer service, risk analysis, and operations automation.

"Financial institutions are under pressure to operationalize AI faster, while managing risk, regulation, and complexity," said John McCambridge, Global Head of Financial Services at Dataiku. "This FSI Blueprint helps banks and insurers move beyond experimentation, delivering trusted AI agents that are observable, cost-controlled, and designed to deliver meaningful business value."

The Dataiku LLM Mesh offers native integration with NVIDIA NIM to simplify deployment of open, proprietary, and custom models within financial environments. Guardrails within Dataiku LLM Guard Services, such as Cost Guard and Quality Guard, provide built-in oversight, giving IT and product teams control over model usage, cost optimization, and performance evaluation.

The collaboration between Dataiku and NVIDIA was unveiled during NVIDIA GTC Paris at VivaTech 2025. The FSI Blueprint is the first in a series of joint initiatives to drive agentic AI innovation in highly regulated industries, with expansion planned into life sciences and energy. Financial institutions interested in deploying the FSI Blueprint can engage directly with joint go-to-market teams from Dataiku and NVIDIA. To learn from Dataiku and NVIDIA experts how to integrate generative AI and agents across different compute environments and front-end applications, register for the FSI Blueprint webinar.
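The model-serving layer referenced above is built on NVIDIA NIM microservices, which expose an OpenAI-compatible REST API. The sketch below shows, at a minimal level, what calling a locally deployed NIM endpoint from Python could look like; the base URL, port, model identifier, and prompt are illustrative assumptions, not details from the Dataiku announcement.

```python
# Minimal sketch: querying a locally deployed NVIDIA NIM microservice
# through its OpenAI-compatible API. The base_url, port, and model name
# below are illustrative assumptions, not values from the announcement.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # hypothetical local NIM endpoint
    api_key="not-used",                   # local NIM deployments typically ignore the key
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",   # example NIM model identifier
    messages=[
        {"role": "system", "content": "You are a fraud-triage assistant."},
        {"role": "user", "content": "Summarize the risk flags on transaction 12345."},
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)
```

In a deployment like the one described, the Dataiku LLM Mesh would typically sit in front of such endpoints, applying guardrails such as Cost Guard and Quality Guard before requests reach the underlying model.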
Yahoo
11-06-2025
- Business
- Yahoo
Supermicro Unveils Industry's Broadest Enterprise AI Solution Portfolio for NVIDIA Blackwell Architecture to Accelerate AI Factory Deployments in European Market
- Offers an industry-leading portfolio of more than 30 solutions designed for air- or liquid-cooled NVIDIA HGX™ B200, liquid-cooled NVIDIA GB200 NVL72, and NVIDIA RTX PRO 6000 Blackwell Server Edition
- Speeds up time-to-online through NVIDIA-Certified Systems and NVIDIA Enterprise AI Factory validated designs
- Future-ready solution stack supports the upcoming NVIDIA GB300 NVL72 and HGX B300 NVL8 for seamless technology transitions

SAN JOSE, Calif., June 11, 2025 /PRNewswire/ -- Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is announcing an expansion of the industry's broadest portfolio of solutions designed for the NVIDIA Blackwell architecture to the European market. The introduction of more than 30 solutions reinforces Supermicro's industry leadership by providing the most comprehensive and efficient solution stack for NVIDIA HGX B200, GB200 NVL72, and RTX PRO 6000 Blackwell Server Edition deployments, enabling rapid time-to-online for European enterprise AI factories across any environment. Through close collaboration with NVIDIA, Supermicro's solution stack enables deployment of the NVIDIA Enterprise AI Factory validated design and supports the upcoming introduction of NVIDIA Blackwell Ultra solutions later this year, including NVIDIA GB300 NVL72 and HGX B300.

"With our first-to-market advantage and broad portfolio of NVIDIA Blackwell solutions, Supermicro is uniquely positioned to meet the accelerating demand for enterprise AI infrastructure across Europe," said Charles Liang, president and CEO of Supermicro. "Our collaboration with NVIDIA, combined with our global manufacturing capabilities and advanced liquid cooling technologies, enables European organizations to deploy AI factories with significantly improved efficiency and reduced implementation timelines. We're committed to providing the complete solution stack enterprises need to successfully scale their AI initiatives."

In addition to Supermicro's growing selection of air-cooled and liquid-cooled NVIDIA HGX B200 systems and NVIDIA GB200 NVL72 racks, which are being rapidly adopted and deployed globally, Supermicro is further expanding the portfolio with systems such as the new 4U front-I/O liquid-cooled Supermicro NVIDIA HGX B200 system incorporating Supermicro's DLC-2 technology. DLC-2 significantly improves cooling efficiency, the front-I/O design simplifies cable and liquid-cooling hose management and serviceability, and the DLC-2 in-rack coolant distribution units (CDUs) can remove up to 250 kW of heat per rack. This allows customers to deploy significantly more compute power within existing facility constraints while maintaining optimal thermal performance for sustained AI workloads.

"NVIDIA Blackwell-powered AI factories accelerate demanding AI workloads and drive operational excellence across every function of the enterprise," said Chris Marriott, vice president, Enterprise Platforms at NVIDIA. "With its comprehensive portfolio and breakthrough energy efficiency, Supermicro's Blackwell systems transform data centers into AI factories that drive productivity and deliver maximum performance with minimal cost and power."

Supermicro is currently accepting orders for systems featuring NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, available across multiple form factors to enable AI deployment from data centers to network edge environments.
The lineup includes a new 4U NVIDIA RTX PRO Server equipped with eight NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs and an NVIDIA MGX™ PCIe Switch Board with ConnectX-8 SuperNIC, which combines PCIe Gen6 switching and 800 Gb/s networking in a single device to create a versatile enterprise AI factory platform. Supermicro's NVIDIA-Certified Systems will serve as essential building blocks for enterprise AI factories, integrating seamlessly with NVIDIA Spectrum™-X Ethernet networking, NVIDIA-Certified Storage, and NVIDIA AI Enterprise software through Supermicro's Data Center Building Block Solutions (DCBBS), which simplifies deployment of the NVIDIA Enterprise AI Factory validated design and accelerates time-to-online for on-premises AI infrastructure.

The readiness of Supermicro's Data Center Building Block Solutions stack for the upcoming NVIDIA GB300 NVL72 and NVIDIA HGX B300 systems ensures seamless technology transitions without infrastructure overhaul. The standardized solution architecture includes flexible floor plans, rack elevations, and a comprehensive bill of materials that can accommodate next-generation hardware while maintaining compatibility with existing networking, power, cooling, and management infrastructure. This forward compatibility protects customer investments while enabling immediate adoption of enhanced AI capabilities as they become available. The NVIDIA Enterprise AI Factory validated design provides additional assurance through rigorous testing and optimization for NVIDIA Blackwell accelerated computing. Featuring NVIDIA networking and the full-stack NVIDIA AI Enterprise software platform, these designs allow customers to scale their reasoning model deployments with confidence that their infrastructure foundation will support future innovation cycles.

Supermicro's comprehensive approach includes data center design consultation, solution validation, and professional onsite deployment services, reducing typical deployment timelines from 12-18 months to as little as three months. Integration with SuperCloud Composer® software provides data center-level management and infrastructure orchestration, enabling customers to begin running production AI workloads immediately upon system deployment. With global manufacturing facilities across San Jose, Europe, and Asia, Supermicro delivers unmatched manufacturing capacity for liquid-cooled rack systems, ensuring timely delivery and consistent quality. This end-to-end approach eliminates the complexity of coordinating multiple vendors while providing customers with a single point of accountability for their entire AI infrastructure stack, from initial consultation to ongoing operational support.

About Super Micro Computer, Inc.
Supermicro (NASDAQ: SMCI) is a global leader in Application-Optimized Total IT Solutions. Founded and operating in San Jose, California, Supermicro is committed to delivering first-to-market innovation for Enterprise, Cloud, AI, and 5G Telco/Edge IT Infrastructure. We are a Total IT Solutions provider with server, AI, storage, IoT, switch systems, software, and support services. Supermicro's motherboard, power, and chassis design expertise further enables our development and production, driving next-generation innovation from cloud to edge for our global customers. Our products are designed and manufactured in-house (in the US, Taiwan, and the Netherlands), leveraging global operations for scale and efficiency and optimized to improve TCO and reduce environmental impact (Green Computing).
The award-winning portfolio of Server Building Block Solutions® allows customers to optimize for their exact workload and application by selecting from a broad family of systems built from our flexible and reusable building blocks that support a comprehensive set of form factors, processors, memory, GPUs, storage, networking, power, and cooling solutions (air-conditioned, free air cooling, or liquid cooling).

Supermicro, Server Building Block Solutions, and We Keep IT Green are trademarks and/or registered trademarks of Super Micro Computer, Inc. All other brands, names, and trademarks are the property of their respective owners.

SOURCE Super Micro Computer, Inc.
Yahoo
22-05-2025
- Business
- Yahoo
CrowdStrike Strengthens AI Security with NVIDIA Enterprise AI Factory Integration
On May 19, CrowdStrike Holdings, Inc. (NASDAQ:CRWD) announced its integration into the NVIDIA Enterprise AI Factory validated design, enabling enterprises to protect AI infrastructure, systems, and models as they scale AI adoption.

Built on NVIDIA Blackwell infrastructure, this validated design provides a full-stack AI ecosystem covering data ingestion, model training, deployment, and runtime use, helping businesses harness AI efficiently. However, AI expansion also introduces risks such as data poisoning, model tampering, and exposure of sensitive data. To combat these threats, CrowdStrike Falcon secures AI with AI, using a continuous feedback loop powered by trillions of daily security events processed by its platform. With insights from elite threat hunters and intelligence analysts, CrowdStrike delivers machine-speed detection and response against both known and emerging threats.

Innovations such as Falcon Cloud Security AI-SPM, AI Model Scanning, and Shadow AI detection help businesses identify and mitigate AI-related risks before they escalate. Combined with CrowdStrike AI Red Team Services and Falcon Adversary OverWatch, the integration provides end-to-end security for AI-driven enterprise operations within the NVIDIA AI Factory.

As a global cybersecurity leader, CrowdStrike continues to redefine modern security with its cloud-native Falcon platform, securing endpoints, cloud workloads, identity, and data with real-time threat intelligence, automated protection, and elite threat hunting. The platform's lightweight architecture ensures rapid deployment, superior protection, and reduced complexity, reinforcing AI security in today's digital landscape.

While we acknowledge the potential of CrowdStrike Holdings, Inc. (NASDAQ:CRWD) as an investment, our conviction lies in the belief that some AI stocks hold greater promise for delivering higher returns with limited downside risk. Disclosure: None.


Straits Times
22-05-2025
- Business
- Straits Times
ASUS Announces Advanced AI POD Design Built with NVIDIA at Computex 2025
Enterprise-optimized reference architectures for accelerated AI infrastructure solutions

TAIPEI, May 20, 2025 /PRNewswire/ -- ASUS today announced at Computex 2025 that it is pioneering the next wave of intelligent infrastructure with the launch of the NVIDIA® Enterprise AI Factory validated design, featuring advanced ASUS AI POD designs with optimized reference architectures. These solutions are available as NVIDIA-Certified Systems across NVIDIA Grace Blackwell, HGX, and MGX platforms, supporting both air-cooled and liquid-cooled data centers. Engineered to accelerate agentic AI adoption at every scale, these innovations deliver scalability, performance, and thermal efficiency for enterprises seeking to deploy AI at unprecedented speed and scale.

NVIDIA Enterprise AI Factory with ASUS AI POD
The validated NVIDIA Enterprise AI Factory with ASUS AI POD design provides guidance for developing, deploying, and managing agentic AI, physical AI, and HPC workloads on the NVIDIA Blackwell platform on-premises. Designed for enterprise IT, it provides accelerated computing, networking, storage, and software to help deliver faster time-to-value for AI factory deployments while mitigating deployment risks. The reference architecture designs below help clients follow approved practices, serving as a knowledge repository and a standardized framework for diverse applications.

For massive-scale computing, the advanced ASUS AI POD, accelerated by NVIDIA GB200/GB300 NVL72 racks and incorporating NVIDIA Quantum InfiniBand or NVIDIA Spectrum-X Ethernet networking platforms, features liquid cooling to enable a non-blocking 576-GPU cluster across eight racks, or an air-cooled solution to support one rack with 72 GPUs. This ultra-dense, ultra-efficient architecture redefines AI reasoning computing performance and efficiency.

AI-ready racks: Scalable power for LLMs and immersive workloads
ASUS presents NVIDIA MGX-compliant rack designs with the ESC8000 series, featuring dual Intel® Xeon® 6 processors and RTX PRO™ 6000 Blackwell Server Edition with the latest NVIDIA ConnectX-8 SuperNIC supporting speeds of up to 800 Gb/s, or other scalable configurations, delivering exceptional expandability and performance for state-of-the-art AI workloads. Integration with the NVIDIA AI Enterprise software platform provides highly scalable, full-stack server solutions that meet the demanding requirements of modern computing.

In addition, the NVIDIA HGX reference architecture optimized by ASUS delivers unmatched efficiency, thermal management, and GPU density for accelerated AI fine-tuning, LLM inference, and training. Built on the ASUS XA NB3I-E12 with NVIDIA HGX B300 or the ESC NB8-E11 with NVIDIA HGX B200, this centralized rack solution offers unmatched manufacturing capacity for liquid-cooled or air-cooled rack systems, ensuring timely delivery, reduced total cost of ownership (TCO), and consistent performance.

Engineered for the AI Factory, enabling next-gen agentic AI
Integrated with NVIDIA's agentic AI showcase, ASUS infrastructure supports autonomous decision-making AI featuring real-time learning and scalable AI agents for business applications across industries. As a global leader in AI infrastructure solutions, ASUS provides complete data center excellence with both air- and liquid-cooled options, delivering unmatched performance, efficiency, and reliability.
ASUS also delivers ultra-high-speed networking, cabling, and storage rack architecture designs with NVIDIA-Certified Storage, the RS501A-E12-RS12U, and the VS320D series to ensure seamless scalability for AI/HPC applications. Advanced SLURM-based workload scheduling and NVIDIA UFM fabric management for NVIDIA Quantum InfiniBand networks optimize resource utilization, while the WEKA parallel file system and ASUS ProGuard SAN Storage provide high-speed, scalable data handling.

ASUS also provides a comprehensive software platform and services, including ASUS Control Center (Data Center Edition) and ASUS Infrastructure Deployment Center (AIDC), ensuring seamless development, orchestration, and deployment of AI models. ASUS L11/L12-validated solutions empower enterprises to deploy AI at scale with confidence through world-class deployment and support. From design to deployment, ASUS is a trusted partner for next-generation AI Factory innovation.

Availability & Pricing
ASUS servers are available worldwide. For more information on ASUS infrastructure solutions, please contact your local ASUS representative.
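As a rough illustration of the SLURM-based workload scheduling mentioned above, the sketch below generates and submits a multi-node GPU batch job to a SLURM-managed cluster. The partition name, node and GPU counts, and training command are hypothetical placeholders, not part of the ASUS reference architecture; treat this as a minimal sketch under those assumptions.

```python
# Sketch: submitting a multi-node GPU training job on a SLURM-managed AI
# cluster. Partition name, GPU counts, and the training command are
# illustrative assumptions, not values from the ASUS announcement.
import subprocess
import textwrap

job_script = textwrap.dedent("""\
    #!/bin/bash
    #SBATCH --job-name=llm-finetune
    #SBATCH --partition=gpu            # hypothetical partition name
    #SBATCH --nodes=2                  # e.g. two 8-GPU HGX nodes
    #SBATCH --gres=gpu:8               # request all 8 GPUs on each node
    #SBATCH --ntasks-per-node=8        # one task per GPU
    #SBATCH --time=04:00:00

    srun python train.py --config configs/finetune.yaml
    """)

# Pipe the generated script into sbatch; requires a working SLURM install.
result = subprocess.run(
    ["sbatch"], input=job_script, text=True, capture_output=True, check=True
)
print(result.stdout.strip())  # e.g. "Submitted batch job 12345"
```

In practice, the scheduler configuration (partitions, GRES definitions, fabric topology) would be tailored to the specific rack design and networking fabric chosen for the deployment.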