Latest news with #ISCHighPerformance2025

Associated Press
12-06-2025
- Business
- Associated Press
KAYTUS Unveils Upgraded MotusAI to Accelerate LLM Deployment
Streamlined inference performance, tool compatibility, resource scheduling, and system stability to fast-track large AI model deployment.

SINGAPORE--(BUSINESS WIRE)--Jun 12, 2025-- KAYTUS, a leading provider of end-to-end AI and liquid cooling solutions, today announced the release of the latest version of its MotusAI AI DevOps Platform at ISC High Performance 2025. The upgraded MotusAI platform delivers significant enhancements in large model inference performance and offers broad compatibility with multiple open-source tools covering the full lifecycle of large models. Engineered for unified and dynamic resource scheduling, it dramatically improves resource utilization and operational efficiency in large-scale AI model development and deployment. This latest release of MotusAI is set to further accelerate AI adoption and fuel business innovation across key sectors such as education, finance, energy, automotive, and manufacturing.

(Image: MotusAI Dashboard)

As large AI models become increasingly embedded in real-world applications, enterprises are deploying them at scale to generate tangible value across a wide range of sectors. Yet many organizations continue to face critical challenges in AI adoption, including prolonged deployment cycles, stringent stability requirements, fragmented open-source tool management, and low compute resource utilization. To address these pain points, KAYTUS has introduced the latest version of its MotusAI AI DevOps Platform, purpose-built to streamline AI deployment, enhance system stability, and optimize AI infrastructure efficiency for large-scale model operations.

Enhanced Inference Performance to Ensure Service Quality

Deploying AI inference services is a complex undertaking that involves service deployment, management, and continuous health monitoring. These tasks require stringent standards in model and service governance, performance tuning via acceleration frameworks, and long-term service stability, all of which typically demand substantial investments in manpower, time, and technical expertise.

The upgraded MotusAI delivers robust large-model deployment capabilities that combine full-stack visibility with high inference performance. By integrating optimized frameworks such as SGLang and vLLM, MotusAI provides high-performance, distributed inference services that enterprises can deploy quickly and with confidence (a sketch of such a workload appears at the end of this section). Designed to support large-parameter models, MotusAI leverages intelligent resource and network affinity scheduling to accelerate time-to-launch while maximizing hardware utilization. Its built-in monitoring capabilities span the full stack, from hardware and platforms to pods and services, offering automated fault diagnosis and rapid service recovery. MotusAI also supports dynamic scaling of inference workloads based on real-time usage and resource monitoring, delivering enhanced service stability.

Comprehensive Tool Support to Accelerate AI Adoption

As AI model technologies evolve rapidly, the supporting ecosystem of development tools continues to grow in complexity. Developers need a streamlined, universal platform to efficiently select, deploy, and operate these tools. The upgraded MotusAI provides extensive support for a wide range of leading open-source tools, enabling enterprise users to configure and manage their model development environments on demand. With built-in tools such as LabelStudio, MotusAI accelerates data annotation and synchronization across diverse data categories, improving data processing efficiency and expediting model development cycles.
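The release does not publish MotusAI's integration details, but as a generic illustration of driving an annotation tool like LabelStudio programmatically, here is a minimal sketch using the pre-1.0 Label Studio Python SDK. The server URL, API key, and labeling config are placeholder assumptions, not KAYTUS configuration.

```python
# Minimal sketch: creating a text-classification annotation project with
# the Label Studio Python SDK (pip install "label-studio-sdk<1.0").
# URL, API key, and labeling config are illustrative placeholders.
from label_studio_sdk import Client

LABEL_STUDIO_URL = "http://localhost:8080"  # assumed local instance
API_KEY = "your-api-key"                    # placeholder credential

ls = Client(url=LABEL_STUDIO_URL, api_key=API_KEY)
ls.check_connection()  # fails fast if the server is unreachable

# Define a simple sentiment-labeling interface.
project = ls.start_project(
    title="LLM training data triage",
    label_config="""
    <View>
      <Text name="text" value="$text"/>
      <Choices name="sentiment" toName="text">
        <Choice value="Positive"/>
        <Choice value="Negative"/>
        <Choice value="Neutral"/>
      </Choices>
    </View>
    """,
)

# Import a couple of tasks for annotators to label.
project.import_tasks([
    {"text": "The new release cut our deployment time in half."},
    {"text": "Inference latency regressed after the update."},
])
```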
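As for the distributed inference services described under "Enhanced Inference Performance" above, the release names vLLM as one of the integrated engines. A minimal, self-contained sketch of the kind of vLLM workload such a platform schedules might look like the following; the model name and tensor-parallel degree are illustrative assumptions.

```python
# Minimal sketch of a vLLM inference workload (pip install vllm).
# Model name and tensor_parallel_size are illustrative assumptions;
# tensor_parallel_size=2 shards the model across two GPUs.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model
    tensor_parallel_size=2,                    # distributed inference across 2 GPUs
)

params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=128)

prompts = [
    "Summarize the benefits of unified GPU scheduling in one sentence.",
    "List three risks when deploying LLM inference at scale.",
]

# generate() returns one RequestOutput per prompt.
for output in llm.generate(prompts, params):
    print(output.prompt)
    print(output.outputs[0].text.strip())
```

In production the same engine is more commonly exposed as a long-running, OpenAI-compatible HTTP service (vllm serve <model>), which is the deployment shape a platform scheduler would manage, monitor, and scale.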
MotusAI also offers an integrated toolchain for the entire AI model lifecycle. This includes LabelStudio and OpenRefine for data annotation and governance, LLaMA-Factory for fine-tuning large models, Dify and Confluence for large model application development, and Stable Diffusion for text-to-image generation. Together, these tools empower users to adopt large models quickly and boost development productivity at scale.

Hybrid Training-Inference Scheduling on the Same Node to Maximize Resource Efficiency

Efficient utilization of computing resources remains a critical priority for AI startups and small to mid-sized enterprises in the early stages of AI adoption. Traditional AI clusters typically allocate compute nodes separately for training and inference tasks, limiting the flexibility and efficiency of resource scheduling across the two types of workloads.

The upgraded MotusAI overcomes these limitations by enabling hybrid scheduling of training and inference workloads on a single node, allowing seamless integration and dynamic orchestration of diverse task types. Equipped with advanced GPU scheduling capabilities, MotusAI supports on-demand resource allocation, empowering users to manage GPU resources efficiently according to workload requirements. MotusAI also features multi-dimensional GPU scheduling, including fine-grained partitioning and support for Multi-Instance GPU (MIG), addressing a wide range of use cases across model development, debugging, and inference (see the MIG sketch at the end of this entry).

MotusAI's enhanced scheduler significantly outperforms community versions, delivering a 5× improvement in task throughput and a 5× reduction in latency for large-scale pod deployments. It enables rapid startup and environment readiness for hundreds of pods while supporting dynamic workload scaling and tidal scheduling for both training and inference. These capabilities enable seamless task orchestration across a wide range of real-world AI scenarios.

About KAYTUS

KAYTUS is a leading provider of end-to-end AI and liquid cooling solutions, delivering a diverse range of innovative, open, and eco-friendly products for cloud, AI, edge computing, and other emerging applications. With a customer-centric approach, KAYTUS is agile and responsive to user needs through its adaptable business model. Discover more at and follow us on LinkedIn and X.
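Referring back to the MIG support noted above: the release does not detail how MotusAI handles GPU partitions, but the sketch below, using NVIDIA's pynvml bindings (an assumed, generic approach rather than KAYTUS's published code), shows how a scheduler can inventory the MIG instances on a node before placing fine-grained workloads.

```python
# Minimal sketch: enumerating MIG instances on a node with NVIDIA's
# Python bindings (pip install nvidia-ml-py). A scheduler could use this
# inventory to place fine-grained training/inference tasks per partition.
# Requires an NVIDIA driver and MIG-capable GPUs (e.g., A100/H100).
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        gpu = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(gpu)
        try:
            max_mig = pynvml.nvmlDeviceGetMaxMigDeviceCount(gpu)
        except pynvml.NVMLError:
            max_mig = 0  # MIG not supported on this GPU
        print(f"GPU {i}: {name}, up to {max_mig} MIG devices")
        for j in range(max_mig):
            try:
                mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(gpu, j)
            except pynvml.NVMLError:
                continue  # this MIG slot is not populated
            mem = pynvml.nvmlDeviceGetMemoryInfo(mig)
            print(f"  MIG {j}: {mem.total // (1024 ** 2)} MiB total memory")
finally:
    pynvml.nvmlShutdown()
```

On Kubernetes-based clusters, such partitions typically surface as extended resources (for example, nvidia.com/mig-1g.10gb via the NVIDIA device plugin) that individual pods request in their resource limits.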


Cision Canada
10-06-2025
- Business
- Cision Canada
Empowering the Future of HPC: MiTAC Showcases Advanced Server Platforms at ISC High Performance 2025
HAMBURG, Germany, June 10, 2025 /CNW/ -- MiTAC Computing Technology Corp., a subsidiary of MiTAC Holdings Corp. (TSE:3706) and a leading manufacturer in server platform design, unveils its Advanced Server Platforms at ISC High Performance 2025, Booth #A02. Featuring AMD EPYC™ 9005 Series and Intel® Xeon® 6 processors, these platforms highlight MiTAC's commitment to delivering robust performance, efficiency, and scalability tailored to the specific needs of AI computing.

Intel® Xeon® 6 Platform Solutions: Balancing Performance and Energy Efficiency for AI-Driven Workloads

MiTAC introduces its latest Intel-based servers optimized for modern data center workloads:

- R2520G6 – A 2U dual-socket compute server, purpose-built for performance and power efficiency across AI, cloud, and enterprise applications. Supporting up to 8TB of DDR5 memory, four PCIe 5.0 x16 slots, and flexible U.2 and E1.S storage options, the R2520G6 delivers a robust, scalable foundation for data-intensive operations.
- M2710G6 – A 2U 2-node system targeting cloud service providers and hyperscalers. Each node supports a single Intel Xeon 6900P processor with up to 128 cores, enabling high-density virtualization and containerized workload deployment at scale.
- G4520G6 – A GPU-accelerated compute platform for AI and HPC, equipped with dual Intel Xeon 6700P processors and eight double-width GPU slots, delivering exceptional parallel processing capabilities. The system includes 32 DDR5-6400 RDIMM slots and redundant 80 PLUS Titanium power supplies for maximum throughput with optimized energy use.

Built on the Intel Xeon 6 architecture, MiTAC's solutions integrate AI accelerators, high-speed I/O, and power-aware design to meet the evolving demands of intelligent computing with a sustainable approach.

AMD EPYC™ 9005 Series Platforms: Scalable Computing with Enhanced Sustainability

MiTAC leverages the performance-per-watt advantages of AMD EPYC™ 9005 Series processors to deliver next-generation efficiency for AI, HPC, and cloud-native workloads:

- TYAN GC68C-B8056 – A 1U single-socket server purpose-built for high-density cloud and AI environments. Featuring 24 DDR5 DIMM slots, 12 tool-less 2.5-inch NVMe U.2 hot-swap bays, and an optimized thermal design, this platform delivers high compute performance with industry-leading energy efficiency.
- M2810Z5 – A 2U 4-node single-socket system that supports AMD EPYC 9005 processors. Each node is equipped with 12 DDR5 DIMM slots (up to 3TB of memory per node) and supports four E1.S drives, enabling dense, modular compute with scalable memory and storage resources, ideal for space- and power-conscious AI and HPC deployments.

MiTAC's AMD-based solutions empower organizations to enhance data center sustainability, reduce energy consumption, and scale efficiently without compromising performance.

Experience MiTAC's Commitment to Sustainable Innovation

At ISC 2025, MiTAC demonstrates its forward-looking approach to intelligent infrastructure, delivering platforms that support next-generation AI and HPC workloads while advancing data center sustainability. Visit MiTAC at Booth #A02 to discover how its Intel- and AMD-powered solutions enable energy-efficient, high-performance computing built for the future of AI, cloud, and hyperscale operations.

About MiTAC Computing Technology Corporation

MiTAC Computing Technology Corp., a subsidiary of MiTAC Holdings, delivers comprehensive, energy-efficient server solutions backed by industry expertise dating back to the 1990s. Specializing in AI, HPC, cloud, and edge computing, MiTAC Computing applies rigorous methods to ensure uncompromising quality not just at the barebone level but, more importantly, at the system and rack levels, where true performance and integration matter most. This commitment to quality at every level sets MiTAC Computing apart in the industry. The company provides tailored platforms for hyperscale data centers, HPC, and AI applications, ensuring optimal performance and scalability. With a global presence and end-to-end capabilities, from R&D and manufacturing to global support, MiTAC Computing offers flexible, high-quality solutions designed to meet unique business needs. Leveraging the latest advancements in AI and liquid cooling, along with the recent integration of Intel DSG and TYAN server products, MiTAC Computing stands out for its innovation, efficiency, and reliability, empowering businesses to tackle future challenges.