KAYTUS Unveils Upgraded MotusAI to Accelerate LLM Deployment
SINGAPORE--(BUSINESS WIRE)--Jun 12, 2025--
KAYTUS, a leading provider of end-to-end AI and liquid cooling solutions, today announced the release of the latest version of its MotusAI AI DevOps Platform at ISC High Performance 2025. The upgraded MotusAI platform delivers significant enhancements in large model inference performance and offers broad compatibility with multiple open-source tools covering the full lifecycle of large models. Engineered for unified and dynamic resource scheduling, it dramatically improves resource utilization and operational efficiency in large-scale AI model development and deployment. This latest release of MotusAI is set to further accelerate AI adoption and fuel business innovation across key sectors such as education, finance, energy, automotive, and manufacturing.
This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20250612546292/en/
MotusAI Dashboard
As large AI models become increasingly embedded in real-world applications, enterprises are deploying them at scale to generate tangible value across a wide range of sectors. Yet many organizations continue to face critical challenges in AI adoption, including prolonged deployment cycles, stringent stability requirements, fragmented open-source tool management, and low compute resource utilization. To address these pain points, KAYTUS has introduced the latest version of its MotusAI AI DevOps Platform, purpose-built to streamline AI deployment, enhance system stability, and optimize AI infrastructure efficiency for large-scale model operations.
Enhanced Inference Performance to Ensure Service Quality
Deploying AI inference services is a complex undertaking that involves service deployment, management, and continuous health monitoring. These tasks require stringent standards in model and service governance, performance tuning via acceleration frameworks, and long-term service stability, all of which typically demand substantial investments in manpower, time, and technical expertise.
The upgraded MotusAI delivers robust large-model deployment capabilities that bring visibility and performance into perfect alignment. By integrating optimized frameworks such as SGLang and vLLM, MotusAI ensures high-performance, distributed inference services that enterprises can deploy quickly and with confidence. Designed to support large-parameter models, MotusAI leverages intelligent resource and network affinity scheduling to accelerate time-to-launch while maximizing hardware utilization. Its built-in monitoring capabilities span the full stack—from hardware and platforms to pods and services—offering automated fault diagnosis and rapid service recovery. MotusAI also supports dynamic scaling of inference workloads based on real-time usage and resource monitoring, delivering enhanced service stability.
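The dynamic scaling described above follows a familiar pattern: observe real-time load, compute how many inference replicas are needed, and clamp the result to configured bounds. A minimal sketch of that decision logic in Python, using hypothetical names (`desired_replicas`, `per_replica_capacity`) that are illustrative only and not taken from MotusAI's actual interfaces:

```python
import math

def desired_replicas(request_rate, per_replica_capacity,
                     min_replicas=1, max_replicas=8):
    """Return how many inference replicas are needed for the observed load.

    request_rate: observed requests per second across the service.
    per_replica_capacity: requests per second one replica can sustain.
    The result is clamped to [min_replicas, max_replicas].
    """
    needed = math.ceil(request_rate / per_replica_capacity)
    return max(min_replicas, min(max_replicas, needed))
```

In practice an autoscaler would feed this kind of function with metrics from the monitoring stack and reconcile the current replica count toward the returned target.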
Comprehensive Tool Support to Accelerate AI Adoption
As AI model technologies evolve rapidly, the supporting ecosystem of development tools continues to grow in complexity. Developers require a streamlined, universal platform to efficiently select, deploy, and operate these tools.
The upgraded MotusAI provides extensive support for a wide range of leading open-source tools, enabling enterprise users to configure and manage their model development environments on demand. It offers an integrated toolchain covering the entire AI model lifecycle: LabelStudio and OpenRefine for data annotation and governance, LLaMA-Factory for fine-tuning large models, Dify and Confluence for large-model application development, and Stable Diffusion for text-to-image generation. With LabelStudio built in, MotusAI accelerates data annotation and synchronization across diverse categories, improving data processing efficiency and shortening model development cycles. Together, these tools empower users to adopt large models quickly and boost development productivity at scale.
Hybrid Training-Inference Scheduling on the Same Node to Maximize Resource Efficiency
Efficient utilization of computing resources remains a critical priority for AI startups and small to mid-sized enterprises in the early stages of AI adoption. Traditional AI clusters typically allocate compute nodes separately for training and inference tasks, limiting the flexibility and efficiency of resource scheduling across the two types of workloads.
The upgraded MotusAI overcomes traditional limitations by enabling hybrid scheduling of training and inference workloads on a single node, allowing for seamless integration and dynamic orchestration of diverse task types. Equipped with advanced GPU scheduling capabilities, MotusAI supports on-demand resource allocation, empowering users to efficiently manage GPU resources based on workload requirements. MotusAI also features multi-dimensional GPU scheduling, including fine-grained partitioning and support for Multi-Instance GPU (MIG), addressing a wide range of use cases across model development, debugging, and inference.
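Fine-grained GPU partitioning of the kind described above ultimately comes down to packing fractional GPU requests from training and inference workloads onto physical devices. A simplified first-fit sketch (the function name and data shapes are hypothetical; MotusAI's scheduler internals are not public):

```python
def allocate_gpu_slices(requests, gpus, capacity=1.0):
    """First-fit placement of fractional GPU requests onto whole GPUs.

    requests: maps workload name -> fraction of one GPU (e.g. 0.25).
    gpus: number of physical GPUs, each with `capacity` available.
    Returns a dict of workload name -> GPU index, or raises if a
    request cannot be placed on any device.
    """
    free = [capacity] * gpus
    placement = {}
    for name, frac in requests.items():
        for i, avail in enumerate(free):
            if frac <= avail + 1e-9:  # tolerate float rounding
                free[i] -= frac
                placement[name] = i
                break
        else:
            raise RuntimeError(f"no GPU slice available for {name}")
    return placement
```

A production scheduler layers much more on top (MIG profile boundaries, affinity, preemption), but the same packing decision sits at its core.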
MotusAI's enhanced scheduler significantly outperforms community-based versions, delivering a 5× improvement in task throughput and a 5× reduction in latency for large-scale pod deployments. It enables rapid startup and environment readiness for hundreds of pods while supporting dynamic workload scaling and tidal scheduling for both training and inference. These capabilities empower seamless task orchestration across a wide range of real-world AI scenarios.
About KAYTUS
KAYTUS is a leading provider of end-to-end AI and liquid cooling solutions, delivering a diverse range of innovative, open, and eco-friendly products for cloud, AI, edge computing, and other emerging applications. With a customer-centric approach, KAYTUS is agile and responsive to user needs through its adaptable business model. Discover more at KAYTUS.com and follow us on LinkedIn and X.
View source version on businesswire.com: https://www.businesswire.com/news/home/20250612546292/en/
CONTACT: Media Contacts
[email protected]
KEYWORD: EUROPE SINGAPORE SOUTHEAST ASIA ASIA PACIFIC
INDUSTRY KEYWORD: APPS/APPLICATIONS TECHNOLOGY OTHER TECHNOLOGY SOFTWARE NETWORKS INTERNET HARDWARE DATA MANAGEMENT ARTIFICIAL INTELLIGENCE
SOURCE: KAYTUS
Copyright Business Wire 2025.
PUB: 06/12/2025 07:11 AM/DISC: 06/12/2025 07:10 AM