
Latest news with #Kubernetes

Mirantis unveils architecture to speed & secure AI deployment

Techday NZ

4 days ago


Mirantis has released a comprehensive reference architecture to support IT infrastructure for AI workloads, aiming to assist enterprises in deploying AI systems quickly and securely. The Mirantis AI Factory Reference Architecture is based on the company's k0rdent AI platform and designed to offer a composable, scalable, and secure environment for artificial intelligence and machine learning (ML) workloads. According to Mirantis, the solution provides criteria for building, operating, and optimising AI and ML infrastructure at scale, and can be operational within days of hardware installation.

The architecture leverages templated and declarative approaches provided by k0rdent AI, which Mirantis claims enables rapid provisioning of required resources. This, the company states, leads to accelerated prototyping, model iteration, and deployment, thereby shortening the overall AI development cycle. The platform features curated integrations, accessible via the k0rdent Catalog, for various AI and ML tools, observability frameworks, continuous integration and delivery, and security, all while adhering to open standards.

Mirantis is positioning the reference architecture as a response to rising demand for specialised compute resources, such as GPUs and CPUs, crucial for the execution of complex AI models. "We've built and shared the reference architecture to help enterprises and service providers efficiently deploy and manage large-scale multi-tenant sovereign infrastructure solutions for AI and ML workloads," said Shaun O'Meara, chief technology officer, Mirantis. "This is in response to the significant increase in the need for specialized resources (GPU and CPU) to run AI models while providing a good user experience for developers and data scientists who don't want to learn infrastructure."
The architecture addresses several high-performance computing challenges, including Remote Direct Memory Access (RDMA) networking, GPU allocation and slicing, advanced scheduling, performance tuning, and Kubernetes scaling. Additionally, it supports integration with multiple AI platform services, such as Gcore Everywhere Inference and the NVIDIA AI Enterprise software ecosystem.

In contrast to typical cloud-native workloads, which are optimised for scale-out and multi-core environments, AI tasks often require the aggregation of multiple GPU servers into a single high-performance computing instance. This shift demands RDMA and ultra-high-performance networking, areas which the Mirantis reference architecture is designed to accommodate.

The reference architecture uses Kubernetes and is adaptable to various AI workload types, including training, fine-tuning, and inference, across a range of environments. These include dedicated or shared servers, virtualised settings using KubeVirt or OpenStack, public cloud, hybrid or multi-cloud configurations, and edge locations. The solution addresses the specific needs of AI workloads, such as high-performance storage and high-speed networking technologies, including Ethernet, InfiniBand, NVLink, NVSwitch, and CXL, to manage the movement of large data sets inherent to AI applications.
Mirantis has identified and aimed to resolve several challenges in AI infrastructure, including:

  • Time-intensive fine-tuning and configuration compared to traditional compute systems;
  • Support for hard multi-tenancy to ensure security, isolation, resource allocation, and contention management;
  • Maintaining data sovereignty for data-driven AI and ML workloads, particularly where models contain proprietary information;
  • Ensuring compliance with varied regional and regulatory standards;
  • Managing distributed, large-scale infrastructure, which is common in edge deployments;
  • Effective resource sharing, particularly of high-demand compute components such as GPUs;
  • Enabling accessibility for users such as data scientists and developers who may not have specific IT infrastructure expertise.

The composable nature of the Mirantis AI Factory Reference Architecture allows users to assemble infrastructure using reusable templates across compute, storage, GPU, and networking components, which can then be tailored to specific AI use cases. The architecture includes support for a variety of hardware accelerators, including products from NVIDIA, AMD, and Intel.

Mirantis reports that its AI Factory Reference Architecture has been developed with the goal of supporting the unique operational requirements of enterprises seeking scalable, sovereign AI infrastructures, especially where control over data and regulatory compliance are paramount. The framework is intended as a guideline to streamline the deployment and ongoing management of these environments, offering modularity and integration with open standard tools and platforms.
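The composable, template-driven approach described above can be illustrated with a small sketch. This is not k0rdent's actual API (real k0rdent templates are Kubernetes custom resources); the template names, layers, and fields below are hypothetical, showing only the general idea of assembling a per-use-case infrastructure spec from reusable compute, storage, GPU, and networking templates.

```python
from dataclasses import dataclass, field

# Hypothetical reusable building blocks, one per infrastructure layer.
@dataclass(frozen=True)
class Template:
    name: str
    settings: dict = field(default_factory=dict)

# A small made-up catalog of reusable templates.
CATALOG = {
    "compute": Template("gpu-node-pool", {"nodes": 4}),
    "storage": Template("high-perf-fs", {"class": "nvme"}),
    "gpu":     Template("gpu-slicing", {"slices_per_gpu": 2}),
    "network": Template("rdma-fabric", {"transport": "infiniband"}),
}

def compose(use_case: str, layers: list, overrides: dict = None) -> dict:
    """Assemble a declarative spec for one AI use case from catalog templates."""
    overrides = overrides or {}
    spec = {"use_case": use_case, "layers": {}}
    for layer in layers:
        tmpl = CATALOG[layer]
        # Start from the template defaults, then apply per-use-case overrides.
        settings = {**tmpl.settings, **overrides.get(layer, {})}
        spec["layers"][layer] = {"template": tmpl.name, "settings": settings}
    return spec

# A training cluster reuses the same templates as inference,
# differing only in its overrides.
training = compose("llm-training", ["compute", "gpu", "network"],
                   {"compute": {"nodes": 16}})
print(training["layers"]["compute"]["settings"]["nodes"])  # 16
```

The point of the pattern is that tailoring happens in the overrides, not by editing the shared templates, which is what makes the building blocks reusable across use cases.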

Nutanix Reveals Public Sector Cloud Adoption Trends

TECHx

5 days ago


Nutanix (NASDAQ: NTNX), a hybrid multicloud computing company, announced the findings of its seventh annual global Public Sector Enterprise Cloud Index (ECI) survey and research report. The study highlights significant progress and challenges in cloud and GenAI adoption within the public sector.

The report revealed that 83% of public sector organizations have a GenAI strategy in place. Among them, 54% are actively implementing the strategy, while 29% are in preparation stages. However, 76% of IT decision-makers reported that their current infrastructure needs moderate to significant improvements to support modern, cloud-native applications at scale. This points to infrastructure modernization as a critical priority for enabling GenAI deployments.

The study also found that GenAI adoption is accelerating across the public sector. Leaders are increasingly using GenAI for employee support, chatbots, and content generation. Yet concerns about security persist. About 92% of public sector leaders stated their organizations need to do more to secure GenAI models and applications. As a result, 96% reported that security and privacy are becoming top priorities.

Greg O'Connell, VP, Federal Sales, Public Sector at Nutanix, said, "Generative AI is no longer a future concept, it's already transforming how we work. 94% of public sector organizations are already putting AI to work and expect returns in as little as one year."

The report further explored trends in Kubernetes, containerization, and future cloud strategies. Key findings include:

  • 92% agree more needs to be done to secure GenAI models and apps.
  • 96% say GenAI is shifting organizational priorities toward privacy and security.
  • 76% believe current infrastructure is not fully ready to support cloud-native applications.
Additionally, 96% of respondents said their organizations are already in the process of containerizing applications. The benefits of cloud-native apps and containers are widely recognized, with 91% acknowledging their positive impact. This year's ECI report was conducted by Vanson Bourne in Fall 2024. The study surveyed 1,500 IT and DevOps decision-makers across various sectors and regions, including the Americas, EMEA, and APJ. Nutanix's latest findings reflect growing confidence in GenAI's potential, while highlighting the urgent need for infrastructure, security, and skillset improvements in the public sector.

Rakesh Kumar Mali: A Leader Revolutionizing Software Engineering

Int'l Business Times

5 days ago


In today's digital-first landscape, where user experience and system resilience are key to business success, leaders like Rakesh Kumar Mali stand out as true pioneers—those who not only adapt to change but drive it. Rakesh Mali is one such luminary in the field of computer software engineering. With more than 14 years in the software industry, a career marked by innovation, leadership, and an unwavering commitment to excellence, Rakesh has redefined what it means to be a software engineer in the modern era. From leading high-performing technical teams to architecting fault-tolerant systems, modernizing legacy applications, and contributing to academic research, Rakesh's journey is a testament to the power of technical expertise combined with visionary leadership. His contributions have not only transformed businesses but also set new benchmarks in software engineering practices.

A Leader Who Delivers: Technical Team Leadership & Project Excellence

At the heart of Rakesh's success is his ability to lead and inspire technical teams. Serving as a Delivery Module Lead at a major global company, he has spearheaded multiple high-impact projects, ensuring timely delivery without compromising quality. His leadership style blends technical depth with people-centric management, fostering an environment where innovation thrives. Under his guidance, teams have successfully transitioned from monolithic architectures to microservices, enabling scalability, agility, and faster deployment cycles. His expertise in designing fault-tolerant and highly available systems has been instrumental in maintaining seamless operations for mission-critical applications, achieving an impressive 99% uptime. One of Rakesh's key strengths lies in leading digital transformation initiatives—helping enterprises modernize from monolithic systems to scalable, fault-tolerant, and secure cloud-native architectures.
He focuses heavily on building systems that are resilient, observable, and DevSecOps-enabled, which not only meet technical requirements but also align with business goals.

Modernizing Legacy Systems: A Leap into a Cloud-Native Future

One of Rakesh's most significant contributions has been his work in modernizing legacy applications into cloud-native architectures. Legacy systems, often plagued by inefficiencies and high maintenance costs, were transformed into scalable, resilient, and cost-effective solutions. By leveraging containerization, Kubernetes, and modern serverless computing, Rakesh ensured that these applications could handle modern workloads while reducing infrastructure overhead. This modernization effort not only improved performance but also reduced application response time by 30%, significantly enhancing user experience.

Zero Downtime Deployments: The Blue-Green Revolution

In a connected world, no one wants to experience a service outage, yet application deployments have traditionally been a pain point for enterprises, often leading to service disruptions. Rakesh tackled this challenge head-on by implementing Blue-Green Deployment strategies. This approach allows seamless switching between two production environments, eliminating downtime during updates and ensuring uninterrupted service for end users. His implementation of this technique has set a new standard within his organization, reducing deployment risks and improving overall system reliability, and has helped the applications achieve 99% uptime, an impressive milestone for any organization.

Cybersecurity: Fortifying Applications Against Threats

In an era where cyber threats are growing in sophistication, Rakesh has been at the forefront of application security. He led the remediation of cybersecurity vulnerabilities, ensuring that systems were robust against potential breaches.
By conducting thorough security audits, implementing OWASP best practices, and integrating advanced threat detection mechanisms, he fortified applications against attacks. His proactive approach to cybersecurity has not only safeguarded sensitive data but also reinforced trust with clients. To ensure continuous monitoring of application integrity, Rakesh integrated a software composition analysis tool into the application pipeline, enabling automated, scheduled scanning of open-source libraries for known security threats.

An IEEE Senior Member & SCRS Fellow: Recognizing Excellence

Rakesh's contributions extend beyond corporate achievements. His dedication to advancing technology has earned him prestigious recognitions, including:

IEEE Senior Membership — IEEE is a global network of nearly half a million engineering and STEM professionals. Only 10% of members attain the senior grade, a distinction that requires extensive experience and reflects professional maturity and documented achievements of significance in engineering.

SCRS Fellow Membership — Being awarded the prestigious Fellow Membership by the Soft Computing Research Society is a tremendous honor and a testament to years of dedication, innovation, and impactful contributions to the field of computing. This recognition not only highlights Rakesh's commitment to advancing research but also inspires him to continue pushing the boundaries of intelligent systems and computational intelligence.

These accolades reflect his standing as a thought leader in the tech community.

Bridging Industry and Academia: Research, Peer Reviews, Jury Member & Keynote Speaker

Rakesh's influence is not confined to industry alone. He has authored research articles, contributing valuable insights to the field.
His work bridges the gap between theoretical advancements and practical implementations, making complex concepts accessible to engineers and researchers alike. The Asia Research Awards recognized his dedication to academic excellence with the Best Researcher Award, a testament to the impact and originality of his scholarly contributions. Additionally, he serves as a Peer Review Board member for renowned publications, the International Research Journal on Innovations in Engineering and Technology and the ESP Journal of Engineering & Technology Advancements, where he evaluates cutting-edge research, ensuring high academic standards. His meticulous and insightful reviews earned him the Best Peer Reviewer Award, further cementing his reputation as a trusted authority in the research community.

Rakesh's expertise has made him a sought-after authority in the tech world. He serves as an active jury member at the Globee® Awards, where he evaluates groundbreaking technological innovations, helping recognize excellence in the industry. He has been invited as a Keynote Speaker at the International Conference on Artificial Intelligence and Computational Technologies, where he presented "Reimagining Logistics: The Role of Generative AI in Logistics & Supply Chain Transformation." He was also an industry expert speaker at the 5th International Conference on Intelligent Vision and Computing (ICIVC 2025), organized by The ICFAI University, Dehradun, India, which is ranked 36th in India, where he spoke on "Smart Spending in the Cloud: Strategic Cost Management for Cloud-Native Software Development." Rakesh is part of the IEEE Senior Member Review Panel, where he evaluates the contributions of applicants worldwide in the field of computer science and technology. He also spends time sharing his hard-earned knowledge with future engineers through platforms such as CodePath.
Conclusion: A True Architect of the Future

Rakesh Mali's journey is a masterclass in technical leadership, innovation, and the relentless pursuit of excellence. Whether it is optimizing systems for peak performance, securing applications against cyber threats, or mentoring the next generation of engineers, his impact is far-reaching. In an industry that moves at breakneck speed, Rakesh stands out as a visionary who not only keeps pace with change but drives it. His work continues to shape the future of software engineering, making him a true architect of tomorrow's digital landscape. As technology evolves, one thing remains certain: leaders like Rakesh Mali will always be at the forefront, paving the way for a smarter, faster, and more secure digital world.
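The blue-green strategy described in the profile above is a well-known pattern: two identical production environments run side by side, and traffic flips to the newly deployed one only after it passes health checks. A minimal Python sketch of the idea (the environment names and health check are illustrative, not taken from any real deployment):

```python
class BlueGreenRouter:
    """Routes traffic to one of two identical environments; the idle one
    receives new releases and takes over only after passing health checks."""

    def __init__(self):
        self.envs = {"blue": None, "green": None}  # version running in each env
        self.active = "blue"

    @property
    def idle(self):
        return "green" if self.active == "blue" else "blue"

    def deploy(self, version, healthy):
        """Deploy `version` to the idle environment, then switch traffic to it
        only if the `healthy` check passes. Returns the now-active env."""
        target = self.idle
        self.envs[target] = version
        if healthy(version):
            self.active = target  # instant cutover, no downtime
        # on failure, traffic keeps flowing to the old environment
        return self.active

router = BlueGreenRouter()
router.envs["blue"] = "v1"

# A healthy release flips traffic from blue to green...
print(router.deploy("v2", healthy=lambda v: True))   # green

# ...while a failing release leaves the previous version serving traffic.
print(router.deploy("v3", healthy=lambda v: False))  # green
```

In Kubernetes, the same cutover is typically performed by repointing a Service's label selector from the "blue" Deployment to the "green" one, which is what makes the switch effectively instantaneous.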

Why China is giving away its tech for free

Economist

6 days ago


Underpinning the digital economy is a deep foundation of open-source software, freely available for anyone to use. The majority of the world's websites are run using Apache and Nginx, two open-source programs. Most computer servers are powered by Linux, another such program, which is also the basis of Google's Android operating system. Kubernetes, a program widely used to manage cloud-computing workloads, is likewise open-source. The software is maintained and improved upon by a global community of developers.

Build, Operate and Optimize AI and ML Infrastructure at Scale with Industry's First Reference Architecture to Support AI Workloads

Business Wire

6 days ago


CAMPBELL, Calif.--(BUSINESS WIRE)-- Mirantis, the Kubernetes-native AI infrastructure company enabling enterprises to build and operate scalable, secure, and sovereign AI infrastructure across any environment, today announced the industry's first comprehensive reference architecture for IT infrastructure to support AI workloads. The Mirantis AI Factory Reference Architecture, built on Mirantis k0rdent AI, provides a secure, composable, scalable, and sovereign platform for building, operating, and optimizing AI and ML infrastructure at scale. It enables:

  • AI workloads to be deployed within days of hardware installation, using k0rdent AI's templated, declarative model for rapid provisioning;
  • Faster prototyping, iteration, and deployment of models and services to dramatically shorten the AI development lifecycle;
  • Curated integrations (via the k0rdent Catalog) for AI/ML tools, observability, CI/CD, security, and more, which leverage open standards.

"We've built and shared the reference architecture to help enterprises and service providers efficiently deploy and manage large-scale multi-tenant sovereign infrastructure solutions for AI and ML workloads," said Shaun O'Meara, chief technology officer, Mirantis. "This is in response to the significant increase in the need for specialized resources (GPU and CPU) to run AI models while providing a good user experience for developers and data scientists who don't want to learn infrastructure."

With the reference architecture, Mirantis addresses complex issues related to high-performance computing that include remote direct memory access (RDMA) networking, GPU allocation and slicing, sophisticated scheduling requirements, performance tuning, and Kubernetes scaling.
The architecture can also integrate a choice of AI Platform Services, including Gcore Everywhere Inference and the NVIDIA AI Enterprise software ecosystem. Cloud-native workloads, which are typically designed for scale-out and multi-core operations, are quite different from AI workloads, which can require turning many GPU-based servers into one single supercomputer with aggregated memory, demanding RDMA and ultra-high-performance networking.

The reference architecture leverages Kubernetes and supports multiple AI workload types (training, fine-tuning, inference) across: dedicated or shared servers; virtualized environments (KubeVirt/OpenStack); public cloud or hybrid/multi-cloud; and edge locations. It addresses the novel challenges related to provisioning, configuration, and maintenance of AI infrastructure, and supports the unique needs of these workloads, including high-performance storage and ultra-high-speed networking (Ethernet, InfiniBand, NVLink, NVSwitch, CXL) to keep up with AI data movement needs. These challenges include:

  • Fine-tuning and configuration, which typically take longer to implement and learn than for traditional compute systems;
  • Hard multi-tenancy for data security and isolation, resource allocation, and contention management;
  • Data sovereignty, since AI and ML workloads are typically data-driven or contain unique intellectual property in their models, making it critical to control how and where this data is used;
  • Compliance with regional and regulatory requirements;
  • Managing scale and sprawl, because the infrastructure used for AI and ML typically comprises a large number of compute systems that can be highly distributed for edge workloads;
  • Resource sharing of GPUs and other vital compute resources that are scarce and expensive and thus must be shared effectively and/or leveraged wherever they are available;
  • Skills availability, because many AI and ML projects are run by data scientists or developers who are not specialists in IT infrastructure.
The Mirantis AI Factory Reference Architecture is designed to be composable, so that users can assemble infrastructure from reusable templates across compute, storage, GPU, and networking layers, tailored to their specific AI workload needs. It includes support for NVIDIA, AMD, and Intel AI accelerators. Access the complete reference architecture document, along with more information.

About Mirantis

Mirantis is the Kubernetes-native AI infrastructure company, enabling organizations to build and operate scalable, secure, and sovereign infrastructure for modern AI, machine learning, and data-intensive applications. By combining open source innovation with deep expertise in Kubernetes orchestration, Mirantis empowers platform engineering teams to deliver composable, production-ready developer platforms across any environment - on-premises, in the cloud, at the edge, or in data centers. As enterprises navigate the growing complexity of AI-driven workloads, Mirantis delivers the automation, GPU orchestration, and policy-driven control needed to cost-effectively manage infrastructure with confidence and agility. Committed to open standards and freedom from lock-in, Mirantis ensures that customers retain full control of their infrastructure strategy. Mirantis serves many of the world's leading enterprises, including Adobe, Ericsson, Inmarsat, PayPal, and Societe Generale. Learn more at
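GPU allocation and slicing, mentioned above, lets scarce accelerators be shared by splitting each physical GPU into fixed fractions handed to different tenants, while refusing requests that would oversubscribe a device. A toy allocator sketch (not Mirantis code; the slice counts and tenant names are made up) shows the bookkeeping involved:

```python
class GpuSliceAllocator:
    """Tracks fractional slices of each physical GPU, refusing requests
    that would oversubscribe a device (simple contention management)."""

    def __init__(self, num_gpus, slices_per_gpu):
        # free slices remaining on each physical GPU
        self.free = {gpu: slices_per_gpu for gpu in range(num_gpus)}
        self.owners = {}  # (gpu, tenant) -> slices held

    def allocate(self, tenant, slices):
        """Grant `slices` on the first GPU with enough capacity, or None."""
        for gpu, free in self.free.items():
            if free >= slices:
                self.free[gpu] -= slices
                key = (gpu, tenant)
                self.owners[key] = self.owners.get(key, 0) + slices
                return gpu
        return None  # no single device can satisfy the request

    def release(self, tenant, gpu):
        """Return all of a tenant's slices on `gpu` to the free pool."""
        self.free[gpu] += self.owners.pop((gpu, tenant), 0)

alloc = GpuSliceAllocator(num_gpus=2, slices_per_gpu=4)
print(alloc.allocate("team-a", 3))  # 0: fits on the first GPU
print(alloc.allocate("team-b", 2))  # 1: GPU 0 has only 1 slice left
print(alloc.allocate("team-c", 4))  # None: no GPU has 4 free slices
alloc.release("team-a", 0)
print(alloc.allocate("team-c", 4))  # 0: freed capacity is reusable
```

In real clusters this bookkeeping is handled by mechanisms such as NVIDIA Multi-Instance GPU (MIG) or GPU time-slicing surfaced through a Kubernetes device plugin, rather than by application code.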
