
Latest news with #CIODive

Michelin scraps VMware containers for open-source Kubernetes platform

Yahoo · 14-06-2025

This story was originally published on CIO Dive. To receive daily news and insights, subscribe to our free daily CIO Dive newsletter.

When Broadcom purchased VMware, IT shops took notice. The $61 billion deal, finalized in November 2023, signaled a seismic shift in the virtualization software vendor's expansive and nearly ubiquitous enterprise portfolio, which Broadcom rapidly packaged into just a few subscription service bundles. As customers grappled with the prospect of steep cost increases for operationally critical technologies, a small group of Michelin engineers sensed a transformation opportunity. 'It was a good time to start checking for an open-source project to replace this vendor solution,' Arnaud Pons, platform architect at Michelin, told CIO Dive.

Digital transformation is rarely a linear process. Modernization journeys often take unexpected turns as enterprises navigate cloud costs, vendor relationships and organizational changes. The VMware acquisition was a fork in the road for Michelin's platform engineering team, which had been running hundreds of applications in the vendor's Tanzu Kubernetes Grid system for several years. Michelin could either channel its energies into adapting to the revamped container services or pivot to its own, internally driven strategy.

It wasn't a difficult decision from an engineering perspective, according to Gabriel Quennesson, Michelin's container-as-a-service tech lead. 'At the end of the day, we realized that everything that we needed to do was readily available, and possible with open source tools,' Quennesson said in a Thursday case study published by the Cloud Native Computing Foundation. Michelin joined the CNCF in April, a few months after completing the migration to an in-house platform the engineering team dubbed Michelin Kubernetes Services, or MKS. The entire process took roughly six months, Quennesson told CIO Dive.
'By having the knowledge of working on the technology for a couple of years, we were able to move rather quickly out of Tanzu — maybe quicker than moving to another vendor solution — because we could identify a migration path that neither VMware nor other vendors could provide us,' said Quennesson.

A team of 11 engineers, including Quennesson and Pons, now manages nearly 450 containerized software applications deployed across 42 locations supporting critical functions, such as ordering and logistics. 'While some of the manpower goes to simply 'keeping the lights on,' by ensuring the best platform availability and supportability, the move to open source allows us to start looking at the future and to what features had most value for our end users,' Quennesson said in a March blog post detailing Michelin's Kubernetes journey.

From a developer perspective, the case for open source is a relatively easy one: Talented engineers want to build technologies that solve difficult problems. Quennesson and Pons had to make a business argument in favor of migration, too. After Michelin crunched the numbers, the company determined it could cut its yearly container costs by 44% by building an open-source platform.

The open-source shift also lifted the spirits of its engineering team. 'I measure morale first,' Pons said. 'It's really better to build a solution to a problem instead of just creating tickets and waiting months for someone at the vendor to call you with a solution that is not a very good one, anyway.'

The business equation was fairly straightforward, as well, according to Quennesson. 'We were more interested in the quality of the solution than the cost, but it's basic math,' he said. 'You are saving the subscription cost, which created a significant cost reduction, and your infrastructure costs are about the same.'
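Quennesson's 'basic math' can be sketched as a toy cost model: drop the vendor subscription, keep infrastructure spend roughly flat, and the savings percentage falls out. All figures below are hypothetical illustrations, not Michelin's actual numbers.

```python
# Toy cost model for a vendor-to-open-source migration.
# All numbers are hypothetical illustrations, not Michelin's figures.

def yearly_savings_pct(subscription: float, infrastructure: float,
                       extra_engineering: float = 0.0) -> float:
    """Percent saved when the subscription is dropped, infrastructure
    stays roughly flat, and some engineering cost may be added."""
    before = subscription + infrastructure
    after = infrastructure + extra_engineering
    return round(100 * (before - after) / before, 1)

# Example: a subscription that made up ~44% of total container spend
print(yearly_savings_pct(subscription=4.4, infrastructure=5.6))  # → 44.0
```

The model also shows why the argument survives added headcount: even if open-source operations add engineering cost, savings stay substantial as long as that cost is well below the subscription it replaces.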
Michelin's Kubernetes journey began in 2018, when the company first implemented containers using Kubespray to avoid vendor lock-in, Quennesson said in the March blog. The team ran its containers on the Azure public cloud but opted against Microsoft's Kubernetes platform and didn't turn to a third-party service until it chose Tanzu in 2021. While Tanzu enabled Michelin to build out and scale its containers both in the cloud and on-premises, the transition wasn't seamless or without frustrations.

'The Kubernetes ecosystem is large and feature rich,' Quennesson said. 'You have to pick and choose what you want and need, so one of the reasons to use a vendor is that this job is done for you.'

VMware's decisions didn't always align with Michelin's needs, according to Quennesson. Relinquishing control to an outside vendor put engineers in an uncomfortably passive role, which created a less inviting atmosphere for developers at a time when tech talent is scarce. 'Skilled engineers were parked in a passive role of opening tickets and interacting with the support teams when they could and most of the time had found out what the issue was and how to fix it,' Quennesson said in the blog post.

Without a vendor to fall back on, the engineering team now has full responsibility for keeping the system up and running. But the tradeoff has been worth it, according to Quennesson. 'It's created a virtuous circle,' he said. 'If you adhere to open source principles and do cool stuff, you attract talent and you make it much easier to retain talent.'
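Kubespray, the tool Michelin started with in 2018, drives cluster deployment from an Ansible inventory rather than a vendor console. As a rough illustration of what that looks like, a minimal single-node inventory follows the project's sample layout (hostname and address here are placeholders, not anything from Michelin's setup):

```yaml
# inventory/mycluster/hosts.yaml — minimal Kubespray inventory (illustrative)
all:
  hosts:
    node1:
      ansible_host: 10.0.0.11   # placeholder address
      ip: 10.0.0.11
  children:
    kube_control_plane:
      hosts:
        node1:
    kube_node:
      hosts:
        node1:
    etcd:
      hosts:
        node1:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
```

From the Kubespray repository root, a cluster built from this inventory is brought up with `ansible-playbook -i inventory/mycluster/hosts.yaml cluster.yml`, typically with elevated privileges on the target hosts.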

Oracle waves off cloud customers as its data center investments balloon

Yahoo · 13-06-2025

Oracle fielded more cloud business than its data centers could handle, executives said Wednesday during the company's Q4 2025 earnings call for the three months ending May 31. 'We actually currently are still waving off customers — or scheduling them out into the future so that we have enough supply to meet demand,' CEO Safra Catz said.

The junior hyperscaler, which trails cloud giants AWS, Microsoft, Google Cloud and Alibaba Cloud in the global battle for market share, saw quarterly cloud infrastructure and software services revenue increase 27% year over year to $6.7 billion, accounting for 42% of the company's $15.9 billion in revenue. Oracle's capital expenditure for the entire fiscal year more than doubled to $21.2 billion compared with the prior year as the company raced to add data center capacity. 'We are putting out as much capacity as we possibly can,' Catz said. 'When we all of a sudden have higher CapEx, it means we are filling out data centers and we are buying components to build our computers.'

As large language models devour compute resources, cloud providers are pouring tens of billions of dollars into data centers to narrow gaps between supply and demand. Hyperscalers drove an unprecedented 34% year-over-year spike in data center hardware and software spending, which hit a record high of $282 billion in 2024, according to Synergy Research Group. Large cloud providers accounted for more than half of the $455 billion in data center capital investments on the year, marking a 51% increase over 2023, the Dell'Oro Group found.

Oracle remained a relatively small fish in an expansive pond. The provider took in roughly 3% of the $94 billion in global cloud infrastructure services spending, which grew 23% year over year during the first three months of 2025, according to Synergy Research Group.
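The story's headline ratios can be verified with a quick back-of-the-envelope check (all dollar figures come from the article; the computation itself is just arithmetic):

```python
# Sanity-check the story's arithmetic (all figures from the article, in $B).
cloud_revenue, total_revenue = 6.7, 15.9
# Cloud's share of Oracle's quarterly revenue, in percent
print(round(100 * cloud_revenue / total_revenue))   # ~42, matching the article

market_total = 94.0                                 # global cloud infra spend, Q1 2025
# Oracle's ~3% slice of that market, in $B
print(round(0.03 * market_total, 1))                # ~2.8
```

The second figure illustrates the "small fish" framing: a roughly $2.8 billion quarterly slice of a $94 billion market, against competitors holding 29% and 22% shares.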
AWS and Microsoft, the two largest hyperscalers, captured the lion's share of cloud revenues, controlling 29% and 22% of the market, respectively. As the market grew and capacity constraints surfaced even among the largest providers, Oracle doubled down on data centers, expanding its cloud estate to more than 100 regions. In March, Catz vowed to triple capacity by the middle of next year, and the company committed to a $5 billion U.K.-based cloud infrastructure buildout. On Wednesday, Catz said the company's capital expenditure will likely surpass $25 billion in the next fiscal year. 'We don't build unless we've got orders for our capacity to be built out,' said Catz. 'It is all to meet demand.'

Oracle CTO Larry Ellison added color to Catz's assessment. 'We recently got an order that said we'll take all the capacity you have wherever it is — it could be in Europe, it could be in Asia, we'll just take everything,' Ellison said. 'We did the best we could to give them the capacity they needed. The demand is astronomical.'

As the company grows its cloud footprint, it is also leveraging recent partnerships with its three biggest competitors to grow its database business. In the last two years, AWS, Microsoft and Google Cloud made separate agreements to run Oracle database servers in their data centers. The multicloud integrations are currently deployed in 23 hyperscaler cloud regions, with an additional 47 in the works, Ellison said. 'Most of the world's most valuable data is stored in an Oracle database,' Ellison added. 'All of those databases are moving to the cloud — Oracle's cloud, Microsoft's Azure cloud, Amazon's cloud or Google's cloud. As use of AI increases, so will Oracle's database market share.'

How 3 banks are capitalizing on AI

Yahoo · 11-06-2025

The banking industry was quick to recognize the business potential of generative AI and, on the flip side, to appreciate the perils inherent in reckless adoption. Adept at managing risk, the sector's largest institutions took a cautious yet persistent approach to moving pilots into production.

Adoption has picked up momentum over the last year, according to Evident Insights, which tracks 50 of the largest banks in North America, Europe and Asia. The 50 banks announced 266 AI use cases as of last week, up from 167 in February, Colin Gilbert, VP of intelligence at Evident, said Tuesday during a virtual roundtable hosted by the industry analyst firm. 'The vast majority, or about 75%, are still internal or employee facing,' he said, adding that the distribution between generative AI and traditional predictive AI use cases was split roughly 50/50.

As banking integrates the technology into daily operations and models mature, the mix is shifting toward generative AI capabilities with customer-facing features, Mudit Gupta, partner and Americas financial services consulting practice AI lead at EY, said during the panel. 'You tend to start with productivity because it's low risk,' Gupta said. 'You establish proof points so that when you get further down the road of adoption, you can move on to transformation.'

Technology executives from three global banks each put their own spin on Gupta's formulation. 'We are taking incremental steps to do something exponential,' Rohit Dhawan, director of AI and advanced analytics at Lloyds Banking Group, said. The bank is consolidating its AI efforts to move beyond individual use cases after bolstering its cloud-based data strategy earlier this year with Oracle's Azure-based database system and Exadata customer cloud data system.
'It's a very different mindset where you go from thinking about how to infuse or optimize a process with AI to fundamentally reimagining the process with AI,' Dhawan said.

Generative AI use cases abound in banking. The technology has capabilities that reach across processes, from managing vast quantities of customer and compliance data for associates to assisting engineers in refactoring legacy applications. Banking executives expect generative AI to be capable of handling up to 40% of daily tasks by the end of the year, according to an April KPMG report. Nearly 3 in 5 of the 200 U.S. bank executives surveyed by the firm said the technology is integral to their long-term innovation plans.

Until recently, NatWest Group was moving gradually with AI, measuring return on investment one use case at a time, the bank's Chief Data and Analytics Officer Zachery Anderson said during the panel. 'We've made a pretty big shift in the last eight months to start to reimagine pieces that really looked at customer experiences, in particular, and how we might rebuild those entirely from front to back,' he said.

While AI assistants, such as Bank of America's Erica for Employees and Citi's Stylus document intelligence and Assist virtual assistant, are becoming commonplace, the breadth of the technology's capabilities is expanding as deployments increase. In September, JPMorgan Chase announced it would equip 140,000 employees with its LLM Suite AI assistant. 'Generative AI is going to impact every function within a bank — every single part of the job,' Accenture Global Banking Lead and Senior Managing Director Michael Abbott told CIO Dive in January, as automated agentic tools began to dominate the AI space.

NatWest is leveraging two deployment pathways, Anderson said. 'We have a core set of data scientists, data engineers, that are working on the biggest, most difficult use cases,' he said.
'They're working on things that are at the edge of feasibility right now, because usually the models are improving so quickly, but by the time the project's done, what was at the edge before is now in the core of possibility.'

At the same time, the bank is pushing AI into non-technical functions. In addition to giving the tools to developers, NatWest rolled out an internal AI tool to business users and a 'very large portion of the users in the bank,' said Anderson. 'The feasible edge of what you can do with the models and the agents right now is not only increasing, but it's also jagged,' Anderson said. 'Things you think you can do, you can't do, and things that you end up finding out that you can do surprise you sometimes … with all 70,000 of our employees exploring that edge, we're mapping out the frontier in a much faster way than we were before.'

Truist has moved from quick wins to use cases that reach farther up the banking food chain. 'Extracting knowledge is the most popular use case,' Chandra Kapireddy, head of analytics, AI/ML and Gen AI at Truist, said. 'It's really low risk, the data is already out there, and it's high reward [because] it gets answers pretty quickly.'

Answers communicate value to business users and help sustain momentum as AI use cases grow in complexity and cost. Early wins also provide IT executives with the political capital to engage in some necessary experimentation with the technology. 'If you try and aim for perfection, you're going to be spinning your wheels,' Kapireddy said. 'That's going to be very productive at the beginning of your use case life cycle. But as you start investing dollars in it, you have to make sure that business stakeholders know that it's going to have an impact.'

Cisco fortifies enterprise networking gear to support AI workloads

Yahoo · 11-06-2025

Cisco rolled out a series of hardware upgrades designed to ease enterprise AI adoption, including retooled networking components, a unified network management platform and the Deep Network Model, a domain-specific large language model that powers an AI assistant, the company said Tuesday at its annual Cisco Live conference in San Diego.

'As AI transforms work, it fuels explosive traffic growth across campus, branch and industrial networks, overwhelming IT teams with complexity and novel security risks at a time when downtime has never been more costly,' Jeetu Patel, president and chief product officer at Cisco, said in the announcement.

Cisco's portfolio overhaul comes as CIOs grapple with infrastructure limitations and safety concerns inherent in AI deployments, according to Matt Eastwood, IDC SVP of enterprise infrastructure. 'The reality is that existing enterprise networks are simply not equipped to handle the scale, security and reliability requirements that AI demands,' Eastwood said in the announcement.

Cisco is banking on enterprises taking a hybrid route to AI adoption. The LLMs that power off-the-shelf chatbots, copilots and AI assistants are trained on high-capacity chips in massive cloud data centers, but enterprises need secure networks to connect models to their data. The company signaled an enterprise IT shift last month, after surpassing its fiscal year 2025 goal of $1 billion in hyperscaler AI networking gear orders in its third quarter, which ended April 26.

'On the tail end of building out all the public cloud infrastructure for training, there is a significantly larger opportunity in enterprise AI, as they build out the capability to do inferencing inside their own data centers,' Cisco EVP and CFO Scott Herren said during the Q3 2025 earnings call.
Cisco saw revenue increase 11% to $14.1 billion, with networking revenue growing 8% year over year. Switches and enterprise routing equipment led the segment with double-digit growth, Herren said.

As AI infrastructure buildouts reach into the enterprise, Cisco has tightened its alliance with GPU chipmaker Nvidia. The two companies agreed in February to collaborate on an integrated architecture for AI-ready enterprise data center networks, and announced Tuesday that Nvidia's RTX Pro 6000 Blackwell Server Edition GPU is available in Cisco servers.

In addition to training a network-savvy AI agent on Cisco specifications and courseware to detect anomalies, diagnose problems and automate workflows, the company is designing routers, switches and other networking gear for a growing class of agentic tools, Patel said during a briefing last week. 'There'll be tens of billions of agents conducting work on our behalf,' said Patel. 'To get all this to happen, the fundamental requirements around infrastructure, as well as safety and security, will need to be completely rethought, because the classical ways that infrastructure was handled just won't be able to deal with the scale and proportion that we're talking about — that's why you're seeing such a massive level of build out globally of data center capacity.'

The largest hyperscaler by market share, AWS, committed $20 billion to AI infrastructure buildouts in Pennsylvania on Monday. The company announced plans for a $10 billion data center construction project in North Carolina last week. Enterprises are eager to shore up on-site infrastructure, too. Nearly 9 in 10 organizations plan to expand compute capacity to run AI workloads, according to a Sandpiper Research and Insights survey of more than 8,000 senior IT and business leaders commissioned by Cisco. More than three-quarters of respondents said their organization had suffered a major network outage due to congestion, cyberattack or misconfiguration.
'The consequence of us not getting this infrastructure piece right is pretty profound,' Patel said. 'What we want to do is have people not have to think about infrastructure.'

AI skills shortage surpasses big data, cybersecurity

Yahoo · 11-06-2025

The gap in AI skills is widening rapidly as enterprises rush to deploy the technology, according to the Nash Squared/Harvey Nash Digital Leadership report published in May. The company surveyed more than 2,000 technology leaders.

More than half of IT leaders said their companies suffered from an undersupply of AI talent, up from 28% in the previous edition of the report, published in 2023. AI know-how went from the sixth most scarce technology skill to the most scarce in 16 months, marking the fastest rise in more than 15 years. Nine in 10 respondents said their companies were piloting or investing in AI use, up from 59% in the 2023 report. Despite the rise, more than two-thirds of leaders said they had not yet seen a measurable return on investment from the technology.

Deploying AI has long been an enterprise priority, with executives hoping to plug automation into key processes in search of productivity wins. Despite those ambitions, a large swath of projects remains stuck in the experimental phase. Several roadblocks stand in the way of full-fledged adoption, including data deficiencies and financial constraints. A looming skills gap has also dampened enterprise AI plans.

"As AI is so new, there is no 'playbook' here," said Bev White, CEO of Nash Squared, in the study announcement. "It's about a mix of approaches including formal training where available, reskilling IT staff and staff outside of the traditional IT function to widen the pool, on-the-job experimentation and knowledge sharing and transfer. This needs to coincide with the development of a new operating model where AI is stitched in."

The two-year, 23-percentage-point jump for AI skills was the steepest increase for a specific skill recorded by Harvey Nash since it began tracking the metric 16 years ago.
A dearth of AI talent was reported by the majority of leaders across several sectors, including education, logistics, manufacturing, business services and pharmaceuticals. Enterprise AI ambitions have steadily driven up demand for AI workers, widening talent gaps. Job site Indeed tracked a significant spike in generative AI job postings, which nearly tripled year over year in January, according to a February report.
