
Cisco Live 2025 Touts Cisco's Platform Advantages For Enterprise AI
Cisco president and chief product officer Jeetu Patel presents at Cisco Live 2025.
At last week's Cisco Live in San Diego, CEO Chuck Robbins said that it would be the most important Cisco Live ever, with more innovations announced than ever before. Having attended a ton of these events and followed the company closely for many years, I can tell you that the show mostly fulfilled that promise. More than that, it reinforced Cisco's areas of strategic focus in infrastructure, the modern workplace and digital resiliency.
It's clear that Cisco is working hard to leverage its platform advantages across networking, security, observability, compute and even silicon to support agentic AI workloads. This should help customers simplify operations while maintaining the highest levels of network security and AI safety. As I pointed out in my analysis of the Cisco Partner Summit held late in 2024, Cisco was very deliberate — perhaps a touch slow — in establishing its overarching AI strategy. By now, though, I'm impressed with how quickly the company has been moving to bring this strategy to fruition. Let's dig into the details of what Cisco is doing, and what it could still do better.
(Note: Cisco is an advisory client of my firm, Moor Insights & Strategy.)
Cisco's Strategic Imperatives, Per Robbins And Patel
Early on, Robbins made the fundamental point that networking is critical for AI to function, and will be a big factor in enabling AI growth going forward. Beyond that, agentic AI will be adequately secured only by applying security to the network. In this context, Robbins scored a direct hit on Cisco's competitors when he pointed out that 'None of our network friends have security and none of our security friends have networking.' He believes (and I agree) that this puts Cisco in a unique position to integrate security into the network, which is going to be especially important in enterprise IT.
AI is growing like wildfire against a backdrop of global turmoil. Robbins said that geopolitical dynamics are a big concern for Cisco, noting that AI competition isn't just between companies, but between nations as well. Whether companies or countries, everyone has FOMO, and everyone feels like they need to move fast. (He said that 85% of enterprises believe they must 'do AI' in the next 18 months.) This reminds me that most of my conversations at this year's World Economic Forum in Davos were spent discussing countries' needs for a disconnected, sovereign AI cloud. The need is there.
A new type of infrastructure will be required to realize the potential of generative and especially agentic AI. According to Robbins, we need an advancement in infrastructure similar to the one that happened when the internet became ubiquitous in the 1990s. It's worth pausing here to reflect on how important that period of technology was for turning Cisco into a networking juggernaut. Here in the 2020s, this is going to play out with a hybrid strategy that includes both cloud hyperscalers and private enterprise datacenters. To meet this need, Robbins told the crowd not to underestimate the impact of Cisco's combined strengths in networking, security and silicon. (In my view, Cisco needs to talk more often and in more detail about the silicon part. I'll come back to that in my recommendations at the end of this article.)
After the CEO's keynote was done, Cisco's chief product officer — and newly appointed president — Jeetu Patel took the stage and echoed Robbins with his emphasis on AI-ready infrastructure, the workplace of tomorrow and digital resilience.
He later gave much more detail on each of those facets, as I'll cover below. But first he talked about how the ability of AI agents to autonomously execute tasks will compound productivity, especially when combined with advances in robotics, AI and other areas. As he put it, '8 billion [people] will feel like 80 billion.'
However, this productivity explosion will be constrained by limits in power, networking and compute. He also foresees a growing divide between companies that are dexterous with AI and those that will struggle. 'We want to help you be in that first category,' Patel said. He's making a very timely prediction. I presented to a group of European CIOs earlier this week in Munich, and one of the slides showed logos of companies that 'died' from not embracing the internet and e-commerce. The same will happen to companies that don't quickly embrace AI. You don't have to be first, but you can't be last.
How can Cisco help? Patel brought it back to the compounding effect of Cisco's platform approach, where many different types of complementary technology work 'in harmony.' He referenced the company's silicon (so customers aren't stuck with a single provider) and especially programmable silicon (to adapt to new use cases). He also reiterated a point he had made in the 2024 Cisco Partner Summit — that AI is foundational to Cisco's products, so customers can expect it to be built right in. While I think that's an accurate thing to say, I would also suggest that by now it's not such a point of differentiation.
The Need For AI-Ready Infrastructure
Patel went into more detail about the massive, even exponential, buildout of datacenters underway right now. He said that Cisco is foundational in building out these new datacenters. For datacenters to support large-scale agentic AI, they need a new architecture that can support the constant high levels of AI model activity that agentic AI creates. This is unlike generative AI chatbots, where the activity spikes up and down. Patel believes that the company can take advantage of the opportunity based on the experience it has gained from many years of serving hyperscalers/CSPs, neoclouds and enterprise customers.
In support of my praise for Cisco's impressive speed lately, Patel touted the 19 major datacenter innovations the company has launched just within the past six months. At Cisco Live, it announced the unified Nexus Dashboard, which creates 'one brain for all of our data center fabrics,' according to Patel. There was also plenty of talk about the company's partnership with AI bellwether Nvidia. Among other aspects of the pairing, Cisco switches are completely integrated into Nvidia architecture, and Nvidia NeMo models can be secured with Cisco AI Defense.
As I have said before, I am a recovering product management and product marketing executive, and I always challenge tech companies to describe their product realization process. While Cisco gets criticized for its 'legacy' roots, Patel has very much changed the product culture there: fewer layers and faster time-to-decision. Most of the new AI software was developed by small teams of six to eight members. This is a new practice — and very much a new Cisco. I will be digging more into the metrics and outcomes as they become available, but I like what I hear so far.
Given that this is Cisco we're talking about, that was just the tip of the iceberg for cybersecurity. Patel described security as a prerequisite for enterprise AI because 'If people don't trust the system, they're not going to use it.' There was also an announcement about the Hybrid Mesh Firewall, which enables distributed policy enforcement, adds security to all sorts of devices and can work with existing firewalls (even from third parties). There were other announcements of specific firewalls, and Patel asserted that Cisco is the price-performance leader for firewalls at every level of scale. The company also launched a new secure network architecture called Cisco Live Protect, which is meant to shield your network from an exploit within minutes, giving your IT security team time to fix the underlying issue. The contrast between 'within minutes' and the industrywide 45-day average to patch a vulnerability is striking, to say the least.
You can read more in this analysis from my colleague Will Townsend, who's an expert on networking and cybersecurity. Our colleague Matt Kimball, who has a long background in datacenters, will also be publishing his analysis soon.
Networking And Equipping The Workplace Of Tomorrow
This part of the presentation bridged various aspects of networking for enterprises, where Patel said the priorities were operational simplicity, scalability and — once again — security infused into the network. He got some cheers when he announced that Cisco's Catalyst switches are now unified with its Meraki network platform; there's now a single dashboard for managing these along with all of Cisco's next-gen devices. From my perspective, this is a nice example of Cisco's growing emphasis on simplifying the customer and user experience.
In that vein, there was also an impressive demo of the new AgenticOps platform, which includes a multiplayer management console called AI Canvas. Will Townsend wrote much more about this in his article, praising its 'dynamic and real-time view into the inner workings of a customer's infrastructure expanse' to manage the network assurance, observability and remediation supplied by other Cisco tools.
The live demo showed a user fetching data on a network outage and creating UI widgets in real time to manage it. The engineer using it walked through troubleshooting, then invited other users to help — with an autogenerated AI summary of what had been done so far. The AI model recognized missing data and looked for it, and then it was easy to apply a patch straight from the dashboard. Even a non-engineer like me could see immediately how helpful this console would be. Patel is not afraid to use hyperbole when it's warranted, so he summarized the impact of AgenticOps by saying, 'The way in which you run your network will never be the same again.' And he promised that much more innovation like this is coming through the pipeline.
There was a lot more, including 'one of the largest refreshes of networking devices in Cisco history.' This includes smart switches, secure routers, WiFi 7 gear, campus gateways, industrial IoT . . . if you can network it, Cisco wants to do it smarter. For example, the new smart switches have isolated compute so you can run things like security right on the switch, plus all of the devices act as sensors that provide information about their environment back to the system.
Harnessing Data And AI To Fortify Digital Resilience
When the conversation turned to digital resilience, Patel and other presenters continued the theme of bringing more data into the picture to keep infrastructure running well. Enterprises routinely expend many hours determining the causes of outages; in Patel's view, the friction of this process is created by not having the right data available. 'One of the reasons we acquired Splunk for the low price of $28 billion,' he said, 'was to take all this data across multiple domains and correlate it.'
He added that the core method of digital resilience is to distill data, correlate it, then unleash AI on the problem. There were plenty of specifics in terms of new launches, new Splunk integrations and so on, not to mention using smaller, more efficient bespoke AI models for specific security needs. (Those Cisco folks really are building AI into everything.)
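To make that three-step pattern a bit more concrete, here is a minimal, purely illustrative sketch in Python of what "distill, correlate, then apply AI" can look like at toy scale. Nothing here reflects actual Cisco or Splunk interfaces; every class, function and threshold below is a hypothetical placeholder, and a trivial scoring heuristic stands in for the AI model.

```python
# Illustrative sketch only: distill noisy events, correlate them across domains,
# then hand the correlated cluster to an "AI" scorer (a simple heuristic here).
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Event:
    domain: str          # e.g. "network", "security", "application"
    host: str            # entity the event refers to
    timestamp: datetime
    severity: int        # 1 (info) .. 5 (critical)
    message: str


def distill(raw_events: list[Event], min_severity: int = 2) -> list[Event]:
    """Step 1: drop low-value noise so downstream analysis sees less data."""
    return [e for e in raw_events if e.severity >= min_severity]


def correlate(events: list[Event], window: timedelta = timedelta(minutes=5)) -> dict[str, list[Event]]:
    """Step 2: group cross-domain events by shared host within a time window."""
    by_host: dict[str, list[Event]] = defaultdict(list)
    for e in sorted(events, key=lambda e: e.timestamp):
        bucket = by_host[e.host]
        if bucket and e.timestamp - bucket[0].timestamp > window:
            continue  # outside this host's correlation window in this simple sketch
        bucket.append(e)
    return by_host


def score_incident(cluster: list[Event]) -> float:
    """Step 3: stand-in for an AI model -- severity weighted by domain spread."""
    domains = {e.domain for e in cluster}
    return sum(e.severity for e in cluster) * len(domains)


if __name__ == "__main__":
    now = datetime.now()
    raw = [
        Event("network", "core-sw-1", now, 4, "link flap detected"),
        Event("security", "core-sw-1", now + timedelta(minutes=1), 5, "anomalous login"),
        Event("application", "core-sw-1", now + timedelta(minutes=2), 3, "latency spike"),
        Event("network", "edge-rtr-9", now, 1, "routine keepalive"),
    ]
    for host, cluster in correlate(distill(raw)).items():
        print(host, "incident score:", score_incident(cluster))
```

In a real deployment the heuristic would be replaced by a trained model and the events would stream in from telemetry and log platforms rather than a hard-coded list, but the distill-correlate-analyze shape Patel described is the part this sketch is meant to show.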
But for me there were two big takeaways from this part of the show. First is the idea of reimagining security operations by performing security at machine scale, and consolidating and simplifying security solutions to make that easier. Second is the extension of observability to AI. Some aspects of AI may still be 'black boxes' in terms of what the algorithms are doing, but Cisco wants to give its customers the ability to see everything their AIs are doing in terms of compute usage, network traffic, power draw and so on. If the company is able to pull off everything talked about onstage, I think that can only help with operationalizing AI to yield real business results for enterprises.
Messaging All This AI Innovation
Cisco Live reminds me a little bit of Google Cloud Next in terms of the sheer number of announcements packed into a couple of days. And it makes sense, given that both Cisco and Google Cloud (a) operate across multiple product areas with many different individual offerings in them, and (b) are investing enormous amounts of bandwidth (and capex) into AI — at speed.
There are a couple of risks I see for Cisco. First, yes, the company's leaders told us repeatedly that AI is 'foundational' in their products, and the demos were great. But it's one thing to claim this (as most of the big infrastructure providers and enterprise software vendors do) and another to engage with the market as a whole, and with individual customers, to help them understand the many potential payoffs of Cisco's AI innovations, area by area and product by product. I have great faith in Cisco's go-to-market prowess, but there's so much coming down the pike so fast that, to be most successful, the company needs to do the very best job of explaining its wares that it's ever done.
Maybe it's my background in semiconductors talking, but I see this especially in how Cisco talks about its in-house silicon. In my view, Cisco doesn't blow this trumpet as loudly or as often as it should. The company announced its Silicon One initiative in 2019, and has been shipping its own chips for years by this point . . . yet there still aren't enough people who know about it. And look at the valuations of AI chip companies today. So I urge the company to talk more about its chips and how they add to the differentiation of Cisco's portfolio. The good news is, the introduction of all this AI functionality from Cisco — and the sea change underway in AI datacenter infrastructure — offers a perfect opportunity to do this.
Cisco has some unique advantages in the market, starting with Chuck Robbins' correct and fundamental assertion that no one else in networking can match Cisco in security, and no one else in cybersecurity can match Cisco for networking. I'm also impressed by the waves of smart, relevant, user-friendly products I see at each Cisco event I attend. Now I want to see just how well Cisco can market, message and sell all this goodness into the enterprise.