Spectro Cloud Integrates Palette with NVIDIA DOCA and NVIDIA AI Enterprise, Empowering Seamless AI Deployment Across Telco, Enterprise, and Edge

Business Wire | 10-06-2025

SAN JOSE, Calif.--(BUSINESS WIRE)--Spectro Cloud, a leading provider of Kubernetes management solutions, today announced the integration of the NVIDIA DOCA Platform Framework (DPF), part of NVIDIA's latest DOCA 3.0 release, and NVIDIA AI Enterprise software into its Palette platform.
Building on its proven track record as a trusted partner for major organizations deploying Kubernetes in the cloud, at the data center, and at the edge, Spectro Cloud continues to expand its leadership in enabling production-ready infrastructure for AI and modern applications.
This integration empowers organizations to efficiently deploy and manage NVIDIA BlueField-3 DPUs alongside AI workloads across diverse environments, including telco, enterprise, and edge. Spectro Cloud is excited to meet attendees and demonstrate this integration at GTC Paris, June 11-12.
With the integration of DPF, Palette users gain access to a suite of advanced features designed to optimize data center operations:
Comprehensive provisioning and lifecycle management: Palette streamlines the deployment and management of NVIDIA BlueField-accelerated infrastructure, ensuring seamless operations across various environments.
Enhanced security service deployment: With the integration of NVIDIA DOCA Argus, customers gain elevated cybersecurity capabilities, including real-time threat detection for AI workloads. DOCA Argus operates autonomously on NVIDIA BlueField, enabling runtime threat detection, agentless operation, and seamless integration into existing enterprise security platforms.
Support for advanced DOCA networking features: Palette now supports deployment of DOCA Flow features, including ACL pipe, LPM pipe, CT pipe, ordered list pipe, external send queue (SQ), and pipe resize, enabling more granular control over data traffic and improved network efficiency.
NVIDIA AI Enterprise-ready deployments with Palette
Palette now supports NVIDIA AI Enterprise-ready deployments, streamlining how organizations operationalize AI across their infrastructure stack. With deep integration of NVIDIA AI Enterprise software components, Palette provides a turnkey experience to provision, manage, and scale AI workloads, including:
NVIDIA GPU Operator
Automates the provisioning, health monitoring, and lifecycle management of GPU resources in Kubernetes environments, reducing the operational burden of running GPU-intensive AI/ML workloads.
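As an illustration of what this automation provides (not Spectro Cloud or NVIDIA tooling, and assuming kubeconfig access to a cluster where the GPU Operator is already installed), a quick check with the Kubernetes Python client confirms that nodes advertise schedulable nvidia.com/gpu resources once the operator's device plugin is running:

```python
# Illustrative check, assuming a kubeconfig pointing at a cluster where the
# NVIDIA GPU Operator is installed. Requires: pip install kubernetes
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running inside the cluster
v1 = client.CoreV1Api()

# Once the GPU Operator's device plugin is running, GPU-capable nodes advertise
# the "nvidia.com/gpu" resource, which schedulers and workloads can request.
for node in v1.list_node().items:
    gpus = (node.status.allocatable or {}).get("nvidia.com/gpu", "0")
    print(f"{node.metadata.name}: {gpus} allocatable GPU(s)")
```

Workloads can then request GPUs in their pod specs the same way they request CPU or memory.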
NVIDIA Network Operator
Delivers accelerated network performance using DOCA infrastructure. It enables low-latency, high-throughput communication critical for distributed AI inference and training workloads.
NVIDIA NIM Microservices
Palette simplifies the deployment of NVIDIA NIM microservices: optimized, containerized inference services that expose standard APIs, letting organizations quickly serve popular foundation models, including LLMs, vision models, and ASR pipelines. With Palette, users can launch NIM endpoints on GPU-accelerated infrastructure with policy-based governance, lifecycle management, and integration into CI/CD pipelines, enabling rapid experimentation and production scaling of AI applications.
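As a minimal sketch (the endpoint URL and model name below are placeholders, not values supplied by Palette or NVIDIA), a deployed NIM endpoint can be queried through its OpenAI-compatible HTTP API:

```python
# Minimal sketch of calling a deployed NIM endpoint through its OpenAI-compatible API.
# The URL and model name are placeholders; substitute whichever NIM service you deployed.
import requests

NIM_URL = "http://nim.example.internal:8000/v1/chat/completions"  # hypothetical endpoint

payload = {
    "model": "meta/llama-3.1-8b-instruct",  # example model; match the NIM you are running
    "messages": [{"role": "user", "content": "Summarize what a BlueField DPU does in one sentence."}],
    "max_tokens": 128,
}

resp = requests.post(NIM_URL, json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```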
NVIDIA NeMo
With Palette's industry-leading declarative management, platform teams can easily define reusable cluster configurations that include everything from NVIDIA NeMo microservices for building, customizing, evaluating, and guardrailing LLMs, to GPU drivers and NVIDIA CUDA libraries, to the NVIDIA Dynamo inference framework, along with PyTorch/TensorFlow and Helm chart deployments. This approach enables a scalable, repeatable, and operationally efficient foundation for AI workloads.
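To make the declarative idea concrete, the sketch below models a cluster configuration as a simple list of layers and applies it with Helm. It is an illustration only: the layer schema, release names, and namespaces are hypothetical and not Palette's actual cluster-profile format, while the repository and chart names are NVIDIA's publicly documented Helm charts (verify chart versions for your environment).

```python
# Illustration of the declarative idea: a cluster configuration expressed as data,
# then applied with Helm. The layer schema and namespaces are hypothetical (not
# Palette's cluster-profile format); the repo and chart names are NVIDIA's public
# Helm charts. Requires the helm CLI and cluster access via kubeconfig.
import subprocess

LAYERS = [
    {"release": "gpu-operator", "chart": "nvidia/gpu-operator", "namespace": "gpu-operator"},
    {"release": "network-operator", "chart": "nvidia/network-operator", "namespace": "nvidia-network-operator"},
]

def apply_layers(layers):
    subprocess.run(["helm", "repo", "add", "nvidia", "https://helm.ngc.nvidia.com/nvidia"], check=True)
    subprocess.run(["helm", "repo", "update"], check=True)
    for layer in layers:
        subprocess.run(
            ["helm", "upgrade", "--install", layer["release"], layer["chart"],
             "--namespace", layer["namespace"], "--create-namespace"],
            check=True,
        )

if __name__ == "__main__":
    apply_layers(LAYERS)
```

The value of the declarative approach is that the same layer list can be versioned and reapplied across many clusters, which is the pattern the paragraph above describes.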
By integrating these components, Palette empowers teams to rapidly build, test, and deploy AI services, while maintaining enterprise-grade control and visibility. This eliminates the traditional friction of managing disparate software stacks, GPU configurations, and AI model serving infrastructure.
"Integrating NVIDIA DPF into our Palette platform marks a significant step forward in delivering scalable and efficient AI infrastructure solutions," said Saad Malik, CTO and co-founder, Spectro Cloud. "Our customers can now harness the full potential of NVIDIA BlueField's latest advancements to drive accelerated networking, infrastructure optimization, AI security, and innovation across telco, enterprise, and edge environments."
"Organizations are rapidly building AI factories and need intelligent, easy-to-use infrastructure solutions to power their transformation," said Dror Goldenberg, senior vice president of Networking Software at NVIDIA. "Building on the DOCA Platform Framework, the Palette platform enables enterprises and telcos to deploy and operate BlueField-accelerated AI infrastructure with greater speed and efficiency."
This strategic integration positions Palette as a comprehensive platform for organizations aiming to operationalize AI at scale, including:
Telco solutions: High-performance, low-latency infrastructure tailored for telecommunications applications.
Enterprise deployments: Scalable and secure AI infrastructure to support diverse enterprise workloads.
Edge computing: Lightweight, GPU-accelerated solutions designed for resource-constrained edge environments.
Palette is available today for deployment and proof of concept (POC) projects. For more information about Spectro Cloud's Palette platform, visit spectrocloud.com. Learn more about our work with NVIDIA, including technical blogs, here.
About Spectro Cloud
Spectro Cloud delivers simplicity and control to organizations running Kubernetes at any scale.
With its Palette platform, Spectro Cloud empowers businesses to deploy, manage, and scale Kubernetes clusters effortlessly — from edge to data center to cloud — while maintaining the freedom to build their perfect stack.
Trusted by leading organizations worldwide, Spectro Cloud transforms Kubernetes complexity into elegant, scalable solutions, enabling customers to master their cloud-native journey with confidence.
Spectro Cloud is a Gartner Cool Vendor, a CRN Tech Innovator, and a "leader" and "outperformer" in GigaOm's 2025 Radars for Kubernetes for Edge Computing and Managed Kubernetes.
Co-founded in 2019 by CEO Tenry Fu, Vice President of Engineering Gautam Joshi and Chief Technology Officer Saad Malik, Spectro Cloud is backed by Alter Venture Partners, Boldstart Ventures, Firebolt Ventures, Growth Equity at Goldman Sachs Alternatives, NEC and Translink Orchestrating Future Fund, Qualcomm Ventures, Sierra Ventures, Stripes, T-Mobile Ventures, TSG and WestWave Capital.
For more information, visit https://www.spectrocloud.com or follow @spectrocloudinc and @spectrocloudgov on X.
