Latest news with #VeriSilicon

National Post
12-06-2025
- Business
- National Post
VeriSilicon's AI-ISP Custom Chip Solution Enables Mass Production of Customer's Smartphones
Providing architecture design, software-hardware co-development, and mass production support, and enhancing AI-powered imaging capabilities in smart devices

SHANGHAI — VeriSilicon recently announced that its AI-ISP custom chip solution has been successfully adopted in a customer's mass-produced smartphones, reaffirming the company's comprehensive one-stop custom silicon service capabilities in AI vision processing.

VeriSilicon's AI-ISP custom chip solution can integrate proprietary or third-party Neural Network Processing Unit (NPU) IP and Image Signal Processing (ISP) IP. By combining traditional image processing techniques with AI algorithms, it significantly enhances image and video clarity, dynamic range, and environmental adaptability. The chip solution offers flexible configurations with RISC-V or Arm-based processors, supports MIPI image input/output interfaces, provides LPDDR5/4X memory integration capability, and is compatible with common peripheral interfaces such as UART, I2C, and SDIO. This makes the solution highly adaptable for deployment across applications including smartphones, surveillance systems, and automotive electronics.

For this collaboration, VeriSilicon designed a low-power AI-ISP system-on-chip (SoC) based on the RISC-V architecture, tailored to the customer's specific requirements and delivered alongside a FreeRTOS real-time Software Development Kit (SDK). The customized SoC was fully optimized for seamless interoperability with the customer's main processor platform and has since been successfully deployed in multiple smart devices, achieving large-scale production. This success highlights VeriSilicon's robust capabilities in heterogeneous computing, software-hardware co-optimization, and system-level integration and verification.
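The pairing the release describes, a classical ISP stage followed by a neural enhancement stage, can be sketched in miniature. The sketch below is purely illustrative: the box filter, the residual "model," and the 1-D scanline are invented stand-ins, not VeriSilicon's pipeline, which runs NPU-accelerated networks on real image data.

```python
# Toy sketch of an AI-ISP style pipeline: a classical filtering stage
# followed by a learned enhancement stage. The residual model here is a
# hypothetical stand-in for an NPU-executed network.

def box_filter(pixels, radius=1):
    """Classical ISP stage: simple box blur for noise reduction."""
    out = []
    for i in range(len(pixels)):
        lo, hi = max(0, i - radius), min(len(pixels), i + radius + 1)
        window = pixels[lo:hi]
        out.append(sum(window) / len(window))
    return out

def ai_enhance(pixels, residual_model):
    """AI stage: add a learned per-pixel residual correction."""
    return [p + residual_model(p) for p in pixels]

def isp_pipeline(raw, residual_model):
    """Traditional processing first, AI refinement second."""
    return ai_enhance(box_filter(raw), residual_model)

# Hypothetical residual model: mild contrast boost around mid-gray.
boost = lambda p: 0.1 * (p - 128)

raw = [120, 250, 118, 122, 119, 0, 121]  # noisy 1-D scanline with outliers
out = isp_pipeline(raw, boost)
```

The design point the toy mirrors is the split of labor: the deterministic stage handles baseline noise, and the learned stage recovers detail and contrast the fixed filter cannot.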
'AI-powered imaging has become a key differentiator in the competitive smartphone market, driving increasing demand for high-performance and low-power image processing solutions,' said Wiseway Wang, Executive Vice President and General Manager of the Custom Silicon Platform Division at VeriSilicon. 'With full-spectrum capabilities ranging from IP licensing and chip architecture design to system-level software and hardware development, tape-out, packaging and testing, as well as mass production, VeriSilicon offers end-to-end custom silicon services leveraging its extensive design service experience and proven mass production capabilities. The successful mass production of this customer's chip further validates our strength in high-end silicon design services. Moving forward, we will continue to innovate and improve our offerings, empowering customers to accelerate the launch of differentiated products with efficient, high-quality custom chip solutions.'

National Post
09-06-2025
- Business
- National Post
VeriSilicon's Ultra-Low Energy NPU Provides Over 40 TOPS for On-Device LLM Inference in Mobile Applications
SHANGHAI — VeriSilicon today announced that its ultra-low energy and high-performance Neural Network Processing Unit (NPU) IP now supports on-device inference of large language models (LLMs) with AI computing performance scaling beyond 40 TOPS. This energy-efficient NPU architecture is specifically designed to meet the increasing demand for generative AI capabilities on mobile platforms. It not only delivers powerful computing performance for AI PCs and other end devices, but is also optimized to meet the increasingly stringent energy efficiency challenges of AI phones and other mobile platforms.

Built on a highly configurable and scalable architecture, VeriSilicon's ultra-low energy NPU IP supports mixed-precision computation, advanced sparsity optimization, and parallel processing. Its design incorporates efficient memory management and sparsity-aware acceleration, which reduce computational overhead and latency, ensuring smooth and responsive AI processing. It supports hundreds of AI algorithms, including AI-NR and AI-SR, and leading AI models such as Stable Diffusion and LLaMA-7B. Moreover, it can be seamlessly integrated with VeriSilicon's other processing IPs to enable heterogeneous computing, empowering SoC designers to develop comprehensive AI solutions that meet diverse application needs.

VeriSilicon's ultra-low energy NPU IP also supports popular AI frameworks such as TensorFlow Lite, ONNX, and PyTorch, thereby accelerating deployment and simplifying integration for customers across various AI use cases.

'Mobile devices, such as smartphones, are evolving into personal AI servers. With the rapid advancement of AIGC and multi-modal LLM technologies, the demand for AI computing is growing exponentially and becoming a key differentiator in mobile products,' said Weijin Dai, Chief Strategy Officer, Executive Vice President, and General Manager of the IP Division at VeriSilicon. 'One of the most critical challenges in supporting such high AI computing workloads is energy consumption control. VeriSilicon has been continuously investing in ultra-low energy NPU development for AI phones and AI PCs. Through close collaboration with leading SoC partners, we are excited to see that our technology has been realized in silicon for next-generation AI phones and AI PCs.'
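As a rough sense of scale, the 40 TOPS headline can be related to LLM decoding with a back-of-envelope estimate. The assumptions below (about 2 ops per parameter per generated token, a 7B-parameter model) are illustrative conventions, not vendor figures, and real on-device decoding is usually memory-bandwidth bound rather than compute bound.

```python
# Back-of-envelope only: relates a 40 TOPS compute budget to decoding a
# LLaMA-7B-class model. Assumes ~2 ops (one multiply + one add) per weight
# per generated token; achieved token rates on real devices are far lower
# because decoding is typically limited by memory bandwidth, not compute.

params = 7e9                         # 7B-parameter model
ops_per_token = 2 * params           # ~14 GOPs per generated token
npu_ops_per_second = 40e12           # 40 TOPS

compute_ceiling = npu_ops_per_second / ops_per_token  # tokens/s, compute-bound
# On these assumptions the compute-only ceiling is a few thousand tokens/s,
# which is why energy per operation, not raw TOPS, becomes the binding metric.
```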



National Post
09-06-2025
- Business
- National Post
VeriSilicon's Scalable High-Performance GPGPU-AI Computing IPs Empower Automotive and Edge Server AI Solutions
Provide AI acceleration with high computing density, multi-chip scaling, and 3D-stacked memory integration

SHANGHAI — VeriSilicon today announced the latest advancements in its high-performance and scalable GPGPU-AI computing IPs, which are now empowering next-generation automotive electronics and edge server applications. Combining programmable parallel computing with a dedicated Artificial Intelligence (AI) accelerator, these IPs offer exceptional computing density for demanding AI workloads such as Large Language Model (LLM) inference, multimodal perception, and real-time decision-making in thermally and power-constrained environments.

VeriSilicon's GPGPU-AI computing IPs are based on a high-performance General Purpose Graphics Processing Unit (GPGPU) architecture with an integrated dedicated AI accelerator, delivering outstanding computing capabilities for AI applications. The programmable AI accelerator and sparsity-aware computing engine accelerate transformer-based and matrix-intensive models through advanced scheduling techniques. These IPs also support a broad range of data formats for mixed-precision computing, including INT4/8, FP4/8, BF16, FP16/32/64, and TF32, and are designed with high-bandwidth interfaces for 3D-stacked memory, LPDDR5X, and HBM, as well as PCIe Gen5/Gen6 and CXL. They are also capable of multi-chip and multi-card scale-out expansion, offering system-level scalability for large-scale AI deployments.

VeriSilicon's GPGPU-AI computing IPs provide native support for popular AI frameworks for both training and inference, such as PyTorch, TensorFlow, ONNX, and TVM. They also support a General Purpose Computing Language (GPCL) that is compatible with mainstream GPGPU programming languages and widely used compilers. These capabilities are well aligned with the computing and scalability requirements of today's leading LLMs, including models such as DeepSeek.
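Mixed-precision formats like the INT4/8 entries in the list above trade a small amount of accuracy for large bandwidth and energy savings. A minimal sketch of symmetric INT8 quantization, the simplest form of the idea, is shown below; it is pure illustration, and real toolchains calibrate scales per tensor or per channel from representative data rather than from a single max.

```python
# Minimal sketch of symmetric INT8 quantization: map floats to int8 with one
# shared scale, then reconstruct. Illustrative only; production quantizers
# use calibrated per-tensor or per-channel scales.

def quantize_int8(values):
    """Quantize floats to int8 using a single symmetric scale."""
    scale = max(abs(v) for v in values) / 127.0
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Reconstruct approximate float values from int8 codes."""
    return [v * scale for v in q]

weights = [0.51, -1.27, 0.003, 0.89, -0.44]  # invented example values
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Reconstruction error is bounded by half a quantization step (scale / 2),
# while each stored value shrinks from 32 bits to 8.
```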
'The demand for AI computing on edge servers, both for inference and incremental training, is growing exponentially. This surge requires not only high efficiency but also strong programmability. VeriSilicon's GPGPU-AI computing processors are architected to tightly integrate GPGPU computing with the AI accelerator at fine-grained levels. The advantages of this architecture have already been validated in multiple high-performance AI computing systems,' said Weijin Dai, Chief Strategy Officer, Executive Vice President, and General Manager of the IP Division at VeriSilicon. 'The recent breakthroughs from DeepSeek further amplify the need for maximized AI computing efficiency to address increasingly demanding workloads. Our latest GPGPU-AI computing IPs have been enhanced to efficiently support Mixture-of-Experts (MoE) models and optimize inter-core communication. Through close collaboration with multiple leading AI computing customers, we have extended our architecture to fully leverage the abundant bandwidth offered by 3D-stacked memory technologies. VeriSilicon continues to work hand-in-hand with ecosystem partners to drive real-world mass adoption of these advanced capabilities.'
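The MoE support mentioned in the quote rests on a simple routing idea: for each token, a gate scores N experts and only the top-k actually run, so per-token compute scales with k rather than N (at the cost of the inter-core communication the release also highlights). A toy sketch, with invented experts and gate scores standing in for real trained networks:

```python
# Toy Mixture-of-Experts routing: per token, run only the top-k experts
# selected by gate scores. Experts and scores here are hypothetical.

def route_top_k(gate_scores, k=2):
    """Return indices of the k highest-scoring experts for one token."""
    ranked = sorted(range(len(gate_scores)),
                    key=lambda i: gate_scores[i], reverse=True)
    return sorted(ranked[:k])

def moe_forward(x, experts, gate_scores, k=2):
    """Run only the selected experts and average their outputs."""
    chosen = route_top_k(gate_scores, k)
    outputs = [experts[i](x) for i in chosen]
    return sum(outputs) / len(outputs), chosen

# Four toy "experts": each just scales its input.
experts = [lambda x, m=m: m * x for m in (1, 2, 3, 4)]

y, chosen = moe_forward(10, experts, gate_scores=[0.1, 0.7, 0.05, 0.6], k=2)
# Only experts 1 and 3 execute: (2*10 + 4*10) / 2 = 30
```

With k fixed, adding more experts grows model capacity without growing per-token compute, which is why MoE-aware scheduling and fast inter-core links matter at the hardware level.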