
Cornelis Networks releases tech to speed up AI datacenter connections
FILE PHOTO: An AI (Artificial Intelligence) sign is seen at the World Artificial Intelligence Conference (WAIC) in Shanghai, China July 6, 2023. REUTERS/Aly Song/File Photo
SAN FRANCISCO (Reuters) - Cornelis Networks on Tuesday released a suite of networking hardware and software aimed at linking together up to half a million artificial intelligence chips.
Cornelis, which was spun out of Intel in 2020 and is still backed by the chipmaker's venture capital fund, is targeting a problem that has bedeviled AI data centers for much of the past decade: AI computing chips are very fast, but when many of them are strung together to work on big computing problems, the network links between them are not fast enough to keep the chips supplied with data.
Nvidia took aim at that problem with its $6.9 billion purchase of networking chip firm Mellanox in 2020. Mellanox made networking gear based on InfiniBand, a protocol created in the 1990s specifically for supercomputers.
Networking chip giants such as Broadcom and Cisco Systems are working to solve the same set of technical issues with Ethernet technology, which has connected most of the internet since the 1980s and is an open technology standard.
The Cornelis "CN5000" networking chips use a new network technology created by Cornelis called OmniPath. The chips will ship to initial customers, such as the U.S. Department of Energy, in the third quarter of this year, Cornelis CEO Lisa Spelman told Reuters on May 30.
Although Cornelis has backing from Intel, its chips are designed to work with AI computing chips from Nvidia, Advanced Micro Devices or any other maker using open-source software, Spelman said. She said that the next version of Cornelis chips in 2026 will also be compatible with Ethernet networks, aiming to alleviate any customer concerns that buying Cornelis chips would leave a data center locked into its technology.
"There's 45-year-old architecture and a 25-year-old architecture working to solve these problems," Spelman said. "We like to offer a new way and a new path for customers that delivers you both the (computing chip) performance and excellent economic performance as well."
(Reporting by Stephen Nellis in San Francisco; Editing by Leslie Adler)
Related Articles


Free Malaysia Today
How AI is becoming a secret weapon for workers
Companies should completely rethink their integration of AI, rather than turning a blind eye to those employees who use the technology covertly. (Envato Elements pic)

PARIS: Artificial intelligence is fast becoming part of everyday working life, promising productivity gains and a transformation of working methods. Between enthusiasm and caution, companies are trying to harness this tech and integrate it into their processes. But behind the official rhetoric, a very different reality is emerging: many employees are adopting these tools discreetly, out of sight of their managers.

A recent survey conducted by software company Ivanti shows the extent of this under-the-radar adoption of AI, revealing that one-third of employees surveyed use AI tools without their supervisors' knowledge.

There are several distinct reasons for this covert strategy. For 36% of them, it is primarily a matter of gaining a 'secret advantage' over their colleagues, while 30% of respondents fear that revealing their dependence on this technology could cost them their jobs. This is understandable, considering that 29% of employees are concerned that AI will diminish the value of their skills in the eyes of their employer.

The figures reveal an explosion in clandestine use: 42% of office workers say they use generative AI tools such as ChatGPT at work. Among IT professionals, this proportion reaches an impressive 74%. And close to half of office workers use AI tools not provided by their company.

Underestimating the risks

This covert use exposes organisations to considerable risks: unauthorised platforms do not always comply with security standards or corporate data-protection requirements. From confidential data and business strategies to intellectual property, anything and everything can potentially be fed into AI tools unchecked.

'It is crucial for employers to assume this is happening, regardless of any restrictions, and to assess the use of AI to ensure it complies with their security and governance standards,' stressed Brooke Johnson, chief legal counsel at Ivanti.

Employers should encourage open dialogue to foster transparency and collaboration, ensuring that the benefits of AI are harnessed safely and effectively. (Envato Elements pic)

The survey also reveals a troubling paradox: while 52% of office workers believe that working more efficiently simply means doing more work, many prefer to keep their productivity gains to themselves. This mistrust is accompanied by an AI-fuelled impostor syndrome, with 27% of users saying they don't want their abilities to be questioned.

This situation highlights a huge gap between management and employees: although 44% of professionals surveyed say their company has invested in AI, they simultaneously complain about a lack of training and skills to use these technologies effectively. This disconnect betrays a poorly orchestrated technological transformation.

In the face of this silent revolution, Johnson advocates a proactive approach: 'Organisations should implement clear policies and guidelines for the use of AI tools, along with regular training sessions to educate employees on the potential security and ethical implications.'

This survey suggests that companies should completely rethink their integration of AI, rather than turning a blind eye to this legion of secret users. The stakes go beyond mere operational optimisation: the most successful organisations will need to balance technological use with the enhancement of human potential.
By encouraging open dialogue, employers can foster transparency and collaboration, ensuring that the benefits of AI are harnessed safely and effectively. Ignoring this silent revolution runs the risk of deepening mutual distrust between management and employees, to everyone's detriment.


The Star
Soccer-From fallen giants to giant-killers: Botafogo's remarkable revival
Soccer Football - FIFA Club World Cup - Group B - Paris St Germain v Botafogo - Rose Bowl Stadium, Pasadena, California, U.S. - June 19, 2025. Botafogo players celebrate after the match. IMAGN IMAGES via Reuters/Kirby Lee

(Reuters) - Once Brazilian football royalty, Botafogo had languished for decades as a debt-ridden sleeping giant before they toppled Paris St Germain at the Club World Cup to cap a resurrection tale three years in the making.

When American entrepreneur John Textor acquired the club in 2022, fresh from their promotion back to Brazil's first division, he took on a training ground so decrepit that then-coach Luis Castro dismissed it as "good for parking cars," alongside crushing liabilities exceeding one billion reais ($181.39 million).

Botafogo were a storied but shattered institution. The club that once nurtured Brazilian greats - Garrincha, Zagallo, Jairzinho and Nilton Santos, architects of three World Cup triumphs - was drowning in debt, having endured the humiliation of relegation three times in just over a decade.

On Thursday, they outplayed European champions Paris St Germain to win 1-0 in the Club World Cup's most eye-catching upset, propelling themselves to the top of the tournament's "group of death" and to the verge of the knockout stage.

Their squad, assembled through shrewd bargain-hunting in football's forgotten corners, now faces Diego Simeone's Atletico Madrid in Los Angeles on Monday, sitting comfortably, knowing even a two-goal defeat would still secure their passage to the round of 16.

The victory over PSG vindicated Textor's vision, outlined in a Reuters interview three years prior, of "beating the system" through astute scouting in under-explored talent pools.

The architects of Thursday's victory exemplified this approach. Match-winner Igor Jesus arrived as a free agent after three anonymous years in the UAE and was transformed into a Brazil international. Argentine defender Alexander Barboza, who neutralised PSG's vaunted attack, was plucked from Paraguay's Club Libertad for nothing. Captain Marlon Freitas came from second-division Atletico Goianiense, while experienced European campaigners Alex Telles and Allan were revitalised after spells in Middle Eastern leagues. Gregore, Jefferson Savarino, John and Cuiabano were all signed for under two million euros ($2.30 million) each.

"The goal is to be sustainably competitive every year," Botafogo CEO Thairo Arruda told Reuters. "With a top six payroll, we produce like a top three."

The transformation extends far beyond the pitch. Revenues have soared from 140 million reais in 2022 to projected earnings exceeding 1.1 billion by 2025, while liabilities have been slashed by 40%. Textor's Eagle Football empire also encompasses stakes in Ligue 1's Olympique Lyonnais and Premier League side Crystal Palace.

Botafogo's renaissance - crowned by last year's domestic and continental double - has breathed new life into a club motto once heavy with self-pity: "There are things that only happen to Botafogo." After outclassing Europe's elite, those words now carry an altogether sweeter resonance.

($1 = 5.5129 reais)

($1 = 0.8702 euros)

(Reporting by Fernando Kallas; Editing by Toby Davis)

Malay Mail
Digital deception: Misinformation war escalates as AI deepfakes, fake war footage flood social media amid Iran-Israel conflict
WASHINGTON, June 23 — AI deepfakes, video game footage passed off as real combat, and chatbot-generated falsehoods — such tech-enabled misinformation is distorting the Israel-Iran conflict, fuelling a war of narratives across social media.

The information warfare unfolding alongside ground combat — sparked by Israel's strikes on Iran's nuclear facilities and military leadership — underscores a digital crisis in the age of rapidly advancing AI tools that have blurred the lines between truth and fabrication.

The surge in wartime misinformation has exposed an urgent need for stronger detection tools, experts say, as major tech platforms have largely weakened safeguards by scaling back content moderation and reducing reliance on human fact-checkers.

After Iran struck Israel with barrages of missiles last week, AI-generated videos falsely claimed to show damage inflicted on Tel Aviv and Ben Gurion Airport. The videos were widely shared across Facebook, Instagram and X. Using a reverse image search, AFP's fact-checkers found that the clips were originally posted by a TikTok account that produces AI-generated content.

There has been a 'surge in generative AI misinformation, specifically related to the Iran-Israel conflict,' Ken Jon Miyachi, founder of the Austin-based firm BitMindAI, told AFP. 'These tools are being leveraged to manipulate public perception, often amplifying divisive or misleading narratives with unprecedented scale and sophistication.'

'Photo-realism'

GetReal Security, a US company focused on detecting manipulated media including AI deepfakes, also identified a wave of fabricated videos related to the Israel-Iran conflict. The company linked the visually compelling videos — depicting apocalyptic scenes of war-damaged Israeli aircraft and buildings as well as Iranian missiles mounted on a trailer — to Google's Veo 3 AI generator, known for hyper-realistic visuals.

The Veo watermark is visible at the bottom of an online video posted by the news outlet Tehran Times, which claims to show 'the moment an Iranian missile' struck Tel Aviv.

'It is no surprise that as generative-AI tools continue to improve in photo-realism, they are being misused to spread misinformation and sow confusion,' said Hany Farid, the co-founder of GetReal Security and a professor at the University of California, Berkeley.

Farid offered one tip to spot such deepfakes: the Veo 3 videos were normally eight seconds in length, or a combination of clips of a similar duration. 'This eight-second limit obviously doesn't prove a video is fake, but should be a good reason to give you pause and fact-check before you re-share,' he said.

The falsehoods are not confined to social media. Disinformation watchdog NewsGuard has identified 51 websites that have advanced more than a dozen false claims — ranging from AI-generated photos purporting to show mass destruction in Tel Aviv to fabricated reports of Iran capturing Israeli pilots.

Sources spreading these false narratives include Iranian military-linked Telegram channels and state media sources affiliated with the Islamic Republic of Iran Broadcasting (IRIB), sanctioned by the US Treasury Department, NewsGuard said.

Major tech platforms have largely weakened safeguards by scaling back content moderation and reducing reliance on human fact-checkers. — Picture by Hari Anggara

'Control the narrative'

'We're seeing a flood of false claims and ordinary Iranians appear to be the core targeted audience,' McKenzie Sadeghi, a researcher with NewsGuard, told AFP.

Sadeghi described Iranian citizens as 'trapped in a sealed information environment,' where state media outlets dominate in a chaotic attempt to 'control the narrative.'

Iran itself claimed to be a victim of tech manipulation, with local media reporting that Israel briefly hacked a state television broadcast, airing footage of women's protests and urging people to take to the streets.

Adding to the information chaos were online clips lifted from war-themed video games. AFP's fact-checkers identified one such clip posted on X, which falsely claimed to show an Israeli jet being shot down by Iran. The footage bore striking similarities to the military simulation game Arma 3.

Israel's military has rejected Iranian media reports claiming its fighter jets were downed over Iran as 'fake news.'

Chatbots such as xAI's Grok, which online users are increasingly turning to for instant fact-checking, falsely identified some of the manipulated visuals as real, researchers said.

'This highlights a broader crisis in today's online information landscape: the erosion of trust in digital content,' BitMindAI's Miyachi said. 'There is an urgent need for better detection tools, media literacy, and platform accountability to safeguard the integrity of public discourse.' — AFP