
This Indian GenAI Startup Is Reshaping Dubbing and Lip Sync
Co-founders of NeuralGarage after their SXSW win in March 2025.
Ever watched a film that felt odd because you were watching the dubbed version? The lip movements often do not match what you hear. Indian startup NeuralGarage offers an AI-powered solution to the long-standing problem of "visual discord" in dubbing. In an exclusive interview, Mandar Natekar, co-founder and CEO of NeuralGarage, shares details of the technology, Visual Dub, which fixes lip-sync and facial expressions for dubbed content. It even works when the script changes.
Natekar explains how the technology works by synchronizing actors' lip movements and facial expressions with the dubbed audio, creating an authentic, immersive viewing experience and eliminating the visual awkwardness often found in traditional dubbing.
Earlier this year, the world's first movie with AI-powered visual dubbing - the Swedish sci-fi adventure film Watch the Skies - was released in theatres. Flawless, a Los Angeles-based AI moviemaking firm, worked on the visual dub for the English-dubbed version. NeuralGarage's Visual Dub similarly adjusts facial expressions and lip movements for dubbed versions without any fresh shoots.
Asked how his technology enhances the experience of watching dubbed versions of world cinema, Natekar says, 'We've also developed our own voice cloning technology. Let us say there's a Tom Cruise film that has been dubbed in Hindi. Obviously, Tom Cruise's lines will get dubbed by a Hindi dubbing artist - but he does not sound like Tom Cruise. Apart from ensuring that the lip-sync matches the Hindi version, we can even make the Hindi dubbing artist sound like Tom Cruise.'
'With our lip-sync technology and our voice cloning technology, we can now make dubbed content look and sound absolutely natural, as if it had been shot and filmed in the language of the audio itself.'
SXSW win
In March 2025, NeuralGarage created history by winning the SXSW Pitch Competition, becoming the first Indian startup to bag the award. Visual Dub won in the "Entertainment, Media, Sports & Content" category.
Recalling the moment, Mandar Natekar says, 'SXSW is one of the most prestigious platforms in entertainment worldwide. This is a platform where people in the business join talent from across the world, including Hollywood. You get to meet people from Paramount, Universal, Warner Brothers, business executives and actors, directors… all of them come to the festival. This competition highlights some of the best startups in the world that could contribute to the entertainment industry. Winning meant we were judged by a jury of people in the business and people in the investor ecosystem - very big VCs were present - and that is intense validation for the technology we built, both in terms of the potential use cases and in terms of the potential business valuation.'
'Winning the award gives us a lot of credibility. Being in the US makes us more easily marketable - the entire entertainment industry is located here - and the SXSW award gives us instant recognition. Ever since the award, we've been getting queries from some of the largest studios and broadcast operations in the world on how we can work together.'
Challenges of building NeuralGarage
Recalling his early career days, Natekar says, 'As a co-founder of the company - and I have three other co-founders - there are challenges. I spent more than 22 years in the entertainment business in India before co-founding my own company. The startup world is totally different from corporate life. It is completely DIY - you have to do everything yourself. It has been a very interesting adventure: unlike corporate life, where you work to fulfil somebody else's dream, here you have the chance to turn your own dreams into reality and create your own legacy. There are ups and downs, but they are part and parcel of life. Some days you wake up thinking you'll win the world. Some days you go to bed thinking, "Man, is it all worth it?" But then you wake up in the morning and start again.'
'It is all very interesting. It's been four years now since we started up. And in the last year, since we put our technology out, we've seen massive success and validation. We got selected by AWS and Google for their global accelerators. We were selected by TechCrunch to participate in TechCrunch Battlefield in SF last October. We also won the L'Oreal Big Bang Beauty Tech Innovation Competition, and then came this win at SXSW. Our ambition is to build software in India that can create a global brand. And we are on our way there.'
A few years ago, right in the middle of raising funds for his startup, Mandar Natekar faced major medical and personal hurdles. He declines to revisit that time and dwell on the hardships, but agrees to share what he learnt from the period of struggle.
'I'll tell you my biggest learning - in life, there are three very strong pillars for any successful person. The first one is obviously your own determination and thought process, the second is family, and the third pillar is health. You have to ensure that all of these pillars stand on very, very strong foundations. You have to nurture all of them. If anything goes wrong with any one of these three, it can cause massive upheaval in your life.'
Suggestions for aspiring tech startup founders
'I tell people to always chase dreams. If you think you have a compelling idea that can change the world, work on it. There is no better time to start anything you want to do than now. People generally procrastinate - "I'll build this after five years" - but those plans don't work. If you are passionate about something and have a compelling idea you want to bring to the world, do it now. There is no better time than this moment. If you base your decision-making on goalposts, you will always be calculating,' Natekar signs off with his advice for aspiring tech startup founders everywhere.
(This conversation has been edited and condensed for clarity.)
Related Articles


Bloomberg
Jony Ive Deal Removed from OpenAI Site Over Trademark Suit
Marketing materials and video related to a blockbuster partnership between former Apple Inc. designer Jony Ive and OpenAI Inc. were removed from the web due to a trademark dispute. Social media users noticed Sunday that a video and website hosted by OpenAI that announced the artificial intelligence company's $6.5 billion acquisition of Ive's secretive AI hardware startup IO Products were no longer online, prompting speculation about a hiccup in the agreement.


TechCrunch
Tesla launches robotaxi rides in Austin with big promises and unanswered questions
Tesla has started giving rides in driverless Model Y SUVs in Austin, a decade after CEO Elon Musk began making - and breaking - myriad promises about his company's ability to launch such a service. The rollout will be the first big test of Musk's belief that it's possible to safely deploy fully autonomous vehicles using just cameras and end-to-end AI - an approach that differs from other players in the space like Waymo.

On Sunday, numerous videos shared on social media, as well as sources in the city, confirmed what Musk has been teasing for months: the rides are finally happening, at a surely coincidental flat fee of $4.20 per ride. Tesla sent early-access invitations in the past week to vetted customers, who were able to download and use the new robotaxi app on Sunday to hail rides. It's unclear how many people have received this invitation, but posts on Musk's social media platform X show that many of them went to Tesla's loudest online supporters.

The invitations, along with a new robotaxi information page published on Tesla's website on June 22, confirm the service will operate every day from 6:00 a.m. to 12:00 a.m. but 'may be limited or unavailable in the event of inclement weather.' And, notably, a Tesla employee will be sitting in the right front passenger seat as a 'safety monitor.' The robotaxi information page also includes instructions on downloading the app, how to report a lost item, and general rules for riders. It still glosses over the kind of specifics that Waymo - the Alphabet-owned AV company that operates commercial robotaxis in Phoenix, Los Angeles, San Francisco, and Austin - has historically provided.

The robotaxi service will be small to start, according to Musk. The initial fleet will be about 10 or so 2025 Model Y SUVs operating in a narrowly defined area of South Austin.
That's in line with a first-hand account by Ed Niedermeyer, author of 'Ludicrous: The Unvarnished Story of Tesla Motors,' who is in Austin to monitor the robotaxi rollout. (Niedermeyer is a co-host of The Autonocast with TechCrunch editor Kirsten Korosec.)

Niedermeyer found what appears to be a Tesla robotaxi depot - a nondescript parking lot dotted with trees near Oltorf Street in South Austin. The day before the launch, he spotted several driverless Model Ys - always with an employee behind the steering wheel - entering and exiting the parking lot. Groups of other Tesla Model Y vehicles, most with manufacturer plates, were also parked there. This morning, he spotted the branded Tesla Model Y robotaxis, this time with the employee in the front passenger seat, leaving the holding area. He observed one of the branded robotaxis, which had not yet picked up a rider, suddenly hitting its brakes two separate times - once in the middle of an intersection. It's unclear why the vehicle behaved that way. However, in a video that TechCrunch has viewed, both instances occurred as the Tesla passed police vehicles parked in lots adjacent to the roadway.

Information gaps

Leading up to the launch, Musk shared dribs and drabs about the Tesla robotaxi launch in a few interviews and posts on X. Even now, nearly all of the information about the robotaxi launch has been provided by the company's biggest supporters.
In fact, Tesla has actively tried to suppress information about the robotaxi service. Tesla tried to block TechCrunch's public records request with the Texas Department of Transportation (TxDOT). The company has also tried to block the city of Austin from fulfilling a records request by Reuters, according to the news service. 'Tesla seeks to be as transparent as possible, however, as explained further below, some of the requested information cannot be released because it is confidential information, trade secrets, and/or business information exchanged with the TxDOT in conjunction with conducting business with TxDOT,' Taylor White, senior counsel on infrastructure for Tesla, wrote in a letter to the Texas Attorney General's office in April.

One of the more interesting rollout strategies is the company's use of a human 'safety monitor.' It's unclear what role these safety monitors will play and how much control, if any, they will have. These employees are likely not meant to intervene if the software is about to do something wrong, but they may have access to some sort of kill switch that can stop the car if that does happen. Historically, autonomous vehicle companies like Waymo and the former Cruise tested their self-driving technology with a human safety operator behind the wheel and a second engineer in the front passenger seat. Eventually, that might be reduced to one person sitting in the passenger seat before removing them altogether. This practice was traditionally reserved for the testing phase, not commercial operations.

Tesla is not using the futuristic vehicles, dubbed Cybercabs, that it revealed on October 10, 2024. Instead, the 2025 Tesla Model Y vehicles are equipped with what Musk describes as a new, 'unsupervised' version of Tesla's Full Self-Driving software. Tesla will not be using its in-cabin camera during rides by default; the company says it will only be used if a rider requests support or in the case of an emergency.
It will use the camera after a ride ends to 'confirm Robotaxi's readiness for its next trip.' Tesla is encouraging early-access riders to take photos and video of their experiences, although it says it 'may suspend or terminate Robotaxi access' if riders violate its rules, including if they 'disseminate content on a social media platform or similar medium depicting a violation of these Rules or misuse of the Robotaxi.' (The rules also require riders to agree not to smoke, vape, drink alcohol, do drugs, or use the robotaxi in connection with a crime.)

Musk and other Tesla executives praised the milestone on X, with Ashok Elluswamy, the head of the company's self-driving team, posting a photo of the 'Robotaxi launch party' from an undisclosed location. 'Super congratulations to the @Tesla_AI software & chip design teams on a successful @Robotaxi launch!! Culmination of a decade of hard work,' Musk wrote.

But at least one rider on Sunday reported an experience where Tesla's remote support team had to step in. It's not immediately clear what happened during that ride, but the same rider later said the ride was very smooth.
Yahoo
Your AI use could have a hidden environmental cost
Whether it's answering work emails or drafting wedding vows, generative artificial intelligence tools have become a trusty copilot in many people's lives. But a growing body of research shows that for every problem AI solves, hidden environmental costs are racking up.

Each word in an AI prompt is broken down into clusters of numbers called 'token IDs' and sent to massive data centers - some larger than football fields - powered by coal or natural gas plants. There, stacks of large computers generate responses through dozens of rapid calculations. The whole process can take up to 10 times more energy than a regular Google search, according to a frequently cited estimate by the Electric Power Research Institute.

So, for each prompt you give AI, what's the damage? To find out, researchers in Germany tested 14 large language model (LLM) AI systems by asking them both free-response and multiple-choice questions. Complex questions produced up to six times more carbon dioxide emissions than questions with concise answers. In addition, 'smarter' LLMs with more reasoning abilities produced up to 50 times more carbon emissions than simpler systems answering the same question, the study reported.

'This shows us the tradeoff between energy consumption and the accuracy of model performance,' said Maximilian Dauner, a doctoral student at Hochschule München University of Applied Sciences and first author of the Frontiers in Communication study published Wednesday.

Typically, these smarter, more energy-intensive LLMs have tens of billions more parameters - the internal values used for processing token IDs - than smaller, more concise models. 'You can think of it like a neural network in the brain. The more neuron connections, the more thinking you can do to answer a question,' Dauner said.
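The 'token ID' step described above can be sketched with a toy vocabulary. This is a deliberately simplified stand-in, with an invented word list purely for demonstration; real tokenizers split text into subwords and use vocabularies of tens of thousands of entries:

```python
# Toy illustration (NOT a real tokenizer): a prompt is mapped to numeric
# "token IDs" before being sent to a model. This tiny vocabulary is
# invented for demonstration only.
vocab = {"how": 0, "does": 1, "ai": 2, "use": 3, "energy": 4, "<unk>": 5}

def encode(prompt: str) -> list[int]:
    """Map each lowercase word to its token ID; unknown words become <unk>."""
    return [vocab.get(word, vocab["<unk>"]) for word in prompt.lower().split()]

print(encode("How does AI use energy"))  # [0, 1, 2, 3, 4]
```

The model never sees the words themselves, only these numbers; every extra token in a prompt or a reply is another round of computation in the data center.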
Complex questions require more energy in part because of the lengthy explanations many AI models are trained to provide, Dauner said. If you ask an AI chatbot to solve an algebra question, it may walk you through the steps it took to find the answer, he said. 'AI expends a lot of energy being polite, especially if the user is polite, saying "please" and "thank you,"' Dauner explained. 'But this just makes their responses even longer, expending more energy to generate each word.'

For this reason, Dauner suggests users be more straightforward when communicating with AI models: specify the length of the answer you want and limit it to one or two sentences, or say you don't need an explanation at all.

Most important, Dauner's study highlights that not all AI models are created equal, said Sasha Luccioni, the climate lead at AI company Hugging Face, in an email. Users looking to reduce their carbon footprint can be more intentional about which model they choose for which task. 'Task-specific models are often much smaller and more efficient, and just as good at any context-specific task,' Luccioni explained.

If you are a software engineer who solves complex coding problems every day, an AI model suited for coding may be necessary. But for the average high school student who wants help with homework, relying on powerful AI tools is like using a nuclear-powered digital calculator. Even within the same AI company, different model offerings can vary in their reasoning power, so research which capabilities best suit your needs, Dauner said. When possible, Luccioni recommends going back to basic sources - online encyclopedias and phone calculators - to accomplish simple tasks.

Putting a number on the environmental impact of AI has proved challenging.
The study noted that energy consumption can vary based on the user's proximity to local energy grids and the hardware used to run AI models, which is partly why the researchers chose to represent carbon emissions within a range, Dauner said. Furthermore, many AI companies don't share information about their energy consumption - or details like server size or optimization techniques that could help researchers estimate it - said Shaolei Ren, an associate professor of electrical and computer engineering at the University of California, Riverside, who studies AI's water consumption.

'You can't really say AI consumes this much energy or water on average - that's just not meaningful. We need to look at each individual model and then (examine what it uses) for each task,' Ren said.

One way AI companies could be more transparent is by disclosing the amount of carbon emissions associated with each prompt, Dauner suggested. 'Generally, if people were more informed about the average (environmental) cost of generating a response, people would maybe start thinking, "Is it really necessary to turn myself into an action figure just because I'm bored?" Or "Do I have to tell ChatGPT jokes because I have nothing to do?"' Dauner said.

Additionally, as more companies push to add generative AI tools to their systems, people may not have much choice in how or when they use the technology, Luccioni said. 'We don't need generative AI in web search. Nobody asked for AI chatbots in (messaging apps) or on social media,' Luccioni said. 'This race to stuff them into every single existing technology is truly infuriating, since it comes with real consequences to our planet.'

With less available information about AI's resource usage, consumers have less choice, Ren said, adding that regulatory pressure for more transparency is unlikely to come to the United States anytime soon. Instead, the best hope for more energy-efficient AI may lie in the cost benefits of using less energy.
'Overall, I'm still positive about (the future). There are many software engineers working hard to improve resource efficiency,' Ren said. 'Other industries consume a lot of energy too, but it's not a reason to suggest AI's environmental impact is not a problem. We should definitely pay attention.'