DeepL launches Clarify, introducing groundbreaking interactivity for superior business translations

Yahoo | 05-03-2025

"Clarify" brings next-level personalization to DeepL's Language AI platform, addressinggrowing demand among its 200k+ business customers
NEW YORK, March 5, 2025 /PRNewswire/ -- DeepL, a leading global Language AI company, today announced the launch of Clarify, a groundbreaking feature bringing interactivity to AI translations, powered by its next-generation LLM. Clarify transforms the user experience by actively engaging throughout the translation process, serving as an interactive AI companion and language expert that resolves ambiguities and provides greater control over translations. This results in a more personalized, engaging experience that yields higher quality, nuanced translations — driving seamless cross-border communication for DeepL's rapidly expanding network of over 200,000 business customers worldwide.
"AI is increasingly becoming an essential sparring partner in our professional lives, which is what we're introducing with Clarify – a more personalized approach to interacting with the DeepL platform throughout the translation process. Its role is similar to that of a colleague, proactively engaging with users and helping them fine-tune translations to achieve the high quality required for business communication," said Jarek Kutylowski, CEO and Founder, DeepL. "This powerful addition brings an added layer of personalization and accuracy to the DeepL experience, so that our customers can get even greater value out of our platform. Businesses want to trust AI for the right answers, so shifting from a static, one-way experience to more of a dialogue with the technology will become the norm."
AI investment is booming, surpassing $184 billion in 2024, with 72% of business leaders aiming to incorporate AI into their daily operations in 2025 and 92% planning to boost their AI budgets in the next three years. Focus is now moving from excitement over the possibilities of AI to whether deployments can deliver results — driving a shift away from general models towards highly specialized solutions that can be customized and personalized to meet specific business needs. A quarter of global businesses are looking to invest in AI for specialized tasks, like translation, over the coming year. Moreover, human-AI collaboration is often considered key to the successful deployment of AI in businesses, especially for applications that require deep contextual understanding, such as translation, and in high-stakes, highly regulated environments like legal and manufacturing. For example, 51% of in-house legal teams see AI as a key tool for enhancing translations, combining AI solutions with human expertise and oversight.
DeepL's specialized Language AI platform is already trusted for its accurate, context-aware translations — proven to require two to three times fewer edits than Google Translate and ChatGPT-4 to achieve the same quality — and leverages the expertise of thousands of human translators to train its models. The launch of Clarify builds on that foundation, elevating the user experience by introducing interactive human-AI collaboration for the first time. Clarify allows professionals and knowledge workers to engage more deeply throughout the translation process, ensuring their specific context and requirements are met. The feature can also identify nuances that non-native speakers may overlook or find challenging, ensuring that translations achieve the accuracy and clarity essential for business-critical use cases.
How Clarify works
Once text is entered into DeepL Translator, Clarify proactively surfaces ambiguities, prompting users with questions on topics such as multiple meanings, gender references, names, numbers, idioms, cultural references, abbreviations, and specialized terms. After users respond, Clarify adapts the translation to ensure proper syntax, tense, and grammar.
Clarify knows the right questions to ask because it is designed specifically for translation use cases, powered by DeepL's highly specialized LLMs trained by professional language experts. In contrast to other AI tools that rely on user-driven interaction — where users type a prompt, receive a response, and iterate until achieving a desired result — Clarify can save users time by operating in a system-driven manner, intuitively identifying the necessary context and prompting users accordingly.
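DeepL has not published a programmatic interface for Clarify, but the system-driven flow described above is easy to picture in code. The sketch below is purely illustrative: the client object, method names, and response fields are hypothetical stand-ins, not DeepL's actual API.

```python
# Hypothetical sketch of Clarify's system-driven clarification loop.
# The client, method names, and response fields are invented for
# illustration; DeepL has not published a Clarify API.

def translate_with_clarification(client, text, source_lang, target_lang):
    """Translate text, answering the clarifying questions the engine raises."""
    draft = client.translate(text, source_lang=source_lang, target_lang=target_lang)

    # The engine, not the user, decides which ambiguities matter:
    # multiple meanings, gender references, names, idioms, abbreviations.
    for question in draft.clarification_questions:
        answer = input(f"{question.prompt} ")       # e.g. "Is 'doctor' male or female here?"
        draft = draft.refine(question.id, answer)   # re-translate with the added context

    return draft.text
```

The key contrast with prompt-and-iterate chatbots is that the loop is driven by the system's questions rather than by user-authored prompts.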
"This is a tremendous milestone for our company from a technological perspective and is just the beginning of many exciting innovations we have coming to the DeepL Language AI platform to enhance its interactivity," said Sebastian Enderlein, CTO, DeepL. "Our goal is to ensure our fast-growing network of customers – which has now surpassed 200,000 businesses worldwide – are equipped with the highest quality, most secure and cutting-edge solutions to meet their evolving language and communication needs."
The launch of Clarify coincides with a period of significant growth for DeepL, as the company has established itself as the leading Language AI solution provider for businesses worldwide. Over the last year, DeepL's global customer network has grown significantly, now totaling over 200,000 businesses across sectors ranging from manufacturing to legal, retail, healthcare and more — including notable brands like Softbank, Mazda, Harvard Business Publishing, The Ifo Institute, Panasonic Connect and more. This success stems from the company's market-leading innovation, which combines exceptional quality, reliability, and security in its comprehensive Language AI platform, featuring advanced written and spoken translation tools, writing solutions, and the DeepL API.
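As a point of reference for the DeepL API mentioned above, a minimal translation call with the official Python client (pip install deepl) looks roughly like this. Clarify itself is currently a web-interface feature, and the release does not state that it is exposed through this API; the auth key below is a placeholder.

```python
import deepl  # official DeepL client library: pip install deepl

# Placeholder key; real keys come from your DeepL account settings.
translator = deepl.Translator("your-auth-key-here")

result = translator.translate_text(
    "The board approved the figures.",
    source_lang="EN",
    target_lang="DE",
    formality="more",  # prefer formal address, useful for business documents
)
print(result.text)
```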
Clarify is now available for DeepL Pro users worldwide via the DeepL Translator web interface, for English and German translations – with more languages coming in the future. The feature maintains the same enterprise-grade security and compliance standards as the rest of the DeepL Pro experience. For more information and to try out DeepL Pro for your business today, visit https://www.deepl.com/en/pro.
About DeepL
DeepL is on a mission to break down language barriers for businesses everywhere. Over 200,000 businesses and governments and millions of individuals across 228 global markets trust DeepL's Language AI platform for human-like translation in both written and spoken formats, as well as natural, improved writing. Designed with enterprise security in mind, companies around the world leverage DeepL's AI solutions that are specifically tuned for language to transform business communications, expand markets and improve productivity. Founded in 2017 by CEO Jaroslaw (Jarek) Kutylowski, DeepL today has over 1,000 passionate employees and is supported by world-renowned investors including Benchmark, IVP and Index Ventures.
Logo - https://mma.prnewswire.com/media/2447716/DeepL_Logo.jpg
View original content: https://www.prnewswire.com/apac/news-releases/deepl-launches-clarify-introducing-groundbreaking-interactivity-for-superior-business-translations-302389307.html
SOURCE DeepL


Related Articles

Why is AI hallucinating more frequently, and how can we stop it?

Yahoo | an hour ago

The more advanced artificial intelligence (AI) gets, the more it "hallucinates" and provides incorrect and inaccurate information. Research conducted by OpenAI found that its latest and most powerful reasoning models, o3 and o4-mini, hallucinated 33% and 48% of the time, respectively, when tested on OpenAI's PersonQA benchmark. That's more than double the rate of the older o1 model. While o3 delivers more accurate information than its predecessor, it appears to come at the cost of more inaccurate hallucinations.

This raises a concern over the accuracy and reliability of large language models (LLMs) such as AI chatbots, said Eleanor Watson, an Institute of Electrical and Electronics Engineers (IEEE) member and AI ethics engineer at Singularity University. "When a system outputs fabricated information — such as invented facts, citations or events — with the same fluency and coherence it uses for accurate content, it risks misleading users in subtle and consequential ways," Watson told Live Science.

The issue of hallucination highlights the need to carefully assess and supervise the information AI systems produce when using LLMs and reasoning models, experts say.

The crux of a reasoning model is that it can handle complex tasks by breaking them down into individual components and coming up with solutions to tackle them. Rather than churning out answers based on statistical probability, reasoning models come up with strategies to solve a problem, much like how humans think. In order to develop creative, and potentially novel, solutions to problems, AI needs to hallucinate — otherwise it is limited by the rigid data its LLM ingests.

"It's important to note that hallucination is a feature, not a bug, of AI," Sohrob Kazerounian, an AI researcher at Vectra AI, told Live Science. "To paraphrase a colleague of mine, 'Everything an LLM outputs is a hallucination. It's just that some of those hallucinations are true.' If an AI only generated verbatim outputs that it had seen during training, all of AI would reduce to a massive search problem."

"You would only be able to generate computer code that had been written before, find proteins and molecules whose properties had already been studied and described, and answer homework questions that had already been asked. You would not, however, be able to ask the LLM to write the lyrics for a concept album focused on the AI singularity, blending the lyrical stylings of Snoop Dogg and Bob Dylan."

In effect, LLMs and the AI systems they power need to hallucinate in order to create, rather than simply serve up existing information. It is similar, conceptually, to the way that humans dream or imagine scenarios when conjuring new ideas. However, AI hallucinations present a problem when it comes to delivering accurate and correct information, especially if users take the information at face value without any checks or oversight.

"This is especially problematic in domains where decisions depend on factual precision, like medicine, law or finance," Watson said. "While more advanced models may reduce the frequency of obvious factual mistakes, the issue persists in more subtle forms. Over time, confabulation erodes the perception of AI systems as trustworthy instruments and can produce material harms when unverified content is acted upon."

And this problem looks set to be exacerbated as AI advances. "As model capabilities improve, errors often become less overt but more difficult to detect," Watson noted. "Fabricated content is increasingly embedded within plausible narratives and coherent reasoning chains. This introduces a particular risk: users may be unaware that errors are present and may treat outputs as definitive when they are not. The problem shifts from filtering out crude errors to identifying subtle distortions that may only reveal themselves under close scrutiny."

Kazerounian backed this viewpoint up. "Despite the general belief that the problem of AI hallucination can and will get better over time, it appears that the most recent generation of advanced reasoning models may have actually begun to hallucinate more than their simpler counterparts — and there are no agreed-upon explanations for why this is," he said.

The situation is further complicated because it can be very difficult to ascertain how LLMs come up with their answers; a parallel could be drawn here with how we still don't really know, comprehensively, how a human brain works. In a recent essay, Dario Amodei, the CEO of AI company Anthropic, highlighted a lack of understanding of how AIs come up with answers and information. "When a generative AI system does something, like summarize a financial document, we have no idea, at a specific or precise level, why it makes the choices it does — why it chooses certain words over others, or why it occasionally makes a mistake despite usually being accurate," he wrote.

The problems caused by AI hallucinating inaccurate information are already very real, Kazerounian noted. "There is no universal, verifiable way to get an LLM to correctly answer questions being asked about some corpus of data it has access to," he said. "The examples of non-existent hallucinated references, customer-facing chatbots making up company policy, and so on, are now all too common."

Both Kazerounian and Watson told Live Science that, ultimately, AI hallucinations may be difficult to eliminate. But there could be ways to mitigate the issue. Watson suggested that "retrieval-augmented generation," which grounds a model's outputs in curated external knowledge sources, could help ensure that AI-produced information is anchored by verifiable data.

"Another approach involves introducing structure into the model's reasoning. By prompting it to check its own outputs, compare different perspectives, or follow logical steps, scaffolded reasoning frameworks reduce the risk of unconstrained speculation and improve consistency," Watson said, noting this could be aided by training that shapes a model to prioritize accuracy, and by reinforcement training from human or AI evaluators to encourage an LLM to deliver more disciplined, grounded responses.

"Finally, systems can be designed to recognise their own uncertainty. Rather than defaulting to confident answers, models can be taught to flag when they're unsure or to defer to human judgement when appropriate," Watson added. "While these strategies don't eliminate the risk of confabulation entirely, they offer a practical path forward to make AI outputs more reliable."

Given that AI hallucination may be nearly impossible to eliminate, especially in advanced models, Kazerounian concluded that ultimately the information that LLMs produce will need to be treated with the "same skepticism we reserve for human counterparts."

Bosses want you to know AI is coming for your job

Yahoo | an hour ago

SAN FRANCISCO - Top executives at some of the largest American companies have a warning for their workers: Artificial intelligence is a threat to your job.

CEOs from Amazon to IBM, Salesforce and JPMorgan Chase are telling their employees to prepare for disruption as AI either transforms or eliminates their jobs in the future.

AI will 'improve inventory placement, demand forecasting and the efficiency of our robots,' Amazon CEO Andy Jassy said in a Tuesday public memo that predicted his company's corporate workforce will shrink 'in the next few years.' He joins a string of other top executives who have recently sounded the alarm about AI's impact on the workplace.

Economists say there aren't yet strong signs that AI is driving widespread layoffs across industries. But there is evidence that workers across the United States are increasingly using AI in their jobs and that the technology is starting to transform some roles, such as computer programming, marketing and customer service. At the same time, CEOs are under pressure to show they are embracing new technology and getting results - incentivizing attention-grabbing predictions that can create additional uncertainty for workers.

'It's a message to shareholders and board members as much as it is to employees,' Molly Kinder, a Brookings Institution fellow who studies the impact of AI, said of the CEO announcements, noting that when one company makes a bold AI statement, others typically follow. 'You're projecting that you're out in the future, that you're embracing and adopting this so much that the footprint [of your company] will look different.'

Some CEOs fear they could be ousted from their jobs within two years if they don't deliver measurable AI-driven business gains, a Harris Poll survey conducted for software company Dataiku showed. Tech leaders have sounded some of the loudest warnings - in line with their interest in promoting AI's power. At the same time, the industry has been shedding workers over the last few years after big hiring sprees during the height of the coronavirus pandemic and interest rate hikes by the Federal Reserve.

At Amazon, Jassy told the company's workers that AI would in 'the next few years' reduce some corporate roles like customer service representatives and software developers, but also change work for those in the company's warehouses. IBM, which recently announced job cuts, said it replaced a couple hundred human resources workers with AI 'agents' for repetitive tasks such as onboarding and scheduling interviews.

In January, Meta CEO Mark Zuckerberg suggested on Joe Rogan's podcast that the company is building AI that might be able to do what some human workers do by the end of the year. 'We, at Meta as well as the other companies working on this, are going to have an AI that can effectively be sort of a mid-level engineer at your company,' Zuckerberg said. 'Over time we'll get to the point where a lot of the code in our apps … is actually going to be built by AI engineers instead of people engineers.'

Dario Amodei, CEO of Anthropic, maker of the chatbot Claude, boldly predicted last month that half of all white-collar entry-level jobs may be eliminated by AI within five years. Leaders in other sectors have also chimed in. Marianne Lake, JPMorgan's CEO of consumer and community banking, told an investor meeting last month that AI could help the bank cut headcount in operations and account services by 10 percent. BT Group CEO Allison Kirkby suggested that advances in AI would mean deeper cuts at the British telecom company.

Even CEOs who reject the idea of AI replacing humans on a massive scale are warning workers to prepare for disruption. Jensen Huang, CEO of AI chip designer Nvidia, said last month, 'You're not going to lose your job to an AI, but you're going to lose your job to someone who uses AI.' Google CEO Sundar Pichai said at Bloomberg's tech conference this month that AI will help engineers be more productive but that his company would still add more human engineers to its team. Meanwhile, Microsoft is planning more layoffs amid heavy investment in AI, Bloomberg reported this week.

Other tech leaders at Shopify, Duolingo and Box have told workers they are now required to use AI at their jobs, and some will monitor usage as part of performance reviews. Some companies have indicated that AI could slow hiring. Salesforce CEO Marc Benioff recently called Amodei's prognosis 'alarmist' on an earnings call, but on the same call chief operating and financial officer Robin Washington said that an AI agent has helped reduce hiring needs and brought $50 million in savings.

Despite corporate leaders' warnings, economists don't yet see broad signs that AI is driving humans out of work. 'We have little evidence of layoffs so far,' said Columbia Business School professor Laura Veldkamp, whose research explores how companies' use of AI affects the economy. 'What I'd look for are new entrants with an AI-intensive business model, entering and putting the existing firms out of business.'

Some researchers suggest there is evidence AI is playing a role in the drop in openings for some specific jobs, like computer programming, where AI tools that generate code have become standard. Google's Pichai said last year that more than a quarter of new code at the company was initially suggested by AI. Many other workers are increasingly turning to AI tools for everything from creating marketing campaigns to helping with research - with or without company guidance.

The percentage of American employees who use AI daily has doubled in the last year to 8 percent, according to a Gallup poll released this week. Those using it at least a few times a week jumped from 12 percent to 19 percent. Some AI researchers say the poll may not reflect the true number of workers using AI, as many may use it without disclosing it. 'I would suspect the numbers are actually higher,' said Ethan Mollick, co-director of the Wharton School's generative AI Labs, because some workers avoid disclosing AI use, worried they would be seen as less capable or as breaching corporate policy. Only 30 percent of respondents to the Gallup survey said that their company had general guidelines or formal policies for using AI.

OpenAI's ChatGPT, one of the most popular chatbots, has more than 500 million weekly users around the globe, the company has said. It is still unclear what benefits companies are reaping from employees' use of AI, said Arvind Karunakaran, a faculty member of Stanford University's Center for Work, Technology, and Organization. 'Usage does not necessarily translate into value,' he said. 'Is it just increasing productivity in terms of people doing the same task quicker, or are people now doing more high-value tasks as a result?'

Lynda Gratton, a professor at London Business School, said predictions of huge productivity gains from AI remain unproven. 'Right now, the technology companies are predicting there will be a 30% productivity gain. We haven't yet experienced that, and it's not clear if that gain would come from cost reduction … or because humans are more productive.'

The pace of AI adoption is expected to accelerate if more companies use advanced tools such as AI agents and those tools deliver on their promise of automating work, Mollick said. AI labs are hoping to prove their agents are reliable within the next year or so, which would be a bigger disrupter to jobs, he said. While the debate continues over whether AI will eliminate or create jobs, Mollick said 'the truth is probably somewhere in between.' 'A wave of disruption is going to happen,' he said.

Week in Review: Meta reveals its Oakley smart glasses

Yahoo | an hour ago

Welcome back to Week in Review! Lots in store for you today, including Wix's latest acquisition, Meta's new smart glasses, a look at the new Digg, and much more. Have a great weekend!

Smart specs: Meta and Oakley have teamed up on a new pair of smart glasses that can record 3K video, play music, handle calls, and respond to Meta AI prompts. They start at $399 and have double the battery life of Meta's Ray-Bans. A $499 limited-edition Oakley Meta HSTN model will be available starting July 11.

Unicorn watch: Wix bought 6-month-old solo startup Base44 for $80 million in cash after it quickly gained traction as a no-code AI tool for building web apps. Created by a single founder and already profitable, Base44's rapid rise made scooping it up irresistible.

Sand to the rescue: Finland just turned on the world's largest sand battery — yes, actual sand — which stores heat to help power the small town of Pornainen's heating system and cut its carbon emissions. The low-tech, low-cost system is built from discarded fireplace soapstone, is housed in a giant silo, and can store heat for weeks, proving you don't need fancy lithium to fight climate change. You just need a pile of hot rocks.

This is TechCrunch's Week in Review, where we recap the week's biggest news. Want this delivered as a newsletter to your inbox every Saturday? Sign up here.

We're back, baby: VanMoof is back from the brink with the S6, its first e-bike since bankruptcy — and it's sticking to its signature custom design, despite that being what nearly killed the company. Backed by McLaren tech and a beefed-up repair network, the new VanMoof promises smoother rides, smarter features, and (hopefully) fewer stranded cyclists.

Space lasers: Baiju Bhatt, best known for co-founding Robinhood, is now building lasers in space. His new startup, Aetherflux, has raised $60 million to prove that beaming solar power from orbit isn't a fantasy, with a demo satellite set to launch next year and early backing from the Department of Defense.

Oh no: One of SpaceX's Starship rockets exploded during a test in Texas, likely pushing back the vehicle's next launch, which had been tentatively set for June 29. SpaceX says the blast, caused by a pressurized tank failure, didn't injure anyone, but it's yet another setback in a rocky year for the company's ambitious mega-rocket program.

That lossless feeling: Spotify's long-awaited lossless audio tier still hasn't launched, but fresh hints buried in the latest app code suggest that it's under active development and could be closer than ever. But with years of delays and no official timeline, fans might want to temper their excitement until Spotify confirms the rollout.

I can Digg it: Digg's reboot has entered alpha testing with a fresh iOS app aimed at becoming an AI-era Reddit alternative. The app offers a clean, simple design with curated communities, AI-powered article summaries, and gamified features like 'Gems' and daily leaderboards.

We want you: The U.S. Navy is speeding up how it works with startups, cutting red tape and zeroing in on real wins like saved time and better morale. Department of the Navy CTO Justin Fanelli says it's leading with problems, hunting for game-changing tech in AI, GPS, and system upgrades. And with Silicon Valley finally paying attention, the Navy's becoming a go-to partner for innovators ready to shake things up.

Cash ain't king: Mark Zuckerberg is throwing out massive cash — up to $100 million — to lure top AI talent from OpenAI and DeepMind. But OpenAI's Sam Altman says none of his key people have bitten, praising his team's mission over money. Meanwhile, OpenAI keeps pushing ahead with new AI models and even hints at launching an AI-powered social app that could outpace Meta's own shaky attempts.

San Francisco's latest startup saga? Cluely's after-party for YC's AI Startup School blew up on Twitter, drawing 2,000 party crashers, but it became the 'most legendary party that never happened' after getting shut down by cops before a single drink was spilled. Founder Roy Lee's viral marketing may have promised chaos, but the real party's waiting. Maybe once the weather warms up?
