Latest news with #language


The Verge
9 hours ago
- Science
- The Verge
You sound like ChatGPT
Join any Zoom call, walk into any lecture hall, or watch any YouTube video, and listen carefully. Past the content and inside the linguistic patterns, you'll find the creeping uniformity of AI voice. Words like 'prowess' and 'tapestry,' which are favored by ChatGPT, are creeping into our vocabulary, while words the model disfavors, like 'bolster,' 'unearth,' and 'nuance,' have declined in use. Researchers are already documenting shifts in the way we speak and communicate as a result of ChatGPT — and they see this linguistic influence accelerating into something much larger.

In the 18 months after ChatGPT was released, speakers used words like 'meticulous,' 'delve,' 'realm,' and 'adept' up to 51 percent more frequently than in the three years prior, according to researchers at the Max Planck Institute for Human Development, who analyzed close to 280,000 YouTube videos from academic channels. The researchers ruled out other possible change points before ChatGPT's release and confirmed these words align with those the model favors, as established in an earlier study comparing 10,000 human- and AI-edited texts. The speakers don't realize their language is changing. That's exactly the point.

One word in particular stood out to researchers as a kind of linguistic watermark. 'Delve' has become an academic shibboleth, a neon sign in the middle of every conversation flashing ChatGPT was here. 'We internalize this virtual vocabulary into daily communication,' says Hiromu Yakura, the study's lead author and a postdoctoral researcher at the Max Planck Institute for Human Development. But it's not just that we're adopting AI language; it's how we're starting to sound. Even though current studies mostly focus on vocabulary, researchers suspect that AI influence is starting to show up in tone, too, in the form of longer, more structured speech and muted emotional expression.
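The comparison behind that 51 percent figure boils down to measuring how often a set of marker words occurs per token before and after a cutoff date. A minimal sketch of that idea in Python, using made-up mini-corpora and an illustrative word list rather than the study's actual data or methodology:

```python
from collections import Counter

# Words the study identifies as GPT-favored (illustrative subset).
GPT_FAVORED = {"delve", "meticulous", "realm", "adept"}

def relative_change(before: list[str], after: list[str], words: set[str]) -> float:
    """Percent change in per-token frequency of `words` between two corpora."""
    def rate(tokens: list[str]) -> float:
        counts = Counter(t.lower().strip(".,") for t in tokens)
        return sum(counts[w] for w in words) / max(len(tokens), 1)
    r_before, r_after = rate(before), rate(after)
    return (r_after - r_before) / r_before * 100 if r_before else float("inf")

# Two toy "transcripts": one pre-cutoff, one post-cutoff.
before = "researchers delve into data sets to explore patterns in speech today".split()
after = "we delve into the realm with a meticulous and adept approach".split()
print(f"{relative_change(before, after, GPT_FAVORED):+.0f}%")  # +300%
```

The real study additionally had to rule out competing change points and control for topic drift; this sketch only shows the core frequency comparison.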
As Levin Brinkmann, a research scientist at the Max Planck Institute for Human Development and a coauthor of the study, puts it, ''Delve' is only the tip of the iceberg.' AI shows up most obviously in functions like smart replies, autocorrect, and spellcheck. Research out of Cornell looks at our use of smart replies in chats, finding that they increase overall cooperation and feelings of closeness between participants, since users end up selecting more positive emotional language. But if people believed their partner was using AI in the interaction, they rated their partner as less collaborative and more demanding. Crucially, it wasn't actual AI usage that turned them off — it was the suspicion of it. We form perceptions based on language cues, and it's really the language properties that drive those impressions, says Malte Jung, associate professor of information science at Cornell University and a co-author of the study.

This paradox — AI improving communication while fostering suspicion — points to a deeper loss of trust, according to Mor Naaman, professor of information science at Cornell Tech. He has identified three levels of human signals that we've lost in adopting AI into our communication. The first level is basic humanity signals: cues that speak to our authenticity as human beings, like moments of vulnerability or personal rituals, which say to others, 'This is me, I'm human.' The second level consists of attention and effort signals that prove 'I cared enough to write this myself.' And the third level is ability signals, which show our sense of humor, our competence, and our real selves to others. It's the difference between texting someone 'I'm sorry you're upset' versus 'Hey sorry I freaked at dinner, I probably shouldn't have skipped therapy this week.' One sounds flat; the other sounds human.
For Naaman, figuring out how to bring back and elevate these signals is the path forward in AI-mediated communication, because AI is changing not only our language but also what we think. 'Even on dating sites, what does it mean to be funny on your profile or in chat anymore where we know that AI can be funny for you?' Naaman asks. The loss of agency, starting in our speech and moving into our thinking, is what worries him most. 'Instead of articulating our own thoughts, we articulate whatever AI helps us to articulate…we become more persuaded.' Without these signals, Naaman warns, we'll only trust face-to-face communication — not even video calls.

The trust problem compounds when you consider that AI is quietly establishing who gets to sound 'legitimate' in the first place. University of California, Berkeley research found that AI responses often contained stereotypes or inaccurate approximations when prompted to use dialects other than Standard American English. Examples include ChatGPT repeating the prompt back to the non-Standard-American-English user for lack of comprehension and significantly exaggerating the input dialect. One Singaporean English respondent commented, 'the super exaggerated Singlish in one of the responses was slightly cringeworthy.' The study revealed that AI doesn't just prefer Standard American English; it actively flattens other dialects in ways that can demean their speakers. This system perpetuates inaccuracies not only about communities but also about what 'correct' English is.

So the stakes aren't just about preserving linguistic diversity — they're about protecting the imperfections that actually build trust. When everyone around us starts to sound 'correct,' we lose the verbal stumbles, regional idioms, and off-kilter phrases that signal vulnerability, authenticity, and personhood.
We're approaching a splitting point, where AI's impact on how we speak and write moves between the poles of standardization, like templated professional emails or formal presentations, and authentic expression in personal and emotional spaces. Between those poles, three core tensions are at play. Early backlash signals, like academics avoiding 'delve' and people actively trying not to sound like AI, suggest we may self-regulate against homogenization. AI systems themselves will likely become more expressive and personalized over time, potentially reducing the current AI-voice problem. And the deepest risk of all, as Naaman pointed out, is not linguistic uniformity but losing conscious control over our own thinking and expression.

The future isn't predetermined between homogenization and hyperpersonalization; it depends on whether we'll be conscious participants in that change. We're seeing early signs that people will push back when AI influence becomes too obvious, while the technology may evolve to better mirror human diversity rather than flatten it. This isn't a question of whether AI will continue shaping how we speak — it will — but of whether we'll actively choose to preserve space for the verbal quirks and emotional messiness that make communication recognizably, irreplaceably human.
Yahoo
a day ago
- Yahoo
Once You Notice ChatGPT's Weird Way of Talking, You Start to See It Everywhere
It's not written by humans, it's written by AI. It's not useful, it's slop. It's not hard to find, it's everywhere you look. As AI-generated text becomes increasingly ubiquitous on the internet, some distinctive linguistic patterns are starting to emerge — perhaps most of all the pattern of negated statements typified by "it's not X, it's Y." Once you notice it, you start to see it everywhere.

One teacher on Reddit even noticed that certain AI phrase structures are making the jump into spoken language. "Comments and essays (I'm a teacher) are the obvious culprits, but I've straight up noticed the 'that's not X, it's [Y]' structure being said out loud more often than it used to be in video essays and other similar content," they wrote. It's a fascinating observation that makes a striking amount of AI-generated text easily identifiable. It also raises some interesting questions about how AI chatbot tech is informing the way we speak — and how certain stylistic choices, like the em-dash in this very sentence, are coming to be looked down on for resembling the output of a large language model.

"Now I know that linguistic style existed before GPT, and it was common enough, but now I just can't unsee or unhear it," the Reddit user wrote, saying they now "assume AI was involved" when they see it. "Makes me grimace just a bit on the inside," they added. Others quickly chimed in, agreeing and riffing on the phenomenon. "You're not just seeing it — you're saying something," one user wrote in a tongue-in-cheek comment, imitating ChatGPT. "And that's not illusion — that's POWER." "It's almost as if AI use is becoming the preferred way of communication," another user commented. "It's not just frustrating — it's insulting." Beyond a prolific use of em-dashes, which have quickly become a telltale sign of AI-generated text, others pointed out the abundant use of emojis, including green checkboxes and a red X.
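The "it's not X, it's Y" template is regular enough that a crude regex can flag it. A heuristic sketch (the pattern and sample text are my own, not from the Reddit thread), which will inevitably produce false positives on genuinely human contrastive prose:

```python
import re

# Matches the "it's not X, it's Y" negation-contrast template, including the
# em-dash variant ("it's not X — it's Y") and "that's"/"this is" openers.
PATTERN = re.compile(
    r"\b(?:it['’]s|that['’]s|this is)\s+not\s+(?:just\s+)?[^.;—-]{1,60}?"
    r"(?:,|\s*[—–-]+\s*)\s*(?:it['’]s|that['’]s|this is)\b",
    re.IGNORECASE,
)

def flag_sentences(text: str) -> list[str]:
    """Return the sentences in `text` that match the template."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences if PATTERN.search(s)]

sample = ("It's not useful, it's slop. The weather was fine. "
          "That's not frustration — that's insulting.")
print(flag_sentences(sample))  # flags the first and third sentences
```

A detector this simple obviously can't prove AI involvement; like the em-dash itself, the construction long predates ChatGPT, which is exactly the teacher's point.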
It's a particularly pertinent topic now that the majority of students are owning up to using tools like ChatGPT to generate essays or do their homework. Even teachers are using the tech for grading, closing the loop on a trend that experts warn could prove incredibly destructive in the field of education. Tech companies have struggled to come up with trustworthy and effective AI detection tools, more often than not leaving educators to their own devices. And the stakes are as high as they've ever been: the internet is being flooded with AI slop, drowning out text that's actually authored by humans.

AI's oddly stunted use of language isn't surprising. After all, large language models are trained on enormous datasets and use mad-libs-style tricks to calculate the probability of each sequential word. In other words, LLMs are imitators of human speech, attempting to form the sentences most likely to be appreciated by the person writing the prompts, sometimes to an absurd degree. It's an unnerving transition to a different — and consistently error-laden — way of writing that simply doesn't mesh with the messiness of human language.

It's gotten to the point where teachers have become incredibly wary of submitted work that sounds too polished. To many, it's enough to call for messier writing to quell a surge in low-effort AI slop. "GPT is always going to sound polished," one Reddit user offered. "It's a machine that rewards coherence, which is why incoherence has never been more precious." "We need the rough edges," they added. "The voice cracks. The unexpected pause. The half-formed metaphor that never quite lands. Because that's how you can tell a human is still in there, pushing back."

More on AI chatbots: AI Chatbots Are Becoming Even Worse At Summarizing Data
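The "probability of each sequential word" mechanic described above can be illustrated with a toy bigram model: count which word follows which in a corpus, then turn the counts into probabilities. Real LLMs use neural networks over subword tokens and vastly more data, but the training objective, estimating the next token's probability, is the same in spirit. The corpus here is made up:

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus; a real model trains on billions of tokens.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count every (word, next word) pair in the corpus.
follows: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probs(word: str) -> dict[str, float]:
    """Empirical probability distribution over the word that follows `word`."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))  # {'cat': 0.5, 'mat': 0.5}
```

Generating text is then just repeatedly sampling from these distributions, which is also why such a model can only ever recombine patterns it has seen: it imitates, exactly as the article says.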


Washington Post
2 days ago
- Washington Post
How to meet street cats around the world
Before a foreign trip, Jeff Bogle will learn a few key phrases in the country's official tongue. For the Philadelphia writer, one term is as essential as please, thank you and bathroom. The word he can't travel without is 'cat.' 'If I'm planning on asking people where the cats are or where I can find cats, I definitely put that in the arsenal,' said Bogle, who can say 'cat' in French, Spanish, Italian, Arabic, Turkish, Croatian and Japanese.


Daily Mail
2 days ago
- Business
- Daily Mail
We'd like a Biadh Sona (that's a McDonald's Happy Meal in Gaelic)
Campaigners have called on McDonald's to let hungry Scots order their Big Macs in Gaelic. Those visiting the fast food giant's restaurants north of the Border can order in Welsh but not in the language which has been spoken in Scotland for more than 1,500 years. Customers can use the self-service screens to order a Brechdan McChicken, Dogn Mawr o Sglodion Tenau and Coffi Du Rheolaidd – Welsh for a McChicken sandwich, large fries and Americano – but not the equivalent in Gaelic. Now campaigners have called on the firm to add Gaelic to the list of languages offered to customers using the kiosks. Alasdair Laing, a spokesman for Gaelic campaign group Misneachd, said: 'If we can have Welsh in Scotland we should be able to have Gaelic in Wales and vice versa. 'These things are not difficult to do, it's just a case of the company bothering.' The calls come after MSPs this week unanimously backed new powers aimed at encouraging the greater use of Gaelic and Scots. As well as Welsh, the fast food chain's digital kiosks also let customers in Scotland order in Polish, Spanish, French and German. Latest census statistics showed Gaelic is a growing language, with more than 130,000 having some skill in it in 2022, up by just over 43,000 compared to 2011. Mr Laing called on the firm to introduce the option in the same way they had done with Welsh and with Gaeilge in Ireland. The Scottish Government has been pushing Gaelic, with its use expanding in the public sector through use on police cars, trains and ambulances. Mr Laing said: 'It needs to be supported and implemented in the private sector as well. If they can do it for Wales and Welsh, they can do it for Scotland and Gaelic.' A McDonald's spokesman said: 'We constantly review the availability of all languages in our restaurants.'


Geeky Gadgets
3 days ago
- Business
- Geeky Gadgets
ElevenLabs Adds 38 New Languages in Multilingual AI Update
What if your words could transcend borders, cultures, and languages with ease? The latest update to Eleven v3 makes this bold vision a reality by introducing support for a wider array of languages, redefining how we connect in a multilingual world. In an era where inclusivity and accessibility are paramount, this breakthrough isn't just a technical upgrade, it's a statement. By embracing underrepresented languages and refining its ability to handle regional nuances, Eleven v3 positions itself as a leader in bridging linguistic divides. Imagine a world where professionals collaborate seamlessly across continents, or where communities long excluded from the digital conversation finally find their voice. That's the promise of Eleven v3's newest evolution.

ElevenLabs has expanded the company's language capabilities, reshaping the landscape of global communication. From the advanced linguistic processing technology powering this transformation to the platform's commitment to regional adaptability, you'll discover how Eleven v3 is setting a new standard for multilingual tools. But this isn't just about technology, it's about people. How does this update empower individuals, businesses, and communities to thrive in an interconnected world? And what does it mean for the future of inclusive software design? These questions, and their answers, may just change the way you think about language in the digital age.

Advanced Linguistic Processing: The Technology Behind the Update

At the heart of this update lies the integration of advanced linguistic processing technology, which enables Eleven v3 to handle a greater variety of languages with exceptional precision and efficiency. This innovation allows the system to better understand complex grammatical structures, idiomatic expressions, and regional linguistic nuances, ensuring seamless interaction for users from diverse backgrounds.
For instance, whether users are navigating intricate sentence constructions or colloquial phrases, Eleven v3 adapts to deliver accurate comprehension and text generation. This ensures that communication remains fluid, contextually relevant, and tailored to the specific needs of each language. By using these advancements, the platform sets a new benchmark for linguistic adaptability and user-centric design.

Enhancing Accessibility Through Language Diversity

The inclusion of new languages directly addresses the need for greater accessibility, particularly for speakers of underrepresented languages. By bridging communication gaps, Eleven v3 enables individuals and communities to participate more fully in the digital landscape.

- Professionals can now use multilingual tools to collaborate effectively across borders.
- Individuals relying on assistive technologies gain access to content in their native languages, improving usability and engagement.
- Communities previously excluded due to limited language options can now join global conversations and access digital resources.

This update minimizes language barriers, fostering inclusivity and allowing users to connect in meaningful ways. By prioritizing linguistic diversity, Eleven v3 ensures that its tools are accessible to a broader audience, promoting equity in digital communication.

Regional Adaptability: Beyond Translation

Eleven v3's expanded language support goes beyond basic translation by embracing regional adaptability. The platform considers cultural and linguistic variations, tailoring its functionality to meet the unique needs of specific communities. This approach ensures that users experience tools that feel both familiar and relevant, regardless of their location.
Whether you're in Europe, Asia, Africa, or the Americas, Eleven v3 adapts to regional preferences, enhancing the overall user experience. By addressing local nuances and cultural contexts, the platform strengthens its connection with a global audience. This localized approach not only improves usability but also underscores Eleven v3's dedication to creating tools that resonate with diverse populations.

Multilingual Support: A Necessity in a Connected World

In today's interconnected world, multilingual support is no longer a luxury: it is a necessity. Eleven v3's language expansion reflects this reality, offering tools that enable seamless communication across borders and cultural divides.

- Businesses can expand their reach into diverse markets with greater ease, fostering international growth.
- Educational institutions can provide better support for students from various linguistic backgrounds, enhancing learning outcomes.
- Individuals can connect across cultures, promoting understanding and collaboration on a global scale.

By broadening its language capabilities, Eleven v3 positions itself as an indispensable resource for navigating the complexities of a multilingual world. This update not only enhances the platform's utility but also reinforces the importance of language diversity in fostering global connections.

Inclusive Software Design: A Core Principle

This update exemplifies Eleven v3's dedication to inclusive software design. By prioritizing language diversity, the platform ensures its features are accessible to users from all walks of life. The update supports both widely spoken languages and those with limited digital representation, striking a balance that aligns with broader industry efforts to create equitable and user-friendly tools. For example, speakers of languages with limited online resources can now access tools that cater to their needs, while users of more common languages benefit from enhanced precision and adaptability.
This commitment to inclusivity highlights Eleven v3's role in setting a standard for software that serves a truly global audience.

Shaping the Future of Global Communication

The introduction of new languages in Eleven v3 represents a pivotal advancement in linguistic technology and accessibility. By expanding its language repertoire, the platform enhances usability for diverse linguistic groups, promotes regional adaptability, and champions inclusive communication. This update not only broadens Eleven v3's global reach but also underscores the critical role of language diversity in technology. By addressing the needs of a multilingual world, Eleven v3 paves the way for more inclusive and accessible digital experiences, ensuring that no one is left behind in an increasingly interconnected society.

Media Credit: ElevenLabs