AI Isn't Fully Automated — It Runs on Hidden Human Labor

Welcome to Tech Times' AI EXPLAINED, where we look at the tech of today and tomorrow.
Imagine this scenario, one that's increasingly common: You have a voice AI listen to your meeting at work, you get a summary and analysis of that meeting, and you assume AI did all the work.
In reality, though, none of these tools works alone. PLAUD AI, Rabbit, ChatGPT, and more all rely on a layer of human labor that most of us never hear about. Behind the clean chat interface on your phone or computer are data labelers who tag speech samples, contractors who rate AI answers, and testers feeding the system more examples to learn from. Some are highly trained, while others handle the more tedious parts of the work. Either way, your AI isn't just automated: it's a complex blend of code and human effort. Without that effort, it wouldn't work at all.

The Invisible Workforce Behind Everyday AI
AI tools don't just appear out of thin air, of course. They learn similarly to the way we do: by example. That learning process often relies on what's called human-in-the-loop (HITL) training.
As data-annotation company Encord says in a blog post:
"In machine learning and computer vision training, Human-in-the-Loop (HITL) is a concept whereby humans play an interactive and iterative role in a model's development. To create and deploy most machine learning models, humans are needed to curate and annotate the data before it is fed back to the AI. The interaction is key for the model to learn and function successfully," the company wrote.
Annotators, data scientists, and data operations teams play a significant role in collecting, supplying, and annotating the necessary data, the post continued. The amount of human input varies with how complex the data is and how much human interaction the finished model is expected to handle.
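To make that loop concrete, here is a minimal sketch in Python of how one human-in-the-loop labeling pass might work. Everything here is illustrative: the function names, the confidence threshold, and the data are all invented, and real pipelines run on dedicated annotation tooling.

# A minimal, hypothetical sketch of one human-in-the-loop labeling pass.
# All names and values here are invented for illustration.

labeled_data = []  # grows into the model's training set

def model_predict(sample):
    # Stand-in for the model's best guess at a label.
    return {"label": "speech", "confidence": 0.62}

def ask_human(sample, guess):
    # Stand-in for an annotation UI where a person confirms or fixes the guess.
    return "overlapping speech"  # imagine the annotator's correction

for sample in ["clip_001.wav", "clip_002.wav"]:
    guess = model_predict(sample)
    if guess["confidence"] < 0.90:
        # Low-confidence guesses get routed to a human reviewer.
        labeled_data.append((sample, ask_human(sample, guess)))
    else:
        labeled_data.append((sample, guess["label"]))

# The corrected examples are fed back in to retrain the model, and the
# cycle repeats. That iteration is the "loop" in human-in-the-loop.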
Of course, as with many business activities, there are ethical concerns. Many content moderators complain of low pay and traumatic content to review. There can also be a language bias in AI training, something researchers and companies will likely need to solve as AI becomes more complex and global.

Case Study: PLAUD AI
Various ways users wear the PLAUD Note device—on a wristband, clipped to a lapel, or hanging as a pendant—highlighting its flexibility for hands-free voice capture throughout the day. PLAUD AI
PLAUD AI's voice assistant offers an easy, one-button experience. Just press a button, speak, and then let it handle the rest. As the company said on its website, the voice assistant lets you "turn voices and conversations into actionable insights."
Behind the scenes, this "magic" starts with pre-trained automatic speech recognition (ASR) models like Whisper or custom variants, which have been refined with actual user recordings. The models not only have to transcribe words but also understand structure, detect speakers, and interpret tone of voice. The training involves hours and hours of labeled audio and feedback from real conversations. Every time you see an improvement in the output, it's likely thanks to thousands of micro-adjustments based on user corrections or behind-the-scenes testing.
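The transcription step itself is the most automated part. As a rough illustration, the open-source Whisper model can be run in a few lines of Python; this sketch assumes the openai-whisper package is installed and that a local recording called meeting.wav exists.

import whisper  # pip install openai-whisper

model = whisper.load_model("base")        # a small pretrained checkpoint
result = model.transcribe("meeting.wav")  # returns text plus segment metadata
print(result["text"])

Everything beyond that raw transcript, such as speaker labels, summaries, and "actionable insights," is where the human-tuned layers come in.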
According to reviewers, PLAUD AI leverages OpenAI's Whisper speech-to-text model running on its own servers. There are likely many people managing PLAUD AI's version of the model for its products, too. Every neat paragraph that comes out of the voice assistant likely reflects countless iterations of fine-tuning and A/B testing by prompt engineers and quality reviewers. That's how you get your results without having to deal with all that back-end work yourself.

Case Study 2: ChatGPT and Otter.ai
The ChatGPT logo represents one of the most widely used AI assistants—powered not just by models, but by human trainers, raters, and user feedback. ilgmyzin/Unsplash
When you use ChatGPT, it can feel like an all-knowing assistant with a polished tone and helpful answers. Those are built, of course, on a foundation of human work. OpenAI used reinforcement learning from human feedback, or RLHF, to train its models. That means actual humans rated responses so the system could learn which answers were the most helpful and accurate, not to mention the most polite.
"On prompts submitted by our customers to the API, our labelers provide demonstrations of the desired model behavior and rank several outputs from our models," wrote the company in a blog post . "We then use(d) this data to fine-tune GPT‑3."
Otter.ai, a popular online voice transcription service, also relies on human work to improve its output. It doesn't use RLHF like OpenAI does, but it does include feedback tools for users to note inaccurate transcriptions, which the company then uses to fine-tune its own models.
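One plausible way to use those corrections, sketched below, is to measure how badly the automatic transcript missed and prioritize the worst cases for retraining. The record here is invented, but the word error rate (WER) arithmetic is the standard metric for transcription quality.

# Hypothetical sketch: turning user-submitted corrections into
# fine-tuning data, prioritized by word error rate (WER).

def word_error_rate(reference: str, hypothesis: str) -> float:
    # Standard WER: word-level edit distance divided by reference length.
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Invented record: the machine transcript vs. the user's correction.
auto = "the court early report is due friday"
fixed = "the quarterly report is due friday"
print(f"WER: {word_error_rate(fixed, auto):.2f}")  # high WER = a useful training pair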
The company also uses synthetic data (generated pairs of audio and text) to help train its models, but without user corrections, these synthetic transcripts can struggle with accents, crosstalk, or industry jargon: problems only humans can fix.

Case Study 3: Rabbit R1's Big Promise Still Needs Human Help
The Rabbit R1 made a splash with its debut: a palm-sized orange gadget promising to run your apps for you, no screen-tapping required. Just talk to it, and it's supposed to handle things like ordering takeout or cueing up a playlist. At least, that's the idea.
Rabbit says it built the device around something called a Large Action Model (LAM), which is supposed to "learn" how apps work by watching people use them. What that means in practice is that humans record themselves doing things like opening apps, clicking through menus, or completing tasks, and those recordings become training data. The R1 didn't figure all this out on its own; it was shown how to do it, over and over.
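Rabbit hasn't published what those recordings look like internally, but conceptually each demonstration boils down to an ordered trace of UI events, something like this invented example.

# Invented sketch of a demonstration trace for a "large action model":
# a human completes a task once, and the recorded steps become one
# training example mapping an intent to a sequence of UI actions.
demonstration = {
    "intent": "order my usual coffee",
    "steps": [
        {"action": "open_app", "target": "coffee_app"},
        {"action": "tap",      "target": "reorder_button"},
        {"action": "tap",      "target": "confirm_payment"},
        {"action": "verify",   "target": "order_confirmation_screen"},
    ],
}

# Thousands of traces like this, recorded by real people, are what the
# model generalizes from; none of the steps were discovered autonomously.
for step in demonstration["steps"]:
    print(step["action"], "->", step["target"])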
Since launch, people testing the R1 have noticed that it doesn't always feel as fluid or "intelligent" as expected. Some features seem more like pre-programmed flows than adaptive tools. In short, it's not magic—it's a system that still leans on human-made examples, feedback, and fixes to keep improving.
That's the pattern with almost every AI assistant right now: what feels effortless in the moment is usually the result of hours of grunt work—labeling, testing, and tuning—done by people you'll never see.

AI Still Relies On Human Labor
For all the talk of artificial intelligence replacing human jobs, the truth is that AI still leans hard on human labor to work at all. From data labelers and prompt raters to everyday users correcting transcripts, real people are constantly training, guiding, and cleaning up after the machines. The smartest AI you use today is only as good as the humans behind it. For now, that's the part no algorithm can automate away.
Originally published on Tech Times
