
AI Is Rewriting Your Email Habits—One Auto-Suggest at a Time

Int'l Business Times

2 days ago

Welcome to Tech Times' AI EXPLAINED, where we look at the tech of today and tomorrow.

Human hands on deck: AI tools may handle the output, but real people are still behind the keyboards training and correcting the systems that power them. Even when we think we're in control, AI is often guiding what we type—and sometimes finishing our sentences for us.

Your inbox is changing, and not just because there's a new Gmail theme. AI is working in the background, nudging you to send "quick replies," summarizing long email conversations, and deciding which messages show up at the top of your list. This goes beyond simply managing your email; it's now about shaping how you deal with it. This stuff isn't limited to chatbots or photo apps anymore. AI is baked right into your everyday tools—offering replies in Gmail, pulling key points out of Outlook threads, and surfacing messages in Slack before you even search. It's doing more than we realize, and sometimes it's even talking for us. So what happens when your inbox starts running itself—and you barely notice?

The Silent Invasion

AI has been in your inbox longer than you probably realize. At first, it was spam filters, defending you from the endless come-ons of advertisers. Then came Smart Reply, then Smart Compose, and now there are plenty of background processes helping us manage the firehose of email we get every day. They're quiet rather than loud and flashy, offering a gentle, time-saving nudge that most of us are all too happy to take advantage of. And that's really the point: AI has gotten very good at blending in. Most of us don't even notice it's there at this point. You think you're managing your inbox on your own, but a lot of the sorting, responding, and prioritizing happens before you even put your hands on the keyboard. It's invisible, and that's by design.

Who's Doing the Talking?

AI is becoming a visible presence in our communication tools—bridging human intention and machine suggestion.

When was the last time you typed out a full email reply instead of just hitting one of those suggested buttons? I'll bet it's been a while for all but the most AI-averse of us. It's tempting to lean on AI-generated responses. They're quick, clean, and get the tone close enough to right not to matter. Unfortunately, the words we send aren't really ours. AI shapes the way we communicate, too, with autocomplete suggestions and full-blown thread summaries. It's not just Google, either: Outlook offers writing help (as does Apple's Mail app), and Slack uses AI to surface what it thinks is important. All these little direction shifts add up to a larger one: you're still hitting the Send button, but AI is steering the tone, the structure, and maybe even the intent. It might not be a bad thing, but it does make us wonder who's really driving this bus.

Gmail: The Friendly Ghostwriter

Gmail offers Smart Compose, one of the most visible examples of AI guiding our email behavior. It's easy to ignore, but it offers real-time sentence suggestions while you type. It will finish phrases, add little pleasantries, and keep your final product in a clean, professional tone. Type "I hope y..." and it will likely finish with "...you're doing well." Start with "Let me know..." and you'll probably get something like "...if you have any questions."
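That "most likely next words" behavior is easy to see in miniature. The sketch below is purely illustrative and assumes nothing about Gmail's internals: it builds a tiny word-frequency table from a few made-up emails and suggests the most common continuation, whereas Smart Compose uses large neural language models trained on vastly more data.

```python
from collections import Counter, defaultdict

# Toy illustration of statistical auto-complete: count which word most often
# follows a given word in past messages, then greedily suggest a continuation.
# Real systems like Smart Compose use large neural language models, not
# bigram counts; this only demonstrates the "most likely next words" idea.

past_emails = [
    "i hope you're doing well",
    "i hope you're having a great week",
    "let me know if you have any questions",
    "let me know if you need anything else",
]

next_word = defaultdict(Counter)
for email in past_emails:
    words = email.split()
    for prev, nxt in zip(words, words[1:]):
        next_word[prev][nxt] += 1

def suggest(prefix: str, max_words: int = 5) -> str:
    """Greedily extend the prefix with the most frequent next word."""
    words = prefix.lower().split()
    for _ in range(max_words):
        candidates = next_word.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(suggest("I hope"))        # -> "i hope you're doing well"
print(suggest("Let me know"))   # -> "let me know if you have any questions"
```

Even at this toy scale, the suggestions gravitate toward the most common phrasing in the training data, which is part of why AI-assisted email tends to sound polite, neutral, and a little bland.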
It might just be the most statistically likely set of words to follow what you've typed (that's how large language models work), but it ends up subtly shaping how we communicate. It's polite, neutral, corporate, and more than a little bland. Of course, there's also Smart Reply, which hangs out at the bottom of messages in your inbox with single-click responses like "Sounds good," "Thanks for the update," or "Will do!" It's a bit better than LinkedIn's vapid auto-replies, but not by much. Google says that these types of AI-generated replies make up a large portion of email responses on mobile. They save time, which makes them irresistible, but they also standardize our tone and responses. The more we use them, the more we're letting Gmail do the talking. You probably wouldn't have responded to your best friend from high school that way, but it's just easier to hit the Reply button.

Outlook: Your AI Meeting Translator

Now that Microsoft's Copilot is everywhere in the company's operating systems and PCs, it's no surprise that Outlook, the ubiquitous email program, uses AI to manage your tone. It goes even further, though, and manages your content as well. Thread summarization comes into play when you open a long email chain, generating a short summary of the key decisions, deadlines, and action items within it. It's great for time management, but it's also a kind of filter: you're relying on what the AI believes are the key takeaways, a summary that can strip away subtler context and softer verbal cues like parentheticals. Copilot even drafts full replies and meeting recaps based on your calendar or email history. If you miss a meeting, for example, Copilot can create a follow-up email that summarizes what was discussed based on transcripts and notes. Sure, that's way better than attending the meeting, but who is creating your professional voice? You or the AI?

Behavioral Shifts

AI makes inbox management easier, but at what cost to human attention, tone, and authenticity?

What's changing isn't just how we write, but how we think about email at all. The more we use replies and summaries that sand off the rough edges of true human communication, the more everything starts to sound the same. If you're relying on quick replies and other AI-generated responses, so is everyone else, and an exchange of canned messages is less likely to produce real progress or work product. We're all writing fewer full replies, reading fewer full threads, and probably feeling less urgency to respond at all. AI makes writing emails easier, but it also flattens everything into something that would feel right at home in a tech support ticket.

What Now?

Of course, the big question is: now what do we do with all this AI in our inboxes? You don't have to swear it off completely to have a little more control. First, just notice when AI steps in. Are you clicking "Sounds good" out of habit, or is that really what you want to say? Try something a little quirkier, just for the heck of it. Maybe an "If you say so, my lord" if it's someone you know and like. If you get summaries, try reading the whole thread. It might take a few extra minutes, but seeing the subtleties of human communication (assuming the emails weren't also written by AI) can help you really understand what people are trying to say, beyond the circling back and the action items. If your email lets you tweak the amount of AI in your settings, do it.
Turn off Smart Compose for a week and see how it feels to just write stuff on your own, in your own voice. You don't have to ditch the tools completely, but it's good to stay aware of how they're subtly changing the way we communicate. Then you can decide for yourself how much or how little of that you want. Because let's face it, not every email from your boss warrants much more than a "sounds good."

Originally published on Tech Times

AI Isn't Fully Automated — It Runs on Hidden Human Labor

Int'l Business Times

12-06-2025

Welcome to Tech Times' AI EXPLAINED, where we look at the tech of today and tomorrow.

Imagine this scenario, one that's increasingly common: you have a voice AI listen to your meeting at work, you get a summary and analysis of that meeting, and you assume AI did all the work. In reality, though, none of these tools work alone. PLAUD AI, Rabbit, ChatGPT, and more all rely on a layer of human labor that most of us don't hear about. Behind that clean chat interface on your phone or computer, there are data labelers tagging speech samples, contractors rating AI answers, and testers feeding the system more examples to learn from. Some are highly trained, while others handle the more tedious aspects of the work. Either way, your AI isn't just automated; it's a complex blend of code and human effort. Without that effort, your AI wouldn't work at all.

The Invisible Workforce Behind Everyday AI

AI tools don't just appear out of thin air, of course. They learn similarly to the way we do: by example. That learning process often relies on what's called human-in-the-loop (HITL) training. As data-annotation company Encord explains in a blog post: "In machine learning and computer vision training, Human-in-the-Loop (HITL) is a concept whereby humans play an interactive and iterative role in a model's development. To create and deploy most machine learning models, humans are needed to curate and annotate the data before it is fed back to the AI. The interaction is key for the model to learn and function successfully." Annotators, data scientists, and data operations teams play a significant role in collecting, supplying, and annotating the necessary data, the post continues. The amount of human input varies with how involved the data is and how much human interaction the model is expected to offer. Of course, as with many business activities, there are ethical concerns. Many content moderators complain of low pay and traumatic content to review. There can also be language bias in AI training, something researchers and companies are likely working to solve as AI becomes more complex and global.

Case Study: PLAUD AI

Various ways users wear the PLAUD Note device—on a wristband, clipped to a lapel, or hanging as a pendant—highlighting its flexibility for hands-free voice capture throughout the day. (PLAUD AI)

PLAUD AI's voice assistant offers an easy, one-button experience: just press a button, speak, and let it handle the rest. As the company says on its website, the voice assistant lets you "turn voices and conversations into actionable insights." Behind the scenes, this "magic" starts with pre-trained automatic speech recognition (ASR) models, like Whisper or other custom variants, that have been refined with actual user recordings. The models not only have to transcribe words, but also try to understand structure, detect speakers, and interpret tone of voice. The training involves hours and hours of labeled audio and feedback from real conversations. Every time you see an improvement in the output, it's likely thanks to thousands of micro-adjustments based on user corrections or behind-the-scenes testing. According to reviewers, PLAUD AI leverages OpenAI's Whisper speech-to-text model running on its own servers. There are likely many people managing the PLAUD AI version of the model for its products, too.
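To make the human-in-the-loop idea concrete, here is a minimal, hypothetical sketch of the kind of feedback loop a voice product could run: an off-the-shelf Whisper model drafts a transcript, a person corrects it, and the audio path plus the corrected text is saved as future fine-tuning data. The file name and the save_training_example helper are invented for illustration; this is not PLAUD AI's actual pipeline.

```python
import json

import whisper  # pip install openai-whisper (also requires ffmpeg)

# Hypothetical human-in-the-loop sketch, not any vendor's real pipeline:
# 1) a pre-trained ASR model produces a draft transcript,
# 2) a person corrects it,
# 3) the audio path plus corrected text is stored as future fine-tuning data.

model = whisper.load_model("base")

def draft_transcript(audio_path: str) -> str:
    """Machine pass: let the pre-trained model take the first shot."""
    return model.transcribe(audio_path)["text"]

def save_training_example(audio_path: str, corrected_text: str,
                          dataset_path: str = "finetune_data.jsonl") -> None:
    """Human pass: keep the human-corrected transcript as a labeled example."""
    record = {"audio": audio_path, "text": corrected_text}
    with open(dataset_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

audio = "meeting.wav"  # made-up file name for illustration
draft = draft_transcript(audio)
print("Model draft:", draft)

corrected = input("Corrected transcript: ")  # the human in the loop
save_training_example(audio, corrected)
```

In a real product, those corrected pairs would be reviewed and batched by annotation teams before any retraining happens, which is exactly the kind of human labor the article describes.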
Every neat paragraph that comes out of the voice assistant likely reflects countless iterations of fine-tuning and A/B testing by prompt engineers and quality reviewers. That's how you get your results without having to deal with all that back-end work yourself.

Case Study 2: ChatGPT and a Popular Transcription Service

The ChatGPT logo represents one of the most widely used AI assistants—powered not just by models, but by human trainers, raters, and user feedback. (ilgmyzin/Unsplash)

When you use ChatGPT, it can feel like an all-knowing assistant with a polished tone and helpful answers. Those are built, of course, on a foundation of human work. OpenAI used reinforcement learning from human feedback, or RLHF, to train its models. That means actual humans rated responses so the system could learn which answers were the most helpful and accurate, not to mention the most polite. "On prompts submitted by our customers to the API, our labelers provide demonstrations of the desired model behavior and rank several outputs from our models," the company wrote in a blog post. "We then use(d) this data to fine-tune GPT‑3."

A popular online voice transcription service also relies on human work to improve its output. It doesn't use RLHF the way OpenAI does, but it does include feedback tools for users to flag inaccurate transcriptions, which the company then uses to fine-tune its own models. The company also uses synthetic data (generated pairs of audio and text) to help train its models, but without user corrections, those synthetic transcripts can struggle with accents, cross talk, and industry jargon; things only humans can fix.

Case Study 3: Rabbit R1's Big Promise Still Needs Human Help

The Rabbit R1 made a splash with its debut: a palm-sized orange gadget promising to run your apps for you, no screen-tapping required. Just talk to it, and it's supposed to handle things like ordering takeout or cueing up a playlist. At least, that's the idea. Rabbit says it built the device around something called a Large Action Model (LAM), which is supposed to "learn" how apps work by watching people use them. What that means in practice is that humans record themselves doing things like opening apps, clicking through menus, or completing tasks, and those recordings become training data. The R1 didn't figure all this out on its own; it was shown how to do it, over and over. Since launch, people testing the R1 have noticed that it doesn't always feel as fluid or "intelligent" as expected. Some features seem more like pre-programmed flows than adaptive tools. In short, it's not magic—it's a system that still leans on human-made examples, feedback, and fixes to keep improving. That's the pattern with almost every AI assistant right now: what feels effortless in the moment is usually the result of hours of grunt work—labeling, testing, and tuning—done by people you'll never see.

AI Still Relies on Human Labor

For all the talk of artificial intelligence replacing human jobs, the truth is that AI still leans hard on human labor to work at all. From data labelers and prompt raters to everyday users correcting transcripts, real people are constantly training, guiding, and cleaning up after the machines. The smartest AI you use today is only as good as the humans behind it. For now, that's the part no algorithm can automate away.

Originally published on Tech Times

You're Already Using Voice AI at Work — Here's How It's Changing Everything

Int'l Business Times

05-06-2025

Welcome to Tech Times' AI EXPLAINED, where we look at the tech of today and tomorrow.

Voice AI at work is already happening. You might think AI at work means typing prompts into ChatGPT or getting a slick summary from your inbox, but the real shift is happening in your ears. From whispered prompts during Zoom calls to voice bots taking fast food orders, AI is quietly changing how we work, and you might not even realize it's there. Voice AI isn't just a flashy gadget or a customer service gimmick. It's becoming a built-in layer across industries: transcribing your meetings, answering your calls, and even drafting your doctor's notes. As tools like Microsoft Copilot and drive-thru voice bots quietly embed themselves in daily workflows, they're not only reshaping tasks; they're shifting what it means to show up, speak up, and stay relevant in a workplace that's learning to listen. Game studio founder and voice AI advisor Mike Sorrenti is bullish on the tech. "Voice AI is an excellent thing. It can be used for translation and many other things and is a very natural interface for kids and older adults, especially those with mild disabilities such as arthritis," he said in an email.

Enhancing Workplace Productivity with Voice AI

Visitors interact with Microsoft Copilot demos at a tech event, where AI-powered assistants like voice-enabled copilots are reshaping how we collaborate at work.

A recent UK government study showed that civil servants who used Microsoft's Copilot AI for administrative tasks saved an average of 26 minutes a day, which works out to about two weeks of gained time per year. Voice AI plays a key role in this shift from the pre-AI workplace to the current one, proving useful in transcription, summarization, and virtual meeting assistance. Previously, you'd have to have a human assistant sit in on your Zoom or Microsoft Teams meetings to take notes, or delegate someone who may or may not be good at live note-taking. Voice AI assistants can now log, transcribe, and summarize your meetings, often without the people in the meeting really understanding what's being recorded or how it's being analyzed. Workers are already adjusting how and when they speak during meetings, knowing that their words might be captured and re-contextualized by AI summaries. Otter has also introduced a voice-activated version of its AI meeting assistant, letting users join and manage meetings without having to take notes. Now that we're used to asking Siri to take a note or set a timer, asking your AI voice agent to start recording will be a cinch. The shift from passive transcription to an interactive, agentic AI system is subtle, but it is here. Users can even give verbal commands mid-meeting, obviating the need for someone who understands the user interface of an app or its web controls, all in real time. This isn't just a backend utility, either; tools like Otter are becoming active participants in meetings, giving us a glimpse of a future where AI agents might take on even more collaborative roles. While tools like this save time and offer easier control, the issues of privacy, consent, surveillance, and etiquette in meetings (especially in hybrid or remote work environments) can be substantive.

But first, a quick casting call for our sponsor and partner of AI EXPLAINED this month, PlaudAI, the No. 1 AI note-taker that does exactly what you'd imagine an AI assistant can do in 2025. Do you use voice AI in your daily work?
Are you a journalist, doctor, lawyer, educator, or creator doing something smart with your voice? We're spotlighting modern professionals for a new series with our sponsor — share your story with us here, and you could be featured and receive a sample of PlaudAI.

Voice AI Has Already Transformed Customer Service and Retail Experiences

A call center employee works with a headset, illustrating how voice-based tools have long been part of the workplace — and how AI is now transforming those interactions.

For most consumers, the fast-food drive-through is a familiar experience: you pull up to the speaker, ask for the food you want, then pay at the next window and grab your order. If you've been through one recently, you may have heard some rudimentary voice AI ask whether you used a mobile ordering app. All of that is about to get even more AI-powered. Sophie Fennelly knows that voice AI is happening now. "As CEO of Sales TQ, with over a decade in enterprise sales and an analytical background from MIT, I've helped small businesses adopt emerging technologies—including voice AI—to improve operations," she wrote in an email. "Voice AI is no longer just futuristic hype; it is actively reshaping how we work and live." Hungry Jack's, a fast food chain in Australia, just began trials of a new AI-driven voice assistant for its own drive-through orders. Customers appreciated the system's politeness, while the company likely appreciates the potential for late-night staffing. Employees, though, might start to feel the pinch, or start moving to the parts of the job that tech can't yet replicate: solving problems, connecting with people, or hopping in when the system makes a mistake. Australian franchises of KFC have also started testing AI-driven voice ordering, though customers there have pushed back a bit, saying they prefer human interaction and complaining that their orders get misinterpreted by the AI window jockey. Still, even this workplace is evolving as AI starts to support, and sometimes replace, humans in these high-turnover, low-wage roles. As a consumer, you might have to learn how to talk to AI bots, and as a worker, you'll likely need to upgrade your skills to fit roles that need a more human touch.

Revolutionizing Healthcare with Ambient Voice AI

A doctor speaks during an interview, highlighting how voice AI tools are increasingly being adopted in healthcare settings — from clinical dictation to patient notes.

Microsoft's DAX Copilot automates recordkeeping, updates patient records, and can draft care plans, all from just listening in on conversations between doctors and patients. The idea is that physicians can spend more time on patient care than on record keeping, thereby improving health outcomes for more people. Microsoft is also transforming the workplace of nurses, with ambient voice and other AI technologies becoming integrated into nursing workflows, alleviating the burden of documentation, reducing burnout, and allowing nurses to spend more time on patient care.

Global Reach and No-Code Tools for Voice AI

A startup in Bengaluru (formerly Bangalore) in southern India offers a no-code conversational AI platform that lets other businesses deploy real-time, multilingual voice agents for customer support and services. Ringg AI has expanded beyond India to the Middle East and Latin America, supporting languages like Arabic and Spanish.
Global workplaces that couldn't afford the high-cost development of their own voice AI systems can now contract with Ringg AI and deploy AI agents to help run their businesses. That may mean AI tools replace some workers at major companies, but it also means smaller shops can afford the cost of entry and compete with larger corporations on voice AI. Gev Balyan, CEO of Hoory AI, says, "Startups and solopreneurs now have an always-on assistant — answering calls, qualifying leads, booking — at enterprise scale, without enterprise cost."

What It All Means For You

A glowing AI key on a sleek keyboard symbolizes how artificial intelligence — including voice-powered assistants — is becoming a built-in part of everyday work tools.

So what does the current adoption of voice AI mean for you? Whether you're running (or taking) meetings, answering customer support calls, or just trying to get through all the email you receive in a day, voice AI is likely becoming a bigger part of your day, whether you realize it or not. The tools are changing how we connect with colleagues and higher-ups, the tasks that make up our daily work, and even what kinds of jobs are growing (or shrinking). As an employee, it could mean you'll do fewer routine tasks and spend more time thinking, making decisions, or even managing these AI tools directly. But you'll still need to learn to work with AI: knowing what it's good at, how to manage it when it makes mistakes, and how to communicate with AI tools that are always listening. As a business owner, you might feel like AI will boost your workers' productivity while reducing costs, but you'll also need to keep an eye on issues of privacy, trust, and training. You may want to start hiring people with higher levels of judgment, empathy, and communication skills to work with your new AI systems. Bottom line: AI isn't coming for your job all at once, but it is already coming to change parts of your job. The best way to stay relevant is to focus on what you can do that AI can't, yet: connect with others, understand nuance, and adapt when things go off the rails.

Originally published on Tech Times
