# Latest news with #Tobac

AI is perfecting scam emails, making phishing hard to catch

Axios

27-05-2025



AI chatbots have made scam emails harder to spot, rendering the tells we've all been trained to look for (clunky grammar, weird phrasing) utterly useless.

Why it matters: Scammers are raking in more than ever from basic email and impersonation schemes. Last year, the FBI estimates, they made off with a whopping $16.6 billion. Thwarting AI-written scams will require a new playbook, experts say, one that leans more on users verifying messages and on companies detecting scams before they hit inboxes.

The big picture: ChatGPT and other chatbots are helping non-English-speaking scammers write typo-free messages that closely mimic trusted senders. Before, scammers relied on clunky tools like Google Translate, which often translated too literally and couldn't capture grammar and tone. Now, AI can write fluently in most languages, making malicious messages far harder to flag.

What they're saying: "The idea that you're going to train people to not open [emails] that look fishy isn't going to work for anything anymore," Chester Wisniewski, global field CISO at Sophos, told Axios. "Real messages have some grammatical errors because people are bad at writing," he added. "ChatGPT never gets it wrong."

Zoom in: Scammers are now training AI tools on real marketing emails from banks, retailers and service providers, Rachel Tobac, an ethical hacker and CEO of SocialProof Security, told Axios. "They even sound like they are in the voice of who you're used to working with," Tobac said.

Tobac said one Icelandic client that had never before worried about employees falling for phishing emails is now concerned. "Previously, they've been so safe because only 350,000 people comfortably speak Icelandic," she said. "Now, it's a totally new paradigm for everybody."

Threat level: Beyond grammar, the real danger lies in how these tools scale precision and speed, Mike Britton, CISO at Abnormal Security, told Axios. Within minutes, scammers can use chatbots to build dossiers on the sales teams of every Fortune 500 company, then use those findings to write customized, believable emails, Britton said. Attackers now also embed themselves in existing email threads using lookalike domains, making their messages nearly indistinguishable from legitimate ones, he added. "Our brain plays tricks on us," Britton said. "If the domain has a W in it, and I'm a bad guy, and I set up a domain with two Vs, your brain is going to autocorrect."

Yes, but: Spotting scam emails isn't impossible. In Tobac's red team work, she typically gets caught when:

  • A target practices what she calls polite paranoia, texting or calling the organization or person being impersonated to confirm whether they sent a suspicious message.
  • A target uses a password manager and has long, complex passwords.
  • A target has multifactor authentication enabled.

What to watch: Britton warned that low-cost generative AI tools for deepfakes and voice clones could soon take phishing to new extremes. "It's going to get to the point where we all have to have safe words, and you and I get on a Zoom and we have to have our secret pre-shared key," Britton said. "It's going to be here before you know it."
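The lookalike-domain trick Britton describes (swapping characters your brain autocorrects, such as two Vs for a W) can be caught mechanically rather than by eye. Below is a minimal, illustrative sketch of that idea: normalize a handful of common homoglyph substitutions before comparing a sender's domain against a trusted list. The substitution table and the trusted domains are hypothetical examples, not a production-grade confusables database (real mail filters use far larger tables, such as the Unicode confusables data).

```python
# Illustrative sketch: flag lookalike domains by normalizing common
# homoglyph substitutions before comparing against trusted domains.
# The table and trusted list here are examples, not an exhaustive set.

HOMOGLYPHS = [
    ("vv", "w"),  # the "two Vs" trick
    ("rn", "m"),  # "rn" reads as "m" in many fonts
    ("0", "o"),   # digit zero for letter o
    ("1", "l"),   # digit one for letter l
]

def normalize(domain: str) -> str:
    """Collapse known homoglyph substitutions to their intended letters."""
    d = domain.lower()
    for fake, real in HOMOGLYPHS:
        d = d.replace(fake, real)
    return d

def is_lookalike(domain: str, trusted: set) -> bool:
    """True if a domain is NOT trusted itself but normalizes to a trusted one."""
    return domain.lower() not in trusted and normalize(domain) in trusted

trusted = {"wellsfargo.com", "microsoft.com"}
print(is_lookalike("vvellsfargo.com", trusted))  # True: impersonation attempt
print(is_lookalike("wellsfargo.com", trusted))   # False: the real domain
```

The design point matches Britton's quote: the check works precisely because software compares exact characters while human perception autocorrects them.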

Scammers using AI to dupe the lonely looking for love

Yahoo

12-02-2025



Meta on Wednesday warned internet users to be wary of online acquaintances promising romance but seeking cash, as scammers use deepfakes to prey on those looking for love. "This is a new tool in the toolkit of scammers," Meta global threat disruption policy director David Agranovich told journalists during a briefing. "These scammers evolve consistently; we have to evolve to keep things right."

Detection systems in Meta's family of apps, including Instagram and WhatsApp, rely heavily on behavior patterns and technical signals rather than on imagery, meaning they can spot scammer activity despite the AI trickery, according to Agranovich. "It makes our detection and enforcement somewhat more resilient to generative AI," Agranovich said.

He gave the example of a recently disrupted scheme that apparently originated in Cambodia and targeted people in Chinese and Japanese. Researchers at OpenAI determined that the "scam compound" seemed to be using the San Francisco artificial intelligence company's tools to generate and translate content, according to Meta.

Generative AI technology has been around for more than a year, but in recent months its use by scammers has grown sharply, "ethical hacker" and SocialProof Security chief executive Rachel Tobac said during the briefing. GenAI tools available for free from major companies allow scammers to change their faces and voices on video calls as they pretend to be someone they are not. "They can also use these deepfake bots that allow you to build a persona or place phone calls using a voice clone, and a human actually doesn't even need to be involved," Tobac said. "They call them agents, but they're not being used for customer support work. They're being used for scams in an automated fashion."

Tobac urged people to be "politely paranoid" when an online acquaintance encourages a romantic connection, particularly when it leads to a request for money to deal with a supposed emergency or business opportunity.
- Winter blues -

The isolation and glum spirits that can come with winter weather, along with the Valentine's Day holiday, are seen as a window of opportunity for scammers. "We definitely see an influx of scammers preying on that loneliness in the heart of winter," Tobac said.

The scammer's main goal is money, with the tactic of building trust quickly and then contriving a reason for needing cash or personal data that could be used to access financial accounts, according to Tobac. "Being politely paranoid goes a long way, and verifying people are who they say they are," Tobac said.

Scammers operate across the gamut of social apps, with Meta seeing only a portion of the activity, according to Agranovich. Last year, Meta took down more than 408,000 accounts from West African countries being used by scammers to pose as military personnel or businessmen to romance people in Australia, Britain, Europe, the United States and elsewhere, according to the tech titan. Along with taking down nefarious networks, Meta is testing facial recognition technology to check potential online imposters detected by its systems or reported by users.
