
Latest news with #DeepFaceLab

What The Next Iteration Of Cyberattacks Could Look Like

Forbes

4 days ago

  • Forbes


Ankush Chowdhary is a cybersecurity executive and author. He is the vice president and CISO at Hewlett Packard Enterprise.

Eric, a senior executive, is winding down after a long day, mindlessly scrolling through a social media app. Amid the usual noise, one video catches his eye. She's different. Clever, tuned into his tastes in vintage watches, obscure jazz and dry humor. A comment turns into a thread, then into regular chats. Over the weeks, she becomes a familiar presence. One evening, she sends a message: "Join me in VR? There's this jazz lounge I think you'd love." It seems harmless. Maybe even fun. But it's bait.

Eric accepts. He enters the virtual lounge, chats with her avatar, laughs and downloads a file supposedly containing backstage photos. He clicks on the file and unknowingly steps into one of the most sophisticated cyberattacks in play today. Eric sleeps soundly. He thinks he's made a harmless new connection online. But the malware hiding in that file has already started working.

Here's what it does, and this is where things get disturbing. Attackers scrape hours of your voice from podcasts, calls or videos. With just five minutes of new audio, AI models like VALL-E clone not just your voice but your cadence, hesitations and tone. Using public footage and VR session data, tools like DeepFaceLab create real-time avatars that mimic your expressions: blinking, nodding, even smirking on Zoom calls. Malware logs keystrokes, mouse movements and screen habits. Attackers replicate how you work, bypassing behavioral biometric security.

  • Session Hijacking: Stolen cookies and API keys bypass MFA (a defensive sketch follows below).
  • Live Impersonation: A deepfake attends meetings, messages colleagues or approves fraudulent transactions.
  • Undetectable Breaches: Every action looks legitimate, because it's your identity, weaponized.

Most cyberattacks we know today rely on technical weaknesses: vulnerable ports, poor password hygiene, unpatched systems. But this new form of attack exploits something more fundamental: human trust. It is psychological, not just technological.

A social media app isn't just a content engine; it's a profiling machine. Its algorithm builds a behavioral model from everything you do, including what you pause on, what you comment on, when you swipe. Combined with public content like blog posts and interviews, this allows attackers to create an AI persona that feels eerily tailored. They build an influencer around your interests. Someone who talks about your niche hobbies. Someone who shares your worldview. They interact until it feels real.

This is where it stops feeling like an attack and starts feeling like a friendship. The persona shares personal stories. Maybe they are having a bad week. They go into detail about their failed startup. They note their love for the same obscure jazz artist you mentioned. It's fake, but it feels intimate. That reciprocity builds trust.

Before any malware shows up, the attacker runs small, low-stakes tests:

  • "Hey, can you check if this file opens on Mac?"
  • "Mind reviewing this link real quick?"
  • "Does this message look like phishing? I know you'd spot it."

These tests measure how much influence they have. Each success lowers your defenses a bit more. Their interactions with you drive up visibility. The more you engage, the more you see them. Soon, they're everywhere in your feed. It's a feedback loop designed to deepen the illusion of connection. Eventually, they become your digital confidant. And you stop questioning their presence.
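The session-hijacking step above deserves a concrete illustration. The article doesn't prescribe a fix, but one common countermeasure is to bind each session token to attributes of the client that presents it, so a stolen cookie replayed from the attacker's machine fails validation. The Python sketch below is a minimal illustration of that idea; every name in it (client_fingerprint, SERVER_SECRET, the choice of binding signals) is hypothetical, and real deployments use stronger signals such as mutual TLS or device attestation.

```python
import hashlib
import hmac
import secrets

# Hypothetical server-side key; in practice this lives in a key manager.
SERVER_SECRET = secrets.token_bytes(32)

def client_fingerprint(tls_session_id: str, user_agent: str) -> str:
    """Derive a fingerprint from connection attributes that a cookie
    thief on another machine cannot trivially reproduce (illustrative;
    real deployments prefer mutual TLS or device attestation)."""
    return hashlib.sha256(f"{tls_session_id}|{user_agent}".encode()).hexdigest()

def issue_session_token(user_id: str, fingerprint: str) -> str:
    """Bind the token to the fingerprint with an HMAC, so the cookie
    is useless when presented from a different client."""
    payload = f"{user_id}|{fingerprint}"
    tag = hmac.new(SERVER_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{tag}"

def validate_session_token(token: str, fingerprint: str) -> bool:
    """Reject tampered tokens and tokens replayed from another client."""
    try:
        user_id, bound_fp, tag = token.rsplit("|", 2)
    except ValueError:
        return False
    expected = hmac.new(SERVER_SECRET, f"{user_id}|{bound_fp}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected) and bound_fp == fingerprint

fp = client_fingerprint("tls-abc123", "Mozilla/5.0")
token = issue_session_token("eric", fp)
assert validate_session_token(token, fp)            # same client: accepted
assert not validate_session_token(token, "stolen")  # replayed elsewhere: rejected
```

With binding in place, an exfiltrated cookie alone no longer impersonates the user; the attacker would also have to reproduce the connection attributes the token was bound to.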
By the time the real ask comes, you don't feel manipulated; you feel seen.

The goal isn't to access your account. It's to become you. A cloned voice places a call. A deepfake sits in your meeting. Your Slack messages ask someone to override a safeguard. Your credentials log into the company's cloud. None of it looks suspicious. Because from the outside, it is you.

This is the chilling truth: When your identity becomes the weapon, most security tools don't know how to defend against it. They're built to spot intruders. Not replicas.

So why is this different? Let's be honest: Traditional phishing was always a numbers game. Spray and pray. This isn't that. This is slow. Personal. Surgical. And in many ways, it's more dangerous because it doesn't look like an attack. What can be done about it:

  • Use two-step validation for critical actions. If it involves money, data or elevated access, verify through a different channel, especially if the request feels familiar.
  • Limit public audio and video. Don't overshare. If you don't need to speak at that panel, then don't. Or at least watermark and encrypt the output.
  • Train your teams to expect synthetic attacks. Simulate fake voice calls, videos and messages. Help them recognize not just the tech but the psychological setup behind the bait.
  • Use tools that track more than login data. Look for subtle behavioral shifts in typing speed, mouse paths and application usage patterns that don't match the real person (see the sketch after this article).
  • Move beyond point-in-time authentication. Use ongoing signals to decide if a user remains trustworthy throughout a session.

A social media app isn't the villain. But it may be the starting point. Because the next major breach won't be a technical exploit. It'll feel like a conversation with someone who gets you. Someone who remembers your favorite song. Someone who asks about your day. Someone who sends you a file, and you open it.

And the only real defense is a new mindset. Trust, once assumed, must now be earned and continuously verified. In the future of cybersecurity, identity is no longer something you prove once. It is something you must protect constantly.
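To make the last two recommendations concrete, here is a minimal, illustrative sketch of behavioral-drift detection in Python: it keeps a rolling baseline of a user's inter-keystroke intervals and flags a session whose recent cadence drifts well outside that baseline, re-evaluated continuously rather than only at login. This is not a production design or any vendor's method; the class name, thresholds and the single typing signal are simplifying assumptions (real systems fuse many signals, such as mouse paths and application usage).

```python
import statistics
from collections import deque

class TypingCadenceMonitor:
    """Toy behavioral-drift detector (hypothetical, illustrative only).

    Keeps a rolling baseline of a user's inter-keystroke intervals and
    flags batches of recent activity whose cadence deviates strongly."""

    def __init__(self, window: int = 500, z_threshold: float = 3.0):
        self.intervals = deque(maxlen=window)   # rolling baseline, in seconds
        self.z_threshold = z_threshold          # how many std-devs counts as drift

    def record(self, interval_s: float) -> None:
        """Feed one observed inter-keystroke interval into the baseline."""
        self.intervals.append(interval_s)

    def is_anomalous(self, recent: list[float]) -> bool:
        """Compare the mean of a recent batch against the baseline."""
        if len(self.intervals) < 50 or not recent:
            return False                        # not enough history to judge
        mu = statistics.fmean(self.intervals)
        sigma = statistics.pstdev(self.intervals) or 1e-9
        z = abs(statistics.fmean(recent) - mu) / sigma
        return z > self.z_threshold

# Continuous evaluation: re-check trust on every batch, not just at login.
monitor = TypingCadenceMonitor()
for interval in [0.12, 0.15, 0.11, 0.14] * 50:  # the real user's normal cadence
    monitor.record(interval)

replica_batch = [0.40, 0.38, 0.45, 0.41]        # an impersonator types differently
if monitor.is_anomalous(replica_batch):
    print("Behavioral drift detected: step up verification")
```

The point of the design is the feedback loop: instead of a single yes/no at login, every batch of activity updates a trust decision, so a session taken over mid-stream by a replica can still be challenged.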

Exclusive: Sony Music backs AI rights startup Vermillio

Axios

03-03-2025

  • Business
  • Axios


Vermillio, the Chicago-based AI licensing and protection platform, has raised a $16 million Series A co-led by DNS Capital and Sony Music, executives exclusively tell Axios.

Why it matters: Sony Music's first investment in AI licensing seeks to protect its artists and support them in responsibly using generative AI tools.

How it works: Vermillio's TraceID tool monitors online content for use of intellectual property, as well as name, image and likeness. The platform can automatically send takedown requests and manage payments for licensed content.

  • The company charges $4,000 per month for the software and takes a transaction fee for its licensing tool.
  • Clients include movie studios like Sony Pictures, record labels like Sony Music, talent agencies like WME, as well as individual talent.
  • With Sony Pictures, Vermillio let fans create AI-generated Spider-Verse characters, and it partnered with The Orb and David Gilmour, alongside Sony Music and Legacy Recordings, on AI tools for creating tracks and artwork inspired by "Metallic Spheres In Colour."

Context: CEO Dan Neely has worked in AI for more than 20 years. The serial entrepreneur sold his last startup, Networked Insights, to American Family Insurance in 2017 and founded Vermillio in 2019. He says he was inspired to build "guardrails for the generative internet" after seeing the release of the deepfake creation software DeepFaceLab and rapper Jay-Z's efforts to take down a deepfake of himself.

Flashback: Vermillio previously raised $7.5 million in seed funding from angel investors.

  • Dennis Kooker, president of global digital business at Sony Music Entertainment, says he was introduced to Neely about a year and a half ago and was impressed by his knowledge and the startup's strategy.
  • "The first project we did together was a proof of concept with David Gilmour and The Orb to show and highlight that intellectual property and generative AI can work hand in hand," Kooker says. "Training the right way, ethically and principally, can be accomplished."

Zoom out: Companies like Sony Music are pursuing legal action in cases where generative AI strikes at the core of their intellectual property. These companies want to protect and monetize creators and content along with nearly every other aspect of their businesses.

  • Sony Music, along with Universal Music Group and Warner Records, sued AI startups Suno and Udio for copyright infringement.
  • But content companies also want to embrace these technologies. Artists can use the tech for their own content creation and for fan engagement.

What's next: Neely says Vermillio plans to expand to sports and work with major sports leagues this year. It's also releasing a free version of the product that shows whether someone is at high or low risk of AI copyright infringement.
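The article doesn't describe how TraceID matches content, so the following is not Vermillio's method, only a generic sketch of one building block that content-monitoring systems commonly use: perceptual hashing, which finds near-duplicate images by comparing short fingerprints that survive resizing and re-encoding. The file names and the distance threshold below are hypothetical.

```python
from PIL import Image  # pip install pillow

def average_hash(path: str, size: int = 8) -> int:
    """Classic average-hash: downscale, grayscale, threshold at the mean.
    Near-duplicate images yield hashes within a small Hamming distance."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical file names; the 10-bit threshold is a tunable guess.
protected = average_hash("original_artwork.png")
candidate = average_hash("scraped_copy.png")
if hamming(protected, candidate) <= 10:
    print("Possible derivative found: queue for review or takedown request")
```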
