
Latest news with #ParyaLotfi

How Deepfakes Are Disrupting KYC And Financial Security

Forbes | 9 hours ago

Parya Lotfi is CEO & Co-founder of DuckDuckGoose, helping lead AI-driven deepfake detection in the fight against crime.

Financial institutions are increasingly being targeted by deepfake-enabled fraud during know your customer (KYC) processes. These sophisticated attacks threaten the integrity of the identity-verification frameworks that support anti-money laundering (AML) and counter-terrorism financing (CTF) systems. The U.S. Treasury's FinCEN has reported an increase in suspicious activity involving AI-generated media, warning that "bad actors are seeking to exploit" generative AI. Meanwhile, Wall Street's FINRA has issued its own warning: deepfake audio and video scams could cost the financial sector as much as $40 billion by 2027, according to research from Deloitte's Center for Financial Services cited by the Wall Street Journal.

Biometric checks can no longer be relied on as the sole defense. A 2024 survey by Regula found that 49% of businesses across industries, including banking and fintech, have already encountered fraud schemes using audio or video deepfakes, with average losses approaching $450,000 per incident. As these figures escalate, understanding the anatomy of a deepfake intrusion becomes critical for safeguarding customers, reputations and the global financial system.

Real-World Breach: Over 1,100 Deepfake Attempts In Indonesia

In late 2024, an Indonesian bank saw more than 1,100 attempts to bypass its digital KYC loan-application process in just three months, according to cybersecurity firm Group-IB. Fraudsters combined AI-powered face-swapping with virtual-camera tools to spoof the bank's liveness-detection controls, despite the institution's "robust, multi-layered security measures." Potential losses from these intrusions have been estimated at $138.5 million in Indonesia alone.

As Group-IB put it, "AI-driven face-swapping tools enabled fraudsters to replace a victim's facial features with those of another person," which in turn let them exploit "virtual camera software to manipulate biometric data ... deceiving institutions into approving fraudulent transactions" during KYC processes.

Inside The Deepfake KYC Fraud Playbook

Deepfake-enabled KYC fraud follows a methodical, multistage process:

1. Data Acquisition: Fraudsters begin by collecting personal data, often via malware, social networking sites, phishing scams or the dark web. This data is then used to create convincing fake identities.

2. Manipulation: Deepfake technology is used to alter identity documents. Fraudsters swap photos, adjust details or even re-create entire identities to bypass traditional KYC checks.

3. Exploitation: Fraudsters use virtual cameras or prerecorded deepfake videos to feed spurious biometric data to verification systems, evading liveness detection by simulating real-time interactions. (A defensive counter to this step is sketched below.)

4. Execution: With these tools in place, fraudsters can open fraudulent accounts, apply for loans and carry out high-value transactions, all while appearing completely legitimate.

This exposes a hard reality: conventional authentication procedures, including facial recognition and document verification, are no longer sufficient to counter these advanced attacks. Consider that, on average, there has been one deepfake attempt every five minutes over the past 12 months, while a recent 2025 study found that only 0.1% of people can reliably spot deepfakes.
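The Exploitation step above works because a passive liveness check can be satisfied by a prerecorded or injected video stream. One common countermeasure is challenge-response liveness: the system demands an action chosen at verification time, which a canned deepfake cannot anticipate. Below is a minimal Python sketch of the idea; CaptureSession, its methods and the challenge names are hypothetical stand-ins for illustration, not a real vendor API.

```python
import random
import time
from dataclasses import dataclass

@dataclass
class ChallengeResult:
    matched: bool  # did the observed video perform the prompted action?

class CaptureSession:
    """Illustrative stub for a video-capture/analysis pipeline."""

    def prompt(self, challenge: str) -> None:
        # Show the instruction to the user in the capture UI.
        print(f"Please {challenge.replace('_', ' ')}")

    def observe(self, challenge: str) -> ChallengeResult:
        # A real system would run a vision model on the live stream here.
        return ChallengeResult(matched=True)  # placeholder verdict

CHALLENGES = ["turn_head_left", "blink_twice", "read_digits_aloud"]

def passes_liveness(session: CaptureSession, timeout_s: float = 10.0) -> bool:
    """Issue a randomly chosen challenge and require a timely, matching response.

    Prerecorded deepfake videos cannot react to a prompt selected at
    verification time, so unpredictability raises the bar for
    virtual-camera injection attacks.
    """
    challenge = random.choice(CHALLENGES)
    start = time.monotonic()
    session.prompt(challenge)
    result = session.observe(challenge)
    elapsed = time.monotonic() - start
    # Genuine users respond within a human reaction window; injected or
    # prerecorded streams typically miss the challenge or respond late.
    return result.matched and elapsed < timeout_s
```

This is only one layer; as the next section argues, it should sit alongside other checks rather than stand alone.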
Fortifying KYC: A Multilayer Defense Strategy

Together, these issues highlight an urgent need for financial institutions to evolve from reactive incident response toward proactive, AI-powered detection and multilayer defenses. Technologies companies should consider in the fight against deepfakes include:

1. Multimodal Biometrics: Combine facial recognition with voice biometrics, behavioral patterns (e.g., typing rhythms) and advanced liveness cues to create overlapping verification barriers.

2. Explainable-AI Detection: Deploy AI tools trained to spot deepfake artifacts, such as unnatural flickering, mismatched body movement or inconsistencies between speech and facial expressions.

3. Layered Verification: Integrate document-authenticity checks, geolocation validation and transaction-pattern analytics alongside biometric scans to catch anomalies before account approval (a scoring sketch follows this list).

4. Continuous Monitoring: Extend fraud detection beyond onboarding. Real-time AI monitoring of account behavior can detect suspicious transfers or device changes indicative of post-admission compromise.

5. Employee Training: Arm employees with deepfake-awareness training so they can spot red flags, such as off-sync audio or unnatural facial movement, in live or recorded customer interactions.
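To make the layered-verification idea concrete, here is a minimal Python sketch of fusing independent onboarding signals into one decision. The signal names, weights and thresholds are invented for illustration; a production system would calibrate them on labeled fraud data.

```python
from dataclasses import dataclass

@dataclass
class OnboardingSignals:
    face_match: float         # 0-1 facial-recognition similarity
    liveness: float           # 0-1 liveness-detection confidence
    doc_authentic: float      # 0-1 document-authenticity score
    geo_consistent: bool      # device geolocation matches declared address
    deepfake_artifact: float  # 0-1 deepfake-artifact detector score

def kyc_risk_score(s: OnboardingSignals) -> float:
    """Combine verification layers into a risk score (0 = clean, 1 = high risk).

    Weights are illustrative only. The point is that each layer a fraudster
    must defeat independently lowers the chance a synthetic identity
    sails through.
    """
    return (0.25 * (1.0 - s.face_match)
            + 0.25 * (1.0 - s.liveness)
            + 0.20 * (1.0 - s.doc_authentic)
            + 0.10 * (0.0 if s.geo_consistent else 1.0)
            + 0.20 * s.deepfake_artifact)

def decide(s: OnboardingSignals) -> str:
    """Route the application; thresholds are illustrative."""
    score = kyc_risk_score(s)
    if score < 0.2:
        return "approve"
    if score < 0.5:
        return "manual_review"
    return "reject"

# Example: a strong face match but weak liveness and a high artifact score
# still lands in manual review (score 0.40), which is the desired behavior.
signals = OnboardingSignals(face_match=0.97, liveness=0.55,
                            doc_authentic=0.90, geo_consistent=False,
                            deepfake_artifact=0.80)
print(decide(signals))  # -> "manual_review"
```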
Beyond technology, institutions must establish robust internal protocols and cross-functional collaboration. Traditional injection- and presentation-attack detection methods are inadequate on their own, as deepfakes convincingly mimic human behaviors, even replicating nuanced physiological signals such as the subtle skin-color changes driven by the heartbeat. It is therefore imperative that dedicated fraud-response teams, comprising compliance officers, cybersecurity analysts and customer-relations managers, regularly analyze fraud patterns and update KYC procedures. Regular onboarding audits and deepfake attack simulations proactively identify vulnerabilities, and clear escalation pathways ensure rapid, consistent responses to suspicious activity.

Implementing comprehensive governance policies is also essential for securely integrating new detection methodologies and for ensuring compliance with emerging regulations such as the EU AI Act and privacy laws. Regular risk assessments and tabletop exercises stress-test KYC and AML protocols against evolving deepfake scenarios, allowing ongoing strategic adjustment.

Future Challenges And Evolution

Looking ahead, deepfake technologies will continue to evolve rapidly, driven by innovations like real-time voice cloning, hyper-realistic lip syncing and advanced text-to-video models such as Google's Veo 3 and OpenAI's Sora. Meanwhile, the increasing digitization of financial interactions and growing consumer demand for convenience inadvertently open new avenues for fraudsters using unpredictable, sophisticated generative-AI methods. To stay ahead, organizations must invest in cutting-edge research and collaborate with industry and academia to anticipate and adapt to these continually evolving threats.

Conclusion: A Continuous Battle For Digital Integrity

As deepfakes grow more sophisticated and widespread, financial institutions face a critical juncture: proactively adapting to new technological threats or risking severe financial and reputational damage. By adopting multilayered defenses, fostering continuous innovation and promoting internal readiness, banks and fintech firms can build resilient strategies capable of addressing the evolving threat landscape. Staying ahead in the AI arms race is not just beneficial; it's essential to preserving digital integrity and customer trust.

How Deepfake Identities Are Rewriting The Rules Of Financial Crime

Forbes | 17-04-2025

Parya Lotfi is CEO & Co-founder of DuckDuckGoose, helping lead AI-driven deepfake detection in the fight against crime.

Financial crime is evolving at a pace that regulators and compliance teams are struggling to match. While most financial institutions have invested heavily in fraud prevention, a new and insidious threat is slipping through the cracks: deepfake-generated synthetic identities. Fraudsters no longer need stolen documents or hacked credentials; they can now fabricate entirely realistic personas that pass biometric authentication, clear "know your customer" (KYC) checks and gain full access to financial systems. And they are doing so at scale. In 2023 alone, deepfake-related fraud attempts increased 700% in fintech, a staggering indicator of how criminals are weaponizing AI-powered deception.

Unlike traditional fraud, deepfakes introduce a fundamental identity-risk problem for financial institutions. Today, a deepfake-generated selfie can pass liveness detection, a manipulated video can fool facial recognition and a synthetic voice can impersonate a CEO or compliance officer. The result? Unauthorized accounts, fraudulent transactions and systemic vulnerabilities that compliance frameworks were never designed to handle.

Once fraudsters create a deepfake-based account, the real financial crime begins. Money laundering operations increasingly leverage synthetic identities to obscure illicit financial flows. Here's how (a toy detection sketch follows the list):

• Synthetic Account Creation: Fraudsters generate a deepfake identity, often a blend of real and fake biometric data, to bypass KYC verification at banks, fintech firms and crypto exchanges.

• Layering Through Digital Transactions: These synthetic accounts engage in seemingly legitimate activities: opening credit lines, initiating high-frequency transactions and routing money through multiple financial institutions to erase the trail.

• Mule Networks And Cashing Out: The laundered funds are ultimately withdrawn through crypto-to-fiat conversions, offshore transfers or ATM withdrawals using synthetic ID-linked payment cards.

• Scaling Through Fraud-As-A-Service (FaaS): Dark web marketplaces now sell ready-made deepfake identities, allowing even low-level criminals to access advanced laundering techniques.
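As a rough illustration of how the layering stage can be surfaced, the toy Python sketch below flags chains of transfers in which funds enter and leave intermediate accounts within a short window. The data shape, thresholds and traversal are deliberate simplifications for illustration; real AML tooling works over far richer features and at far larger scale.

```python
from collections import defaultdict

def find_passthrough_chains(transactions, min_hops=3, window_s=3600):
    """Return account chains of at least `min_hops` transfers where money
    leaves each intermediate account within `window_s` seconds of arriving,
    a crude signature of the rapid pass-through typical of layering.

    `transactions` is an iterable of (src, dst, amount, timestamp) tuples;
    both thresholds are illustrative, not tuned values.
    """
    outgoing = defaultdict(list)
    for src, dst, _amount, ts in transactions:
        outgoing[src].append((dst, ts))

    chains = []

    def walk(account, path, last_ts):
        for dst, ts in outgoing.get(account, []):
            if 0 <= ts - last_ts <= window_s and dst not in path:
                new_path = path + [dst]
                if len(new_path) - 1 >= min_hops:
                    chains.append(new_path)
                walk(dst, new_path, ts)

    for src, dst, _amount, ts in transactions:
        walk(dst, [src, dst], ts)
    return chains

# Three just-under-threshold transfers hopping A -> B -> C -> D in under
# an hour form one flagged chain.
txs = [("acct_A", "acct_B", 9_800, 0),
       ("acct_B", "acct_C", 9_700, 600),
       ("acct_C", "acct_D", 9_600, 1_200)]
print(find_passthrough_chains(txs))
# -> [['acct_A', 'acct_B', 'acct_C', 'acct_D']]
```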
Regulators have long relied on transaction monitoring and identity verification as cornerstones of anti-money laundering (AML) compliance. But when fraudulent identities appear real, these traditional methods fall apart. The financial sector has already suffered billions in losses from AI-driven fraud. Deepfake scams are no longer a future risk; they are here right now:

• Financial institutions stand to lose an estimated $40 billion to AI-driven fraud by 2027, up from $12.3 billion in 2023.

• According to a Deloitte poll, 25.9% of financial executives reported experiencing at least one deepfake-related fraud incident in the past year (pg. 3).

• Almost 52% of financial executives expect deepfake-enabled fraud to increase in the next 12 months, highlighting the urgency of action (pg. 4).

• Despite this, 9.9% of organizations surveyed have taken no action against deepfake threats, leaving them wide open to risk (pg. 5).

Fraudsters are moving faster than financial institutions, and every delay in adapting compliance frameworks leaves organizations more vulnerable. While banks and neobanks invest heavily in digital security, deepfake detection remains an overlooked gap in fraud prevention strategies. The challenge is that most KYC and AML compliance programs were designed for human fraudsters, not AI-generated identities.

• KYC verification needs AI-powered defense. Traditional KYC relies on document checks, facial recognition and liveness detection, all of which deepfakes can now bypass with shocking accuracy. Advanced AI-based detection can help identify synthetic identities before they infiltrate financial systems.

• Transaction monitoring alone isn't enough. AI-generated fraud can mimic legitimate transaction behaviors, making it invisible to traditional monitoring tools. Compliance teams should integrate behavioral analysis and biometric-authentication audits to flag anomalies.

• Manual review is unsustainable. A high-quality deepfake can be indistinguishable to human reviewers. Automated deepfake detection can handle the first pass, allowing compliance teams to focus on real threats instead of false positives (see the triage sketch below).
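A minimal sketch of that triage idea follows, assuming a hypothetical detector_score model call that returns the probability a submitted selfie or video is synthetic. The detector clears or blocks clear-cut cases so analysts only see the gray zone; both thresholds are illustrative and would need calibration against labeled data.

```python
AUTO_CLEAR = 0.05  # below this probability, pass automatically
AUTO_BLOCK = 0.90  # above this, block and escalate to the fraud team

def triage(media, detector_score) -> str:
    """Route onboarding media based on a deepfake-detector probability.

    `detector_score` is a stand-in for any model call returning the
    probability that `media` is synthetic.
    """
    p_synthetic = detector_score(media)
    if p_synthetic < AUTO_CLEAR:
        return "auto_clear"
    if p_synthetic > AUTO_BLOCK:
        return "auto_block"
    return "human_review"  # only ambiguous cases reach analysts

# Example with a dummy detector: a mid-range score goes to a human.
print(triage("selfie.mp4", lambda media: 0.42))  # -> "human_review"
```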

Financial institutions can no longer afford to ignore the deepfake threat. Fraudsters are already deploying synthetic identities at scale, and regulatory frameworks are years behind in addressing this risk. To maintain trust, compliance and security, banks and fintech firms must:

• Embed deepfake detection into KYC and fraud-prevention workflows to preempt synthetic identity fraud before accounts are approved.

• Conduct deepfake audits as part of AML compliance reviews to assess vulnerabilities across onboarding, authentication and transaction monitoring.

• Leverage AI-driven solutions that can adapt to evolving deepfake threats, because fraudsters are already doing the same.

The financial industry is at an inflection point. Deepfakes are no longer an emerging risk; they are here, reshaping financial crime in real time. Institutions that fail to adapt could face not only financial losses but also regulatory scrutiny, reputational damage and the erosion of customer trust. The question is no longer if banks should act, but how. And in the fight against financial crime, waiting is the worst strategy of all.