
Latest news with #identityTheft

RIP Microsoft Passwords: Here's How to Set Up a Passkey Before the August Deadline

CNET

19 hours ago



Risky password habits can have big consequences, and some companies are making it easier to stay secure online by ditching decades-old password methods and implementing passkeys instead. Microsoft intends to do the same starting in August. Whether your password is easy to guess or gets leaked in a company data breach, hackers who get hold of it can open the door to identity theft and fraud. A recent CNET survey found that 49% of US adults have risky password habits, like using the same password for multiple accounts or building passwords around personal information, such as their name. If you use Microsoft Authenticator to store your passwords, here's what you need to know about the transition and how to set up passkeys before the deadline.

Microsoft Authenticator won't support passwords after August

Currently, Microsoft Authenticator houses all of your passwords and lets you sign into your Microsoft accounts using a PIN, facial recognition such as Windows Hello, or other biometric data like a fingerprint. Authenticator can also be used in other ways, such as verifying that it's you logging in if you forgot your password, or providing two-factor authentication as an extra layer of security for your Microsoft accounts. How you use the app will change starting this month, according to Microsoft:

  • June 2025 - You'll no longer be able to add passwords to the Authenticator app.
  • July 2025 - You won't be able to use the autofill password function.
  • Aug. 2025 - You'll no longer be able to use saved passwords.

If you still want to use passwords instead of passkeys, you can store them in Microsoft Edge. But CNET experts recommend adopting passkeys during this transition.
"Passkeys use public key cryptography to authenticate users, rather than relying on users themselves creating their own (often weak or reused) passwords to access their online accounts," said Attila Tomaschek, CNET software senior writer and digital security expert.

Why passkeys are a better alternative to passwords

So what exactly is a passkey? It's a credential created by the Fast Identity Online (FIDO) Alliance that uses biometric data or a PIN to verify your identity and access your account. Think of using your fingerprint or Face ID to log into your account. That's generally safer than using a password that is easy to guess or susceptible to a phishing attack.

"Passwords can be cracked, whereas passkeys need both the public and the locally stored private key to authenticate users, which can help mitigate risks like falling victim to phishing and brute-force or credential-stuffing attacks," Tomaschek added.

Unlike passwords, a passkey's private key isn't stored on a server; it's stored only on your personal device. More conveniently, this takes the guesswork out of remembering your passwords and removes the need for a password manager.

How to set up a passkey in Microsoft Authenticator

Microsoft said in a May 1 blog post that it will automatically detect the best passkey to set up and make that your default sign-in option. "If you have a password and 'one-time code' set up on your account, we'll prompt you to sign in with your one-time code instead of your password. After you're signed in, you'll be prompted to enroll a passkey. Then the next time you sign in, you'll be prompted to sign in with your passkey," according to the blog post.

To set up a new passkey, open the Authenticator app on your phone, tap your account and select "Set up a passkey." You'll be prompted to log in with your existing credentials, and once you're logged in, you can set up the passkey.
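The public-key flow Tomaschek describes can be illustrated with a deliberately insecure toy. The sketch below uses textbook RSA with tiny fixed primes purely for illustration; the primes, function names, and challenge-response framing are illustrative assumptions, not Microsoft's or FIDO's actual implementation, which relies on vetted algorithms and secure on-device key storage:

```python
import hashlib
import secrets

# Toy RSA keypair with tiny fixed primes -- for illustration only, NOT secure.
p, q = 61, 53
n = p * q                          # public modulus
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent, kept on the device

def sign(challenge: bytes) -> int:
    # The device hashes the server's challenge and signs the digest
    # with the private key, which never leaves the device.
    digest = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(digest, d, n)

def verify(challenge: bytes, signature: int) -> bool:
    # The server checks the signature using only the public key (n, e).
    digest = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(signature, e, n) == digest

# Passkey-style login: the server issues a fresh random challenge,
# the device signs it locally, the server verifies the signature.
challenge = secrets.token_bytes(16)
assert verify(challenge, sign(challenge))
```

The key property: the server stores only the public half, so a phished or breached server database yields nothing an attacker can replay, which is what makes this flow resistant to the credential-stuffing attacks Tomaschek mentions.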

Hardworking teacher, 28, lived as recluse to build impressive nest egg...then scammers got him on the phone

Daily Mail

20 hours ago

  • Business


A hardworking young teacher lived a reclusive life to build up his savings, but fell victim to phone scammers who stole it all with a few strokes of a keyboard. Russell Leahy, 28, practiced a frugal lifestyle by avoiding going out on the weekends and traveling, but his life came to a screeching halt when he realized he'd been the victim of a scam.

Leahy, of Fort Worth, Texas, lost over $32,000 after he gave his bank account information to a fraudster who manipulated him into believing they were with Chase Bank's fraud department. The teacher said the scammers had mastered Chase's protocol, playing the bank's recording at the start of the call that says, 'This call is being recorded for quality and training purposes.' The fraudsters quoted Leahy's exact bank balances and convinced him that his account had been compromised.

Believing he needed to move his money into a new account to protect his savings, Leahy handed over the information to protect the cash he had worked for. The scammers also sent him text messages and told him not to alert the tellers at his bank, as they were investigating a leak. Because of the scammers' ability to mirror Chase's fraud process, Leahy thought nothing of it until he noticed his money was gone.

'I had literally never felt like the wind had been taken out of my sails before,' Leahy told local ABC affiliate WFAA News. 'I'd never really felt like I was gonna pass out before, but it really felt like the end of the world for me.' Leahy said the experience was 'violating' and felt like he was 'being taken advantage of.' He filed a claim with Chase Bank, but received only $2,247.85.

'These types of scams are heartbreaking. We urge all consumers to ignore phone, text or internet requests for money or access to their computer or bank accounts,' a representative for Chase said. 'Banks and legitimate companies won't make these requests, but scammers will.'
Fraud differs from scams in that fraud involves someone illegally gaining access to an account without the holder's permission. Scams, on the other hand, are 'a deceptive scheme or trick used to cheat someone out of their money or other valuable assets,' according to Chase. Scammers use manipulative tactics to deceive victims with non-existent products, phishing emails, fake websites, and spoofed caller IDs.

Leahy has started a fundraiser on GoFundMe to help alleviate the stress of living paycheck-to-paycheck while he fights Chase for his money. 'I've hired a lawyer. I've filed complaints with the CFPB, the Texas Attorney General, and the FTC,' he wrote in the description. 'I've done everything a person can do and I'm still left trying to survive on what little I have left.'

Leahy said his fraud claim with Chase included a police report, screenshots of his texts and calls with the scammers, IRS documentation, and ATM receipts. He also claimed the bank sent a PSA email describing the very scam he fell victim to just days after he submitted his claim.

Despite the stress he's endured, Leahy said he hopes the silver lining is that others learn from his story. 'I'd rather me be the sacrificial lamb for the rest of these people and maybe save other people's money from being stolen,' Leahy told WFAA.

Chase advises customers not to act on calls or texts from a supposed representative telling them to send money to another account; the bank never asks customers to send money to themselves. Customers who receive similar calls should hang up and call the number on the back of their Chase card.

Deepfake interviews: Navigating the growing AI threat in recruitment and organizational security

Fast Company

2 days ago

  • Business


The breakneck speed of artificial intelligence (AI) technology has fundamentally reshaped how businesses manage recruitment, communication, and information dissemination. Among these developments, deepfake technology has emerged as a significant threat, particularly through its use in fraudulent interviews. Deepfake interviews leverage advanced AI techniques, predominantly Generative Adversarial Networks (GANs), to generate hyper-realistic but entirely fabricated audio, video, or imagery. These synthetic media forms convincingly manipulate appearances, voices, and actions, making it exceedingly difficult for average users, and even experts, to discern authenticity.

IMPLICATIONS AND MOTIVATIONS FOR DEEPFAKE USE

The motivations behind deploying deepfake technology for scams and fraud are varied but consistently damaging. Criminals use deepfakes primarily for financial gain, identity theft, psychological manipulation, and disinformation. For instance, deepfakes can facilitate vishing (voice phishing), whereby scammers convincingly mimic a trusted individual's voice, deceiving victims into transferring funds or revealing sensitive information. Additionally, these AI-generated falsifications enable sophisticated blackmail, extortion, and reputation sabotage through the dissemination of maliciously altered content.

Further, deepfakes significantly disrupt corporate trust and operational integrity. Financial crimes involving deepfakes include unauthorized transactions orchestrated by impersonating company executives. A notable case occurred in Hong Kong, where cybercriminals successfully impersonated executives, causing multi-million-dollar losses and severe reputational harm. Beyond immediate financial damage, deepfake attacks can erode consumer trust, destabilize markets, and inflict lasting damage to brand reputation.
Moreover, malicious actors exploit deepfake technology politically, disseminating misinformation designed to destabilize governments, provoke conflicts, and disrupt public order. Particularly during elections or significant political events, deepfakes can substantially manipulate public opinion, challenging the authenticity of democratic processes.

TECHNOLOGICAL MECHANISMS AND ACCESSIBILITY

The core technological mechanism behind deepfake interviews involves GANs, in which AI systems are trained to produce realistic synthetic media by learning from authentic audio and video datasets. The recent democratization of this technology means anyone can produce deepfakes cheaply or freely using readily accessible online tools, exacerbating risks. The emergence of 'deepfake-as-a-service' models on dark web platforms further compounds these concerns, enabling sophisticated attacks without extensive technical expertise.

In recruiting scenarios, deepfake candidates use synthetic identities, falsified resumes, fabricated references, and convincingly altered real-time video interviews to infiltrate organizations. These fraudulent candidates pose acute threats, particularly within industries that rely heavily on remote hiring, such as IT, finance, healthcare, and cybersecurity. Gartner predicts that one in four job candidates globally will be fake by 2028, highlighting the scale and urgency of the issue.

ORGANIZATIONAL RISKS AND CONSEQUENCES

Organizations face numerous operational and strategic threats from deepfake attacks. Financially, companies victimized by deepfake fraud experience significant losses, averaging $450,000 per incident. Deepfake infiltration can also lead to data breaches, loss of intellectual property, and compromised cybersecurity infrastructure, all of which carry significant financial and regulatory repercussions. Moreover, deepfake-driven scams lead to broader social engineering attacks.
For instance, remote IT workers fraudulently hired through deepfakes have successfully conducted espionage activities, extracting sensitive data or installing malware within corporate networks. Often linked to state-sponsored groups, such incidents further underscore the geopolitical dimension of deepfake threats.

PROACTIVE STRATEGIES FOR MITIGATION AND DEFENSE

Given the complexity and severity of deepfake threats, organizations must adopt comprehensive mitigation strategies. Technological solutions include deploying sophisticated AI-powered detection tools designed explicitly for deepfake identification. Platforms such as GetReal Security (no relationship) offer integrated solutions providing proactive detection, advanced forensic analysis, and real-time authentication of digital content. Combining AI-driven solutions with manual forensic analysis has proven particularly effective, as human expertise can spot contextual inconsistencies that AI alone might miss.

Furthermore, businesses should enhance cybersecurity awareness and employee training programs. Regular training on recognizing visual, audio, and behavioral anomalies in deepfake content is crucial. Organizations can adopt robust authentication measures like multi-factor authentication (MFA), biometric verification, and blockchain-based methods for verifying digital authenticity, although scalability remains challenging. Additionally, continuous investment in adaptive threat intelligence platforms ensures rapid responses to emerging threats. Scalable deepfake detection technologies, integrated seamlessly within recruitment workflows and organizational infrastructures, are now a necessity.

My team has encountered a few deepfake interviews ourselves, through contractors. Since then, we've required deeper vendor due diligence, adopted vendor technology to mitigate the risk, and added recruiter training to detect red flags.
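To make the GAN mechanism described earlier concrete, here is a minimal one-dimensional sketch, an illustrative toy rather than any production deepfake system: a tiny linear generator learns to mimic data drawn from a normal distribution with mean 4 by playing the adversarial game against a logistic-regression discriminator. All names and parameter choices are this sketch's own assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# Real data: samples from N(4, 1). Generator: G(z) = a*z + c with z ~ N(0, 1).
# Discriminator: D(x) = sigmoid(w*x + b), trained to tell real from fake.
w, b = 0.1, 0.0   # discriminator parameters
a, c = 1.0, 0.0   # generator parameters
lr, batch = 0.05, 64

for _ in range(2000):
    real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + c

    # Discriminator step: gradient ascent on log D(real) + log(1 - D(fake)).
    p_real, p_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr * (np.mean((1 - p_real) * real) - np.mean(p_fake * fake))
    b += lr * (np.mean(1 - p_real) - np.mean(p_fake))

    # Generator step: gradient ascent on log D(fake), i.e. try to fool
    # the freshly updated discriminator.
    p_fake = sigmoid(w * fake + b)
    a += lr * np.mean((1 - p_fake) * w * z)
    c += lr * np.mean((1 - p_fake) * w)

print(f"generator mean ~ {c:.2f} (real data mean is 4.0)")
```

The same adversarial loop, scaled up from two scalars to deep convolutional networks trained on faces and voices, is what lets deepfake tools converge on output the discriminator, and eventually a human viewer, cannot distinguish from real footage.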
COLLABORATIVE AND REGULATORY ACTIONS

Addressing deepfake threats effectively requires robust collaborative efforts across tech companies, government agencies, and industry bodies. Regulatory frameworks, such as the European Union's AI Act and various U.S. federal and state initiatives, represent important steps toward transparency, accountability, and comprehensive protection against malicious AI misuse. Nevertheless, current regulations remain fragmented and incomplete, underscoring the urgent need for standardized, comprehensive legislation tailored to the risks posed by deepfakes.

Deepfake technology presents profound ethical, societal, and cybersecurity challenges. The increasing prevalence and sophistication of AI-driven fraud in recruitment and beyond require proactive, multi-layered defensive measures. Organizations must enhance technical defenses, raise employee awareness, and advocate for robust regulatory frameworks. By taking informed, collaborative, and proactive approaches, businesses can significantly mitigate the risks associated with deepfake technology while leveraging its beneficial applications responsibly.

Houston man, 65, loses $500K life savings in elder scam — now he doesn't know if he'll ever be able to retire

Yahoo

3 days ago

  • Business


Like most of us, Hiep Nguyen regularly receives scam phone calls. Although he usually ignores these unexpected calls, one recent scammer had a convincing trap. After the caller ID identified the unknown caller as the Vietnamese Embassy, Nguyen picked up. The caller claimed that someone was perpetrating crimes, like money laundering, in his name, which meant he needed to rearrange his finances. Because he had received an official IRS letter two weeks earlier warning him that his identity might have been stolen, he immediately believed the situation was legitimate and started following the scammers' directions. Within five months, he had redirected, and lost, around $500,000. 'Now I can't sleep,' said Nguyen.

Nguyen immigrated to the United States over 50 years ago. Since arriving, he has worked tirelessly and saved his money along the way. Now aged 65, he had planned to retire in the near future. But after losing his life savings, retirement might no longer be feasible for him. 'I lost maybe $500,000,' said Nguyen. 'I don't know when I could retire or if I have to work until I die.'

The scam worked in part because, with the IRS's recent warning in mind, he thought a legitimate government agency was reaching out to help him protect his identity. So when the caller said he would need to send money to clear his name, Nguyen believed them. Over the following months, the scammers exchanged messages with him through Viber, an encrypted messaging app, sending forged government documents and AI-generated videos of official protocols.
And, as is typical of many scams, the criminals directed him to send the money via wire transfer. In the quest to clear his name, he transferred most of his life savings. Eventually, he realized the money wasn't coming back and worked up the courage to reach out for help by sharing the situation with his daughter. 'I was in shock, I did not know that this was going on for the past five months,' said his daughter, Kathy Nguyen. 'He didn't have anything left and he needed to reach out for help, but he was ashamed.'

Currently, Nguyen is in the process of selling his house to pay off the debts incurred throughout the ordeal. His daughter is doing everything she can to help him get back on his feet, including starting a GoFundMe that has already raised five figures.

Elder fraud is heartbreaking. But it's also more common than you might think, and it's on the rise. According to the FBI, elderly Americans lose more than $3 billion per year to scams. In 2023, that number was $3.4 billion, an increase of 11% from the previous year, with government impersonation scams accounting for $180 million in losses.

Although a government impersonation scam hurt the Nguyen family, it's not the only type of fraud out there. Some of the most common elder scams include romance scams, lottery scams, tech support scams, sweepstakes scams and loved-ones-in-trouble scams.

But vigilance can help you stay safe. Start by treating any unsolicited phone calls, mailings and other offers with caution and skepticism. If you do receive an unsolicited offer, search online for the appropriate contact information of the alleged party. For example, if 'your bank' calls to ask for a funds transfer, consider hanging up and dialing the official number on your bank statements to sort out any issues.
If you feel any pressure to act quickly, resist the urge. Scammers are known to use pressure tactics, such as threatening arrest, to push you into a rash decision and limit the time you have to second-guess what you're being told. Avoid making a decision on a tight deadline.

If you do fall victim to fraud, reporting it to the FBI is a good idea. Even if the agency cannot help you recoup your funds, your tip could protect potential future victims and help raise awareness of any new tactics scammers are using.

Healthcare records of 8m Americans leaked online... and the clue YOU are affected

Daily Mail

4 days ago

  • Health


A massive data leak has compromised the healthcare records of more than eight million Americans. Cybersecurity researchers found the information was exposed in an unprotected dental marketing database, allowing anyone to see the details online. The dataset included roughly 2.7 million patient profiles and 8.8 million appointment records, containing names, dates of birth, addresses, contact details, and sensitive healthcare metadata, enough to form a detailed profile of each patient.

Experts warned the leak gives attackers enough to carry out identity theft for financial gain. They are urging Americans to keep a close eye on medical and insurance records for signs of unauthorized activity. Anyone who has had a dental appointment recently may also want to enroll in an identity theft monitoring service.

The database is owned by Gargle, a Utah-based company that builds websites and offers marketing tools for dental practices; the company secured the database this month. It is unclear how long the database remained exposed or who may have accessed it before it was secured. Cybernews researchers discovered that a third-party entity was behind the leak. While Gargle did not issue a statement acknowledging ownership, Cybernews said all clues point to the company. The database lacked basic protections and cybersecurity monitoring, likely due to human error.

Although Gargle is not a healthcare provider itself, it operates key patient-facing systems, such as scheduling tools, online forms and payment services, which, if left unsecured, can become high-risk points of entry for a data breach. Daily Mail has contacted Gargle for comment.

The leak has raised concerns about third-party companies handling patient data, as the Health Insurance Portability and Accountability Act (HIPAA) mandates strong security protections for entities that deal with this sensitive information.
And it comes after researchers at cyber watchdog Check Point revealed a staggering 276 million patient records were compromised in 2024. The report suggested that eight in 10 Americans had some form of medical data stolen last year. The biggest hack of 2024 was also one of the largest healthcare data breaches in US history, affecting 190 million patients tied to Change Healthcare.

Now, the team at Check Point has identified a new healthcare cyberattack that could expose even more sensitive information than last year's breaches. According to the team, cybercriminals are impersonating practicing doctors to trick patients into revealing Social Security numbers, medical histories, insurance details, and other personal data. The phishing campaign has been active since March 20, and researchers estimate that 95 percent of its targets are in the US.

'In some versions of these phishing emails, cybercriminals deploy images of real, practicing doctors but pair them with fake names,' the Check Point team reported. The emails instruct recipients to contact a listed healthcare provider using a specific phone number, which is part of the scam. Researchers noted that Zocdoc has become a key tool in the attackers' arsenal, allowing them to use images of real doctors while disguising their identities with fake credentials. In one case, cybercriminals created a fake Zocdoc profile using a real doctor's photo but a fake name, then sent a fake pre-appointment message, booking confirmation, and additional instructions.

To safeguard patients' private information and finances, healthcare organizations are urged to install advanced phishing filters, conduct regular cybersecurity training and mock drills, and equip their IT teams to respond quickly to cyberthreats. In response to the rise in medical record breaches, a new set of HIPAA regulations was proposed in January 2025.
The goal is to enhance the protection of medical records through stronger data encryption and stricter compliance checks. The proposed rule is expected to cost $9 billion in the first year and $6 billion annually over the next four years. Patients affected by data breaches are urged to monitor their financial accounts, request credit reports, and consider placing fraud alerts. 'Patients are encouraged to review statements from their healthcare providers and report any inaccuracies immediately,' said Yale New Haven Health.
