
Latest news with #KnowBe4

Encrypted QR Codes are here. Should workplaces be using them?

Yahoo

3 days ago



Companies go to great lengths to protect sensitive personal and financial information. But as cybercriminals become increasingly sophisticated, scams are on the rise, putting key information at risk of being compromised. Scammers often use phishing techniques to access secure data or personal information. Cybersecurity company Egress reports that QR Code scams accounted for 12.4% of all phishing emails in 2023, up from just 1.4% a year prior. The health care, hospitality, education, and insurance industries are the most likely to be targeted with phishing schemes, according to a 2024 report from cybersecurity firm KnowBe4.

While companies have long been aware of email phishing scams, QR Code "quishing" (QR Code + phishing) scams present a new way to manipulate and deceive. Quishing targets businesses in a few ways: Instead of taking users to legitimate websites, fake QR Codes direct users to fraudulent sites that may prompt them to provide banking access or enter personal information like passwords. Fake QR Codes can also prompt users to download malware onto their devices, which can wreak havoc.

Phishing is one of the most common crimes tracked by the Federal Bureau of Investigation's Internet Crime Complaint Center. Phishing crimes were reported nearly 299,000 times in 2023, a 161% increase since 2019, according to a 2023 report from the Bureau. Some of those scams hijack the ubiquitous black-and-white squares, prompting the Federal Trade Commission to issue a warning about the risks of QR Codes in 2023. Savvy workers might assume they're not susceptible to scams, but the geometric codes, difficult to distinguish from one another, can appear more benign than a sketchy URL. "They want you to scan the QR Code and open the URL without thinking about it," Alvaro Puig, a consumer education specialist for the FTC, said in a statement.
Quishing scams often leverage brand names or familiar emails to dupe busy employees. A common scam is a phishing email that impersonates Docusign: bad actors ask people to access funds from a "funds settlement agreement" by scanning a QR Code, which points them to a fake Docusign website where they fork over their sign-in credentials. Other common quishing attempts include Zoom meeting invitations and HR reminders for policy reviews, according to KnowBe4. The company's research found that globally, nearly 49% of clicks on phishing links in the third quarter of 2024 came from emails purporting to be about HR or IT matters.

Consumers may be familiar with public-facing scams, such as fraudsters tricking people into sharing payment information on fraudulent websites via fake QR Codes stickered onto parking meters. Investigations into fake QR Codes have been underway across the nation, in cities including Austin, Houston, and San Antonio in Texas, as well as Newtown, Massachusetts. Businesses, however, may face a different kind of threat from within if employees aren't aware of the risks of QR Codes, or if hackers infiltrate systems to access information protected by QR Codes that lack extra security.

As workplaces grapple with new security threats, businesses may seek extra protective measures. Uniqode examined industry reports and news coverage to find out how new technologies like encrypted QR Codes and ink authentication can help protect workplaces from scams.

Companies can protect themselves by using encrypted QR Codes to further secure their links. There are varying levels of encryption, but they generally rely on a secret decryption key that allows the scanner, typically a phone, to read the QR Code before moving forward. For example, Google experimented in 2012 with encrypted QR Codes that allowed people to log into their email from a public computer.
Users could scan the QR Code on a smartphone holding the approved credentials; once decryption succeeded, the email account on the public computer would automatically log in. QR Codes can also be password protected, requiring a code to open the link and proceed. Dynamic QR Codes allow users to add passwords that can be changed later, whereas static QR Codes, once created, stay the same.

Encrypted QR Codes can be helpful in settings where the shared information requires confidentiality or added security, such as health care records, event tickets, and legal documentation. Encryption helps protect the data stored inside a QR Code so only authorized users can access it. Businesses can use encryption to protect confidential consumer information during a breach and as an extra safeguard against extortion. An extra layer of security to guard data from bad actors can also give companies greater peace of mind.

Massachusetts Institute of Technology researchers pioneered invisible ink authentication, an advanced security measure designed to hide QR Codes in plain sight, such as on documents, to prevent counterfeiting. Invisible QR Codes are printed in fluorescent ink, so they cannot be seen by the naked eye or detected by a standard camera lens. Only users aware of the fluorescence can use a specialized filter to detect and scan the code, accessing key information securely.

Another advancement is algorithm-driven anti-copy technology, which adds a layer of security by preventing counterfeiters from passing off knock-offs as the real thing. Unlike regular QR Codes, which can be copied or modified to point users to a fake website, anti-copy QR Codes carry a subtle watermark and use an authentication algorithm that makes them extremely hard to fake. They're particularly useful in pairs when shipping a product: the top QR Code can be used multiple times along the shipping route, but the bottom QR Code can be used only once, authenticating the product.
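The authentication idea behind these schemes, that a code is trusted only if its payload carries a tag that only the issuer's secret key could have produced, can be sketched with a standard HMAC. Everything here (the key, the `#sig=` convention, the example URL) is a hypothetical illustration, not any vendor's actual format:

```python
import hashlib
import hmac

# Hypothetical shared secret held by the code's issuer and its verifier app.
SECRET_KEY = b"example-issuer-secret"

def sign_payload(url: str, key: bytes = SECRET_KEY) -> str:
    """Build a QR payload: the URL plus a tag only the key holder can produce."""
    tag = hmac.new(key, url.encode(), hashlib.sha256).hexdigest()
    return f"{url}#sig={tag}"

def verify_payload(payload: str, key: bytes = SECRET_KEY) -> bool:
    """Recompute the tag for the scanned payload and compare in constant time."""
    url, sep, tag = payload.partition("#sig=")
    if not sep:
        return False  # unsigned payload: treat as untrusted
    expected = hmac.new(key, url.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

genuine = sign_payload("https://example.com/ticket/123")
print(verify_payload(genuine))                                   # True
print(verify_payload("https://example.com/ticket/123#sig=bad"))  # False
```

A counterfeit code can copy the visible payload but cannot forge a valid tag for a different URL without the key, which is what makes the one-time "bottom" code in the shipping example verifiable.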
Old-school, street-smart techniques should not be underestimated in their capacity to protect people from quishing scams. "The good news is that the way to [be] safe from this malicious activity is to use the steps we have already learned from phishing and other social engineering attacks, such as only scanning codes from trusted sources, verifying links are legitimate and looking out for other red flags," Garrett McManaway, chief information security officer of Wayne State University, wrote in a blog post.

Duke University security guidelines recommend using only native QR Code scanners, checking the URL to see whether the code sends the user to the anticipated website, and checking for other details that signal an authentic website, such as a matching logo and color scheme. Another hallmark of a quishing scam is a false sense of urgency: attempts to push victims to act quickly without thinking, such as contacting someone immediately to deliver a package or logging in to an account due to alleged suspicious activity. People can protect themselves by being wary. Take a moment to think before scanning, avoid short URLs, and, if it seems fishy, there's a good chance it is phishy.

Story editing by Alizah Salario. Additional editing by Elisa Huang. Copy editing by Kristen Wegrzyn. Photo selection by Lacy Kerrick. This story was produced by Uniqode (Beaconstac) and was produced and distributed in partnership with Stacker.
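The manual checks recommended in this story, confirming the destination domain, distrusting shortened links, and looking for HTTPS, can be sketched as a small screening helper. The shortener list and the expected-domain rule are illustrative assumptions, not a complete defence:

```python
from urllib.parse import urlparse

# Illustrative shortener list; a real deployment would maintain its own.
SHORTENERS = {"bit.ly", "tinyurl.com", "t.co", "goo.gl", "ow.ly"}

def qr_red_flags(url: str, expected_domain: str) -> list[str]:
    """Return the red flags a decoded QR URL raises against an expected domain."""
    flags = []
    parsed = urlparse(url)
    host = (parsed.hostname or "").lower()
    if parsed.scheme != "https":
        flags.append("not HTTPS")
    if host in SHORTENERS:
        flags.append("shortened URL hides the real destination")
    if host != expected_domain and not host.endswith("." + expected_domain):
        flags.append(f"domain {host!r} is not {expected_domain!r}")
    return flags

print(qr_red_flags("https://account.docusign.com/login", "docusign.com"))  # []
print(qr_red_flags("http://bit.ly/pay-now", "docusign.com"))
```

An empty result is not proof of safety; it simply means none of the listed red flags fired, which is why the human pause before scanning remains the first line of defence.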

Social Engineering 2.0: When artificial intelligence becomes the ultimate manipulator

Zawya

6 days ago



Once the domain of elite spies and con artists, social engineering is now in the hands of anyone with an internet connection – and AI is the accomplice. Supercharged by generative tools and deepfake technology, today's social engineering attacks are no longer sloppy phishing attempts. They're targeted, psychologically precise, and frighteningly scalable. Welcome to Social Engineering 2.0, where the manipulators don't need to know you personally. Their AI already does.

Deception at machine levels

Social engineering works because it bypasses firewalls and technical defences. It attacks human trust. From fake bank alerts to long-lost Nigerian princes, these scams have traditionally relied on generic hooks and low-effort deceit. But that has changed, and continues to. 'AI is augmenting and automating the way social engineering is carried out,' says Anna Collard, SVP Content Strategy & Evangelist at KnowBe4 Africa. 'Traditional phishing markers like spelling errors or bad grammar are a thing of the past. AI can mimic writing styles, generate emotionally resonant messages, and even recreate voices or faces – all within minutes.'

The result? Cybercriminals now wield the capabilities of psychological profilers. By scraping publicly available data – from social media to company bios – AI can construct detailed personal dossiers. 'Instead of one-size-fits-all lures, AI enables criminals to create bespoke attacks,' Collard explains. 'It's like giving every scammer access to their own digital intelligence agency.'

The new face of manipulation: Deepfakes

One of the most chilling evolutions of AI-powered deception is the rise of deepfakes – synthetic video and audio designed to impersonate real people. 'There are documented cases where AI-generated voices have been used to impersonate CEOs and trick staff into wiring millions,' notes Collard.
In South Africa, a recent deepfake video circulating on WhatsApp featured a convincingly faked endorsement by FSCA Commissioner Unathi Kamlana promoting a fraudulent trading platform; Nedbank had to publicly distance itself from the scam. 'We've seen deepfakes used in romance scams, political manipulation, even extortion,' says Collard. One emerging tactic involves simulating a child's voice to convince a parent they've been kidnapped – complete with background noise, sobs, and a fake abductor demanding money. 'It's not just deception anymore,' Collard warns. 'It's psychological manipulation at scale.'

The Scattered Spider effect

One cybercrime group exemplifying this threat is Scattered Spider. Known for its fluency in English and deep understanding of Western corporate culture, the group specialises in highly convincing social engineering campaigns. 'What makes them so effective,' notes Collard, 'is their ability to sound legitimate, form quick rapport, and exploit internal processes – often tricking IT staff or help-desk agents.' Their human-centric approach, amplified by AI tools such as audio deepfakes that spoof victims' voices to obtain initial access, shows how the combination of cultural familiarity, psychological insight, and automation is redefining what cyber threats look like. It's not just about technical access – it's about trust, timing, and manipulation.

Social engineering at scale

What once required skilled con artists days or weeks of interaction – establishing trust, crafting believable pretexts, and subtly nudging behaviour – can now be done by AI in the blink of an eye. 'AI has industrialised the tactics of social engineering,' says Collard. 'It can perform psychological profiling, identify emotional triggers, and deliver personalised manipulation with unprecedented speed.' The classic stages – reconnaissance, pretexting, rapport-building – are now automated, scalable, and tireless.
Unlike human attackers, AI doesn't get sloppy or fatigued; it learns, adapts, and improves with every interaction. The biggest shift? 'No one has to be a high-value target anymore,' Collard explains. 'A receptionist, an HR intern, or a help-desk agent – all may hold the keys to the kingdom. It's not about who you are – it's about what access you have.'

Building cognitive resilience

In this new terrain, technical solutions alone won't cut it. 'Awareness has to go beyond "don't click the link,"' says Collard. She advocates for building 'digital mindfulness' and 'cognitive resilience' – the ability to pause, interrogate context, and resist emotional triggers. This means:

● Training staff to recognise emotional manipulation, not just suspicious URLs.
● Running simulations using AI-generated lures, not outdated phishing templates.
● Rehearsing calm, deliberate decision-making under pressure, to counter panic-based manipulation.

Collard recommends unconventional tactics, too. 'Ask HR interviewees to place their hand in front of their face during video calls – it can help spot deepfakes in hiring scams,' she says. Families and teams should also consider pre-agreed code words or secrets for emergency communications, in case AI-generated voices impersonate loved ones.

Defence in depth – human and machine

While attackers now have AI tools, so too do defenders. Behavioural analytics, real-time content scanning, and anomaly detection systems are evolving rapidly. But Collard warns: 'Technology will never replace critical thinking. The organisations that win will be the ones combining human insight with machine precision.' And with AI lures growing more persuasive, the question is no longer whether you'll be targeted – but whether you'll be prepared. 'This is a race,' Collard concludes. 'But I remain hopeful. If we invest in education, in critical thinking and digital mindfulness, in the discipline of questioning what we see and hear – we'll have a fighting chance.'
Distributed by APO Group on behalf of KnowBe4.

Entwistle & Cappucci LLP Files Amended Securities Class Action Complaint Against KnowBe4, Inc. and Related Defendants

Business Wire

13-06-2025



NEW YORK--(BUSINESS WIRE)--Entwistle & Cappucci LLP today announced that it filed a First Amended Class Action Complaint ('Complaint') against KnowBe4, Inc. ('KnowBe4'), certain of KnowBe4's directors, KKR & Co. Inc., Elephant Partners, Vista Equity Partners Management, LLC ('Vista') and certain of their affiliates (collectively, 'Defendants') on behalf of a class ('Class') consisting of all persons or entities that: (a) sold shares of KnowBe4 common stock from October 12, 2022 through February 1, 2023, including those who sold shares into the 'take private' acquisition ('Merger') of KnowBe4 by Vista and its affiliates on February 1, 2023; and/or (b) held shares of KnowBe4 as of the December 7, 2022 record date and were entitled to vote on the Merger.

The action ('Action') seeks to recover damages on behalf of investors who were damaged as a result of allegedly false and misleading statements and omissions of material facts in the October 12, 2022 press release issued by KnowBe4 and Vista announcing the Merger, the December 22, 2022 proxy statement and subsequent amendment issued by Defendants on January 18, 2023 ('Proxy'), and related filings with the U.S. Securities and Exchange Commission ('SEC').

Among other things, the Complaint alleges the Proxy and other solicitation materials misled investors regarding the true value of KnowBe4's shares, omitted that KKR increased its equity rollover into the post-Merger entity after it learned of the Merger price, and failed to disclose advantages Defendants provided to Vista over other potential bidders during the sales process leading to the Merger.

The Action was filed in the United States District Court for the Southern District of Florida and is captioned: Water Island Event-Driven Fund v. KnowBe4, Inc., No. 25-cv-22574-CMA. The Complaint asserts claims under Sections 10(b), 14(a) and 20(a) of the Exchange Act and SEC Rules 10b-5 and 14a-9 promulgated thereunder.
As indicated in a prior press release dated June 6, 2025, if you wish to serve as a lead plaintiff in this matter, you must file a motion with the Court no later than August 5, 2025. Any member of the proposed Class may move the Court to serve as a lead plaintiff through counsel of their choice, or they may choose to do nothing and remain a member of the Class.

If you wish to discuss this Action or have any questions concerning this notice or your rights or interests, please contact: Robert N. Cappucci, Esq. or Andrew M. Sher, Esq. of Entwistle & Cappucci at (212) 894-7200 or via e-mail at rcappucci@ or asher@

About Entwistle & Cappucci

Entwistle & Cappucci is a national law firm providing exceptional legal representation to clients in the most complex and challenging legal matters. Our practice encompasses all areas of litigation, corporate transactions, bankruptcy, insurance, corporate investigations and white-collar defense. Our clients include public and private corporations, major hedge funds, public pension funds, governmental entities, leading institutional investors, domestic and foreign financial services companies, emerging business enterprises and individual entrepreneurs.

Entwistle & Cappucci LLP Files a Securities Class Action Against KnowBe4, Inc. and Related Defendants

Business Wire

06-06-2025



NEW YORK--(BUSINESS WIRE)--Entwistle & Cappucci LLP today announced that its ongoing investigation has led to the filing of a class action ('Action') against KnowBe4, Inc. ('KnowBe4'), certain of KnowBe4's directors, KKR & Co. Inc., Elephant Partners, Vista Equity Partners Management, LLC ('Vista') and certain of their affiliates (collectively, 'Defendants') on behalf of a class ('Class') consisting of all persons or entities that: (a) sold shares of KnowBe4 common stock from October 12, 2022 through February 1, 2023, including those who sold shares into the 'take private' acquisition ('Merger') of KnowBe4 by Vista and its affiliates on February 1, 2023; and/or (b) held shares of KnowBe4 as of the December 7, 2022 record date and were entitled to vote on the Merger.

The Action seeks to recover damages on behalf of investors who were damaged as a result of allegedly false and misleading statements and omissions of material facts in the October 12, 2022 press release issued by KnowBe4 and Vista announcing the Merger, the December 22, 2022 proxy statement and subsequent amendment issued by Defendants on January 18, 2023 ('Proxy'), and related filings with the U.S. Securities and Exchange Commission ('SEC').

Among other things, the complaint alleges the Proxy and other solicitation materials misled investors regarding the true value of KnowBe4's shares, omitted that KKR increased its equity rollover into the post-Merger entity after it learned of the Merger price, and failed to disclose advantages Defendants provided to Vista over other potential bidders during the sales process leading to the Merger.

The Action was filed in the United States District Court for the Southern District of Florida and is captioned: Water Island Event-Driven Fund v. KnowBe4, Inc., No. 25-cv-22574. The complaint asserts claims under Sections 10(b), 14(a) and 20(a) of the Exchange Act and SEC Rules 10b-5 and 14a-9 promulgated thereunder.
If you wish to serve as a lead plaintiff in this matter, you must file a motion with the Court no later than August 5, 2025. Any member of the proposed Class may move the Court to serve as a lead plaintiff through counsel of their choice, or they may choose to do nothing and remain a member of the Class.

If you wish to discuss this Action or have any questions concerning this notice or your rights or interests, please contact: Robert N. Cappucci, Esq. or Andrew M. Sher, Esq. of Entwistle & Cappucci at (212) 894-7200 or via e-mail at rcappucci@ or asher@

About Entwistle & Cappucci

Entwistle & Cappucci is a national law firm providing exceptional legal representation to clients in the most complex and challenging legal matters. Our practice encompasses all areas of litigation, corporate transactions, bankruptcy, insurance, corporate investigations and white-collar defense. Our clients include public and private corporations, major hedge funds, public pension funds, governmental entities, leading institutional investors, domestic and foreign financial services companies, emerging business enterprises and individual entrepreneurs.

Perilous prompts: How generative Artificial Intelligence (AI) is leaking companies' secrets

Zawya

02-06-2025



Beneath the surface of GenAI's outputs lies a massive, mostly unregulated engine powered by data – your data. And whether it's through innocent prompts or habitual oversharing, users are feeding these machines with information that, in the wrong hands, becomes a security time bomb. A recent Harmonic report found that 8.5% of employee prompts to generative AI tools like ChatGPT and Copilot included sensitive data – most notably customer billing and authentication information – raising serious security, compliance, and privacy risks. Since ChatGPT's 2022 debut, generative AI has exploded in popularity and value – surpassing $25 billion in 2024 – but its rapid rise brings risks many users and organisations still overlook. 'One of the privacy risks when using AI platforms is unintentional data leakage,' warns Anna Collard, SVP Content Strategy & Evangelist at KnowBe4 Africa. 'Many people don't realise just how much sensitive information they're inputting.'

Your data is the new prompt

It's not just names or email addresses that get hoovered up. When an employee asks a GenAI assistant to 'rewrite this proposal for client X' or 'suggest improvements to our internal performance plan,' they may be sharing proprietary data, customer records, or even internal forecasts. If done via platforms with vague privacy policies or poor security controls, that data may be stored, processed, or – worst-case scenario – exposed. And the risk doesn't end there. 'Because GenAI feels casual and friendly, people let their guard down,' says Collard. 'They might reveal far more than they would in a traditional work setting – interests, frustrations, company tools, even team dynamics.' In aggregate, these seemingly benign details can be stitched into detailed profiles by cybercriminals or data brokers – fuelling targeted phishing, identity theft, and sophisticated social engineering.
A surge of niche platforms, a bunch of new risks

Adding fuel to the fire is the rapid proliferation of niche AI platforms. Tools for generating product mock-ups, social posts, songs, resumes, or legalese are sprouting up at speed – many of them developed by small teams using open-source foundation models. While these platforms may be brilliant at what they do, they may not offer the hardened security architecture of enterprise-grade tools. 'Smaller apps are less likely to have been tested for edge-case privacy violations or undergone rigorous penetration tests and security audits,' says Collard. 'And many have opaque or permissive data usage policies.' Even if an app's creators have no malicious intent, weak oversight can lead to major leaks. Collard warns that user data could end up in:

● Third-party data broker databases
● AI training sets without consent
● Cybercriminal marketplaces following a breach

In some cases, the apps might themselves be fronts for data-harvesting operations.

From individual oversights to corporate exposure

The consequences of oversharing aren't limited to the person typing the prompt. 'When employees feed confidential information into public GenAI tools, they can inadvertently expose their entire company,' explains Collard. 'That includes client data, internal operations, product strategies – things that competitors, attackers, or regulators would care deeply about.' While unauthorised shadow AI remains a major concern, the rise of semi-shadow AI – paid tools adopted by business units without IT oversight – is increasingly risky, with free-tier generative AI apps like ChatGPT responsible for 54% of sensitive data leaks due to permissive licensing and lack of controls, according to the Harmonic report.

So, what's the solution?

Responsible adoption starts with understanding the risk – and reining in the hype. 'Businesses must train their employees on which tools are OK to use, and what's safe to input and what isn't,' says Collard.
'They should also implement real safeguards – not just policies on paper. Cyber hygiene now includes AI hygiene. This should include restricting access to generative AI tools without oversight, or only allowing those approved by the company.'

'Organisations need to adopt a privacy-by-design approach when it comes to AI adoption,' she says. 'This includes only using AI platforms with enterprise-level data controls and deploying browser extensions that detect and block sensitive data from being entered.' As a further safeguard, she believes internal compliance programmes should align AI use with both data protection laws and ethical standards. 'I would strongly recommend companies adopt ISO/IEC 42001, an international standard that specifies requirements for establishing, implementing, maintaining and continually improving an Artificial Intelligence Management System (AIMS),' she urges.

Ultimately, by balancing productivity gains with the need for data privacy and maintaining customer trust, companies can succeed in adopting AI responsibly. As businesses race to adopt these tools to drive productivity, that balance – between 'wow' and 'whoa' – has never been more crucial.

Distributed by APO Group on behalf of KnowBe4.
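The "detect and block sensitive data" safeguard described in this story can be sketched as a simple pre-send prompt filter. The patterns below are illustrative assumptions, not a production DLP rule set, which would be far broader:

```python
import re

# Illustrative patterns only -- a real DLP filter would cover many more types.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API key/token": re.compile(r"\b(?:sk|api|token)[_-][A-Za-z0-9]{16,}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the kinds of sensitive data found before a prompt leaves the company."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

prompt = "Rewrite this proposal for jane.doe@example.com, card 4111 1111 1111 1111"
print(scan_prompt(prompt))  # ['email address', 'credit card number']
```

A filter like this would sit between the employee and the GenAI tool, blocking or redacting the prompt when any pattern fires rather than relying on policy documents alone.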
