
Latest news with #CyberSignals

The digital divide's dark side: cybersecurity in African higher education

IOL News

27-05-2025


The digital revolution is transforming African education, with universities embracing online learning and digital systems. However, this progress brings a crucial challenge: cybersecurity. Are African higher education institutions (HEIs) prepared for the escalating cyber threats?

The Growing Threat Landscape

African HEIs are increasingly targeted by cybercriminals. Microsoft's Cyber Signals report ranks education as the third most targeted sector globally, with Africa a particularly vulnerable region. Incidents such as the theft of sensitive data at Tshwane University of Technology (TUT) and the hacking of a master's degree platform at Abdelmalek Essaadi University in Morocco demonstrate the reality of these threats.

Several factors contribute to HEI vulnerability. Universities hold vast amounts of sensitive data, including student records, research, and intellectual property. Their open nature, with diverse users and international collaborations, creates weaknesses, especially in email systems. Limited resources, legacy systems, and a lack of awareness further exacerbate these issues.

Examples of Cyber Threats in African Education

Educational institutions have fallen prey to social engineering and spoofing attacks. Universities in Mpumalanga and schools in the Eastern Cape, for example, have been victimised by cybercriminals using link-based ransomware attacks, with some institutions locked out of their data for more than a year.

AI-assisted cyber-attacks on the rise

Observer

03-05-2025


Scams and crimes involving artificial intelligence (AI) have been on the rise, with most occurring in e-commerce, including bogus business deals, proposed partnerships, job offers, and offers of technical support, according to the latest Cyber Signals report on AI-assisted scams issued by Microsoft.

Between April 2024 and April 2025, Microsoft thwarted $4 billion in fraud attempts, rejected 49,000 fraudulent partnership enrolments, and blocked around 1.6 million bot sign-up attempts per hour.

"An increase in cyber-attacks involving AI has been observed as AI has lowered the barrier to entry, allowing even low-skilled attackers to create sophisticated scams, ranging from deepfake-driven phishing to AI-generated sham websites mimicking legitimate businesses," the report noted, adding: "With AI, tactics that used to take scammers days or weeks to create can now be done in minutes."

Recommended precautions include not falling for "limited-time" deals and countdown timers, clicking only on verified advertisements, and being sceptical of social proof, since scammers can use AI-generated reviews, influencer endorsements, and testimonials to exploit trust.

The report warned that job applicants are particularly vulnerable and should take precautions when applying for jobs, attending interviews, and accepting offers. Legitimate employers will never ask for personal or financial information, demand payment for a job opportunity, or insist on communicating via unofficial channels.

AI has made it easier and cheaper for fraud and cybercrime actors to generate believable content for cyberattacks at a rapid rate. The AI software used in fraud attempts ranges from legitimate apps misused for malicious purposes to fraud-oriented tools traded in the cybercrime underground.
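One of the report's clearest red flags is a recruiter writing from a free mail provider rather than an official company address. The following is a minimal, hypothetical Python sketch of that check; the provider list, the OFFICIAL_DOMAINS mapping, and the job_offer_red_flags helper are illustrative assumptions for this article, not tooling described in the Cyber Signals report.

```python
# Hypothetical red-flag check for job-offer emails: legitimate recruiters
# normally write from the company's own domain, not a free mail provider.
# The lists below are illustrative assumptions, not from Microsoft's report.

FREE_MAIL_PROVIDERS = {"gmail.com", "outlook.com", "yahoo.com", "proton.me"}
OFFICIAL_DOMAINS = {"contoso": "contoso.com"}  # company name -> expected domain

def job_offer_red_flags(sender: str, claimed_company: str) -> list[str]:
    """Return a list of warning signs for an unsolicited job-offer email."""
    flags = []
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in FREE_MAIL_PROVIDERS:
        flags.append(f"sender uses free mail provider '{domain}'")
    expected = OFFICIAL_DOMAINS.get(claimed_company.lower())
    if expected and domain != expected:
        flags.append(f"domain '{domain}' does not match official '{expected}'")
    return flags

if __name__ == "__main__":
    # Both checks fire: free provider, and not the company's own domain.
    print(job_offer_red_flags("hr.contoso@gmail.com", "Contoso"))
```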

Microsoft reveals how AI tools have made e-commerce fraud, job scams and tech support frauds more dangerous

Time of India

21-04-2025


Microsoft, in its latest Cyber Signals report, says that artificial intelligence has significantly lowered barriers for cybercriminals, enabling more sophisticated and convincing fraud schemes. Between April 2024 and April 2025, Microsoft thwarted $4 billion in fraud attempts, rejected 49,000 fraudulent partnership enrollments, and blocked approximately 1.6 million bot signup attempts per hour.

E-commerce fraud: AI creates convincing fake storefronts in minutes

AI tools now allow fraudsters to create convincing e-commerce websites in minutes rather than the days or weeks they once required. These sites feature AI-generated product descriptions, images, and fake customer reviews that mimic legitimate businesses. AI-powered customer service chatbots add another layer of deception, interacting with customers and stalling complaints with scripted excuses to delay chargebacks. Microsoft reports that much of this AI-powered fraud originates from China and Germany, the latter being targeted because it is one of the largest e-commerce markets in the European Union. To combat these threats, Microsoft has implemented fraud detection systems across its products, including Microsoft Defender for Cloud and Microsoft Edge, which features website typo protection and domain impersonation detection using deep learning technology.

Job scams: AI powers fake interviews and employment offers

Employment fraud has evolved, with generative AI enabling scammers to create fake job listings, stolen credentials, and AI-powered email campaigns targeting job seekers. These scams often appear legitimate through AI-powered interviews and automated correspondence, making fraudulent offers increasingly difficult to identify. Warning signs include unsolicited job offers promising high pay for minimal qualifications, requests for personal information including bank details, and offers that seem too good to be true. Microsoft advises job seekers to verify employer legitimacy by cross-checking company details on official websites and platforms like LinkedIn, and to be wary of emails from free domains rather than official company email addresses.

Tech support fraud: AI enhances social engineering attacks

While some tech support scams don't yet leverage AI, Microsoft has observed financially motivated groups like Storm-1811 impersonating IT support through voice phishing to gain access to victims' devices via legitimate tools like Windows Quick Assist. AI tools can expedite the collection and organization of information about targeted victims to create more credible social engineering lures. In response, Microsoft blocks an average of 4,415 suspicious Quick Assist connection attempts daily, approximately 5.46% of global connection attempts. The company has added warning messages in Quick Assist that alert users to possible scams before they grant access to their devices, and has developed a Digital Fingerprinting capability that leverages AI and machine learning to detect and prevent fraud.

Microsoft is taking a proactive approach to fraud prevention through its Secure Future Initiative. In January 2025, the company introduced a new policy requiring product teams to perform fraud prevention assessments and implement fraud controls as part of their design process.
Microsoft has also joined the Global Anti-Scam Alliance to collaborate with governments, law enforcement, and other organizations to protect consumers from scams.
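Edge's typo protection is proprietary and, per the report, based on deep learning, but the underlying idea of typosquat detection can be illustrated with a far simpler technique. The Python sketch below flags domains that sit within a small edit distance of a watchlist of well-known domains; the KNOWN_DOMAINS list and the distance threshold are illustrative assumptions, not Microsoft Edge's actual logic.

```python
# Minimal typosquat-detection sketch: flag a domain that is "almost" a
# well-known one. This is NOT Microsoft Edge's algorithm (Edge uses deep
# learning); it is a simple edit-distance illustration of the same idea.

KNOWN_DOMAINS = ["microsoft.com", "linkedin.com", "paypal.com"]  # illustrative watchlist

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def looks_like_typosquat(domain: str, max_distance: int = 2) -> str | None:
    """Return the known domain this one imitates, or None if it looks clean."""
    for known in KNOWN_DOMAINS:
        if domain != known and levenshtein(domain, known) <= max_distance:
            return known
    return None

if __name__ == "__main__":
    for candidate in ["rnicrosoft.com", "linkedln.com", "example.org"]:
        hit = looks_like_typosquat(candidate)
        print(candidate, "->", f"suspicious (imitates {hit})" if hit else "ok")
```

Run as-is, the sketch flags "rnicrosoft.com" and "linkedln.com" while leaving "example.org" alone, which is exactly the class of near-miss domains that typosquatters rely on.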

Microsoft Thwarts US$4 Billion In Fraud Attempts As AI-Driven Scams Surge

BusinessToday

21-04-2025


Microsoft said it blocked nearly US$4 billion in fraud attempts between April 2024 and April 2025, highlighting the scale and sophistication of cybercrime threats amid a global rise in AI-powered scams.

According to the latest Cyber Signals report, Microsoft rejected 49,000 fraudulent partner enrolments and prevented approximately 1.6 million bot sign-up attempts per hour, as AI tools continue to lower the barrier for cybercriminals. Generative AI tools are now used to craft convincing fake websites, job scams, and phishing campaigns featuring deepfakes and cloned voices. Microsoft observed a growing trend of AI-assisted scams originating from regions such as China and Germany, where digital marketplaces are most active.

Threat actors can now build fraudulent e-commerce websites and customer service bots in minutes, leveraging AI-generated content to mislead consumers into trusting fake storefronts and reviews. These deceptive practices have become increasingly difficult to detect.

Microsoft's multi-layered response includes domain impersonation protection, scareware blockers, typo protection, and fake job detection systems across Microsoft Edge, LinkedIn, and other platforms. Windows Quick Assist has also been enhanced with in-product warnings and fraud detection; the tool now blocks over 4,400 suspicious connection attempts daily, thanks to Digital Fingerprinting and AI-driven risk signals.

Scammers continue to exploit job seekers by generating fake listings, AI-written interviews, and phishing campaigns. Microsoft recommends that job platforms enforce multifactor authentication and monitor for deepfake-generated interviews to mitigate risks. Meanwhile, groups like Storm-1811 have impersonated IT support via Windows Quick Assist, gaining unauthorised device access without using AI. Microsoft has since strengthened safeguards and suspended accounts linked to such abuse.

As part of its Secure Future Initiative, Microsoft introduced a new policy in January 2025 requiring all product teams to perform fraud risk assessments during the design phase. The goal is to embed security measures directly into the architecture of products and services.

Kelly Bissell, Corporate Vice-President of Anti-Fraud and Product Abuse, said Microsoft's defence strategy relies not only on technology but also on public education and industry collaboration. Microsoft is working closely with global enforcement agencies through the Global Anti-Scam Alliance (GASA) to dismantle criminal infrastructures. "Cybercrime is a trillion-dollar problem. AI gives us the ability to respond faster, but it also requires all of us (tech firms, regulators, and users) to work together," said Bissell.

To stay protected, consumers are advised to:

  • Verify job listings and company legitimacy.
  • Avoid unsolicited offers via text or personal emails.
  • Be wary of websites offering "too good to be true" deals.
  • Use browsers with fraud protection and never share personal or financial information with unverified sources.
