How unsanctioned staff AI use exposes firms to data breaches

Zawya | 09-06-2025

As chatbots continue to grow in prominence across the globe and grab the attention of billions of people, a silent privacy problem is brewing, putting at risk companies that process large volumes of personal data.
Cybersecurity firm Harmonic Security analysed over 176,000 prompts input by about 8,000 users into popular generative (gen) AI platforms like ChatGPT, Google's Gemini, Perplexity AI, and Microsoft's Copilot, and found that troves of sensitive information make their way into the platforms through the prompts.
In the quarter to March 2025, about 6.7 percent of the prompts tracked contained sensitive information including customer personal data, employee data, company confidential legal and finance details, or even sensitive code.
About 30 percent of the sensitive data was legal and financial information on companies' planned mergers or acquisitions, investment portfolios, legal matters, billing and payments, sales pipelines, or even financial projections.
Customer data like credit card numbers, transactions, or profiles also made their way to these platforms through the prompts, as did employee information like payroll details and employment profiles.
Developers seeking to improve or perfect their code using genAI tools also inadvertently passed copyrighted material, intellectual property, security keys, and network information to the bots, exposing their companies to fraudsters.
Asked about the safety of such information, chatbots like ChatGPT typically say the information is safe and not shared with third parties. Their terms of service say as much, but experts have a warning.
While the information may seem secure within the bots and pose no threat of breach, the experts say it is time companies started checking and restricting what information their employees feed into these platforms, or risk massive data breaches.

'One of the privacy risks when using AI platforms is unintentional data leakage,' warns Anna Collard, senior vice president for content strategy at cybersecurity firm KnowBe4 Africa. 'Many people don't realise just how much sensitive information they're inputting.'

'Cyber hygiene now includes AI hygiene. This should include restricting access to genAI tools without oversight or only allowing those approved by the company.'

While a majority of companies around the globe now acknowledge the importance of AI in their operations and are beginning to adopt it, only a few organisations have policies or checks for AI output.
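The kind of pre-submission check such a policy implies can be sketched in a few lines. This is a minimal illustration, not a product: the pattern names and regexes below are assumptions for demonstration, and a real deployment would rely on a dedicated data-loss-prevention (DLP) engine rather than hand-written rules.

```python
import re

# Hypothetical patterns for illustration only; real DLP tooling uses
# far more robust detectors (checksums, context, machine learning).
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|pk|AKIA)[A-Za-z0-9_\-]{16,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def scan_prompt(prompt: str) -> list:
    """Return the categories of sensitive data detected in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def is_allowed(prompt: str) -> bool:
    """Block the prompt from reaching the chatbot if anything matched."""
    return not scan_prompt(prompt)
```

A gateway sitting between staff and a genAI platform could run `is_allowed` on every outgoing prompt and log or block matches, which is one way to implement the oversight the experts describe.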
According to McKinsey's latest State of AI survey, which interviewed business leaders across the globe, only 27 percent of companies fully review content generated by AI. Forty-three percent of companies check less than 40 percent of such content.
But AI use is growing by the minute. Large language models (LLMs) like ChatGPT have eclipsed social media apps, which have long been digital magnets for user visits and hours of daily interaction.
Multiple studies, including the one by McKinsey, show that today nearly three in four employees use genAI to complete simple tasks like writing a speech, proofreading a write-up, drafting an email, analysing a document, generating a quotation, or even writing computer programs.
The rapid proliferation of Chinese LLMs like DeepSeek is also seen as increasing the threat of data breaches to companies. Over the past year, there has been an avalanche of new Chinese chatbots, including Baidu chat, Ernie Bot, Qwen chat, Manus, and Kimi Moonshot, among others.

'The Chinese government can likely just request access to this data, and data shared with them should be considered property of the Chinese Communist Party,' notes Harmonic in a recent report.
© Copyright 2022 Nation Media Group. All Rights Reserved. Provided by SyndiGate Media Inc. (Syndigate.info).


Related Articles

IntelliDent AI and Manipal Academy of Higher Education sign strategic MoU to transform healthcare through AI

Zawya | 4 hours ago

IntelliDent AI, a Dubai-based healthtech innovator transforming dentistry through artificial intelligence, has signed a Memorandum of Understanding (MoU) with the Manipal Academy of Higher Education (MAHE), India, an Institution of Eminence and global academic leader. This strategic three-year partnership aims to accelerate advancements in AI-powered oral healthcare through collaborative research, education, and entrepreneurship.

The MoU, executed on behalf of MAHE's Manipal College of Dental Sciences (MCODS), Mangalore, lays the foundation for academic-industry cooperation focused on developing future-ready dental AI solutions and equipping students with the technical and entrepreneurial skills to lead the next era of digital healthcare.

Key Pillars of the Collaboration:

Joint Research Programs: MAHE and IntelliDent will co-develop research initiatives in AI-driven diagnostics, public health, and healthcare innovation, contributing to academic publications, patents, and whitepapers.

Training & Internships: MAHE students will gain hands-on exposure through internships and mentorships at IntelliDent, supported by industry insights, guest lectures, and workshops.

Entrepreneurial Development: The collaboration will foster cohort-based learning modules, innovation hackathons, and startup support to accelerate the commercialization of student-led healthtech ideas.

Knowledge Exchange: Faculty, researchers, and industry experts will engage in reciprocal learning and cross-training to fuel innovation, skill-building, and strategic growth.

Affaan Shaikh, Founder & CEO of IntelliDent AI, shared his thoughts on the collaboration: 'This partnership is about reimagining healthcare through ethical AI and innovation. We are thrilled to work alongside one of India's top institutions to shape the next generation of AI health leaders.'

The MoU was signed by Dr. Giridhar P. Kini, Registrar of MAHE, and Mr. Affaan Shaikh, with active engagement from academic and innovation stakeholders from both organizations. This collaboration underscores IntelliDent AI's mission to scale accessible, AI-powered dental care globally and MAHE's continued commitment to integrating technology, research, and impact-driven education in the healthcare ecosystem. Together, MAHE and IntelliDent AI are building a bold future where education, innovation, and oral health equity intersect.

The anatomy of a crypto scam: How to stop and prevent common threats

Crypto Insight | 11 hours ago

In the vast world of crypto, the line between opportunity and deception is razor-thin. The traits that make digital assets attractive (anonymity, independence and rapid transferability) also create fertile ground for fraudsters. Scams are woven into the fabric of the crypto ecosystem, exploiting trust, greed and fear. Unlike traditional financial systems with regulators, the decentralized crypto space allows opportunistic actors to thrive.

Understanding the structure of these scams is crucial. Just as forensic investigators dissect crime scenes, analyzing the architecture of crypto scams reveals the calculated maneuvers used to siphon funds. Each scam follows a familiar blueprint, preying on human psychology and the lack of regulation in decentralized finance (DeFi). Breaking down these frameworks provides valuable insights, helping investors and institutions recognize warning signs and fortify defenses in this high-risk environment.

The hook: perfect bait for every target

The first stage of any scam begins with the hook: a carefully crafted message or offer designed to capture the victim's attention and trigger an emotional response. Before setting the hook, scammers often invest significant time gathering information about their targets. They sift through leaked emails, phone numbers and other personal information to build a profile, crafting a personalized scam to increase the likelihood of success. By incorporating specific details, such as the target's language or personal information, the fraudsters add a layer of credibility that creates trust.

Once armed with their target's details, scammers move to the hook, preying on curiosity, trust and the promise of easy profits. Whether it's a phishing email, a fake account alert or an investment opportunity promising 'guaranteed returns,' the goal is to present something too enticing to ignore.
A common example is the fake exchange account scam, in which victims believe they have been given accidental access to a large sum of unclaimed money. The scam begins with an unexpected message stating, 'Your account has been created,' accompanied by login credentials for an account/wallet on a cryptocurrency exchange. The victim logs in and finds a balance of $10,000 waiting for them. Delight is replaced by greed as they attempt to withdraw the funds. But there's a catch: the system requires a small deposit, perhaps $1,000, to unlock the full amount. Once the fee is paid, the scam becomes clear: the exchange was fake, and the deposit is now in the hands of scammers.

This scam works because it preys on greed and the allure of a 'lucky break.' Victims become so focused on the reward that they ignore the warning signs, such as bad grammar in the message or lack of domain security on the website.

The setup: establishing trust and gaining access

After successfully hooking a victim's attention, scammers focus on building trust. This phase involves cultivating a sense of legitimacy and familiarity, with scammers going to great lengths to establish a personal connection. Scammers may even employ tactics like investment scams, where they spend weeks or months grooming their victims, engaging them in friendly conversations and feigned relationships to create a strong bond. Only once this trust is deeply established do they introduce the fraudulent investment or fake platform, luring victims to transfer funds that they will never see again.

The SIM swap attack is another devastating example whereby scammers exploit technological trust. By gathering personal information that is available publicly on social media, such as birthdays, pet names or even favorite sports teams, the fraudster can impersonate the victim. They then contact the target's mobile service provider, armed with these personal details, and request a phone number transfer to a SIM card in their possession.
With control over the victim's phone number, they can bypass two-factor authentication and gain access to crypto wallets, bank accounts and emails. The setup phase succeeds because scammers exploit both technological trust and personal familiarity. Humans are, by nature, social creatures, and scammers exploit this characteristic by building relationships that appear genuine. In the SIM swap, scammers manipulate trust in technology, using the victim's digital security habits against them.

The execution: draining funds through hidden mechanisms

Once access is gained, scammers move to the execution phase, where they drain funds using hidden mechanisms. This is the most devastating stage, as the carefully designed setup ends in significant financial losses for the victim before they've even realized something is wrong. For example, in 2018, a victim boarded a short flight, unaware that scammers had executed a SIM swap while he was offline. By the time the plane landed, funds had been siphoned from his crypto wallet. With control over his phone number, the scammers were able to bypass two-factor authentication (2FA) and gain access to everything.

Another good example is the poison wallet tactic, which targets large over-the-counter (OTC) platforms. Scammers trick targets into sending small amounts of funds to fraudulent addresses. They do this by creating wallet addresses that look very similar to the initial and final characters of the victim's legitimate address. They then send a small transaction to the victim, hoping the fake address will show up in the user's transaction history. When the victim next makes a transaction, they may unwittingly select the fake address from their history.

In this tactic, scammers take advantage of automation and human error. Bots monitor wallet balances, triggering automatic withdrawals when a balance crosses a certain threshold.
Meanwhile, the use of familiar-looking addresses plays on the victim's carelessness and trust in their own records. The stolen amounts might be small per transaction, but cumulatively, they siphon off thousands daily, all going virtually unnoticed.
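Because a poisoned address is built to survive exactly the check most people perform (eyeballing the first and last few characters), the practical defense is to compare the full string against a saved allowlist. The sketch below illustrates that idea under stated assumptions: the addresses are made up for demonstration, and real wallet software would also verify checksums and chain-specific formats.

```python
# Hypothetical trusted address for illustration; not a real wallet.
SAVED_ADDRESSES = {
    "0xAB12cd34ef56ab78cd90ef12ab34cd56ef78AB90",
}

def looks_similar(candidate: str, trusted: str, edge: int = 4) -> bool:
    """True when only the first/last few characters match the trusted
    address: exactly the visual check a poisoned address is built to pass."""
    return (candidate != trusted
            and candidate[:edge] == trusted[:edge]
            and candidate[-edge:] == trusted[-edge:])

def is_safe_destination(address: str) -> bool:
    """Accept only an exact, full-string match against the allowlist.

    A look-alike pulled from transaction history fails this check even
    though its prefix and suffix match a saved address."""
    if address in SAVED_ADDRESSES:
        return True
    return False  # anything else, including near-matches, is rejected
```

Checking `looks_similar` as well lets a wallet warn the user that a history entry appears to be a deliberate impostor rather than a simple typo.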

Apple executives held internal talks about buying Perplexity: Reports

Khaleej Times | 12 hours ago

Apple executives have held internal talks about potentially bidding for artificial intelligence startup Perplexity, Bloomberg News reported on Friday, citing people with knowledge of the matter. The discussions are at an early stage and may not lead to an offer, the report said, adding that the tech behemoth's executives have not discussed a bid with Perplexity's management.

"We have no knowledge of any current or future M&A discussions involving Perplexity," Perplexity said in response to a Reuters request for comment. Apple did not immediately respond to a Reuters request for comment.

Big tech companies are doubling down on investments to enhance AI capabilities and support growing demand for AI-powered services to maintain competitive leadership in the rapidly evolving tech landscape. Bloomberg News also reported on Friday that Meta Platforms tried to buy Perplexity earlier this year. Meta announced a $14.8 billion investment in Scale AI last week and hired Scale AI CEO Alexandr Wang to lead its new superintelligence unit.

Adrian Perica, Apple's head of mergers and acquisitions, has weighed the idea with services chief Eddy Cue and top AI decision-makers, as per the report. The iPhone maker reportedly plans to integrate AI-driven search capabilities, such as Perplexity AI, into its Safari browser, potentially moving away from its longstanding partnership with Alphabet's Google. Banning Google from paying companies to make it their default search engine is one of the remedies proposed by the US Department of Justice to break up its dominance in online search.

While traditional search engines such as Google still dominate global market share, AI-powered search options including Perplexity and ChatGPT are gaining prominence and seeing rising user adoption, especially among younger generations. Perplexity recently completed a funding round that valued it at $14 billion, Bloomberg News reported. A deal close to that figure would be Apple's largest acquisition so far.
The Nvidia-backed startup provides AI search tools that deliver information summaries to users, similar to OpenAI's ChatGPT and Google's Gemini.
