Everyone's using AI at work. Here's how companies can keep data safe
Companies across industries are encouraging their employees to use AI tools at work. Their workers, meanwhile, are often all too eager to make the most of generative AI chatbots like ChatGPT. So far, everyone is on the same page, right?
There's just one hitch: How do companies protect sensitive company data from being hoovered up by the same tools that are supposed to boost productivity and ROI? After all, it's all too tempting to upload financial information, client data, proprietary code, or internal documents into your favorite chatbot or AI coding tool, in order to get the quick results you want (or that your boss or colleague might be demanding). In fact, a new study from data security company Varonis found that shadow AI—unsanctioned generative AI applications—poses a significant threat to data security, with tools that can bypass corporate governance and IT oversight, leading to potential data leaks. The study found that nearly all companies have employees using unsanctioned apps, and nearly half have employees using AI applications considered high-risk.
For information security leaders, one of the key challenges is educating workers about what the risks are and what the company requires. They must ensure that employees understand the types of data the organization handles—ranging from corporate data like internal documents, strategic plans, and financial records, to customer data such as names, email addresses, payment details, and usage patterns. It's also critical to communicate how each type of data is classified—for example, whether it is public, internal-only, confidential, or highly restricted. Once this foundation is in place, clear policies and access boundaries must be established to protect that data accordingly.
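As a minimal sketch of how such classification tiers and per-tool clearances might be encoded, the tier names follow the article, but the tool names and the policy table itself are hypothetical illustrations, not any vendor's actual product:

```python
from enum import IntEnum

class Classification(IntEnum):
    """Ordered sensitivity tiers, lowest to highest (per the article's examples)."""
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    RESTRICTED = 3

# Hypothetical policy table: the highest tier each tool is cleared to receive.
TOOL_CLEARANCE = {
    "approved-enterprise-chatbot": Classification.CONFIDENTIAL,
    "public-free-chatbot": Classification.PUBLIC,
}

def upload_allowed(tool: str, data_class: Classification) -> bool:
    """Allow an upload only if the tool is sanctioned and cleared for the data's tier."""
    clearance = TOOL_CLEARANCE.get(tool)  # unknown tools are shadow AI: deny by default
    return clearance is not None and data_class <= clearance

print(upload_allowed("public-free-chatbot", Classification.CONFIDENTIAL))   # False
print(upload_allowed("approved-enterprise-chatbot", Classification.INTERNAL))  # True
```

The point of the ordered tiers is that the check reduces to a single comparison, and anything not in the sanctioned list is denied automatically.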
'What we have is not a technology problem, but a user challenge,' said James Robinson, chief information security officer at data security company Netskope. The goal, he explained, is to ensure that employees use generative AI tools safely—without discouraging them from adopting approved technologies.
'We need to understand what the business is trying to achieve,' he added. Rather than simply telling employees they're doing something wrong, security teams should work to understand how people are using the tools, to make sure the policies are the right fit—or whether they need to be adjusted to allow employees to share information appropriately.
Jacob DePriest, chief information security officer at password protection provider 1Password, agreed, saying that his company is trying to strike a balance with its policies—to both encourage AI usage and also educate so that the right guardrails are in place.
Sometimes that means making adjustments. For example, the company released a policy on the acceptable use of AI last year as part of its annual security training. 'Generally, it's this theme of "Please use AI responsibly; please focus on approved tools; and here are some unacceptable areas of usage."' But the way it was written caused many employees to be overly cautious, he said.
'It's a good problem to have, but CISOs can't just focus exclusively on security,' he said. 'We have to understand business goals and then help the company achieve both business goals and security outcomes as well. I think AI technology in the last decade has highlighted the need for that balance. And so we've really tried to approach this hand in hand between security and enabling productivity.'
But companies that think banning certain tools is a solution should think again. Brooke Johnson, SVP of HR and security at Ivanti, said her company found that among people who use generative AI at work, nearly a third keep their AI use completely hidden from management. 'They're sharing company data with systems nobody vetted, running requests through platforms with unclear data policies, and potentially exposing sensitive information,' she said in a message.
The instinct to ban certain tools is understandable but misguided, she said. 'You don't want employees to get better at hiding AI use; you want them to be transparent so it can be monitored and regulated,' she explained. That means accepting the reality that AI use is happening regardless of policy, and conducting a proper assessment of which AI platforms meet your security standards.
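That "proper assessment" can be made concrete. As an illustration only, the criteria, weights, and threshold below are hypothetical and not Ivanti's, a security team might score each AI platform against a weighted checklist and triage it into approved, conditional, or blocked:

```python
# Hypothetical assessment criteria with weights reflecting relative importance.
CRITERIA = {
    "no_training_on_customer_data": 3,
    "soc2_or_iso27001_certified": 2,
    "sso_and_audit_logging": 2,
    "data_residency_controls": 1,
}

def assess(platform: dict, threshold: int = 6) -> str:
    """Return 'approved', 'conditional', or 'blocked' from a weighted criteria score."""
    score = sum(weight for name, weight in CRITERIA.items() if platform.get(name))
    if score >= threshold:
        return "approved"
    return "conditional" if score >= threshold - 2 else "blocked"

vendor = {
    "no_training_on_customer_data": True,
    "soc2_or_iso27001_certified": True,
    "sso_and_audit_logging": True,
    "data_residency_controls": False,
}
print(assess(vendor))  # approved (score 7 meets the threshold of 6)
```

The "conditional" band is the part that supports Johnson's transparency argument: it gives employees a sanctioned path for borderline tools instead of pushing usage underground.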
'Educate teams about specific risks without vague warnings,' she said. Help them understand why certain guardrails exist, she suggested, while emphasizing that it is not punitive. 'It's about ensuring they can do their jobs efficiently, effectively, and safely.'
Think securing data in the age of AI is complicated now? AI agents will up the ante, said DePriest.
'To operate effectively, these agents need access to credentials, tokens, and identities, and they can act on behalf of an individual—maybe they have their own identity,' he said. 'For instance, we don't want to facilitate a situation where an employee might cede decision-making authority over to an AI agent, where it could impact a human.' Organizations want tools to help facilitate faster learning and synthesize data more quickly, but ultimately, humans need to be able to make the critical decisions, he explained.
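One common way to keep humans making the critical decisions, sketched here with entirely hypothetical action names rather than any real agent framework, is an approval gate that lets an agent run low-impact actions autonomously but refuses high-impact ones without explicit human sign-off:

```python
# Hypothetical set of actions considered high-impact for a human-in-the-loop gate.
HIGH_IMPACT = {"delete_record", "send_payment", "change_access"}

def run_action(action: str, approved_by_human: bool = False) -> str:
    """Execute low-impact actions autonomously; hold high-impact ones for sign-off."""
    if action in HIGH_IMPACT and not approved_by_human:
        return "pending human approval"
    return f"executed: {action}"

print(run_action("summarize_report"))                      # low impact: runs autonomously
print(run_action("send_payment"))                          # held until a human approves
print(run_action("send_payment", approved_by_human=True))  # now allowed to execute
```

The gate is deliberately asymmetric: the agent can read and synthesize freely, but anything that changes money, data, or access stays behind a human decision, which is the balance DePriest describes.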
Whether it is the AI agents of the future or the generative AI tools of today, striking the right balance between enabling productivity gains and doing so in a secure, responsible way may be tricky. But experts say every company is facing the same challenge—and meeting it is going to be the best way to ride the AI wave. The risks are real, but with the right mix of education, transparency, and oversight, companies can harness AI's power—without handing over the keys to their kingdom.
This story was originally featured on Fortune.com

Related Articles


New York Post
SoftBank pitches chip giant TSMC on building $1 trillion AI hub in US: report
SoftBank CEO Masayoshi Son is pitching Taiwan Semiconductor Manufacturing Company on a massive $1 trillion complex in the US to build robots and artificial intelligence, according to a report.

The giant robotics center would be based in Arizona, a version of the production hub seen in the Chinese city of Shenzhen that could help bring manufacturing back to the US, sources told Bloomberg. It comes as President Trump has been calling for an all-hands approach to bringing manufacturing opportunities to the US, especially by tech companies and automakers.

Son is seeking out TSMC, which has pledged to invest $165 billion in the US and has opened its first Arizona factory, as a partner, according to the report. It's unclear what role Son sees for the Taiwanese chip giant, which makes Nvidia's most advanced chips, or whether the company would even be interested in the project. TSMC declined to comment.

Codenamed 'Project Crystal Land,' the complex is a clear attempt not only to advance artificial intelligence but to ensure a lasting legacy for Son, who has often talked down his past accomplishments and abandoned projects midway, sources told Bloomberg. The ambitious, one-of-a-kind facility would require support from the Trump administration. SoftBank officials have spoken with federal and state officials, including Commerce Secretary Howard Lutnick, to discuss possible tax breaks for firms building factories or investing in the complex, sources told Bloomberg. Son is also speaking with major tech companies as possible investors, like South Korea's Samsung. SoftBank, Samsung and the White House did not immediately respond to The Post's requests for comment.
Son's company has invested heavily in ChatGPT maker OpenAI, recently leading a $40 billion funding round for the Sam Altman-led firm as the two seek to raise hundreds of billions of dollars to fund large data centers in the US. These data centers are crucial to the artificial intelligence industry, which requires vast amounts of power and large storage capabilities.

SoftBank's campaigning for the Arizona complex could signal that its money-raising efforts alongside OpenAI are proceeding at a slower pace than anticipated, according to the report. Son has created a list of companies that might take part in the Arizona manufacturing hub, like automation company Agile Robots SE, sources said.

Meanwhile, SoftBank is exploring project financing options for Stargate, its $500 billion initiative to build data centers in the US with OpenAI and Oracle. This financing method could allow SoftBank to raise funding on a project-by-project basis, which is easier than gathering a large sum of money upfront. The same process could potentially be used for Project Crystal Land, according to the Bloomberg report. These plans are still preliminary and could change, sources told the news outlet.


Forbes
How ChatGPT Broke My Brain (And Why I Still Use It Every Day)
Even as a proponent of AI, I've learned the hard way: the biggest threat isn't automation—it's what I stop doing when I rely too heavily on machines.

There was a stretch—weeks, really—when I couldn't finish a simple email without asking ChatGPT to do it for me. I'd start typing, feel unsure, and reach for a prompt. New tone. New angle. Just one more version. Every time, 'maybe this one' felt like the answer. And the dopamine hit? Instant. Novel. Addictive.

What began as a tool to streamline marketing copy turned into paralysis. I stopped trusting my own phrasing. Iteration replaced decision making. And here's what surprised me most: the more I used AI to write small things, the harder it became to write important things. Tasks I've tackled confidently on my own for my entire life—like every Forbes article I've authored or even my own book—shifted from being sources of intellectual challenge and joy to overwhelming experiences filled with self-doubt. My thinking felt fuzzier. The inner voice I rely on to structure an argument or hold a tension had gone quieter.

The Dopamine Loop of Prompting

ChatGPT doesn't just give answers—it delivers a perfectly engineered cocktail of anticipation and novelty. Each version feels like it might be the one. Each response is a surprise, tapping into the psychological principle of intermittent reinforcement, famously demonstrated by psychologist B.F. Skinner, where unpredictable rewards significantly amplify behaviors, much like gambling addiction. For someone with an ADHD brain like mine—wired for pattern-seeking, shortcut-taking, and reward-chasing—ChatGPT is catnip. Every new draft becomes a low-effort opportunity to avoid doing the hard, focused work of starting and finishing something. It becomes a loop: Prompt → Output → Evaluate → Repeat.
Each time I felt uncertain, I'd outsource the discomfort rather than work through it.

Cognitive Offloading and the Erosion of Ownership

This pattern has a name in cognitive science: cognitive offloading—relying on external systems to perform mental tasks we used to internalize. AI makes it easy to skip the generative friction that creativity often requires. I wasn't refining ideas—I was accumulating options. I wasn't editing—I was evaluating. And eventually, I wasn't writing—I was managing automated outputs.

That doesn't just slow productivity. It reshapes the brain. Research by Adrian Ward and colleagues highlights how continuous dependence on digital tools for memory or problem solving reduces our ability to remember, process deeply, and engage analytically. Instead of actively shaping ideas, I found myself passively supervising generated content, weakening my own intellectual muscles.

Even the Help Sometimes Gets in the Way

I've known since high school that my best ideas emerge not at a desk, but while walking or in conversation—I literally think out loud. Colleagues joke that this clearly shows I was born to be a speaker, not a writer. When AI transcription tools arrived, they felt like the solution I'd been waiting for. Until I got the transcript back. It stripped the 'ums,' the tangents, the little asides to my kids mid-thought ('No, you can't have another popsicle, Daddy is working'), but it also erased the texture that made the thinking mine. I didn't need a cleaned-up version—I needed me in words. And I lost hours trying to get the AI to un-help.

Why I Still Use It (And You Should Too)

Such a horror story might naturally lead one to assume this article is tailor-made for the AI-resistant—something they can share aggressively with that one colleague who talks about ChatGPT as much as CrossFitters reminded you they did CrossFit in 2018. But here's the truth: I'm not anti-AI. Avoiding it is a recipe for irrelevance.
I continue using ChatGPT daily—but I've learned to treat it how high performers treat performance-enhancing tools—with structure, limits, and awareness. When used intentionally, it's invaluable: It reveals blind spots. It lets me test structure and tone rapidly. It simulates collaboration when no one else is in the room. But here's the key: if I don't think first, AI doesn't help me—it replaces me. If I don't own my voice, it sounds like everyone else's.

How I Reclaimed My Thinking—Without Ditching the Tech

To preserve cognitive clarity, I built boundaries grounded in science:

Start with your own sentence. I won't prompt until I've written my thesis, even if it's rough. This taps into The Generation Effect, a well-documented phenomenon showing that the act of creating information—not just reading it—builds stronger memory.

Avoid AI for first drafts. Write first, then compare, not the other way around. Idea development followed by AI enhancement preserves individual voice and cognitive engagement.

Limit iterations. Three options max, then decide. The Paradox of Choice and decision fatigue research—dating back to Schwartz and Iyengar's experiments—reveals that fewer options (e.g., 3 drafts max) reduce analysis paralysis and increase satisfaction.

Protect tech-free white space. Schedule tech-free blocks, prioritizing clarity over speed. As I've previously explored, dedicated white space time can directly facilitate constructive and innovative thinking.
This concept is supported by multiple studies, which consistently show that taking breaks—especially through walking—boosts creativity by as much as 60%. These aren't just habits—they're boundaries preserving the part of me no machine replicates.

The Real Risk for Leaders

This isn't just about writing—it's about attention, judgment, and trust: the core ingredients of leadership. The danger isn't AI replacing us—it's AI eroding our capacity for deep, sustained human thinking, tempting us away from uniquely human work: wrestling with ideas, navigating ambiguity, and staying with the slow burn of unfinished thought.

AI is here to stay—and I'm grateful for it. It's powerful. It's essential. But if we don't approach it with intention, it won't just alter how we work. It will reshape how we think. That's not a technical shift. It's a leadership risk. We're not at risk of being replaced by machines—unless we stop doing the very things machines need from us. Let's protect our minds, not just optimize our prompts.


Forbes
How To Use ChatGPT Prompts To Make $100/Day In 2025
You don't need to wait for your next paycheck or run into credit card debt to cover the unexpected. Your car needs an urgent repair the morning of a job interview. Your kid needs extra money for a school project. An unexpected bill hits you way before payday, and out of nowhere, a medical emergency arises and now you're slammed with hospital bills. Ever had a situation arise that demanded $100 or more, fast?

All of the above scenarios usually have us resorting to our credit cards and going into more debt just to cover the unexpected. But did you know that right now, you have access to two of the most valuable assets you'll ever need for your career, which can cover you financially when your salary isn't enough? These two assets are your skill set and AI tools like ChatGPT.

If you knew there was a way to make $100 or more within a day's work, outside of your job, you'd jump at the opportunity, right? Well, in this article, you'll learn how to leverage AI tools like ChatGPT and combine them with your skill set. This short guide will cover ChatGPT prompts you can use right now to help you maximize your earning power, so you're not going from paycheck to paycheck, or at the mercy of your employer's discretion, waiting for a pay rise.

Why ChatGPT Is The Perfect Tool For Making Money Online

What used to take hours, days, or weeks, or be accomplished with tedious workflows and expensive SaaS tools, can now be achieved within minutes. ChatGPT is one of the best tools to use in combination with your expertise, because it literally does whatever you prompt it to do. You can teach yourself and give yourself a crash course on anything, any skill, and make money as you learn.

When I started freelancing and launched my business in 2019, when I was 19 years old, I literally had no clue what I was doing…well, at least that's what I thought. I had zero professional certifications, qualifications, psychology background, or work experience in the field I created my business in (coaching).
It still tickles me to this day how I was able to miraculously pull it off, land clients, gain high customer satisfaction, and expand to where I am now as a six-figure entrepreneur. Mind you, I accomplished all of that largely without ChatGPT (it didn't exist when I started, and wouldn't for another three years). My point? If I can start a business out of nothing, with no prior experience or professional expertise in that field, and teach myself from Google, so can you; but this time, you have it way easier than me, because you have the advantage of leveraging ChatGPT as your business assistant and virtual teacher.

ChatGPT Prompt Examples To Make $100/Day

Use these simple prompts:

1. 'Create X [specified number] ideas for Instagram reels for my client for this month. Include captions for each. Every caption should have a strong hook and a CTA. Some CTAs can lead directly to sales, while others are softer, like polls, giveaways, etc.'

This prompt works well for social media management within different social media apps, not just Instagram. Social media marketing is a high-income skill, estimated to be worth up to $1.5 trillion by 2030.

2. 'Help me generate the marketing strategy for [type of client] for the next 90 days, using [list the channels: email marketing, client's blog, social media, etc.].'

After doing this, give ChatGPT a follow-up prompt: 'Can you explain the reasoning behind this marketing strategy? Why did you do XYZ?' This allows you to learn as you earn, instead of just copying and pasting blindly. It allows you to dig into any sources cited for context, and actually teach yourself this high-income skill, while getting paid.

3. 'I'm designing a landing page for [type of client and industry]. The landing page should [provide the specs]. Help me write the copy for this page that will attract leads and increase conversion rates. Also, what are some key elements I should include on the landing page for strong optimization and visual appeal, and why?'
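If you reuse prompts like these across clients, it can help to store them as fill-in templates instead of retyping them. A minimal sketch in Python, where the template text adapts the first prompt above and the placeholder names are purely illustrative:

```python
# Hypothetical reusable template based on the social-media prompt quoted above.
REEL_PROMPT = (
    "Create {n} ideas for Instagram reels for my client for this month. "
    "Include captions for each. Every caption should have a strong hook "
    "and a CTA. The client is a {client_type}; their audience is {audience}."
)

def build_prompt(n: int, client_type: str, audience: str) -> str:
    """Fill the placeholders so the same prompt can be reused per client."""
    return REEL_PROMPT.format(n=n, client_type=client_type, audience=audience)

# Example: the same template, customized for one client in one line.
print(build_prompt(5, "local bakery", "home bakers on Instagram"))
```

The same pattern works for the marketing-strategy and landing-page prompts: one template per service, with the bracketed parts turned into named placeholders.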
Landing page design can earn you from $125 for a day's work, according to a NetCredit report.

High-Income Skills That Pay $100+ In A Day

These ChatGPT prompts help you leverage high-paying skills to create services that can easily cover emergencies with just one or two clients a week. NetCredit's analysis also included other high-paying skills that you can turn into services and products that pay $100 or more a day in 2025. The full top 10 list includes:

How To Use ChatGPT To Scale These Side Hustles

Now that you know which skills pay the most money right now in 2025 so you can earn online, here are a few ways you can put ChatGPT to work as your business assistant. Quick tip before you go: making $100 or more in a day becomes way easier when you're not relying on one income stream. You need to have several. Launch multiple side hustles that complement each other, so you're not burned out. Eventually, use AI to help you scale so you can produce more while working less.

FAQs

How fast can I earn $100? Once you have the right systems in place and have established trust with your audience, it's easier for you to pull off a project or gig worth $100 or more, which you can lay aside for those unexpected financial crises.

What's the easiest way to make money with ChatGPT, for beginners? Start with something you're already comfortable with and have expertise in. Then, ask ChatGPT to help you scale and build passive income products with it.