Students are using AI to write scholarship essays. Does it work?


Boston Globe | 09-04-2025

'They felt a little bit sterile,' said Geiger, the cofounder and CEO of a company called Scholarships360, an online platform used by more than 300,000 students last year to find and apply for scholarships.
Curious, Scholarships360 staffers deployed AI-detection software called GPTZero. It checked almost 1,000 essays submitted for one scholarship and determined that about 42 percent of them had likely been composed with the help of generative AI.
With college acceptances beginning to roll in for high school seniors, and juniors starting to brainstorm the essays they'll submit with their applications in the fall, Geiger is concerned. When students use AI to help write their essays, he said, they are wasting a valuable opportunity.
'The essay is one of the few opportunities in the admissions process for a student to communicate directly with a scholarship committee or with an admissions reader,' Geiger said. 'That provides a really powerful opportunity to share who you are as a person, and I don't think that an AI tool is able to do that.'
Madelyn Ronk, a 20-year-old student at Penn State Beaver, said she never considered using ChatGPT to write the personal statement required for her transfer application from community college last year. A self-described Goody Two-shoes, she didn't want to get in trouble. But there was another reason: She didn't want to turn in the same essay as anyone else.
'I want to be unique. I feel like when people use AI constantly, it just gives the same answer to every single person,' said Ronk, who wrote her essay about volunteering for charitable organizations in her hometown. 'I would like my answer to be me. So I don't use AI.'
Geiger said students' fears about submitting a generic essay are valid — they're less likely to win scholarships that way. But that doesn't mean they have to avoid generative AI altogether. Some companies offer services to help students use AI to improve their work rather than to cheat, such as getting help writing an outline, using proper grammar, or making points effectively. Generative AI can proofread an essay, and can even tell a student whether a teacher is likely to flag it as AI-assisted.
Packback, for example, is an online platform whose AI software can chat with students and give feedback as they write. The bot might flag grammatical errors, overuse of the passive voice, or digressions from the student's point. Craig Booth, the company's chief technology officer, said the software is designed to introduce students to ethical uses of AI.
Not all scholarship providers or colleges have policies on exactly how AI can or cannot be used in prospective students' essays.
Tools like GPTZero aren't reliable 100 percent of the time. The Markup, a news outlet focused on technology, reported on a study that found such detectors can misclassify human-written text as AI-generated.
Because detection software isn't always accurate, Geiger said, Scholarships360 doesn't base scholarship decisions on whether essays were flagged as being generated by AI. But, he said, many of the students whose essays were flagged weren't awarded a given scholarship because 'if your writing is being mistaken for AI,' whether you used the technology or not, for a scholarship or admissions essay, 'it's probably going to be missing the mark.'
Jonah O'Hara, who serves as chair of the admission practices committee at the National Association for College Admission Counseling, said that using AI isn't 'inherently evil,' but colleges and scholarship providers need to be transparent about their expectations, and students need to disclose when they're using it and for what.
O'Hara, who is director of college counseling at Rocky Hill Country Day School in Rhode Island, said that he has always discouraged students from using a thesaurus in writing college application essays, or using any words that aren't normal for them.
'If you don't use 'hegemony' and 'parsimonious' in text messages with your friends, then why would you use it in an essay to college? That's not you,' O'Hara said. 'If you love the way polysyllabic words roll off your tongue, then, of course, if it's your voice, then use it.'
Generative AI is, functionally, the latest evolution of the thesaurus, and O'Hara wonders whether it has 'put a shelf life on the college essay.'
There was a time when some professors offered self-scheduled, unproctored take-home exams, O'Hara recalled. Students had to sign an honor statement promising that everything they submitted was their own work, but the onus was on the professors to write cheat-proof exams. If the college essay is going to survive, O'Hara said, this is the direction administrators will have to go.
'If we get to a point where colleges cannot confidently determine [its] authenticity,' he said, 'then they may abandon it entirely.'
This story was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for its newsletter.


