
Welcome to campus. Here's your ChatGPT.
OpenAI, the maker of ChatGPT, has a plan to overhaul college education – by embedding its artificial intelligence tools in every facet of campus life.
If the company's strategy succeeds, universities would give students AI assistants to help guide and tutor them from orientation day through graduation. Professors would provide customized AI study bots for each class. Career services would offer recruiter chatbots for students to practice job interviews. And undergrads could turn on a chatbot's voice mode to be quizzed aloud before a test.
OpenAI dubs its sales pitch 'AI-native universities'.
'Our vision is that, over time, AI would become part of the core infrastructure of higher education,' Leah Belsky, OpenAI's vice president of education, said in an interview. In the same way that colleges give students school email accounts, she said, soon 'every student who comes to campus would have access to their personalized AI account.'
To spread chatbots on campuses, OpenAI is selling premium AI services to universities for faculty and student use. It is also running marketing campaigns aimed at getting students who have never used chatbots to try ChatGPT.
Some universities, including the University of Maryland and California State University, are already working to make AI tools part of students' everyday experiences. In early June, Duke University began offering unlimited ChatGPT access to students, faculty and staff. The school also introduced a university platform, called DukeGPT, with AI tools developed by Duke.
OpenAI's campaign is part of an escalating AI arms race among tech giants to win over universities and students with their chatbots. The company is following in the footsteps of rivals like Google and Microsoft, which have for years pushed to get their computers and software into schools and to court students as future customers.
The competition is so heated that Sam Altman, OpenAI's CEO, and Elon Musk, who founded the rival xAI, posted dueling announcements on social media this spring offering free premium AI services for college students during exam period. Then Google upped the ante, announcing free student access to its premium chatbot service 'through finals 2026'.
OpenAI ignited the recent AI education trend. In late 2022, the company's rollout of ChatGPT, which can produce human-sounding essays and term papers, helped set off a wave of chatbot-fueled cheating. Generative AI tools like ChatGPT, which are trained on large databases of texts, also make stuff up, which can mislead students.
Less than three years later, millions of college students regularly use AI chatbots as research, writing, computer programming and idea-generating aides. Now OpenAI is capitalizing on ChatGPT's popularity to promote the company's AI services to universities as the new infrastructure for college education.
OpenAI's service for universities, ChatGPT Edu, offers more features, including certain privacy protections, than the company's free chatbot. ChatGPT Edu also enables faculty and staff to create custom chatbots for university use. (OpenAI offers consumers premium versions of its chatbot for a monthly fee.)
OpenAI's push to AI-ify college education amounts to a national experiment on millions of students. The use of these chatbots in schools is so new that their potential long-term educational benefits, and possible side effects, are not yet established.
A few early studies have found that outsourcing tasks like research and writing to chatbots can diminish skills like critical thinking. And some critics argue that colleges going all-in on chatbots are glossing over issues like societal risks, AI labor exploitation and environmental costs.
OpenAI's campus marketing effort comes as unemployment has increased among recent college graduates – particularly in fields like software engineering, where AI is now automating some tasks previously done by humans. In hopes of boosting students' career prospects, some universities are racing to provide AI tools and training.
California State University announced this year that it was making ChatGPT available to more than 460,000 students across its 23 campuses to help prepare them for 'California's future AI-driven economy'. Cal State said the effort would help make the school 'the nation's first and largest AI-empowered university system.'
Some universities say they are embracing the new AI tools in part because they want their schools to help guide, and develop guardrails for, the technologies.
'You're worried about the ecological concerns. You're worried about misinformation and bias,' Edmund Clark, the chief information officer of California State University, said at a recent education conference in San Diego. 'Well, join in. Help us shape the future.'
Last spring, OpenAI introduced ChatGPT Edu, its first product for universities, which offers access to the company's latest AI. Paying clients like universities also get more privacy: OpenAI says it does not use the information that students, faculty and administrators enter into ChatGPT Edu to train its AI.
(The New York Times has sued OpenAI and its partner, Microsoft, over copyright infringement. Both companies have denied wrongdoing.)
Last fall, OpenAI hired Belsky to oversee its education efforts. An ed tech startup veteran, she previously worked at Coursera, which offers college and professional training courses.
She is pursuing a two-pronged strategy: marketing OpenAI's premium services to universities for a fee while advertising free ChatGPT directly to students. OpenAI also convened a panel of college students recently to help get their peers to start using the tech.
Among those students are power users like Delphine Tai-Beauchamp, a computer science major at the University of California, Irvine. She has used the chatbot to explain complicated course concepts, as well as to explain coding errors and make charts diagramming the connections between ideas.
'I wouldn't recommend students use AI to avoid the hard parts of learning,' Tai-Beauchamp said. She did recommend students try AI as a study aid. 'Ask it to explain something five different ways.'
Belsky said these kinds of suggestions helped the company create its first billboard campaign aimed at college students.
'Can you quiz me on the muscles of the leg?' asked one ChatGPT billboard, posted this spring in Chicago. 'Give me a guide for mastering this Calc 101 syllabus,' another said.
Belsky said OpenAI had also begun funding research into the educational effects of its chatbots.
'The challenge is, how do you actually identify what are the use cases for AI in the university that are most impactful?' Belsky said during a December AI event at Cornell Tech in New York City. 'And then how do you replicate those best practices across the ecosystem?'
Some faculty members have already built custom chatbots for their students by uploading course materials like their lecture notes, slides, videos and quizzes into ChatGPT.
Jared DeForest, the chair of environmental and plant biology at Ohio University, created his own tutoring bot, called SoilSage, which can answer students' questions based on his published research papers and science knowledge. Limiting the chatbot to trusted information sources has improved its accuracy, he said.
'The curated chatbot allows me to control the information in there to get the product that I want at the college level,' DeForest said.
But even when trained on specific course materials, AI can make mistakes. In a new study – 'Can AI Hold Office Hours?' – law school professors uploaded a patent law casebook into AI models from OpenAI, Google and Anthropic. Then they asked dozens of patent law questions based on the casebook and found that all three AI chatbots made 'significant' legal errors that could be 'harmful for learning.'
'This is a good way to lead students astray,' said Jonathan S. Masur, a professor at the University of Chicago Law School and a co-author of the study. 'So I think that everyone needs to take a little bit of a deep breath and slow down.'
OpenAI said the 250,000-word casebook used for the study was more than twice the length of text that its GPT-4o model can process at once. Anthropic said the study had limited usefulness because it did not compare the AI with human performance. Google said its model accuracy had improved since the study was conducted.
Belsky said a new 'memory' feature, which retains and can refer to previous interactions with a user, would help ChatGPT tailor its responses to students over time and make the AI 'more valuable as you grow and learn.'
Privacy experts warn that this kind of tracking feature raises concerns about long-term tech company surveillance.
In the same way that many students today convert their school-issued Gmail accounts into personal accounts when they graduate, Belsky envisions graduating students bringing their AI chatbots into their workplaces and using them for life.
'It would be their gateway to learning – and career life thereafter,' Belsky said. – ©2025 The New York Times Company
This article originally appeared in The New York Times.