The professors are using ChatGPT, and some students aren't happy about it

Boston Globe, May 14, 2025

'Did you see the notes he put on Canvas?' she wrote, referring to the university's software platform for hosting course materials. 'He made it with ChatGPT.'
'OMG Stop,' the classmate responded. 'What the hell?'
Stapleton decided to do some digging. She reviewed her professor's slide presentations and discovered other telltale signs of artificial intelligence: distorted text, photos of office workers with extraneous body parts, and egregious misspellings.
Ella Stapleton filed a formal complaint with Northeastern University over a professor's undisclosed use of AI. (Oliver Holms/NYT)
She was not happy. Given the school's cost and reputation, she expected a top-tier education. This course was required for her business minor; its syllabus forbade 'academically dishonest activities,' including the unauthorized use of AI or chatbots.
'He's telling us not to use it, and then he's using it himself,' she said.
Stapleton filed a formal complaint with Northeastern's business school, citing the undisclosed use of AI, as well as other issues she had with his teaching style, and requested reimbursement of tuition for that class. As a quarter of the total bill for the semester, that would be more than $8,000.
When ChatGPT was released at the end of 2022, it caused a panic at all levels of education because it made cheating incredibly easy. Students who were asked to write a history paper or literary analysis could have the tool do it in mere seconds. Some schools banned it, while others deployed AI detection services, despite concerns about their accuracy.
How the tables have turned. Now students are complaining on sites such as Rate My Professors about their instructors' overreliance on AI and scrutinizing course materials for words ChatGPT tends to overuse, like 'crucial' and 'delve.' In addition to calling out hypocrisy, they make a financial argument: They are paying, often quite a lot, to be taught by humans, not an algorithm that they, too, could consult for free.
For their part, professors said they used AI chatbots as a tool to provide a better education. Instructors interviewed by The New York Times said chatbots saved time, helped them with overwhelming workloads, and served as automated teaching assistants.
Their numbers are growing. In a national survey of more than 1,800 higher-education instructors last year, 18 percent described themselves as frequent users of generative AI tools; in a repeat survey this year, that percentage nearly doubled, according to Tyton Partners, the consulting group that conducted the research. The AI industry wants to help, and to profit: The startups OpenAI and Anthropic recently created enterprise versions of their chatbots designed for universities.
(The Times has sued OpenAI for copyright infringement over the use of news content without permission.)
Generative AI is clearly here to stay, but universities are struggling to keep up with the changing norms. Now professors are the ones on the learning curve and, like Stapleton's teacher, muddling their way through the technology's pitfalls and their students' disdain.
Last fall, Marie, 22, wrote a three-page essay for an online anthropology course at Southern New Hampshire University. She looked for her grade on the school's online platform, and was happy to have received an A. But in a section for comments, her professor had accidentally posted a back-and-forth with ChatGPT. It included the grading rubric the professor had asked the chatbot to use and a request for some 'really nice feedback' to give Marie.
'From my perspective, the professor didn't even read anything that I wrote,' said Marie, who asked to use her middle name and requested that her professor's identity not be disclosed. She could understand the temptation to use AI. Working at the school was a 'third job' for many of her instructors, who might have hundreds of students, said Marie, and she did not want to embarrass her teacher.
Still, Marie felt wronged and confronted her professor during a Zoom meeting. The professor told Marie that she did read her students' essays, but used ChatGPT as a guide, which the school permitted.
Robert MacAuslan, vice president of AI at Southern New Hampshire, said that the school believed 'in the power of AI to transform education' and that there were guidelines for both faculty and students to 'ensure that this technology enhances, rather than replaces, human creativity and oversight.' A do's-and-don'ts list for faculty forbids using tools such as ChatGPT and Grammarly 'in place of authentic, human-centric feedback.'
'These tools should never be used to 'do the work' for them,' MacAuslan said. 'Rather, they can be looked at as enhancements to their already established processes.'
After a second professor appeared to use ChatGPT to give her feedback, Marie transferred to another university.
Paul Shovlin, an English professor at Ohio University in Athens, Ohio, said he could understand her frustration. 'Not a big fan of that,' Shovlin said, after being told of Marie's experience. Shovlin is also an AI faculty fellow, whose role includes developing the right ways to incorporate AI into teaching and learning.
'The value that we add as instructors is the feedback that we're able to give students,' he said. 'It's the human connections that we forge with students as human beings who are reading their words and who are being impacted by them.'
Shovlin is a proponent of incorporating AI into teaching, but not simply to make an instructor's life easier. Students need to learn to use the technology responsibly and 'develop an ethical compass with AI,' he said, because they will almost certainly use it in the workplace. Failure to do so properly could have consequences. 'If you screw up, you're going to be fired,' Shovlin said.
The Times contacted dozens of professors whose students had mentioned their AI use in online reviews. The professors said they had used ChatGPT to create computer science programming assignments and quizzes on required reading, even as students complained that the results didn't always make sense. They used it to organize their feedback to students, or to make it kinder. As experts in their fields, they said, they can recognize when it hallucinates, or gets facts wrong.
There was no consensus among them as to what was acceptable. Some acknowledged using ChatGPT to help grade students' work; others decried the practice. Some emphasized the importance of transparency with students when deploying generative AI, while others said they didn't disclose its use because of students' skepticism about the technology.
Most, however, felt that Stapleton's experience at Northeastern — in which her professor appeared to use AI to generate class notes and slides — was perfectly fine. That was Shovlin's view, as long as the professor edited what ChatGPT spat out to reflect his expertise. Shovlin compared it with a long-standing practice in academia of using content, such as lesson plans and case studies, from third-party publishers.
To say a professor is 'some kind of monster' for using AI to generate slides 'is, to me, ridiculous,' he said.
After filing her complaint at Northeastern, Stapleton had a series of meetings with officials in the business school. In May, the day after her graduation ceremony, the officials told her that she was not getting her tuition money back.
Rick Arrowood, her professor, was contrite about the episode. Arrowood, who is an adjunct professor and has been teaching for nearly two decades, said he had uploaded his class files and documents to ChatGPT, the AI search engine Perplexity, and an AI presentation generator called Gamma to 'give them a fresh look.' At a glance, he said, the notes and presentations they had generated looked great.
'In hindsight, I wish I would have looked at it more closely,' he said.
He put the materials online for students to review, but emphasized that he did not use them in the classroom, because he prefers classes to be discussion-oriented. He realized the materials were flawed only when school officials questioned him about them.
The embarrassing situation made him realize, he said, that professors should approach AI with more caution and disclose to students when and how it is used. Northeastern issued a formal AI policy only recently; it requires attribution when AI systems are used and review of the output for 'accuracy and appropriateness.' A Northeastern spokesperson said the school 'embraces the use of artificial intelligence to enhance all aspects of its teaching, research, and operations.'
'I'm all about teaching,' Arrowood said. 'If my experience can be something people can learn from, then, OK, that's my happy spot.'
This article originally appeared in The New York Times.
