A consideration of the complicated future of artificial intelligence at Hunter College High School from available tools to ethical usage


New York Post, April 22, 2025

This article is one of the winning submissions from the New York Post Scholars Contest, presented by Command Education.
It's 3:30 p.m., and a high school student sits at his desk, staring down the rubric for an essay due tomorrow that he hasn't started. He contemplates staying up all night to research, outline, write and edit the essay, risking a mediocre grade because he was in a rush. It occurs to him that, with ChatGPT, he could be done (or well on his way) in an hour. In 2025, students at Hunter College High School (HCHS), like students at high schools all over the country, confront this temptation every day. Just a few years ago, it would have been unimaginable for a high school student to use a generative artificial intelligence (AI) tool to complete their homework. But with the advent of large language models like ChatGPT in 2022, AI tools have become commonplace at HCHS.
As of January 2025, 26% of US high school students use ChatGPT for their schoolwork, up 100% since December 2023, according to Pew Research. But according to a survey of 47 middle and high schoolers conducted by What's What, the student newspaper, 72.3% of respondents, more than double the national average, said they had used ChatGPT at least once in the past month, with 36.1% having used it at least three times. The average HCHS student used AI approximately four times per month.
HCHS is an academically rigorous school in New York City with a high standard for assignments. Students often feel extreme pressure to succeed: in addition to a grueling course load, many passionately pursue extracurriculars and internships. Using AI is appealing to students because it saves them time and energy. Many students are willing to risk getting caught under the school's zero-tolerance cheating policies if it means they can sleep more or spend more time on another assignment.
Since ChatGPT's 2022 release, HCHS students have found myriad ways to use it, both allowed and not, to make their lives easier. Some of the most common, permitted uses include finding primary sources, summarizing long documents, checking work, and asking AI to quiz them before tests. 'I think that's what it's good for, really, doing the little things that take unreasonably long so you can get on with your life,' says sophomore Madelyn. But, 'the tricky part is knowing when to stop.' The most common use for ChatGPT is generating ideas that the students then narrow down or flesh out.
Use of ChatGPT skyrocketed in 2024 as the tool grew more useful over the course of the year. OpenAI, the company behind ChatGPT, gave the chatbot access to past chats and to web search, so the AI was no longer limited to its training data and could retrieve information from the internet in real time. As a result, more students have found ways to use ChatGPT productively.
'People have figured out how to use ChatGPT better,' explains Madelyn. 'There are still people who think they're gonna get away with turning in heavily AI-written papers, but a lot of people have found how to use it as a tool to cut down homework time or for a jumping off point.' She estimates approximately 70% of her friends use it regularly.
Some students see ChatGPT usage as a bridge to building key skills. One HCHS junior comments that 'ChatGPT is not going away anytime soon, so we might as well learn how to use it to augment our learning, rather than detract from it.' As more professionals use AI for everything from writing emails to writing code, becoming familiar with the tools in high school is increasingly important.
In addition to ChatGPT, students also use other AI tools, like QuillBot, Perplexity, and Mathway, to proofread essays or help with homework. One junior explains that traditional tools like JSTOR and EBSCOhost made it hard to find sources for their term paper because of its niche topic, so AI gave them 'a great jumping off point to find primary and secondary sources that I think make [AI] a really useful social studies source.'
Madelyn describes the way students discuss AI as a kind of ubiquitous vice in pursuit of higher grades, almost like not sleeping enough. 'You pull up to school like 'yeah, I got three hours of sleep last night I can't even' and it's kinda normal. We all know it's not great but we laugh it off, and there's a bit of camaraderie, because the other person has probably had those days, too.' In the same way, other students seem to understand and relate to their peers who admit to using AI for assignments.
Scheherazade Schonfeld
Teachers and administrators have engineered their AI-related messaging to discourage usage in all cases: The HCHS student handbook now reads, 'Students [will] not use AI-generated content in any way on assignments or examinations, as detailed above, unless an instructor for a given course specifically authorizes their use.' And the punishments are severe, ranging from failing the assignment to expulsion. The English Department Academic Integrity Policy does not authorize 'any use of AI for the work of our classes.' Teacher Kimberly Airoldi explains that in English classes, automating any part of the writing process with AI runs explicitly counter to her department's goal of teaching writing skills.
Many students who use ChatGPT to generate content that they pass off as their own, in violation of the student handbook, believe that they usually get away with it. Although most major assignments are checked through Turnitin, a commonly used plagiarism and AI-detection tool, minor assignments are rarely checked. Even Turnitin admits it can't detect 100% of AI use, and tools like HIX Bypass exist to get around it. False positives, Airoldi explains, are common with AI detectors, which is why teachers need to manually check each flagged essay.
But teachers know a lot more about AI use than students think. Eighth grader Dalia observes that nearly all of her class knew which students were using AI. 'Our teachers aren't stupid,' Dalia says. 'If every single person in our class knows people are using AI, then I'm sure the teacher does, too.'
When asked how likely they were, on a scale of one to five, to use ChatGPT or other AI in completing their assignments, 37% of HCHS students rated themselves a three or above, while 45.7% rated themselves a one. Among students who did not use ChatGPT, the most common reasons were unreliability and fear of being caught. 'I've never used it because I'm too scared,' says one senior.
Students are 'surrounded' by adults using AI, says Airoldi. But, she explains, the distinction is that students are still learning and building skills, and 'if you use AI, you don't get that part. You get a paper, yeah, but you don't get the learning part.' Additionally, students are 'surrounded by this messaging that AI can make your writing better,' a fundamental misconception pushed by tech companies, says Airoldi. On the contrary, AI writing lacks the nuance and subtext she aims to teach.
The emphasis on the supposed value of AI has not only changed the work that Airoldi's students use AI for; it has also changed the way they write themselves. Even when they are not using AI to generate ideas or write text for them, students are 'replicating the language and the approach of AI' in their work. This has led to more robotic essays that sound AI-generated, even when they are not.
Sophomore Penelope warns against idealizing the power of AI. 'I don't think it can do a better job than me if I put just a little bit of effort in,' she explains, and in implementing her ideas, 'I don't think it would go in the right direction.' Though some treat AI tools as the equivalent of a peer or collaborator, Penelope doesn't think ChatGPT is 'who I would collaborate with.'
Even though the rules seem clear, it's hard to tell when ethical usage veers into plagiarism or cheating. For instance, one might use an AI tool to understand what is happening in their English class, only to reference those same ideas to write an essay that is not entirely their own. Sophomore Tal thinks the best option may just be not to trust AI at all: 'I'd rather just struggle through the work and figure it out myself.'
A 10th-grader at Hunter College High School in Manhattan, Schonfeld dreams of being a foreign correspondent one day.

