Latest news with #AI-written


Atlantic
13-06-2025
- Business
- Atlantic
The Newspaper That Hired ChatGPT
For more than 20 years, print media has been a punching bag for digital-technology companies. Craigslist killed the paid classifieds, free websites led people to think newspapers and magazines were committing robbery when they charged for subscriptions, and the smartphone and social media turned reading full-length articles into a chore. Now generative AI is in the mix—and many publishers, desperate to avoid being left behind once more, are rushing to harness the technology themselves. Several major publications, including The Atlantic, have entered into corporate partnerships with OpenAI and other AI firms. Any number of experiments have ensued—publishers have used the software to help translate work into different languages, draft headlines, and write summaries or even articles.

But perhaps no publication has gone further than the Italian newspaper Il Foglio. For one month, beginning in late March, Il Foglio printed a daily insert consisting of four pages of AI-written articles and headlines. Each day, Il Foglio's top editor, Claudio Cerasa, asked ChatGPT Pro to write articles on various topics—Italian politics, J. D. Vance, AI itself. Two humans reviewed the outputs for mistakes, sometimes deciding to leave in minor errors as evidence of AI's fallibility and, at other times, asking ChatGPT to rewrite an article. The insert, titled Il Foglio AI, was almost immediately covered by newspapers around the world. 'It's impossible to hide AI,' Cerasa told me recently. 'And you have to understand that it's like the wind; you have to manage it.' Now the paper—which circulates about 29,000 copies each day, in addition to serving its online readership—plans to embrace AI-written content permanently, issuing a weekly AI section and, on occasion, using ChatGPT to write articles for the standard paper. (These articles will always be labeled.)
Cerasa has already used the technology to generate fictional debates, such as an imagined conversation between a conservative and a progressive cardinal on selecting a new pope; a review of the columnist Beppe Severgnini's latest book, accompanied by Severgnini's AI-written retort; the chatbot's advice on what to do if you suspect you're falling in love with a chatbot ('Do not fall in love with me'); and an interview with Cerasa himself, conducted by ChatGPT. Il Foglio's AI work is full-fledged and transparently so: natural and artificial articles, clearly divided. Meanwhile, other publications provide limited, or sometimes no, insight into their usage of the technology, and some have even mixed AI and human writing without disclosure.

As if to demonstrate how easily the commingling of AI and journalism can go sideways, just days after Cerasa and I first spoke, at least two major regional American papers published a spread of more than 50 pages titled 'Heat Index' that was riddled with errors and fabrications; a freelancer who'd contributed to the project admitted to using ChatGPT to generate at least some portions of the text, resulting in made-up book titles and expert sources who didn't actually exist. It was an embarrassing example of what can happen when the technology is used to cut corners.

With so many obvious pitfalls to using AI, I wanted to speak with Cerasa to understand more about his experiment. Over Zoom, he painted an unsettling, if optimistic, portrait of his experience with AI in journalism. Sure, the technology is flawed. It's prone to fabrications; his staff has caught plenty of them, and has been taken to task for publishing some of those errors. But when used correctly, it writes well—at times more naturally, Cerasa told me, than even his human staff. Still, there are limits. 'Anyone who tries to use artificial intelligence to replace human intelligence ends up failing,' he told me when I asked about the 'Heat Index' disaster.
'AI is meant to integrate, not replace.' The technology can benefit journalism, he said, 'only if it's treated like a new colleague—one that needs to be looked after.' The problem, perhaps, stems from using AI to substitute rather than augment. In journalism, 'anyone who thinks AI is a way to save money is getting it wrong,' Cerasa said. But economic anxiety has become the norm for the field. A new robot colleague could mean one, or three, or 10 fewer human ones. What, if anything, can the rest of the media learn from Il Foglio's approach?

Our conversation has been edited for length and clarity.

Matteo Wong: In your first experiment with AI, you hid AI-written articles in your paper for a month and asked readers if they could detect them. How did that go? What did you learn?

Claudio Cerasa: A year ago, for one month, every day we put in our newspaper an article written with AI, and we asked our readers to guess which article was AI-generated, offering the prize of a one-year subscription and a bottle of champagne. The experiment helped us create better prompts for the AI to write an article, and helped us humans write better articles as well. Sometimes an article written by people was seen as an article written by AI: for instance, when an article is written with numbered points—first, second, third. So we changed something in how we write too.

Wong: Did anybody win?

Cerasa: Yes, we offered a lot of subscriptions and champagne. More than that, we realized we needed to speak about AI not just in our newspaper, but all over the world. We created this thing that is important not only because it is journalism with AI, but because it combines the oldest medium for delivering information, the newspaper, with the newest, artificial intelligence.

Wong: How did your experience of using ChatGPT change when you moved from that original experiment to a daily insert entirely written with AI?

Cerasa: The biggest thing that has changed is our prompt.
At the beginning, my prompt was very long, because I had to explain a lot of things: You have to write an article with this style, with this number of words, with these ideas. Now, after a lot of use, ChatGPT knows better what I want to do. When you start to use artificial intelligence in a transparent way, you have a personal assistant: a new person that works in the newspaper. It's like having another brain. It's a new way to do journalism.

Wong: What are the tasks and topics you've found that ChatGPT is good at and for which you'd want to use it? And conversely, where does it fall short?

Cerasa: In general, it is good at three things: research, summarizing long documents, and, in some cases, writing. I'm sure in the future, and maybe in the present, many editors will try to think of ways AI can erase journalists. That could be possible, because if you are not a journalist with enough creativity, enough reporting, enough ideas, maybe you are worse than a machine. But in that case, the problem is not the machine. The technology can also recall and synthesize far more information than a human can. The first article we put in the normal newspaper written with AI was about the discovery of a key ingredient for life on a distant planet. We asked the AI to write a piece on great authors of the past and how they imagined the day scientists would make such a discovery. A normal person would not be able to remember all these things.

Wong: And what can't the AI do?

Cerasa: AI cannot find the news; it cannot develop sources or interview the prime minister. AI also doesn't have interesting ideas about the world—that's where natural intelligence comes in. AI is not able to draw connections in the same way as intelligent human journalists. I don't think an AI would be able to come up with and fully produce a newspaper generated by AI.
Wong: You mentioned before that there may be some articles or tasks at a newspaper that AI can already write or perform better than humans, but if so, the problem is an insufficiently skilled person. Don't you think young journalists have to build up those skills over time? I started at The Atlantic as an assistant editor, not a writer, and my primary job was fact-checking. Doesn't AI threaten the talent pipeline, and thus the media ecosystem more broadly?

Cerasa: It's a bit terrifying, because we've come to understand how many creative things AI can do. For our children to use AI to write something in school, to do their homework, is really terrifying. But AI isn't going away—you have to educate people to use it in the correct way, and without hiding it. In our newspaper, there is no fear about AI, because our newspaper is very particular and written in a special way. We know, in a snobby way, that our skills are unique, so we are not scared. But I'm sure that a lot of newspapers could be scared, because normal articles written about the things that happened the day before, with the agency news—that kind of article, and also that kind of journalism, might be the past.


Scoop
13-06-2025
- Business
- Scoop
Local Businesses Get Google AI Search Reprieve – But For How Long?
Press Release – Alexanders Digital Marketing

Businesses worried about losing Google rankings to AI search have more time to update their tactics, new research reveals. Google AI Overviews are the AI-written answers that now appear at the top of many Google search results. They have shifted businesses' rankings further down the page, and the fear is that the AI answer will take traffic away from websites. According to new Semrush research, AI Overviews were triggered for 6.49% of queries in January, climbing to 13.14% by March 2025.

'While this looks concerning, the detail reveals good news for businesses: to date, Google AI Overviews primarily target "informational" searches like "how do solar panels work",' said Rachel Alexander, CEO of SEO agency Alexanders, in Christchurch. 'This leaves commercial keywords such as "heat pump installation Christchurch" largely untouched. Google isn't disrupting the searches that drive revenue from Google search ads or shopping ads, because it is protecting its advertising revenue.'

Still, Alexander warns against complacency. 'The upshot is that businesses shouldn't rely on old SEO. Plus, there are many other AI platforms that are not guarding ad revenue and that will give AI summaries on commercial phrases,' she said. To address this, Alexander explained that SEO techniques need to be updated so information is picked up by generative engines, if business owners want to get listed in AI recommendations.
According to Alexander, aiming to rank through well-structured pages with video, local SEO, and schema markup for ecommerce products is still fundamental, as website visitors are more likely to be sales-ready when they arrive, having already read the AI overviews. Alexander also recommends appearing in listicles, such as 'The 10 best luxury hotels in Christchurch', on trusted platforms, as this can help businesses get picked up in AI recommendations.

About Alexanders Digital Marketing
Founded in 1997, Alexanders Digital Marketing has spent over 28 years helping Canterbury businesses achieve growth through SEO services and strategic marketing.
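The schema markup Alexander refers to is structured data that search and generative engines can parse; for products it is commonly embedded in a page as a JSON-LD script tag using the schema.org vocabulary. A minimal sketch of generating such a tag, where the product name, description, and price are invented placeholders rather than details from the release:

```python
import json

# Hypothetical product data -- every value here is a placeholder
# for illustration, not taken from the article.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Residential Heat Pump Installation",
    "description": "Supply and installation of a single-room heat pump.",
    "offers": {
        "@type": "Offer",
        "priceCurrency": "NZD",
        "price": "2999.00",
        "availability": "https://schema.org/InStock",
    },
}

# Wrap the data as a JSON-LD <script> tag, ready for the page's <head>.
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(product, indent=2)
    + "\n</script>"
)
print(snippet)
```

Because the markup is machine-readable rather than visible page copy, it gives crawlers and answer engines an unambiguous description of the product and offer, which is the mechanism behind getting "picked up" in generated summaries.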


USA Today
28-05-2025
- Politics
- USA Today
As a college professor, I see how AI is stripping away the humanity in education
Dustin Hornbeck
Guest Columnist

As the 2025 school year ends, one thing teachers, parents and the broader public know for sure is that artificial intelligence is here, and it is taking on more responsibilities that used to be left to the human brain. AI can now tutor students at their own pace, deliver custom content and even ace exams, including one I made for my own course. While a bit frightening, that part doesn't bother me. Of course, machines can process information faster than we can. What bothers me is that we seem ready to let the machines and political discontent define the purpose of education.

Kids are disengaged at school; AI doesn't help

A recent Brookings report found that only 1 in 3 students are actively engaged in school. That tracks with what I have seen myself as a former high school teacher and current professor. Many students are checked out, quietly drifting through the motions while teachers juggle multiple crises. They try to pull some students up to grade level and just hope the others don't slide backward. It's more triage than teaching.

I tested one of my own final exams in ChatGPT. It scored 90% the first time and 100% the next. Colleagues tell me their students are submitting AI-written essays. One professor I know gave up and went back to in-class handwritten essays for his final exam. It's 2025, and we're back to blue books.

I recently surveyed and interviewed high school social studies teachers across the country for a study about democratic education. Every one of them said they're struggling to design assignments that AI can't complete. These aren't multiple-choice quizzes or five-paragraph summaries. They're book analyses, historical critiques and policy arguments ‒ real cognitive work that used to demand original thought. Now? A chatbot can mimic it well enough to get by. So what do we do? Double down on job training? That's what I fear.
A lot of today's education policy seems geared toward producing workers for an economy that's already in flux. But AI is going to reshape the labor market whether we like it or not. Pretending we can out-credential our way through it is wishful thinking.

School should teach kids how to live in the world, not just work in it

John Dewey, the early 20th-century pragmatist, had the answer over 100 years ago. He reminded us that school is never just a pipeline to employment. It is a place to learn how to live in a democracy. Not just memorize facts about it, but participate in it. Build it. Challenge it. Schools are not about the world; they are the world ‒ just with guidance by adults and peers, and more chances to fail safely … hopefully. That's not something AI can do. And frankly, it's not something our current test-driven, job-metric-obsessed education system is doing, either. Parents and community members also play a crucial role in shaping this type of education, which can lead to a healthier and more robust democracy for all.

In Dewey's model, teachers aren't content deliverers. They are guides and facilitators of meaning. They are people who help students figure out how to live together, how to argue without tearing each other apart, how to make sense of the world and their place in it, how to find their purpose, and how to work with peers to solve problems.

If we let AI define the boundaries of teaching, we'll hollow it out. Sure, students may learn more efficient ways to take in content. But they'll miss out on the messy, human work of collaboration, curiosity, disagreement and creation. And in a world increasingly shaped by machines, that could be the most important thing we can teach. The challenge isn't to beat AI at its own game. It's to make sure school stays human enough that students learn how to be human together.

Dustin Hornbeck, PhD, is an assistant professor of educational leadership and policy studies.
His opinion does not represent that of the university for which he works. This column originally appeared in The Tennessean.


Axios
27-05-2025
- Business
- Axios
AI is perfecting scam emails, making phishing hard to catch
AI chatbots have made scam emails harder to spot and the tells we've all been trained to look for — clunky grammar, weird phrasing — utterly useless.

Why it matters: Scammers are raking in more than ever from basic email and impersonation schemes. Last year, the FBI estimates, they made off with a whopping $16.6 billion. Thwarting AI-written scams will require a new playbook that leans more on users verifying messages and companies detecting scams before they hit inboxes, experts say.

The big picture: ChatGPT and other chatbots are helping non-English-speaking scammers write typo-free messages that closely mimic trusted senders. Before, scammers relied on clunky tools like Google Translate, which often were too literal in their translations and couldn't capture grammar and tone. Now, AI can write fluently in most languages, making malicious messages far harder to flag.

What they're saying: "The idea that you're going to train people to not open [emails] that look fishy isn't going to work for anything anymore," Chester Wisniewski, global field CISO at Sophos, told Axios. "Real messages have some grammatical errors because people are bad at writing," he added. "ChatGPT never gets it wrong."

Scammers are now training AI tools on real marketing emails from banks, retailers and service providers, Rachel Tobac, an ethical hacker and CEO of SocialProof Security, told Axios. "They even sound like they are in the voice of who you're used to working with," Tobac said. She said one Icelandic client who had never before worried about employees falling for phishing emails was now concerned. "Previously, they've been so safe because only 350,000 people comfortably speak Icelandic," she said. "Now, it's a totally new paradigm for everybody."

Threat level: Beyond grammar, the real danger lies in how these tools scale precision and speed, Mike Britton, CISO at Abnormal Security, told Axios.
Within minutes, scammers can use chatbots to create dossiers about the sales teams at every Fortune 500 company and then use those findings to write customized, believable emails, Britton said. Attackers now also embed themselves into existing email threads using lookalike domains, making their messages nearly indistinguishable from legitimate ones, he added. "Our brain plays tricks on us," Britton said. "If the domain has a W in it, and I'm a bad guy, and I set up a domain with two Vs, your brain is going to autocorrect."

Yes, but: Spotting scam emails isn't impossible. In Tobac's red-team work, she typically gets caught when:
- Someone practices what she calls polite paranoia, texting or calling the organization or person being impersonated to confirm whether they sent a suspicious message.
- A target uses a password manager and has complex, long passwords.
- They have multifactor authentication enabled.

What to watch: Britton warned that low-cost generative AI tools for deepfakes and voice clones could soon take phishing to new extremes. "It's going to get to the point where we all have to have safe words, and you and I get on a Zoom and we have to have our secret pre-shared key," Britton said. "It's going to be here before you know it."


Yahoo
27-05-2025
- Politics
- Yahoo
As a college professor, I see how AI is stripping away the humanity in education
As the 2025 school year ends, one thing teachers, parents and the broader public know for sure is that AI is here, and it is taking on more responsibilities that used to be left to the human brain. AI can now tutor students at their own pace, deliver custom content and even ace exams, including one I made for my own course. While a bit frightening, that part doesn't bother me. Of course, machines can process information faster than we can. What bothers me is that we seem ready to let the machines and political discontent define the purpose of education.

A recent Brookings report found that only one in three students is actively engaged in school. That tracks with what I have seen myself as a former high school teacher and current professor. Many students are checked out, quietly drifting through the motions while teachers juggle multiple crises. They try to pull some students up to grade level and just hope the others don't slide backward. It's more triage than teaching.

I tested one of my own final exams in ChatGPT. It scored 90% the first time and 100% the next. Colleagues tell me their students are submitting AI-written essays. One professor I know gave up and went back to in-class handwritten essays for his final exam. It's 2025, and we're back to blue books.

I recently surveyed and interviewed high school social studies teachers across the country for a study about democratic education. Every one of them said they're struggling to design assignments AI can't complete. These aren't multiple-choice quizzes or five-paragraph summaries. They're book analyses, historical critiques and policy arguments—real cognitive work that used to demand original thought. Now? A chatbot can mimic it well enough to get by. So what do we do? Double down on job training? That's what I fear.
A lot of today's education policy seems geared toward producing workers for an economy that's already in flux. But AI is going to reshape the labor market whether we like it or not. Pretending we can out-credential our way through it is wishful thinking.

John Dewey, the early 20th-century pragmatist, had the answer over 100 years ago. He reminded us that school is never just a pipeline to employment. It is a place to learn how to live in a democracy. Not just memorize facts about it, but participate in it. Build it. Challenge it. Schools are not about the world; they are the world — just with guidance by adults and peers, and more chances to fail safely … hopefully.

In Dewey's model, teachers aren't content deliverers. They are guides and facilitators of meaning. They are people who help students figure out how to live together, how to argue without tearing each other apart, how to make sense of the world and their place in it, how to find their purpose and work with peers to solve problems. That's not something AI can do. And frankly, it's not something our current test-driven, job-metric-obsessed education system is doing either. Parents and community members also play an important role in shaping this type of education, which would lead to a healthier and more robust democracy for all.

If we let AI define the boundaries of teaching, we'll hollow it out. Sure, students may learn more efficient ways to take in content. But they'll miss out on the messy, human work of collaboration, curiosity, disagreement and creation. And in a world increasingly shaped by machines, that may be the most important thing we can teach. The challenge isn't to beat AI at its own game. It's to make sure school stays human enough that students learn how to be human—together.

Dustin Hornbeck, Ph.D., is an assistant professor of educational leadership and policy studies.
His opinion does not represent that of the university for which he works. This article originally appeared on Nashville Tennessean: AI is transforming education. We're struggling to keep up | Opinion