AI chatbots need more books - these libraries are opening their stacks


The Star, 13-06-2025

Everything ever said on the Internet was just the start of teaching artificial intelligence about humanity. Tech companies are now tapping into an older repository of knowledge: the library stacks.
Nearly one million books published as early as the 15th century - and in 254 languages - are part of a Harvard University collection being released to AI researchers on Thursday. Also coming soon are troves of old newspapers and government documents held by Boston's public library.
Cracking open the vaults to centuries-old tomes could be a data bonanza for tech companies battling lawsuits from living novelists, visual artists and others whose creative works have been scooped up without their consent to train AI chatbots.
"It is a prudent decision to start with public domain data because that's less controversial right now than content that's still under copyright," said Burton Davis, a deputy general counsel at Microsoft.
Davis said libraries also hold "significant amounts of interesting cultural, historical and language data" that's missing from the past few decades of online commentary that AI chatbots have mostly learned from.
Supported by "unrestricted gifts" from Microsoft and ChatGPT maker OpenAI, the Harvard-based Institutional Data Initiative is working with libraries around the world on how to make their historic collections AI-ready in a way that also benefits libraries and the communities they serve.
"We're trying to move some of the power from this current AI moment back to these institutions," said Aristana Scourtas, who manages research at Harvard Law School's Library Innovation Lab.
"Librarians have always been the stewards of data and the stewards of information."
Harvard's newly released dataset, Institutional Books 1.0, contains more than 394 million scanned pages of paper.
One of the earlier works is from the 1400s - a Korean painter's handwritten thoughts about cultivating flowers and trees. The largest concentration of works is from the 19th century, on subjects such as literature, philosophy, law and agriculture, all of it meticulously preserved and organized by generations of librarians.
It promises to be a boon for AI developers trying to improve the accuracy and reliability of their systems.
"A lot of the data that's been used in AI training has not come from original sources," said the data initiative's executive director, Greg Leppert, who is also chief technologist at Harvard's Berkman Klein Center for Internet & Society. This book collection goes "all the way back to the physical copy that was scanned by the institutions that actually collected those items," he said.
Before ChatGPT sparked a commercial AI frenzy, most AI researchers didn't think much about the provenance of the passages of text they pulled from Wikipedia, from social media forums like Reddit and sometimes from deep repositories of pirated books.
They just needed lots of what computer scientists call tokens - units of data, each of which can represent a piece of a word.
Harvard's new AI training collection has an estimated 242 billion tokens, an amount that's hard for humans to fathom but still just a fraction of what's being fed into the most advanced AI systems. Facebook parent company Meta, for instance, has said the latest version of its AI large language model was trained on more than 30 trillion tokens pulled from text, images and videos.
Meta is also battling a lawsuit from comedian Sarah Silverman and other published authors who accuse the company of stealing their books from "shadow libraries" of pirated works.
Now, with some reservations, the real libraries are standing up.
OpenAI, which is also fighting a string of copyright lawsuits, donated US$50mil (RM211mil) this year to a group of research institutions including Oxford University's 400-year-old Bodleian Library, which is digitising rare texts and using AI to help transcribe them.
When the company first reached out to the Boston Public Library, one of the biggest in the US, the library made clear that any information it digitised would be for everyone, said Jessica Chapel, its chief of digital and online services.
"OpenAI had this interest in massive amounts of training data. We have an interest in massive amounts of digital objects. So this is kind of just a case that things are aligning," Chapel said.
Digitisation is expensive. It's been painstaking work, for instance, for Boston's library to scan and curate dozens of New England's French-language newspapers that were widely read in the late 19th and early 20th century by Canadian immigrant communities from Quebec. Now that such text is of use as training data, it helps bankroll projects that librarians want to do anyway.
"We've been very clear that, 'Hey, we're a public library,'" Chapel said. "Our collections are held for public use, and anything we digitised as part of this project will be made public."
Harvard's collection was already digitised starting in 2006 for another tech giant, Google, in its controversial project to create a searchable online library of more than 20 million books.
Google spent years beating back legal challenges from authors to its online book library, which included many newer and copyrighted works. It was finally settled in 2016 when the US Supreme Court let stand lower court rulings that rejected copyright infringement claims.
Now, for the first time, Google has worked with Harvard to retrieve public domain volumes from Google Books and clear the way for their release to AI developers. Copyright protections in the US typically last for 95 years, and longer for sound recordings.
How useful all of this will be for the next generation of AI tools remains to be seen as the data gets shared on Thursday on the Hugging Face platform, which hosts datasets and open-source AI models that anyone can download.
The book collection is more linguistically diverse than typical AI data sources. Fewer than half the volumes are in English, though European languages still dominate, particularly German, French, Italian, Spanish and Latin.
A book collection steeped in 19th century thought could also be "immensely critical" for the tech industry's efforts to build AI agents that can plan and reason as well as humans, Leppert said.
"At a university, you have a lot of pedagogy around what it means to reason," Leppert said. "You have a lot of scientific information about how to run processes and how to run analyses."
At the same time, there's also plenty of outdated data, from debunked scientific and medical theories to racist narratives.
"When you're dealing with such a large data set, there are some tricky issues around harmful content and language," said Kristi Mukk, a coordinator at Harvard's Library Innovation Lab. She said the initiative is trying to give users guidance on mitigating the risks of using the data, to "help them make their own informed decisions and use AI responsibly." - AP
