Lawyers Just Discovered Something About Meta's AI That Could Cost Zuckerberg Untold Billions of Dollars

Yahoo · 4 days ago

A legal expert found that Meta's AI can spit out lengthy passages of books verbatim — and if he's right, it could be seriously bad news for the company and its CEO Mark Zuckerberg.
First, a quick primer. All the AI that's commercially buzzy at the moment, like OpenAI's ChatGPT or Meta's Llama, is trained by feeding in huge amounts of data. Then researchers do a bunch of number crunching using algorithms, basically teaching the system to recognize patterns in all that data so thoroughly that it can then create new patterns — meaning that, say, if you ask for a summary of the plot of one of the "Harry Potter" books, it'll give you (hopefully) a reasonable overview.
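As a toy illustration of that pattern-learning idea, here is a deliberately tiny sketch of the core trick (predicting the next word from the previous one) using a simple bigram counter, standing in for the billion-parameter neural networks that models like Llama actually use:

# Toy stand-in for LLM training: learn which word tends to follow which,
# then generate new text by repeatedly sampling a likely next word.
# Real models like Llama learn far richer patterns with neural networks.
import random
from collections import Counter, defaultdict

corpus = "the model reads the data and the model learns the patterns"
tokens = corpus.split()

# "Training": count next-word frequencies for every word in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    follows[prev][nxt] += 1

# "Generation": start from a word and sample likely continuations.
random.seed(0)
words = ["the"]
for _ in range(5):
    options = follows[words[-1]]
    if not options:
        break
    words.append(random.choices(list(options), weights=options.values())[0])
print(" ".join(words))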
The problem, Stanford tech law expert Mark Lemley explains in an interview with New Scientist, is that his team's research found that Meta's Llama can repeat verbatim the exact contents of copyrighted books — such as, in one example, lengthy passages from the multibillion-dollar "Harry Potter" series.
For Meta, this is a gigantic legal liability. Why? Because if its AI is producing entire excerpts of material used to train it, it starts to look less like its AI is producing transformative works based on general patterns about language and the world it learned from its training data, and more like the AI is acting like a giant .ZIP file of copyrighted work, which users can then reproduce at will.
And it looks a lot like it is. When testing various AI models by companies including OpenAI, DeepSeek, and Microsoft, Lemley's team found that Meta's Llama was the only one that spat out book content exactly. Specifically, the researchers found that Llama seemed to have memorized material including the first book in J.K. Rowling's "Harry Potter" series, F. Scott Fitzgerald's "The Great Gatsby," and George Orwell's "1984."
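For a sense of how this kind of memorization is probed, here is a minimal sketch (not the researchers' exact protocol, which measured extraction rates over many passages). It assumes access to an open-weight model through the Hugging Face transformers library; the model id and text pair are placeholders:

# Minimal verbatim-memorization probe (illustrative, not the study's method).
# Feed the model the opening of a famous book and check whether greedy
# decoding reproduces the real next words token-for-token.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-3.1-8B"  # placeholder id; gated, requires access
prefix = "It was a bright cold day in April, and the clocks were"
true_next = "striking thirteen"    # how Orwell's "1984" actually continues

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

inputs = tok(prefix, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=8, do_sample=False)
continuation = tok.decode(out[0][inputs["input_ids"].shape[1]:],
                          skip_special_tokens=True)
print(continuation)
print("verbatim match:", continuation.strip().startswith(true_next))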
It's not under debate that Meta, like its peers in the tech industry, used copyrighted materials to train its AI. But its specific methodology for doing so has come under fire: it emerged in a copyright lawsuit against Meta, brought by authors including the comedian Sarah Silverman, that the model was trained on the "Books3" dataset, which contains almost 200,000 copyrighted publications and which Meta engineers downloaded via an illegal torrent. ("Torrenting from a [Meta-owned] corporate laptop doesn't feel right," one of them fussed while doing so, in messages produced in court.)
Lemley and his team estimate that if just three percent of the Books3 dataset were found to be infringing, Meta could owe nearly $1 billion in statutory damages, and that's not counting any additional payouts based on profits gleaned from such theft. And if the proportion of infringing content is higher, at least in theory Meta could end up nailed to the wall.
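The arithmetic behind that estimate is simple, assuming the statutory-damages ranges in US copyright law (17 U.S.C. § 504(c)): $750 to $30,000 per infringed work, up to $150,000 per work if the infringement is willful.

# Back-of-the-envelope check on the "nearly $1 billion" figure, using the
# statutory-damages range in 17 U.S.C. § 504(c) and the dataset size the
# article reports ("almost 200,000 copyrighted publications").
BOOKS3_WORKS = 200_000
infringing = int(BOOKS3_WORKS * 0.03)   # 3% found infringing -> 6,000 works

for label, per_work in [("statutory minimum", 750),
                        ("statutory maximum", 30_000),
                        ("willful maximum", 150_000)]:
    print(f"{label:18s}: ${infringing * per_work:,}")
# willful maximum -> $900,000,000, i.e. "nearly $1 billion"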
Lemley is in a weird position, by the way. He previously defended Meta in that same lawsuit we mentioned above, but earlier this year, the Stanford professor announced in a LinkedIn post that he would no longer represent the company, in protest of Meta and Zuckerberg's right-wing virtue signaling. Back then, he said he believed Meta should win its case — but based on his new research, it sounds like that opinion may have shifted.
Meta declined to comment to New Scientist about Lemley's findings.
More on Meta: Meta Says It's Okay to Feed Copyrighted Books Into Its AI Model Because They Have No "Economic Value"


Related Articles

Tesla Misses Robotaxi Launch Date, Goes With Safety Drivers

Forbes · 6 minutes ago

[Photo caption: A vehicle Tesla is using for robotaxi testing in Austin, Texas, on Friday, June 20, 2025. Photographer: Eli Hartman/Bloomberg]

Tesla's much-anticipated June 22 'no one in the vehicle' robotaxi launch in Austin is not ready. Instead, Tesla has announced to its invite-only passengers that it will operate a limited service with Tesla employees on board the vehicles to maintain safety. Tesla will use an approach that Russian robotaxi company Yandex used in 2019, putting the safety driver in the passenger's seat rather than the driver's seat. (Yandex's robotaxi unit was divested from Russia and is now called AVRide.)

Having an employee on board, commonly called a safety driver, is the approach that every robocar company has used for testing, including testing of passenger operations. Most companies spend many years (Waymo spent a decade) testing with safety drivers, and once they are ready to take passengers, there are typically some number of years testing in that mode, though the path to removing the safety driver depends primarily on evaluation of the safety case for the vehicle, and less on the presence of passengers.

Tesla has put some other restrictions in place: rides will be limited to 6 am to midnight (the opposite of Cruise's first operations, which were only at night), and riders come from an invite-only list (as was also the case for Waymo, Cruise and others in their early days). Rides will be limited to a restricted service area (often mistakenly called a 'geofence') which avoids complex and difficult streets and intersections. Rides will be unavailable in inclement weather, which also can happen with other vehicles, though fairly rarely today. Tesla FSD is known to disable itself if rain obscures some of its cameras; only the front cameras have a rain wiper. The fleet will be small.

Waymo started testing with safety drivers in 2009, gave rides to passengers with safety drivers in 2017, and without safety drivers in 2020 in the Phoenix area. Cruise had a much shorter period with passengers and safety drivers. Motional has given rides for years but has never removed the safety driver. Most Chinese companies spent a few years doing it. Giving passengers rides requires good confidence in the safety of the system-plus-safety-driver combination, but taking passengers does not alter how well the vehicle drives, except perhaps around pick-up and drop-off. (While a vehicle is more at liberty to make hard stops with no passengers on board, I am aware of no vehicle which takes advantage of this.) As such, we have no information on whether Tesla will need its safety drivers for a month, several years, or even forever with current hardware.

Passenger's Seat vs. Driver's Seat

Almost all vehicles use a safety driver behind the wheel. Tesla's will be in the passenger seat, in a situation similar to that used by driving instructors for student human drivers. While unconfirmed by Tesla, the employee in the passenger seat can presumably grab the wheel and steer. Because stock Teslas have fully computer-controlled brake and acceleration, Tesla might equip the driver with electronic pedals. Some reports have suggested they have a hand controller or other ways to command the vehicle to brake.

There is no value to putting the safety driver on the passenger's side. It is no safer than being behind the wheel, and believed by most to be less safe because of the unusual geometry. It's hard to come up with any reason other than just how it looks: Tesla can state the vehicles have 'nobody in the driver's seat' in order to attempt to impress the public. The driving-school system works, so it's not overtly dangerous, but in that case there's an obvious reason for it that's not optics.

[Photo caption: A prototype of the Tesla Cybercab in a showroom at the Mall of Berlin, 20 November 2024. Photo: Hannes P. Albert/dpa/picture alliance via Getty Images]

[Photo caption: Tesla Cybercab concept. With only two seats and no controls, it is not very suitable for a safety driver. These are not being used in Tesla's Austin pilot.]

That said, most robocar prototypes, including Tesla supervised FSD, are reasonably safe with capable safety drivers. A negligent and poorly managed safety driver in an Uber ATG test vehicle killed a pedestrian in Tempe, Arizona, when the safety driver completely ignored her job, but otherwise these systems have a good record. The combination of Tesla Autopilot and a supervising driver has a reasonable record. (The record is not nearly as good as some people think Tesla claims. Every quarter, Tesla publishes a deeply misleading report comparing the combination of Tesla Autopilot plus supervisor to the general crash rate: it reports airbag deployments for Teslas driven mostly on freeways and compares that with general crash numbers on all roads for general drivers. This makes it seem Autopilot is many times safer than regular drivers when it's actually similar, a serious and deceitful misrepresentation.)
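To see why that comparison is misleading, here is a toy calculation with invented rates (not real crash data): freeway miles produce far fewer crashes per mile than city miles, so a freeway-heavy fleet looks safer before the driving system contributes anything.

# Purely hypothetical illustration (invented numbers, not Tesla data) of the
# base-rate problem: comparing a freeway-heavy fleet against all-road crash
# statistics flatters the fleet regardless of how well its system drives.
freeway_crash_rate = 1 / 5_000_000   # crashes per mile on freeways (made up)
city_crash_rate    = 1 / 500_000     # crashes per mile on city streets (made up)

# General drivers mix both road types; Autopilot miles skew heavily to freeways.
general = 0.5 * freeway_crash_rate + 0.5 * city_crash_rate
freeway_only = 1.0 * freeway_crash_rate

print(f"apparent advantage: {general / freeway_only:.1f}x")
# ~5.5x "safer" here purely from road mix, before the system does anything.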
As noted, Yandex, now AVRide, has used safety drivers in the passenger seat, and has done so in Austin--also speculated to be mostly for optics, though there are some legal jurisdictions where companies have made this move because the law requires safety drivers and they hope to convey an aura of not needing them. This has also been the case in China. When Cruise did their first 'driverless' demo ride in San Francisco, they had an employee in the passenger's seat.

So Tesla has been ready to run with safety drivers for years. What's tested here isn't the safety of the cars, but all the complexity of handling passengers, including the surprising problems of good PuDo (pick-up/drop-off). Whether Tesla can operate a safe robotaxi with nobody on board, particularly with its much more limited sensor hardware, remains to be seen.

Other Paths To Launch

Tesla apparently experimented with different paths to getting out on the road before it is ready to run unsupervised. In particular, vehicles were seen with the passenger-seat safety driver, and also being followed by a 'chase car' with two people on board. Reports also came of Tesla planning for 'lots of tele-ops,' including not just remote assistance (as all services do) but remote supervision, including remote driving. We may speculate that Tesla evaluated many different approaches. Because Elon Musk promised 'nobody in the car' and 'unsupervised' in the most recent Tesla earnings call, there was great pressure to produce #1, but the Tesla team must have concluded they could not do that yet, and made the right choice, though #3 is a better choice than #4. They also did not feel up to #2, which is commonly speculated to be what other companies have done on their first launch, later graduating to #1. #5 just looks goofy; I think the optics would not work, and it's also challenging. Remote driving is real and doable--in spite of the latency and connectivity issues of modern data networks--but perhaps Tesla could not get it ready in time.

All teams use remote assistance operators who do not drive the cars, but can give them advice when they get confused by a situation and stop to ask for it. Even Waymo recently added a minor remote-driving ability for low-speed 'get the car off the road' sorts of operations; I have recommended this for some time.

It is worth noting the contrast between Cruise's night-only launch and Tesla's mostly-daytime one. Cruise selected the night because there is less traffic and complexity, and LIDARs see very well at night. Tesla's camera-based system has very different constraints at night, and many fear it's inferior then. On the other hand, Tesla will operate in some night hours and with more cars and pedestrians on the street.

The question for Tesla will be whether the use of safety drivers is a very temporary thing, done just because they weren't quite ready but needed to meet the announced date, or a multi-year program as it has been for most teams. Tesla is famous for not meeting the forecast ship dates for its FSD system, so it's not shocking that this pattern continues. The bigger question is whether they can do it at all. Tesla FSD 13, the version available to Tesla owners, isn't even remotely close to robotaxi-ready. If Tesla has made a version which is closer, through extra work, training and severe limitation of the problem space, that's still a big accomplishment. This will be seen in the coming months. Two robocar teams had severe interactions with pedestrians; both those teams, and one pedestrian, are dead. Tesla knows it must not make mistakes.

Using AI in Customer Service? Don't Make These 4 Mistakes

Entrepreneur · 18 minutes ago

AI is revolutionizing customer service in 2025, offering speed, personalization and efficiency. But to avoid frustrating users, businesses must ensure the following things. Opinions expressed by Entrepreneur contributors are their own.

AI is omnipresent in 2025 in all areas of the business sphere, including customer service. And for good reason. Used right, AI can provide invaluable insights into your customers' behaviors and preferences, boost the efficiency of your customer service team and increase overall satisfaction. Between dynamic personalization, streamlined purchase processes and predictive customer support, many small businesses are leveraging AI to level the playing field and provide enterprise-grade customer service.

However, despite AI's massive potential, there are several pitfalls when using AI in customer service. At worst, AI can scare off customers or generate frustration rather than helping to streamline processes. Here are the four most common mistakes — and how to avoid them.

Related: How Small Businesses Can Leverage AI Without Breaking the Bank

1. Frustrating generic chatbots

To start with, chatbots can be a great asset to your team members and customers alike. They can speedily handle routine queries, free up your agents' capacities, respond to customers even outside regular business hours and reduce wait times. However, to be effective, chatbots need to be well-trained and personalized. Unfortunately, many companies — in a rush to stay ahead in the AI race — have deployed chatbots that ask too many questions, give generic answers and fail to solve queries. In one hilarious example, NYC's MyCity chatbot kept giving wrong answers even six months post-deployment and after $600,000 in investments, misinforming users about legal requirements for business owners and even basic facts such as the minimum wage. Overall, 80% of people reported that interactions with chatbots have increased their frustration rather than leading to quicker solutions to the issues they were facing.

To avoid this, it's crucial that chatbots are trained well on company-internal data. Ideally, they should be able to leverage customer-specific data across a number of different channels in order to provide personalized, efficient support to every person who reaches out.

2. Inaccessible, siloed data

On that note, another common pitfall to avoid when implementing AI in customer service is data siloing. One of AI's greatest strengths is its capacity to process huge amounts of data and unearth patterns and trends, condensed into actionable insights. These insights can then be leveraged for personalization and targeted strategy adjustments. However, that's only possible if AI actually has access to all the necessary data elements — and that is a challenge many small businesses are currently facing. In fact, a recent study by Nextiva, a market leader in customer experience software solutions, found that company leaders identified data siloing as one of the most common barriers to AI implementation. In the study, 39% of respondents agreed that they "struggled with accessibility, aggregation, integration and structure of real-time and historical data."

To avoid this limitation, it's essential to audit data storage and integration as soon as you start planning your AI implementation strategy. Making sure from the start that the systems you are considering integrate well — or that bridge solutions are at least available — will avoid unnecessary siloing and frustration down the line.
Related: AI Can Give You New Insights About Your Customers for Cheap. Here's How to Make It Work for You.

3. Going overboard on hyper-personalization and automation

On the other end of the spectrum are businesses that go overboard in their enthusiasm for AI, to a degree that can appear off-putting to many customers. This includes hyper-personalization and automation processes. While personalization is a key advantage of AI and can boost the efficiency of customer service agents and the satisfaction of the people they interact with, you don't want to appear omniscient, either. The impression that a company knows everything about them before they even reach out strikes many customers as acutely creepy. Salesbots, in particular, often trigger the uncanny valley effect, or scare off potential customers by leveraging information they don't feel the company ought to have access to. To steer clear of this particular pitfall, it's essential to carefully calibrate the level of personalization you implement and weigh its potential benefits in boosting conversions against customers' perception of intrusiveness.

4. Forgetting human escalation options

Finally, a widespread mistake small businesses make in leveraging AI for customer service is to neglect human escalation options, especially in customer support. No matter what your AI can do, it's always necessary to offer customers the option to talk to a human agent instead. There is nothing more frustrating for a customer facing an urgent problem than being stuck in an ineffective conversation loop with a chatbot or a virtual phone agent when an actual person would clearly help them reach a solution far more efficiently. Outside business hours, when AI is the only one holding down the fort, it's often enough to offer customers the option to leave a message and assure them you will contact them as soon as possible. Other than that, though, you need to give people the option of a human lifeline to help put out an urgent fire (a minimal sketch of such an escalation rule follows the conclusion below).

Related: Does AI Deserve All the Hype? Here's How You Can Actually Use AI in Your Business

Conclusion

In 2025, AI is an incredible asset that small businesses can leverage to elevate their customer service. It is, however, not a panacea. To effectively harness the potential of AI and avoid common pitfalls, it's necessary to carefully plan and train the systems you're deploying, exercise discretion with respect to personalization and implement a human failsafe option. By sticking to these tenets, you'll be able to make the most of the opportunities AI has to offer small businesses in customer service and increase your overall customer satisfaction.
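As a coda to point 4, here is a minimal sketch of what a human-lifeline rule can look like in code. The thresholds, phrase list and field names are invented for illustration, not drawn from any particular product:

# Minimal escalation rule: hand off to a human when the user asks for one
# or when the bot is visibly stuck; offer a callback outside business hours.
from dataclasses import dataclass

@dataclass
class Turn:
    user_text: str
    bot_confidence: float   # hypothetical 0..1 score from your bot
    failed_attempts: int    # unresolved bot replies so far in this session

ESCALATION_PHRASES = ("human", "agent", "representative", "person")

def should_escalate(turn: Turn, business_hours: bool) -> str:
    wants_human = any(p in turn.user_text.lower() for p in ESCALATION_PHRASES)
    stuck = turn.failed_attempts >= 2 or turn.bot_confidence < 0.4
    if wants_human or stuck:
        return "handoff_to_agent" if business_hours else "offer_callback"
    return "continue_bot"

print(should_escalate(Turn("I need a human now", 0.9, 0), business_hours=True))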

AI chatbots and TikTok reshape how young people get their daily news

Yahoo · 20 minutes ago

Artificial intelligence is changing the way people get their news, with more readers turning to chatbots like ChatGPT to stay up to date. At the same time, nearly half of young adults now rely on platforms such as TikTok as their main source of news. The findings come from the Reuters Institute's annual Digital News Report, released this week. The Oxford University-affiliated study surveyed nearly 97,000 people across 48 countries to track how global news habits are shifting.

The study found that a notable number of people are using AI chatbots to read headlines and get news updates – a shift described by the institute's director Mitali Mukherjee as a 'new chapter' in the way audiences consume information. While only 7 percent overall say they use AI chatbots to find news, that number rises among younger audiences: 12 percent of under-35s and 15 percent of under-25s now rely on tools such as OpenAI's ChatGPT, Google's Gemini or Meta's Llama for their news.

'Personalised, bite-sized and quick – that's how younger audiences want their news, and AI tools are stepping in to deliver exactly that,' Mukherjee noted. Beyond reading headlines, many readers are turning to AI for more complex tasks: 27 percent use it to summarise news articles, 24 percent for translations, and 21 percent for recommendations on what to read next. Nearly one in five have quizzed AI directly about current events. (with newswires)

Read more on RFI English. Read also: AI steals spotlight from Nobel winners who highlight its power and risks; AI showcase pays off for France, but US tech scepticism endures; 'By humans, for humans': French dubbing industry speaks out against AI threat.
