
Getty's landmark UK lawsuit on copyright and AI set to begin
LONDON (Reuters) - Getty Images' landmark copyright lawsuit against artificial intelligence company Stability AI begins at London's High Court on Monday, with the photo provider's case likely to set a key precedent for the law on AI.
The Seattle-based company, which produces editorial content and creative stock images and video, accuses Stability AI of breaching its copyright by using its images to "train" its Stable Diffusion system, which can generate images from text inputs.
Getty, which is bringing a parallel lawsuit against Stability AI in the United States, says Stability AI unlawfully scraped millions of images from its websites and used them to train and develop Stable Diffusion.
Stability AI – which has raised hundreds of millions of dollars in funding and in March announced investment by the world's largest advertising company, WPP – is fighting the case and denies infringing any of Getty's rights.
A Stability AI spokesperson said that "the wider dispute is about technological innovation and freedom of ideas," adding: "Artists using our tools are producing works built upon collective human knowledge, which is at the core of fair use and freedom of expression."
Getty's case is one of several lawsuits brought in Britain, the U.S. and elsewhere over the use of copyright-protected material to train AI models, after ChatGPT and other AI tools became widely available more than two years ago.
WIDER IMPACT
Creative industries are grappling with the legal and ethical implications of AI models that can produce their own work after being trained on existing material. Prominent figures including Elton John have called for greater protections for artists.
Lawyers say Getty's case will have a major impact on the law, as well as potentially informing government policy on copyright protections relating to AI.
"Legally, we're in uncharted territory. This case will be pivotal in setting the boundaries of the monopoly granted by UK copyright in the age of AI," Rebecca Newman, a lawyer at Addleshaw Goddard, who is not involved in the case, said.
She added that a victory for Getty could mean that Stability AI and other developers will face further lawsuits.
Cerys Wyn Davies, from the law firm Pinsent Masons, said the High Court's ruling "could have a major bearing on market practice and the UK's attractiveness as a jurisdiction for AI development".
(Reporting by Sam Tobin; Editing by Andrew Heavens)
Related Articles


Free Malaysia Today, 20 minutes ago
How AI is becoming a secret weapon for workers
PARIS: Artificial intelligence is fast becoming part of everyday working life, promising productivity gains and a transformation of working methods. Between enthusiasm and caution, companies are trying to harness this tech and integrate it into their processes. But behind the official rhetoric, a very different reality is emerging: many employees are adopting these tools discreetly, out of sight of their managers.

A recent survey conducted by software company Ivanti shows the extent of this under-the-radar adoption of AI, revealing that one-third of employees surveyed use AI tools without their supervisors' knowledge.

There are several distinct reasons for this covert strategy. For 36% of them, it is primarily a matter of gaining a 'secret advantage' over their colleagues, while 30% of respondents fear that revealing their dependence on this technology could cost them their jobs. This is understandable, considering that 29% of employees are concerned that AI will diminish the value of their skills in the eyes of their employer.

The figures reveal an explosion in clandestine use: 42% of office workers say they use generative AI tools such as ChatGPT at work. Among IT professionals, this proportion reaches an impressive 74%. And close to half of office workers use AI tools not provided by their company.

Underestimating the risks

This covert use exposes organisations to considerable risks: unauthorised platforms do not always comply with security standards or corporate data-protection requirements. From confidential data and business strategies to intellectual property, anything and everything can potentially be fed into AI tools unchecked.
'It is crucial for employers to assume this is happening, regardless of any restrictions, and to assess the use of AI to ensure it complies with their security and governance standards,' stressed Brooke Johnson, chief legal counsel at Ivanti.

The survey also reveals a troubling paradox: while 52% of office workers believe that working more efficiently simply means doing more work, many prefer to keep their productivity gains to themselves. This mistrust is accompanied by an AI-fuelled impostor syndrome, with 27% of users saying they don't want their abilities to be questioned.

This situation highlights a huge gap between management and employees: although 44% of professionals surveyed say their company has invested in AI, they simultaneously complain about a lack of training and skills to use these technologies effectively. This disconnect betrays a poorly orchestrated technological transformation.

In the face of this silent revolution, Johnson advocates a proactive approach: 'Organisations should implement clear policies and guidelines for the use of AI tools, along with regular training sessions to educate employees on the potential security and ethical implications.'

This survey suggests that companies should completely rethink their integration of AI, rather than turning a blind eye to this legion of secret users. The stakes go beyond mere operational optimisation: the most successful organisations will need to balance technological use with the enhancement of human potential.

By encouraging open dialogue, employers can foster transparency and collaboration, ensuring that the benefits of AI are harnessed safely and effectively. Ignoring this silent revolution runs the risk of deepening mutual distrust between management and employees, to everyone's detriment.

Malay Mail, 2 hours ago
Digital deception: Misinformation war escalates as AI deepfakes, fake war footage flood social media amid Iran-Israel conflict
WASHINGTON, June 23 — AI deepfakes, video game footage passed off as real combat, and chatbot-generated falsehoods — such tech-enabled misinformation is distorting the Israel-Iran conflict, fuelling a war of narratives across social media.

The information warfare unfolding alongside ground combat — sparked by Israel's strikes on Iran's nuclear facilities and military leadership — underscores a digital crisis in the age of rapidly advancing AI tools that have blurred the lines between truth and fabrication.

The surge in wartime misinformation has exposed an urgent need for stronger detection tools, experts say, as major tech platforms have largely weakened safeguards by scaling back content moderation and reducing reliance on human fact-checkers.

After Iran struck Israel with barrages of missiles last week, AI-generated videos falsely claimed to show damage inflicted on Tel Aviv and Ben Gurion Airport. The videos were widely shared across Facebook, Instagram and X. Using a reverse image search, AFP's fact-checkers found that the clips were originally posted by a TikTok account that produces AI-generated content.

There has been a 'surge in generative AI misinformation, specifically related to the Iran-Israel conflict,' Ken Jon Miyachi, founder of the Austin-based firm BitMindAI, told AFP. 'These tools are being leveraged to manipulate public perception, often amplifying divisive or misleading narratives with unprecedented scale and sophistication.'

'Photo-realism'

GetReal Security, a US company focused on detecting manipulated media including AI deepfakes, also identified a wave of fabricated videos related to the Israel-Iran conflict. The company linked the visually compelling videos — depicting apocalyptic scenes of war-damaged Israeli aircraft and buildings as well as Iranian missiles mounted on a trailer — to Google's Veo 3 AI generator, known for hyper-realistic visuals.
The Veo watermark is visible at the bottom of an online video posted by the news outlet Tehran Times, which claims to show 'the moment an Iranian missile' struck Tel Aviv.

'It is no surprise that as generative-AI tools continue to improve in photo-realism, they are being misused to spread misinformation and sow confusion,' said Hany Farid, the co-founder of GetReal Security and a professor at the University of California, Berkeley.

Farid offered one tip to spot such deepfakes: the Veo 3 videos were normally eight seconds in length, or a combination of clips of a similar duration. 'This eight-second limit obviously doesn't prove a video is fake, but should be a good reason to give you pause and fact-check before you re-share,' he said.

The falsehoods are not confined to social media. Disinformation watchdog NewsGuard has identified 51 websites that have advanced more than a dozen false claims — ranging from AI-generated photos purporting to show mass destruction in Tel Aviv to fabricated reports of Iran capturing Israeli pilots.

Sources spreading these false narratives include Iranian military-linked Telegram channels and state media sources affiliated with the Islamic Republic of Iran Broadcasting (IRIB), sanctioned by the US Treasury Department, NewsGuard said.

'Control the narrative'

'We're seeing a flood of false claims, and ordinary Iranians appear to be the core targeted audience,' McKenzie Sadeghi, a researcher with NewsGuard, told AFP. Sadeghi described Iranian citizens as 'trapped in a sealed information environment,' where state media outlets dominate in a chaotic attempt to 'control the narrative.'
Iran itself claimed to be a victim of tech manipulation, with local media reporting that Israel briefly hacked a state television broadcast, airing footage of women's protests and urging people to take to the streets.

Adding to the information chaos were online clips lifted from war-themed video games. AFP's fact-checkers identified one such clip posted on X, which falsely claimed to show an Israeli jet being shot down by Iran. The footage bore striking similarities to the military simulation game Arma 3. Israel's military has rejected Iranian media reports claiming its fighter jets were downed over Iran as 'fake news.'

Chatbots such as xAI's Grok, which online users are increasingly turning to for instant fact-checking, falsely identified some of the manipulated visuals as real, researchers said.

'This highlights a broader crisis in today's online information landscape: the erosion of trust in digital content,' BitMindAI's Miyachi said. 'There is an urgent need for better detection tools, media literacy, and platform accountability to safeguard the integrity of public discourse.' — AFP

Malay Mail, a day ago
Tengku Zafrul: Google's Malaysia investment to boost AI growth, create jobs
KUALA LUMPUR, June 22 — Tech giant Google's investment in Malaysia is expected to continue driving the country's artificial intelligence (AI) and cloud computing economy.

Investment, Trade and Industry Minister Tengku Datuk Seri Zafrul Abdul Aziz, who is currently on a working visit to Washington, United States, met with Google representatives to discuss how the company can continue to drive AI development in Malaysia, strengthen cybersecurity and invest in digital skills.

'The government is committed to providing full support and ensuring a conducive investment climate for high-quality investments,' he said in a Facebook post.

He added that Google's strategic investment of RM9.4 billion to set up its first data centre and Google Cloud region in Malaysia is expected to generate RM15.04 billion in long-term economic impact and create 26,500 jobs by 2030.

'Thank you, Google, for your continued confidence in Malaysia. Together, we are building a brighter digital future,' he said. — Bernama