Christie's AI art auction draws big-money bids — and thousands of protest signatures


Yahoo · Feb. 26, 2025

In Christie's New York gallery, a robot is painting a 10-by-12-foot canvas. It adds more oil paint each time a $100 bid is placed on it. But its creative vision doesn't come from the artist who programmed it.
It comes from a technique called outpainting, which uses artificial intelligence to generate new elements that blend with the existing content on a canvas. It's just one of the methods behind the 34 works in Christie's latest venture: the first major auction that exclusively features art made using AI.
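In practice, outpainting amounts to masked image generation: pad the canvas outward, mark the new margin as the region to fill, and let a model paint it so that it continues the existing image. Below is a minimal sketch of the general technique using the open-source diffusers library; the model checkpoint, margin, prompt, and file names are illustrative assumptions, not details of Reben's system or any work in the sale.

import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image, ImageOps

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
).to("cuda")

canvas = Image.open("work_in_progress.png").convert("RGB")
margin = 128

# Pad the canvas outward; the new border is what the model must invent.
padded = ImageOps.expand(canvas, border=margin, fill="white")
mask = Image.new("L", padded.size, 255)  # white = regenerate
mask.paste(0, (margin, margin, margin + canvas.width, margin + canvas.height))  # black = keep

# Resizing to the model's native 512x512 keeps the sketch simple; a real
# pipeline would tile or preserve the aspect ratio instead.
result = pipe(
    prompt="abstract oil painting, thick impasto brushwork",
    image=padded.resize((512, 512)),
    mask_image=mask.resize((512, 512)),
).images[0]
result.save("outpainted.png")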
"We've seen throughout time that there's a lot of artistry in working with mechanical means for creating artwork," said artist and roboticist Alexander Reben, whose aforementioned painting is up for bid. "And I think what really matters is your intention and what you do."
The auction house — known for selling fine art, luxury goods, and antiques — opened "Augmented Intelligence" on Feb. 20. The sale has raked in hundreds of thousands of dollars in bids.
But not everyone is pleased with those results.
"Many of the artworks you plan to auction were created using AI models that are known to be trained on copyrighted work without a license," states an open letter addressed to Christie's and signed by more than 6,400 artists.
The letter called for the auction to be canceled. Reid Southen, who helped organize the letter, said he believes a third of the works featured use generative AI models trained on copyrighted works. He named Midjourney, OpenAI's Sora, Runway AI and Stable Diffusion as examples.
"Christie's can hold themselves accountable to a higher standard and engage with these things in a way that is supportive of artists as a whole, and doesn't package these exploitative models into their auction alongside people that are doing things ethically," Southen said.
Southen, a Michigan-based film industry concept artist, said he and many of his peers have lost work and had their income "slashed in half" over the past two years due to AI.
Art isn't the only industry bracing for change. According to a World Economic Forum report released last month, 41% of employers expect to downsize their workforce as AI begins to replicate roles. Sixty-nine percent said they plan to recruit talent skilled in AI tool design and enhancement.
But Christie's sees AI as a natural progression in art history. Nicole Sales Giles, Christie's director of digital art, said she welcomes debate around the auction as a sign that AI will transform art to the industry's benefit.
"I'm not a copyright lawyer, so I can't comment on the legality, but from a theft-influence angle, artists have been influenced by other artists for centuries," Sales Giles said.
Many of the artists featured in the auction used their own data — including personal photography, curated collages and their own poetry — to train their AI models.
"The AI I've been using for almost 10 years was not trained on other artists' work," said digital artist Daniel Ambrosi, whose work is part of the auction. "It was not even created to make art in the first place."
Ambrosi fed his photographs of Central Park into Google's DeepDream at two different scales. The AI detects familiar patterns in an image and exaggerates them, shifting pixels around in hallucinogenic ways.
"It's like I'm the leader of a jazz band," he said. "I write original compositions, and I have this virtuoso saxophonist who knows where I'm going with the song, but is going to improvise, surprise and delight me."
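The technique Ambrosi describes can be approximated in a few lines: DeepDream runs gradient ascent on the input image itself, amplifying whatever features a chosen network layer responds to. Here is a minimal PyTorch sketch of that idea; the original tool is Google's TensorFlow implementation, and the layer choice, step size, and file names below are illustrative assumptions. Running the loop at several resolutions ("octaves") roughly corresponds to Ambrosi's use of two scales.

import torch
import torchvision
from torchvision import transforms
from PIL import Image

model = torchvision.models.googlenet(weights="DEFAULT").eval()

activations = {}
model.inception4c.register_forward_hook(
    lambda module, inp, out: activations.update(feat=out)
)  # a mid-level layer; the choice shapes the character of the "dream"

img = transforms.ToTensor()(Image.open("central_park.jpg")).unsqueeze(0)
img.requires_grad_(True)

for _ in range(20):  # gradient ascent on the pixels themselves
    model(img)
    activations["feat"].norm().backward()  # amplify what the layer "sees"
    with torch.no_grad():
        img += 0.01 * img.grad / img.grad.abs().mean()
        img.grad.zero_()

transforms.ToPILImage()(img.detach().squeeze(0).clamp(0, 1)).save("dreamed.png")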
But even if an artist is using their own work as an input, it doesn't guarantee that the AI model they are using was not built on data that contains copyrighted works.
On Feb. 12, Thomson Reuters won a copyright battle against a legal research firm that had used its materials to train an AI model without permission. The case is one of a wave of copyright disputes accusing tech companies of using data sets full of human writing to train AI chatbots without compensating those who wrote the original works.
OpenAI wrote in a U.K. filing last year that it would be "impossible" to train top AI models without copyrighted works. The company's website stated that using publicly available internet materials to train AI models is fair use under U.S. copyright law.
According to Reben, AI models pull from such large data sets that it's difficult to trace any individual's work. As OpenAI's first artist in residence, Reben worked extensively with beta AI technologies for making art. Now, he's an artist in residence at Meta. He said it comes down to the artist to assess what is fair use.
"Using other works to create new works is part of history," Reben said. "Creating things which change expression, which move the idea forward, is an exception in copyright law."
But even if AI is set to become part of the fine art world, Southen said, it should be integrated ethically. That means holding AI companies accountable for licensing the data they extract value from, and compensating artists fairly. Until then, he said, it's time for Christie's to "pump the brakes."
This article was originally published on NBCNews.com


Related Articles

Kroger To Close 60 Stores Across US: What To Know
Newsweek · an hour ago

Kroger announced plans to close 60 of its supermarkets across the United States over the next 18 months, representing about 5 percent of the Cincinnati-based company's 1,239 Kroger-branded grocery stores across 16 states. The retailer revealed the closure plans while reporting first-quarter earnings on Friday but has not specified which locations will be affected or released a list of impacted stores. Newsweek reached out to Kroger on Saturday via email for comment.

Why It Matters

Companies close store locations for various reasons. While shifts in consumer shopping behavior and lower demand can cause stores to close, corporations often choose to shutter underperforming locations. Kroger's sales dropped slightly to $45.1 billion, compared with $45.3 billion for the same period a year earlier, according to the company's earnings data. The move comes as grocery retailers nationwide face mounting pressure from changing consumer habits, inflation, and increased competition from discount chains and online retailers. More than 2,500 store closures are planned across the U.S. this year, according to The Mirror.

What To Know

Kroger expects the 60 store closures to provide a modest financial benefit, according to a regulatory filing. In the first quarter, the company recognized an impairment charge of $100 million related to the planned closings and indicated that the resulting savings will be reinvested into customer experience initiatives across remaining locations.

The closures affect Kroger's extensive footprint spanning 16 states, though the company has remained tight-lipped about specific locations. The grocery retailer told CBS MoneyWatch that it will not be releasing a list of the affected stores, leaving employees and customers uncertain about which communities will lose their local Kroger. However, Kroger says it is committed to supporting displaced workers: all employees at affected stores will be offered roles at other Kroger locations, though details about relocation assistance or wage protection remain unclear.

The timing coincides with broader challenges facing traditional grocery retailers. Many chains are grappling with rising operational costs, changing shopping patterns accelerated by the pandemic, and fierce competition from warehouse clubs, dollar stores, and e-commerce platforms.

(Photo: A Kroger store in Houston, June 17, 2014. AP Photo/David J. Phillip)

What People Are Saying

Kroger company statement: "As a result of these store closures, Kroger expects a modest financial benefit. Kroger is committed to reinvesting these savings back into the customer experience, and as a result, this will not impact full-year guidance."

Erin Rolfes, director of media relations and corporate communications, told Newsweek in an email response: "In the first quarter, Kroger recognized an impairment charge of $100 million related to the planned closing of approximately 60 stores over the next 18 months."

Alex Beene, a financial literacy instructor at the University of Tennessee at Martin, previously told Newsweek: "For some major retailers, 2025 is becoming a year of consolidation. Retail locations that have struggled in recent years to remain profitable due to rising costs and less demand are being shuttered, as companies focus their efforts on more successful stores. The hope is these closures will ultimately produce more fiscal and operational efficiency, but it will come at the cost of customers who favored these locations having fewer options."

Michael Ryan, a finance expert and site founder, previously told Newsweek: "These aren't random casualties; they're strategic amputations of unprofitable limbs to save the corporate body. From $15+ minimum wages to supply chain inflation, everything is crushing their razor-thin margins. Combine this with the march of e-commerce and changing consumer habits post-pandemic, and physical retail becomes a luxury many companies can no longer afford."

What Happens Next

The 18-month closure timeline suggests Kroger will implement the plan gradually, though specific dates and locations remain undisclosed.

Shaping the Future of Leadership: The Strategic Role of C-Suite Executive Recruiters in NYC's Innovation Economy
Time Business News · an hour ago

New York City is more than just a business hub—it's a breeding ground for innovation. From fintech startups in Flatiron to global media empires headquartered in Midtown, the city pulses with disruption, reinvention, and opportunity. But behind every transformative business model is a leadership team equipped to execute with clarity and conviction. That's why C-suite executive recruiters in NYC have become indispensable partners to organizations looking to build forward-thinking, high-impact leadership teams.

As NYC continues to position itself as a global leader in technology, sustainability, life sciences, and AI-driven enterprise, the profile of its executive leaders is shifting. Today's CEOs, CFOs, CMOs, and CTOs must not only bring proven management expertise—they must also possess the vision to navigate uncharted terrain. Traditional leadership search models aren't always equipped to address these evolving demands. That's where niche C-suite executive recruiters in NYC come in. These firms specialize in identifying talent who are not just functional leaders but are also fluent in innovation, adaptable to change, and capable of aligning with both investor expectations and market disruptions.

NYC companies operate at breakneck speed. Whether in finance, media, health tech, or retail, market shifts happen rapidly—and leadership decisions must keep pace. But that doesn't mean companies can afford to cut corners when hiring for the C-suite. Every executive hire affects company culture, investor confidence, and long-term strategy. Executive recruiters mitigate this risk by offering both speed and precision. They leverage deep networks, proprietary search methodologies, and psychological assessments to vet candidates not only for technical fit, but also for values alignment and long-term leadership potential. In a city where the wrong hire can make headlines—and the right one can shape an industry—this level of rigor matters.

One of the most valuable assets C-suite executive recruiters in NYC bring to the table is access to passive talent—executives who are not actively job-hunting but are open to compelling, strategically aligned opportunities. These individuals rarely respond to job postings or recruiter cold calls. Instead, they rely on trusted relationships and discreet introductions. Recruiters with long-standing reputations in NYC's leadership circles know how to initiate these conversations, build trust, and guide top candidates through complex, high-stakes transitions.

Another area where modern C-suite recruiters are having a tangible impact is in advancing leadership diversity. In an increasingly global and socially conscious market, companies are under pressure to build executive teams that reflect the diverse perspectives of their customer base, employees, and investors. The best NYC-based recruiters proactively cultivate diverse candidate pipelines and advise companies on how to remove bias from their hiring processes. This not only strengthens brand reputation but also enhances business outcomes: studies consistently show that diverse leadership teams drive better innovation and financial performance—a critical advantage in NYC's competitive sectors.

As a leading name among C-suite executive recruiters in NYC, BCL Search brings a modern, strategic approach to leadership hiring. By combining deep market insight, long-standing industry relationships, and a highly personalized search process, the firm helps companies build leadership teams that are not just ready for today but equipped for the future. Whether you're seeking a transformational CEO or a digitally savvy CMO, BCL Search delivers the strategic partnership needed to secure world-class executive talent.

Why is AI hallucinating more frequently, and how can we stop it?
Yahoo · 2 hours ago

The more advanced artificial intelligence (AI) gets, the more it "hallucinates," providing incorrect and inaccurate information. Research conducted by OpenAI found that its latest and most powerful reasoning models, o3 and o4-mini, hallucinated 33% and 48% of the time, respectively, when tested on OpenAI's PersonQA benchmark. That's more than double the rate of the older o1 model. While o3 delivers more accurate information than its predecessor, it appears to come at the cost of more inaccurate hallucinations.

This raises a concern over the accuracy and reliability of large language models (LLMs) such as AI chatbots, said Eleanor Watson, an Institute of Electrical and Electronics Engineers (IEEE) member and AI ethics engineer at Singularity University. "When a system outputs fabricated information — such as invented facts, citations or events — with the same fluency and coherence it uses for accurate content, it risks misleading users in subtle and consequential ways," Watson told Live Science.

The issue of hallucination highlights the need to carefully assess and supervise the information AI systems produce when using LLMs and reasoning models, experts say. The crux of a reasoning model is that it can handle complex tasks by essentially breaking them down into individual components and coming up with solutions to tackle them. Rather than churning out answers based on statistical probability, reasoning models come up with strategies to solve a problem, much like how humans think. In order to develop creative, and potentially novel, solutions to problems, AI needs to hallucinate — otherwise it's limited by the rigid data its LLM ingests.

"It's important to note that hallucination is a feature, not a bug, of AI," Sohrob Kazerounian, an AI researcher at Vectra AI, told Live Science. "To paraphrase a colleague of mine, 'Everything an LLM outputs is a hallucination. It's just that some of those hallucinations are true.' If an AI only generated verbatim outputs that it had seen during training, all of AI would reduce to a massive search problem. You would only be able to generate computer code that had been written before, find proteins and molecules whose properties had already been studied and described, and answer homework questions that had already been asked before. You would not, however, be able to ask the LLM to write the lyrics for a concept album focused on the AI singularity, blending the lyrical stylings of Snoop Dogg and Bob Dylan."

In effect, LLMs and the AI systems they power need to hallucinate in order to create, rather than simply serve up existing information. It is similar, conceptually, to the way that humans dream or imagine scenarios when conjuring new ideas. However, AI hallucinations present a problem when it comes to delivering accurate and correct information, especially if users take the information at face value without any checks or oversight.

"This is especially problematic in domains where decisions depend on factual precision, like medicine, law or finance," Watson said. "While more advanced models may reduce the frequency of obvious factual mistakes, the issue persists in more subtle forms. Over time, confabulation erodes the perception of AI systems as trustworthy instruments and can produce material harms when unverified content is acted upon."

And this problem looks set to be exacerbated as AI advances. "As model capabilities improve, errors often become less overt but more difficult to detect," Watson noted. "Fabricated content is increasingly embedded within plausible narratives and coherent reasoning chains. This introduces a particular risk: users may be unaware that errors are present and may treat outputs as definitive when they are not. The problem shifts from filtering out crude errors to identifying subtle distortions that may only reveal themselves under close scrutiny."

Kazerounian backed this viewpoint up. "Despite the general belief that the problem of AI hallucination can and will get better over time, it appears that the most recent generation of advanced reasoning models may have actually begun to hallucinate more than their simpler counterparts — and there are no agreed-upon explanations for why this is," he said.

The situation is further complicated because it can be very difficult to ascertain how LLMs come up with their answers; a parallel could be drawn here with how we still don't really know, comprehensively, how a human brain works. In a recent essay, Dario Amodei, the CEO of AI company Anthropic, highlighted a lack of understanding of how AIs come up with answers and information. "When a generative AI system does something, like summarize a financial document, we have no idea, at a specific or precise level, why it makes the choices it does — why it chooses certain words over others, or why it occasionally makes a mistake despite usually being accurate," he wrote.

The problems caused by AI hallucinating inaccurate information are already very real, Kazerounian noted. "There is no universal, verifiable way to get an LLM to correctly answer questions being asked about some corpus of data it has access to," he said. "The examples of non-existent hallucinated references, customer-facing chatbots making up company policy, and so on, are now all too common."

Both Kazerounian and Watson told Live Science that, ultimately, AI hallucinations may be difficult to eliminate. But there could be ways to mitigate the issue. Watson suggested that "retrieval-augmented generation," which grounds a model's outputs in curated external knowledge sources, could help ensure that AI-produced information is anchored by verifiable data.

"Another approach involves introducing structure into the model's reasoning. By prompting it to check its own outputs, compare different perspectives, or follow logical steps, scaffolded reasoning frameworks reduce the risk of unconstrained speculation and improve consistency," Watson said, noting this could be aided by training that shapes a model to prioritize accuracy, and by reinforcement training from human or AI evaluators to encourage an LLM to deliver more disciplined, grounded responses.

"Finally, systems can be designed to recognise their own uncertainty. Rather than defaulting to confident answers, models can be taught to flag when they're unsure or to defer to human judgement when appropriate," Watson added. "While these strategies don't eliminate the risk of confabulation entirely, they offer a practical path forward to make AI outputs more reliable."

Given that AI hallucination may be nearly impossible to eliminate, especially in advanced models, Kazerounian concluded that the information LLMs produce will ultimately need to be treated with the "same skepticism we reserve for human counterparts."
