What happens when you feed AI nothing

The Verge · 3 days ago

If you stumbled across Terence Broad's AI-generated artwork (un)stable equilibrium on YouTube, you might assume he'd trained a model on the works of the painter Mark Rothko — the earlier, lighter pieces, before his vision became darker and suffused with doom. Like early-period Rothkos, Broad's AI-generated images consist of simple fields of pure color, but they morph continuously, changing form and hue.
But Broad didn't train his AI on Rothko; he didn't train it on any data at all. By hacking a neural network and locking elements of it into a recursive loop, he induced the AI to produce images without any training data — no inputs, no influences. Depending on your perspective, Broad's art is either a pioneering display of pure artificial creativity, a look into the very soul of AI, or a clever but meaningless electronic by-product, closer to guitar feedback than music. In any case, his work points the way toward a more creative and ethical use of generative AI, beyond the large-scale manufacture of derivative slop now oozing through our visual culture.
Broad has deep reservations about the ethics of training generative AI on other people's work, but his main inspiration for (un)stable equilibrium wasn't philosophical; it was a crappy job. In 2016, after searching for a job in machine learning that didn't involve surveillance, Broad found employment at a firm that ran a network of traffic cameras in the city of Milton Keynes, with an emphasis on data privacy. 'My job was training these models and managing these huge datasets, like 150,000 images all around the most boring city in the UK,' says Broad. 'And I just got so sick of managing datasets. When I started my art practice, I was like, I'm not doing it — I'm not making [datasets].'
Legal threats from a multinational corporation pushed him further away from inputs. One of Broad's early artistic successes involved training a type of artificial neural network called an autoencoder on every frame of the film Blade Runner (1982), and then asking it to generate a copy of the film. The result, bits of which are still available online, is simultaneously a demonstration of the limitations of generative AI circa 2016 and a wry commentary on the perils of human-created intelligence. Broad posted the video online, where it soon received major attention — and a DMCA takedown notice from Warner Bros. 'Whenever you get a DMCA takedown, you can contest it,' Broad says. 'But then you make yourself liable to be sued in an American court, which, as a new graduate with lots of debt, was not something I was willing to risk.'
When a journalist from Vox contacted Warner Bros. for comment, it quickly rescinded the notice — only to reissue it soon after. (Broad says the video has been reposted several times, and always receives a takedown notice — a process that, ironically, is largely conducted via AI.) Curators began to contact Broad, and he soon got exhibitions at the Whitney, the Barbican, Ars Electronica, and other venues. But anxiety over the work's murky legal status was crushing. 'I remember when I went over to the private view of the show at the Whitney, and I remember being sat on a plane and I was shitting myself because I was like, Oh, Warner Bros. are going to shut it down,' Broad recalls. 'I was super paranoid about it. Thankfully, I never got sued by Warner Bros., but that was something that really stuck with me. After that, I was like, I want to practice, but I don't want to be making work that's just derived off other people's work without their consent, without paying them. Since 2016, I've not trained a sort of generative AI model on anyone else's data to make my art.'
In 2018, Broad started a PhD in computer science at Goldsmiths, University of London. It was there, he says, that he started grappling with the full implications of his vow of data abstinence. 'How could you train a generative AI model without imitating data? It took me a while to realize that that was an oxymoron. A generative model is just a statistical model of data that just imitates the data it's been trained on. So I kind of had to find other ways of framing the question.' Broad soon turned his attention to the generative adversarial network, or GAN, an AI model that was then much in vogue. In a conventional GAN, two neural networks — the discriminator and the generator — train each other. The discriminator learns from a dataset of real examples, while the generator tries to fool it with fake data; when the generator fails, it adjusts its parameters, and when it succeeds, the discriminator adjusts. At the end of this training process, the tug-of-war between discriminator and generator will, theoretically, settle into an equilibrium that lets the GAN produce data on par with the original training set.
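To make that tug-of-war concrete, here is a minimal sketch of a conventional GAN training step written in PyTorch; the network sizes, learning rates, and loss choices are illustrative assumptions, not drawn from Broad's work:

```python
import torch
import torch.nn as nn

# Toy dimensions: 64-d latent noise, 32x32 RGB images flattened to a vector.
latent_dim, img_dim = 64, 32 * 32 * 3

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_images: torch.Tensor):
    """One adversarial round; real_images is a (batch, img_dim) tensor from the dataset."""
    batch = real_images.size(0)
    fake_images = generator(torch.randn(batch, latent_dim))

    # Discriminator learns from the data: score real images toward 1, generated fakes toward 0.
    d_loss = bce(discriminator(real_images), torch.ones(batch, 1)) + \
             bce(discriminator(fake_images.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator adjusts so its fakes are scored as real by the discriminator.
    g_loss = bce(discriminator(fake_images), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

Note that the real dataset enters only through the discriminator's loss; remove it, and the whole training signal disappears — which is exactly the problem Broad set out to work around.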
Broad's eureka moment was an intuition that he could replace the training data in the GAN with another generator network, loop it to the first generator network, and direct them to imitate each other. His early efforts led to mode collapse and produced 'gray blobs; nothing exciting,' says Broad. But when he inserted a color variance loss term into the system, the images became more complex, more vibrant. Subsequent experiments with the internal elements of the GAN pushed the work even further. 'The input to [a GAN] is called a latent vector. It's basically a big number array,' says Broad. 'And you can kind of smoothly transition between different points in the possibility space of generation, kind of moving around the possibility space of the two networks. And I think one of the interesting things is how it could just sort of infinitely generate new things.'
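The article describes this data-free configuration only at a high level, but its general shape might look something like the speculative sketch below: two generators fed the same latent vectors and trained to imitate each other, with a variance term pushing the outputs away from flat gray. The architectures, the exact form of the color variance loss, and the loss weighting are all assumptions, not Broad's actual implementation:

```python
import torch
import torch.nn as nn

# Two generators, no dataset anywhere: each network's "target" is the other's output.
latent_dim, img_dim = 64, 32 * 32 * 3

def make_generator() -> nn.Sequential:
    return nn.Sequential(
        nn.Linear(latent_dim, 256), nn.ReLU(),
        nn.Linear(256, img_dim), nn.Tanh(),
    )

gen_a, gen_b = make_generator(), make_generator()
opt = torch.optim.Adam(list(gen_a.parameters()) + list(gen_b.parameters()), lr=1e-4)

for step in range(10_000):
    z = torch.randn(16, latent_dim)   # shared latent vectors ("a big number array")
    out_a, out_b = gen_a(z), gen_b(z)

    imitation = (out_a - out_b).pow(2).mean()        # the two networks mimic each other
    variance_penalty = -(out_a.var() + out_b.var())  # assumed "color variance" term:
                                                     # discourage collapsing to flat gray
    loss = imitation + 0.1 * variance_penalty
    opt.zero_grad(); loss.backward(); opt.step()

# Interpolating between two latent vectors walks the "possibility space,"
# yielding continuously morphing fields of color.
z0, z1 = torch.randn(latent_dim), torch.randn(latent_dim)
frames = [gen_a(torch.lerp(z0, z1, t)).detach() for t in torch.linspace(0, 1, 30)]
```

Without the variance term, the imitation objective alone is trivially satisfied by identical, featureless outputs — consistent with the gray blobs and mode collapse Broad describes from his early attempts.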
Looking at his initial results, Broad saw the Rothko comparison immediately; he says he saved those first images in a folder titled 'Rothko-esque.' (Broad also says that when he presented the works that make up (un)stable equilibrium at a tech conference, someone in the audience angrily called him a liar when he said he hadn't input any data into the GAN, and insisted that he must've trained it on color field paintings.) But the comparison sort of misses the point; the brilliance of Broad's work resides in the process, not the output. He didn't set out to create Rothko-esque images; he set out to uncover the latent creativity of the networks he was working with.
Did he succeed? Even Broad's not entirely sure. When asked if the images in (un)stable equilibrium are the genuine product of a 'pure' artificial creativity, he says, 'No external representation or feature is imposed on the networks' outputs per se, but I have speculated that my personal aesthetic preferences have had some influence on this process as a form of 'meta-heuristic.' I also think why it outputs what it does is a bit of a mystery. I've had lots of academics suggest I try to investigate and understand why it outputs what it does, but to be honest I am quite happy with the mystery of it!'
Talking to him about his process and reading through his PhD thesis, you come away with the sense that, even at the highest academic level, people don't really understand exactly how generative AI works. Compare generative AI tools like Midjourney, with their exclusive emphasis on 'prompt engineering,' to something like Photoshop, which allows users to adjust a nearly endless number of settings and elements. We know that if we feed generative AI data, a composite of those inputs will come out the other side, but no one really knows, on a granular level, what's happening inside the black box. (Some of this is intentional; Broad notes the irony of a company called OpenAI being highly secretive about its models and inputs.)
Broad's explorations of inputless output shed some light on the internal processes of AI, even if his efforts sometimes sound more like early lobotomists rooting around in the brain with an ice pick than like the subtler explorations of, say, psychoanalysis. Revealing how these models work also demystifies them — critical at a time when techno-optimists and doomers alike are laboring under what Broad calls 'bullshit,' the 'mirage' of an all-powerful, quasi-mystical AI. 'We think that they're doing far more than they are,' says Broad. 'But it's just a bunch of matrix multiplications. It's very easy to get in there and start changing things.'

