02-06-2025
How lawmakers are regulating real-world AI in 2025
When it comes to regulating artificial intelligence, lawmakers are stuck between needing to act quickly and needing to get it right.
That tension was at the heart of 'What Does Real-World AI Regulation Look Like?', a panel discussion at the 2025 Builders Conference featuring Pennsylvania state Rep. Napoleon J. Nelson, Wilmington Councilmember James Spadola and John Hopkins-Gillespie, director of policy at responsible AI governance startup Trustible AI.
From autonomous vehicles to AI-powered teaching bots, use cases are outpacing public understanding — and policymakers are struggling to catch up.
'When tech starts moving at speeds that make tech people's eyes go crazy,' Nelson said, 'how do we create governance structures that work for something that's going so darn fast?'
Nelson advised lawmakers not to 'chase headlines' by relying on simplified narratives or a surface-level understanding of AI to decide when and how to legislate. In fact, legislators should purposely keep legislation 'reactive' to the technology, according to Spadola.
'I want to prevent people from getting hurt as much as we can,' Spadola said, 'but I think it's good to let the market do what the market's going to do and figure out how we deal with the negative ramifications.'
Companies are handling this by setting their own guardrails in the absence of formal regulation, according to Hopkins-Gillespie, whose company helps organizations implement AI governance.
'The last thing you want to do is be in the news for having something go wrong because you didn't have a safety check or quality assurance process in place,' Hopkins-Gillespie said.
Real use cases, real policy gaps
Throughout the discussion, the legislators shared real-world examples of how they approach AI in their current roles.
Spadola sees the tech as a net positive and has already put it to work in Wilmington. The city uses AI-powered tools like ShotSpotter and license plate readers for public safety.
'I look at AI as: How can we help run the municipality better?' Spadola said.
Nelson pointed to a recent case in Pennsylvania involving cyber charter schools applying to use AI-powered chatbots as teachers. The applications were denied, but not because the technology was deemed unsafe. The state simply didn't have policies in place to fully audit the technology's quality.
'We don't have a real framework to understand, well, what's our policy on hallucination rates? And how many times can a chatbot say something that's incorrect that then violates a contract?' Nelson said. 'We don't know.'
The need for education through collaboration
The panelists agreed on the need for stronger collaboration between tech companies, policymakers and communities.
Hopkins-Gillespie called for more proactive education, where the industry side breaks down what legislators should know about AI.
'Most policymakers are not experts on these things,' he said. 'And so, where there are gaps, where there are opportunities for us to educate and show some of the work we're doing, where some things have worked, where some things haven't gone great.'
The collaboration also extends to the relationship between constituents and elected officials. Spadola recommended that constituents build relationships with their legislators before asking for help navigating AI-related concerns.
'Form a relationship now,' he said. 'It's just better to get known now, before you come with a problem or an ask.'
In the meantime, AI regulation is a waiting game. The policy bottleneck, according to Nelson, is political, not technical.
'It doesn't take seven years to actually codify, to write good policy,' Nelson said. 'It takes seven months to write good policy, and the other six and a half years is just fighting over it.'
That shouldn't stop businesses from embracing the technology, though.
'Don't fear the technology,' Hopkins-Gillespie said. 'Learn about the technology.'