
AI: The future belongs to those who put the humans in the machine first
In 1995, Ghost in the Shell imagined a future where consciousness could exist inside a computer. Three decades later, that vision has blurred into reality and machine intelligence is no longer a science fiction trope - it's a tool we use every day. But the real shift isn't just about building smarter systems; it's about building systems that support smarter humans.
As generative AI spreads across legal practice, the advantage no longer lies in what you know but in how well you reason, because recall is easy - anyone can pull up case law. The real edge lies in interpretation, explanation and judgment. And while today's models don't always reason perfectly, neither do humans. The better question is: can AI help lawyers reason better?
This is where things get interesting.
More data ≠ better model
Let's start with the false promise of infinite data. It's widely understood that throwing thousands of pages of legislation, regulation, case law and other legal documents at a model doesn't make it smarter. In fact, it often makes it worse, because legal reasoning depends on, amongst other things, quality, relevance and clarity. A carefully curated dataset of law and precedent in a specific domain of expertise and a particular jurisdiction (and potentially some related jurisdictions) can outperform a bloated corpus of global case law riddled with inconsistencies and irrelevance.
Here, the model doesn't need to 'know the law' - it needs to retrieve it with precision and reason over it with discipline. That's why, in most practical applications within a specific domain of expertise, Retrieval-Augmented Generation (RAG) will probably beat full fine-tuning. RAG lets you plug into a general-purpose model that's already been trained on a vast body of knowledge, then layer on your own curated legal content in real time - without the need for full re-training. It's fast, flexible and keeps you close to the constantly evolving edge of legal precedent. If fine-tuning is like rewriting the engine, RAG is like swapping in smarter fuel - giving you a model that reasons over your trusted material instead of guessing from a noisy global corpus.
This is the difference between dumping legal textbooks on your desk and actually having a partner walk you through the implications.
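To make the mechanics concrete, here is a deliberately tiny sketch of the RAG pattern described above. The 'embedding' is a toy bag-of-words counter and the corpus is three invented clause summaries; a production system would use a real embedding model, a vector store and an actual LLM call, but the shape - retrieve from curated sources, then ground the prompt in what was retrieved - is the same.

```python
# A minimal, illustrative RAG loop using only the standard library.
# The 'embedding' is a toy bag-of-words vector; the corpus snippets are invented.
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# The curated corpus: a small, jurisdiction-specific set of trusted sources,
# not a sprawling global dump.
corpus = [
    "Travel allowance is payable for attendance at the designated worksite.",
    "Flexible work arrangements may be requested under section 65 of the Fair Work Act.",
    "Allowances cease where the role is formally varied to remote work.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank curated passages by similarity to the query and keep the top k."""
    q = embed(query)
    return sorted(corpus, key=lambda doc: cosine(q, embed(doc)), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Ground the model in retrieved passages instead of its general recall."""
    context = "\n".join(f"- {p}" for p in retrieve(query))
    return f"Answer using ONLY the sources below.\n\nSources:\n{context}\n\nQuestion: {query}"

print(build_prompt("Can a remote employee still claim a travel allowance?"))
```

The key design choice is that the model answers from the retrieved passages rather than its general training data - the curated corpus, not the model weights, carries the legal authority.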
Reasoning over regurgitation
Take a real-world query:
"Can an employee working remotely in Melbourne still claim a travel allowance under their enterprise agreement?"
An untrained model might respond with this:
"There are hundreds of examples of travel allowances in Australian enterprise agreements…shall I find these for you and list them?"
Helpful? Not really.
A well-trained legal AI might say this instead:
"It depends on the specific terms of the enterprise agreement that applies to the employee. Travel allowances are typically tied to physical attendance at a designated worksite and if an employee's role has been formally varied to remote or hybrid including under a flexible work arrangement, the allowance may no longer apply. You'd need to check whether the agreement defines a primary work location, whether remote work was agreed under (Section 65 of the Fair Work Act or otherwise) and whether there are any clauses preserving travel entitlements in such cases."
Now we're not 'just' talking about answers; we're talking about prompts for strategic thinking.
Scaling senior expertise, insight and judgment, not just recall
The much deeper question is this: how do we train AI not just to answer, but to remind us to ask better questions? Clients don't pay us for information; they pay for interpretation. They come to top-tier firms because they want the kind of insight only senior legal professionals can provide - the kind that draws on pattern recognition from deep relevant experience, strategic insight and framing, and an understanding of nuance built across decades of practice.
The real opportunity lies in scaling what clients actually value most: the expertise of senior partners - their insight, experience, judgment and contextual thinking. This means training AI to reason like a partner - to recognise what matters, frame choices, reason through trade-offs and flag what clients will care about.
We should be asking: how do we encode that? How do we teach a model to say not just 'here's what the law says', but 'here's how you might think about this, and here's what clients like yours have cared about in similar cases'? This represents an all-important shift from knowledge to judgment, and from retrieval to reasoning.
Because the goal isn't to build a machine that knows everything but to build one that helps your lawyers engage with better questions, surface richer perspectives and unlock more strategic conversations that create value for clients.
It's important to remember: AI hears what is said, but great lawyers listen for what isn't said. That's where real context lives - within tone, hesitation and the unspoken concerns that shape top-tier legal advice. To build AI that supports nuanced thinking, we need to train it on more than documents; we need to model real-world interactions and teach it to recognise the emotional cues that matter. This isn't about replacing human intelligence but about amplifying it, helping lawyers read between the lines and respond with sharper insight. This, in turn, might open up brand new use cases. Imagine if AI could listen in on client-lawyer conversations not just for note-taking but to proactively suggest risks, flag potential misunderstandings or surface relevant precedents in real time based on the emotional and contextual cues it detects.
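As a thought experiment only, here is a toy sketch of what the simplest version of cue-spotting might look like: flagging hedging language in transcript turns. The cue list, transcript and matching logic are all invented for illustration - real emotional-cue detection would need audio, pauses, tone and far more sophisticated models.

```python
# A toy sketch of cue-spotting in a transcript: flag hedging or hesitation
# language that might signal an unspoken concern. Everything here is invented
# for illustration; real systems would need far richer signals than keywords.
import re

HEDGING_CUES = [
    r"\bI suppose\b", r"\bI guess\b", r"\bto be honest\b",
    r"\bwe haven't really\b", r"\bI'm not sure\b", r"\bsort of\b",
]

def flag_cues(turn: str) -> list[str]:
    """Return any hedging cues found in a single speaker turn."""
    return [c for c in HEDGING_CUES if re.search(c, turn, re.IGNORECASE)]

transcript = [
    ("Client", "The restructure is straightforward, I suppose."),
    ("Client", "We haven't really discussed the redundancy terms with the board."),
]

for speaker, turn in transcript:
    cues = flag_cues(turn)
    if cues:
        print(f"{speaker}: possible unspoken concern -> {turn!r} (cues: {cues})")
```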
From knowledge to insight: What great training looks like
If we want AI to perform like a partner, we don't need the model to hand lawyers the answer - we need it to do what a senior partner would do in conversation:
"Here's what you need to think about... Here are two approaches clients tend to prefer... and here's a risk your peers might not spot."
This kind of reasoning-first response can help younger lawyers engage with both the material and the client without needing to escalate every issue to a senior. Importantly, it's not about skipping the partner - it's about scaling their thinking: scaling the apprenticeship model in ways that weren't possible in the past.
If you're not solving for:
- What the client really cares about, and why
- How to recognise the invisible threads between past matters and current situations, options and decisions
- How to ask the kinds of questions a senior practitioner would ask
- The kinds of prompts that achieve this (a sketch follows below)
…then you're not training AI…you're just hoping like hell that it helps.
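Here is one hypothetical way to encode that checklist as a prompt template. The field names, instructions and example inputs are all invented for illustration; the point is that 'solving for' these questions means deliberate prompt and context design, not hope.

```python
# A hypothetical prompt template encoding the checklist above. The field
# names and instructions are invented for illustration only.
PARTNER_PROMPT = """You are assisting a senior {practice_area} partner.
Client context: {client_context}
Analogous past matters: {analogous_matters}

Before answering:
1. State what this client most likely cares about, and why.
2. Note any threads connecting the past matters above to this situation.
3. List the questions a senior practitioner would ask before advising.
Then give a reasoning-first answer, flagging trade-offs and risks."""

def build_partner_prompt(practice_area: str, client_context: str, analogous_matters: str) -> str:
    """Fill the template with matter-specific context before calling a model."""
    return PARTNER_PROMPT.format(
        practice_area=practice_area,
        client_context=client_context,
        analogous_matters=analogous_matters,
    )

print(build_partner_prompt(
    practice_area="employment",
    client_context="National retailer moving head-office staff to hybrid work.",
    analogous_matters="2023 enterprise agreement variation; 2022 allowance dispute.",
))
```

In practice, templates like this would live in the firm's curated library and be filled with context retrieved from past matters.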
This is also where RAG and training intersect. Rather than re-training the model from scratch, we can use RAG to ensure the model is drawing from the right content - legal guidance, judgment notes, contextual memos - while training it to reason the way our top partners do. Think of it less like coding a robot and more like mentoring a junior lawyer with access to every precedent you've ever relied on.
Some critics, including recent research, have questioned whether today's large language models can truly reason or reliably execute complex logical tasks. It's a fair challenge, and one we acknowledge, but it's also worth noting that ineffective reasoning isn't new. Inconsistency, bias and faulty heuristics have long been part of human decision-making. The aim of legal AI isn't to introduce flawless reasoning, but to scale the kind of strategic thought partners already apply every day - and to prompt richer thinking, not shortcut it.
How to structure a real firm-level AI rollout
As AI becomes embedded in professional services, casual experimentation is no longer enough. Law firms need structured adoption strategies, and one of the best frameworks could be what Wharton professor Ethan Mollick calls the 'Lab, Library, and Leadership' model for making AI work in complex organisations.
In his breakdown:
- Lab = the experimental sandbox where teams pilot real-world use cases with feedback loops and measurable impact.
- Library = the curated knowledge base of prompts, best practices, guardrails and insights (not just raw documents, but how to use them well).
- Leadership = the top-down cultural shift needed to legitimise, resource and scale these efforts.
For law firms, this maps elegantly onto our current pressing challenges: the Lab is where legal teams experiment with tools like RAG-based models on live matters. The Library is the evolving playbook of prompt templates, safe document sources and past legal reasoning. And Leadership (arguably the most vital) is what determines whether those ideas ever leave the lab and reach real matters and clients. As Mollick puts it, "AI does not currently replace people, but it does change what people with AI are capable of." The firms that win in this next chapter won't just use AI - they'll teach their people how to build with it.
And critically, they'll keep teaching it.
Most models, including GPT-4, are built on datasets with a cut-off, and as a consequence they are often months or even years out of date. If you're not feeding the machine fresh experiences and insights, you're working with a version of reality that's already stale. This isn't a 'one and done' deployment - it's an ongoing dialogue. By structuring feedback loops from live matters, debriefs and partner insights, firms can ensure the model evolves alongside the business, not behind it.
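As a rough sketch of that feedback loop, assuming nothing beyond standard Python: closed-matter insights, once reviewed by a partner, are appended to the retrieval corpus so the system's 'library' stays ahead of the model's training cut-off. The class, fields and example entry are all invented for illustration.

```python
# A sketch of the 'ongoing dialogue': vetted insights from closed matters are
# appended to the retrieval corpus so answers draw on current experience, not
# just the model's training cut-off. Storage and review steps are invented.
from datetime import date

class LivingCorpus:
    def __init__(self) -> None:
        self.entries: list[dict] = []

    def add_insight(self, text: str, source_matter: str, reviewed_by: str) -> None:
        """Only partner-reviewed insights enter the corpus - the feedback loop."""
        self.entries.append({
            "text": text,
            "source_matter": source_matter,
            "reviewed_by": reviewed_by,
            "added": date.today().isoformat(),
        })

    def latest(self, n: int = 5) -> list[str]:
        """Most recent insights first, so retrieval favours fresh experience."""
        return [e["text"] for e in self.entries[-n:][::-1]]

corpus = LivingCorpus()
corpus.add_insight(
    "Tribunals increasingly scrutinise whether hybrid arrangements were formalised in writing.",
    source_matter="Matter debrief (example)",
    reviewed_by="Partner, Employment",
)
print(corpus.latest())
```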
Putting humans in the machine
Ultimately, legal AI isn't about machine innovation; it's about human innovation. The real challenge is how to capture and scale the experience, insight, judgment and strategic thinking of senior lawyers. That requires sitting down with partners to map how they approach a question, what trade-offs they consider and how they advise clients through complexity. That's the real creativity, and that's what we need to encode into the machine.
Lawyer 2.0 isn't just AI-assisted - it's trained by the best, for the benefit of the many. The future of legal work will belong to those who put humans in the machine first.
