Latest news with #GhostintheMachine


Techday NZ
4 days ago
- Techday NZ
AI: The future belongs to those who put the humans in the machine first
In 1993, Ghost in the Machine imagined a future where consciousness could exist inside a computer. Three decades later, that vision has blurred into reality: machine intelligence is no longer a science fiction trope - it's a tool we use every day. But the real shift isn't just about building smarter systems; it's about building systems that support smarter humans.

As generative AI spreads across legal practice, the advantage is no longer in what you know but in how well you reason, because recall is easy - anyone can pull up case law. The real edge lies in interpretation, explanation and judgment. And while today's models don't always reason perfectly, neither do humans. The better question is: can AI help lawyers reason better? This is where things get interesting.

More data ≠ better model

Let's start with the false promise of infinite data. It's widely understood that throwing thousands of pages of legislation, regulation, case law and other legal documents at a model doesn't make it smarter. In fact, it often makes it worse, because legal reasoning depends on, amongst other things, quality, relevance and clarity. A carefully curated dataset of law and precedent covering a specific domain of expertise in a particular jurisdiction (and potentially some related jurisdictions) can outperform a bloated corpus of global case law riddled with inconsistencies and irrelevance. Here, the model doesn't need to 'know the law' - it needs to retrieve it with precision and reason over it with discipline.

That's why, in most practical applications within a specific domain of expertise, Retrieval-Augmented Generation (RAG) will probably beat full fine-tuning. RAG lets you plug into a general-purpose model that's already been trained on a vast body of knowledge, and then layer on your own curated legal content in real time - without the need for full re-training. It's fast, flexible and keeps you close to the constantly evolving edge of legal precedent. If fine-tuning is like rewriting the engine, RAG is like swapping in smarter fuel - giving you a model that reasons over your trusted material instead of guessing based on a noisy global corpus. This is the difference between dumping legal textbooks on your desk and actually having a partner walk you through the implications.

Reasoning over regurgitation

Take a real-world query: "Can an employee working remotely in Melbourne still claim a travel allowance under their enterprise agreement?"

An untrained model might respond with this: "There are hundreds of examples of travel allowances in Australian enterprise agreements…shall I find these for you and list them?" Helpful? Not really.

A well-trained legal AI might say this instead: "It depends on the specific terms of the enterprise agreement that applies to the employee. Travel allowances are typically tied to physical attendance at a designated worksite, and if an employee's role has been formally varied to remote or hybrid, including under a flexible work arrangement, the allowance may no longer apply. You'd need to check whether the agreement defines a primary work location, whether remote work was agreed (under Section 65 of the Fair Work Act or otherwise) and whether there are any clauses preserving travel entitlements in such cases."

Now we're not 'just' talking about answers; we're talking about prompts for strategic thinking.
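To make that retrieve-then-reason pattern concrete, here is a minimal sketch in Python. It is a sketch under stated assumptions, not a production pipeline: the tiny corpus, the keyword-overlap retrieval and the prompt wording are illustrative stand-ins, and a real deployment would use embedding-based retrieval over a properly curated, jurisdiction-specific library before handing the context to whichever general-purpose model the firm has chosen.

```python
# Minimal sketch of the RAG pattern discussed above: retrieve a few passages
# from a small, curated, jurisdiction-specific corpus and hand only those to a
# general-purpose model, instead of fine-tuning on a bloated global corpus.
# Sources, scoring and wording are illustrative assumptions, not authoritative.

from dataclasses import dataclass


@dataclass
class Passage:
    source: str  # e.g. a clause or statutory reference
    text: str


# A deliberately small, curated corpus - the point is precision, not volume.
CORPUS = [
    Passage("Enterprise agreement, travel allowance clause",
            "A travel allowance is payable where the employee attends the designated worksite."),
    Passage("Fair Work Act, flexible work provisions",
            "An eligible employee may request a flexible working arrangement, including remote work."),
    Passage("Enterprise agreement, definitions clause",
            "'Primary work location' means the site nominated in the employee's contract."),
]


def retrieve(question: str, corpus: list[Passage], top_k: int = 2) -> list[Passage]:
    """Crude keyword-overlap retrieval; a real system would use embeddings."""
    q_terms = set(question.lower().split())
    return sorted(corpus,
                  key=lambda p: len(q_terms & set(p.text.lower().split())),
                  reverse=True)[:top_k]


def build_prompt(question: str) -> str:
    """Assemble the grounded prompt to be sent to the firm's chosen model."""
    context = "\n".join(f"[{p.source}] {p.text}" for p in retrieve(question, CORPUS))
    return (
        "Using only the sources below, explain how a senior employment lawyer "
        "would reason through the question, citing the sources you rely on.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )


if __name__ == "__main__":
    print(build_prompt("Can an employee working remotely in Melbourne still "
                       "claim a travel allowance under their enterprise agreement?"))
```

The design point is that the model never sees the whole corpus - only the handful of provisions the retrieval step judged relevant - which is what keeps the answer grounded in the firm's trusted material rather than a noisy global one.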
Scaling senior expertise, insight and judgment, not just recall

The much deeper question is this: how do we train AI not just to answer, but to remind us to ask better questions?

Clients don't pay us for information; they pay for interpretation, and they come to top-tier firms because they want the kind of insight only senior legal professionals can provide - the kind that draws on pattern recognition built through years of relevant experience, strategic insight and framing, and an understanding of nuance developed across decades of practice. The real opportunity lies in scaling what clients actually value most: the expertise of senior partners, including their insight, experience, judgment and contextual thinking. This means training AI to reason like a partner - to recognise what matters, frame choices, reason through trade-offs and flag what clients will care about.

We should be asking: how do we encode that? How do we teach a model to say not just 'here's what the law says', but 'here's how you might think about this, and here's what clients like yours have cared about in similar cases'? This represents an all-important shift from knowledge to judgment and from retrieval to reasoning. Because the goal isn't to build a machine that knows everything, but to build one that helps your lawyers engage with better questions, surface richer perspectives and unlock more strategic conversations that create value for clients.

It's important to remember: AI hears what is said, but great lawyers listen for what isn't said. That's where real context lives - within tone, hesitation and the unspoken concerns that shape top-tier legal advice. To build AI that supports nuanced thinking, we need to train it on more than documents; we need to model real-world interactions and teach it to recognise the emotional cues that matter. This isn't about replacing human intelligence but about amplifying it, helping lawyers read between the lines and respond with sharper insight. This, in turn, might open up brand new use cases. Imagine if AI could listen in on client-lawyer conversations, not just for note-taking but to proactively suggest risks, flag potential misunderstandings or surface relevant precedents in real time based on the emotional and contextual cues it detects.

From knowledge to insight: What great training looks like

If we want AI to perform like a partner, we need the model not to give lawyers the answer but to do what a senior partner would do in conversation: "Here's what you need to think about... Here are two approaches clients tend to prefer... and here's a risk your peers might not spot." This kind of reasoning-first response can help younger lawyers engage with both the material and the client without needing to escalate every issue to their senior. Importantly, it's not about skipping the partner - it's about scaling their thinking, and scaling the apprenticeship model in ways that weren't possible in the past.

If you're not solving for:
- what the client really cares about, and why
- how to recognise the invisible threads between past matters and current situations, options and decisions
- how to ask the kinds of questions a senior practitioner would ask
- the kind of prompt to use to achieve this
…then you're not training AI - you're just hoping like hell that it helps.

This is also where RAG and training intersect. Rather than re-training the model from scratch, we can use RAG to ensure the model is drawing from the right content - legal guidance, judgment notes, contextual memos - while training it to reason the way our top partners do. Think of it less like coding a robot and more like mentoring a junior lawyer with access to every precedent you've ever relied on.
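What might that reasoning-first framing look like in practice? Here is a hedged sketch of how such a prompt could be assembled. The wording of the system prompt and the role/content message shape are assumptions for illustration only, not a tested template or any particular provider's API.

```python
# Hedged sketch of a "reasoning-first" prompt: rather than asking the model for
# a bare answer, it asks for the considerations a senior partner would raise.
# The template text and message structure are illustrative assumptions.

PARTNER_STYLE_SYSTEM_PROMPT = """\
You are supporting a junior lawyer. Do not give a bare answer. Instead:
1. Identify what the client is most likely to care about, and why.
2. Note any parallels with past matters or situations where this commonly arises.
3. List the questions a senior practitioner would ask before advising.
4. Frame two or three realistic options and the trade-offs between them.
5. Flag anything that should be escalated to a partner.
"""


def build_messages(matter_summary: str, retrieved_context: str) -> list[dict]:
    """Assemble a chat-style payload; the role/content fields follow the common
    convention but are not tied to any particular provider's API."""
    return [
        {"role": "system", "content": PARTNER_STYLE_SYSTEM_PROMPT},
        {"role": "user",
         "content": (f"Context from our curated sources:\n{retrieved_context}\n\n"
                     f"Matter: {matter_summary}")},
    ]


if __name__ == "__main__":
    messages = build_messages(
        "Remote employee in Melbourne asking about a travel allowance.",
        "[Enterprise agreement] Travel allowance payable for attendance at the designated worksite.")
    for message in messages:
        print(f"{message['role'].upper()}:\n{message['content']}\n")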
Some critics, including recent research, have questioned whether today's large language models can truly reason or reliably execute complex logical tasks. It's a fair challenge, and one we acknowledge, but it's also worth noting that ineffective reasoning isn't new. Inconsistency, bias and faulty heuristics have long been a part of human decision-making. The aim of legal AI isn't to introduce flawless reasoning, but to scale the kind of strategic thought partners already apply every day and to prompt richer thinking, not shortcut it.

How to structure a real firm-level AI rollout

As AI becomes embedded in professional services, casual experimentation is no longer enough. Legal firms need structured adoption strategies, and one of the best frameworks could be what Wharton professor Ethan Mollick calls the 'Lab, Library, and Leadership' model for making AI work in complex organisations. In his breakdown:
- Lab = the experimental sandbox where teams pilot real-world use cases with feedback loops and measurable impact.
- Library = the curated knowledge base of prompts, best practices, guardrails and insights (not just raw documents, but how to use these well).
- Leadership = the top-down cultural shift that's needed to legitimise, resource and scale these efforts.

For law firms, this maps elegantly to our current pressing challenges: the Lab is where legal teams experiment with tools like RAG-based models on live matters. The Library is the evolving playbook of prompt templates, safe document sources and past legal reasoning. And Leadership (arguably the most vital) is what determines whether those ideas ever leave the lab and reach real matters and clients. As Mollick puts it, "AI does not currently replace people, but it does change what people with AI are capable of."

The firms that win in this next chapter won't just use AI - they'll teach their people how to build with it. And critically, they'll keep teaching it. Most models, including GPT-4, are built on datasets with a cut-off, and as a consequence they are often months or even years out of date. If you're not feeding the machine fresh experiences and insights, you're working with a version of reality that's already stale. This isn't a 'one and done' deployment - it's an ongoing dialogue. By structuring feedback loops from live matters, debriefs and partner insights, firms can ensure the model evolves alongside the business, not behind it.

Putting humans in the machine

Ultimately, legal AI isn't about machine innovation; it's about human innovation, and the real challenge is how to capture and scale the experience, insight, judgment and strategic thinking of senior lawyers. That requires sitting down with partners to map how they approach a question, what trade-offs they consider and how they advise clients through complexity. That's the real creativity, and that's what we need to encode into the machine. Lawyer 2.0 isn't just AI-assisted - it's trained by the best, for the benefit of the many. The future of legal work will belong to those who put humans in the machine first.
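As a closing illustration of that 'keep teaching it' feedback loop, here is a small hedged sketch of how post-matter debriefs might be captured into the curated knowledge base a RAG setup retrieves from. The file name and record fields are assumptions for illustration, not an established schema.

```python
# Illustrative sketch of the feedback loop described above: after each matter,
# a partner-reviewed debrief is appended to the curated knowledge base that the
# retrieval layer draws from, so the system evolves with live work rather than
# going stale. File name and fields are assumptions, not an established schema.

import json
from datetime import date
from pathlib import Path

KNOWLEDGE_BASE = Path("firm_knowledge_base.jsonl")


def log_debrief(matter_id: str, lesson: str, reviewed_by: str) -> None:
    """Append one reviewed lesson from a live matter as a retrievable record."""
    record = {
        "matter_id": matter_id,
        "lesson": lesson,
        "reviewed_by": reviewed_by,
        "captured_on": date.today().isoformat(),
    }
    with KNOWLEDGE_BASE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


if __name__ == "__main__":
    log_debrief("M-1042",
                "Client valued a fixed-fee option over broader scope flexibility.",
                "Senior partner, employment team")
```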


Scotsman
5 days ago
- Entertainment
- Scotsman
Hidden Door Festival, Edinburgh review: 'a welcome return'
Hidden Door, The Paper Factory, Edinburgh ★★★★★

'We played here in November and it was minus twelve degrees and we couldn't feel our fingers, but it was already pretty good,' said Glasgow art-punks Brenda on Saturday night, as the trio – a flurry of sharp melodies and hot sarcasm in boiler suits and red PVC – delivered one of the many highlight sets of a weekend marking the welcome return of Edinburgh's Hidden Door festival.

It's a sign of the developer era that the festival's original calling to occupy and celebrate disused and derelict spaces and turn them into one-week music and arts festivals is increasingly drawing a blank on undeveloped sites in central Edinburgh. The Paper Factory – the former Saica paper and cardboard factory by Gogar roundabout – is on the city's far western edge, behind the Edinburgh Gateway tram/train interchange and in view of Edinburgh Airport's control tower. One day soon it too will be demolished, consumed by the sprawl of Scotland's capital.

For a few days, however, it was a site of pilgrimage. Much of what happened on the Friday and Saturday reflected the building's heritage, from Jill Martin Boualaxai's festival-commissioned performance art piece Ghost in the Machine, a gestural physical work performed in and around a piece of illuminated plant machinery, its shell being hammered on by Edinburgh's reactivated Sativa Drummers, to the group work we have all been here – now into the light. The latter featured an installation in the cluttered workers' locker room featuring a stark electronic soundtrack and headphone interviews with former factory workers, a ghost memory of what used to take place here. Elsewhere in the vast space dubbed the 'Factory Floor', meanwhile, the Sativa Drummers reemerged on Saturday night to carry out their own thundering dancefloor performance, recreating the sound of industry as music for dance and catharsis. Other art, meanwhile, just sat beautifully in the space without responding much to it at all, for example Abby Warlow and Lewis Gourlay's vaulting, ceiling-height films of figures dancing, or Juliana Capes' captivating Rainbow Pods, a cascade of multicoloured balls caught in motion in mid-flow from the ceiling.

Most spectacular of all, the Crane Shed hosted another commissioned work, choreographer Tess Letham's dance piece Spectral, with four female figures, two grounded and two spinning on aerial wires, filling this vast space to clubby music and lighting arrangements by Dave House and Sam Jones.
There was also great music, and lots of it, across two large rooms perfectly reimagined as concert spaces: emerging singer-songwriter Alice Faye; the spiky, stylish beat-pop of PVC; the driving, heartfelt Americana of Katy J Pearson; the sheer catharsis of Tinderbox Orchestra, their joyous, semi-synchronised dance moves providing a self-professed antidote to the fearmongering of the moment; and the brilliant Voka Gentle, who cycled through spiky New Wave, dirty glam and gorgeous, understated lyricism.