Exclusive: Appian's Marc Wilson on why AI needs process to deliver business value


Techday NZ, 16-05-2025

Artificial intelligence doesn't work in a vacuum. That's the key message from Marc Wilson, founder and Chief Executive Ambassador at Appian.
He believes businesses are still struggling to generate value from AI because "they're not integrating it properly into their core operations."
In a recent interview with TechDay, Wilson offered a blunt assessment of the current enterprise AI landscape.
"One of the biggest misconceptions that most businesses and most organisations have about AI is that it's indistinguishable from magic - that it just shows up and solves everybody's problems," he said.
According to Wilson, too many companies treat AI as an end goal rather than a tool to improve specific outcomes.
"I've heard time and time again, senior leaders in organisations basically coming to us and saying, 'I have to deploy AI,' as if that's an end state. The truth is, if you don't look at AI through the lens of value, it's indistinguishable from a science experiment."
Appian's core philosophy is clear: AI works best in process. Wilson emphasised that for AI to drive change, it must be "operationalised" - embedded directly into the workflows that govern how an organisation functions. "For an AI capability to effect change in a positive way, it needs to plug into one of those operational flows," he said.
"A good example here in Australia is our work with Netwealth," Wilson said. "They used Appian to orchestrate how client service requests were handled, embedding AI to classify and route customer emails."
"They achieved 98% accuracy - and got the project running within minutes."
Wilson highlighted Hitachi's efforts to unify customer and sales data from across its hundreds of operating companies, and Queensland's National Injury Insurance Scheme, which used Appian's generative AI to extract data from documents with 100% accuracy.
Appian also recently launched its new Agent Studio platform, introducing what Wilson described as "agentic AI". Unlike standalone tools that execute isolated tasks, Appian's approach allows AI agents to function as structured contributors within business processes.
"With our agentic studio, we're able to tie agentic AI into larger, meatier processes - tasking agents the same way you'd task people or systems," Wilson said. "We're combining multiple agents into an overall journey."
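The idea of "tasking agents the same way you'd task people" can be sketched as agents chained into a defined journey, each receiving a work item and passing its result on. This is a hypothetical illustration of the concept, not Appian's implementation; the agent names and fields are invented.

```python
# Hypothetical sketch: multiple agents combined into one journey,
# each handling a step of a process rather than acting in isolation.

def intake_agent(request: str) -> dict:
    """Turn a raw request into a structured work item."""
    return {"request": request, "category": "claim"}

def review_agent(task: dict) -> dict:
    """Check the work item before anything is sent out."""
    task["reviewed"] = True
    return task

def reply_agent(task: dict) -> dict:
    """Draft the outgoing response from the reviewed item."""
    task["reply"] = f"Your {task['category']} has been processed."
    return task

JOURNEY = [intake_agent, review_agent, reply_agent]

def run_journey(request: str) -> dict:
    """Pass the work item through each agent in order, like a process."""
    item = request
    for agent in JOURNEY:
        item = agent(item)
    return item
```

The journey itself, not any single agent, is the unit that delivers the outcome; that is the contrast Wilson draws with standalone tools.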
That structured approach, Wilson argued, is essential to scale AI safely and effectively. Without a clear framework, he warned, AI agents risk becoming uncontrolled or ineffective. "More organisations are going to get very frustrated very quickly, because they're just going to have this agent, they expect it to do something, and they'll prod it and hope," he said.
"If it's not tied into a structure, there's a lot that can go wrong."
Governance, he added, must be built in from the start.
"Governance and structure are going to become increasingly synonymous," he said. "This is what processes you're allowed to call, what data you're allowed to see, and the limits of your actions. I've created a circle that within it, the AI can do lots of things, but I've constrained the inputs and outputs."
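Wilson's "circle" maps naturally onto a guardrail layer: the agent may act freely inside it, but the processes it can call, the data it can see, and the size of its actions are all constrained. The sketch below is an invented illustration of that idea, with hypothetical names and limits, not an Appian API.

```python
# Hypothetical governance "circle": constrain an agent's inputs and outputs.
# Every process name, field, and limit here is invented for illustration.

ALLOWED_PROCESSES = {"lookup_claim", "draft_reply"}  # processes it may call
ALLOWED_FIELDS = {"claim_id", "status"}              # data it may see
MAX_REFUND = 100.0                                   # limit on its actions

def governed_call(process: str, payload: dict, refund: float = 0.0) -> dict:
    """Run an agent action only if it stays inside the governance circle."""
    if process not in ALLOWED_PROCESSES:
        raise PermissionError(f"process '{process}' is outside the circle")
    if not set(payload) <= ALLOWED_FIELDS:
        raise PermissionError("payload includes fields the agent may not see")
    if refund > MAX_REFUND:
        raise PermissionError("action exceeds the agent's limits")
    return {"process": process, "payload": payload, "approved": True}
```

Inside the circle the agent needs no case-by-case approval; outside it, every call fails closed.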
Another critical piece is data. AI's performance depends on access to high-quality, integrated information - but that's a challenge when data is spread across disconnected systems.
"One of the problems that most organisations have today is that a lot of their data is siloed," Wilson said. "Those silos stop really good AI development and learning."
Appian's solution is its patented data fabric, which allows data to be accessed and written across disparate systems without physically moving it.
"It creates a virtualised database, allowing you to consolidate customer data and write back to systems," Wilson said. "The AI capabilities come along with that."
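The data-fabric idea - one virtual layer that reads and writes across separate systems without copying their data into a central store - can be illustrated with a minimal sketch. The two "systems" below are plain dicts standing in for a CRM and a billing database; a real fabric federates live queries against the source systems, and none of these names reflect Appian's product.

```python
# Illustrative sketch of a data fabric: consolidate reads across systems
# and write back to the owning system. All names here are hypothetical.

crm = {"cust-1": {"name": "Ana", "email": "ana@example.com"}}
billing = {"cust-1": {"balance": 250.0}}

class DataFabric:
    def __init__(self, **systems):
        self.systems = systems  # the data stays in each source system

    def read(self, key: str) -> dict:
        """Consolidate one record from every connected system."""
        merged = {}
        for system in self.systems.values():
            merged.update(system.get(key, {}))
        return merged

    def write(self, system: str, key: str, field: str, value) -> None:
        """Write back to the owning system, not to a central copy."""
        self.systems[system].setdefault(key, {})[field] = value

fabric = DataFabric(crm=crm, billing=billing)
print(fabric.read("cust-1"))
# {'name': 'Ana', 'email': 'ana@example.com', 'balance': 250.0}
```

Because reads are virtual, an AI capability layered on top sees one consolidated customer record even though no silo was physically merged.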
Wilson is clear about the risks of poorly integrated AI. There's the obvious threat of rogue agents making unauthorised decisions, but there's also the quieter failure mode - when organisations fail to realise any return at all.
"If you can't integrate it effectively, if you can't bring it to your processes that matter, it's going to be something that people look at in a year or two and say, 'Yeah, that was a lot of hype, and it really didn't deliver.'"
For companies still waiting to see ROI, Wilson had a simple diagnosis: "That's probably an organisation that's trying to stand up AI by itself, looking at it, waiting for it to produce something without having it truly integrated."
His advice? Start small, and start practical.
"Identify a core business process and think about how AI can remove friction, add speed, or cut costs. We've seen AI take something that took 50 days down to five hours."
And if it feels a little mundane? That might be a good sign.
"Some of the most impactful AI today is going to be boring - and that might be exactly what you want to get started on," Wilson said. "Boring becomes interesting when it drives real value."

