Latest news with #TechDay


Techday NZ
2 days ago
- Business
Exclusive: Logistics firms face rising OT cyber threats amid global tensions
Cyber attackers are increasingly targeting logistics and supply chain networks, aiming to destabilise nations and gain strategic leverage without ever crossing a border. According to Leon Poggioli, ANZ Regional Director at Claroty, the recent cyber espionage affecting logistics firms supporting Ukraine is not an isolated trend but part of a broader pattern.

"There's two key reasons nation states do this," he explained during a recent interview with TechDay. "One is to disrupt the other nation's defences, and the other is to put political pressure on the general public by interfering with their supply chains."

These attacks frequently target operational technology (OT) systems - the core infrastructure behind physical processes in logistics, energy, manufacturing and healthcare. Poggioli said attackers exploit connectivity in these environments to carry out sabotage remotely. "A lot of these environments have some kind of external connectivity, so that gives an attacker an ability to remotely trigger a cyber attack and disrupt those supply chains."

In some cases, tactics have extended to disrupting weapons infrastructure, such as drones. "When one nation uses drones, the other will defend itself by trying to jam signals and disrupt that infrastructure," he explained.

Compared to IT systems, OT vulnerabilities can be far more complex and risky to remediate. Poggioli noted that in OT, even small changes can impact safety and operations. "In the IT world, it's easy to push patches out," he said. "In OT, even a minor change can disrupt operations, so remediation needs to be more targeted."

Claroty's platform is built to help organisations quickly cut through large volumes of vulnerability data to find what really matters. "A site may have 1,000 vulnerabilities, but we can whittle that down to the five that make the most impact," he said. "That becomes a manageable number that a cyber leader and OT asset manager can act on within weeks."

Recent data from Claroty's global survey of cybersecurity professionals reinforces the growing financial and operational risks posed by cyber attacks on cyber-physical systems (CPS). Nearly half of respondents (45%) reported financial impacts of USD $500,000 or more from such attacks in the past year, with over a quarter suffering losses of at least USD $1 million. These costs were largely driven by lost revenue, recovery expenses, and employee overtime.

"It's a growing concern across multiple sectors, particularly in chemical manufacturing, energy, and mining – more than half of organisations in those sectors reported losses over half a million dollars," Poggioli said.

Ransomware remains a major burden, especially in sectors like healthcare, where 78% of organisations reported paying over USD $500,000 to regain access to encrypted systems. "These are real costs, not theoretical risks," he added. "And they're rising."

Operational downtime is also widespread. Nearly half of global respondents experienced more than 12 hours of downtime following an attack, with one-third suffering outages lasting a full day or more. "When operations halt, the financial and reputational damage mounts quickly," Poggioli said.

He added that one of the most pressing vulnerabilities is the level of remote access in these environments. "We're seeing around 45% of CPS assets connected to the internet," he said. "Most of that is done through VPNs that were never built for OT security."
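Poggioli's point about whittling 1,000 vulnerabilities down to five is, at heart, risk-based prioritisation: weight raw severity by exposure and by what the affected asset actually controls. The sketch below illustrates that general idea in Python; the fields and weights are assumptions for demonstration, not Claroty's actual scoring model.

```python
# Illustrative sketch of risk-based vulnerability triage for OT assets.
# Field names and weights are assumptions, not Claroty's methodology.
from dataclasses import dataclass

@dataclass
class Vulnerability:
    cve_id: str
    cvss: float               # base severity score, 0-10
    internet_exposed: bool    # asset reachable from outside the plant network
    actively_exploited: bool  # known exploitation in the wild
    process_critical: bool    # asset controls a safety- or production-critical process

def risk_score(v: Vulnerability) -> float:
    """Weight raw severity by exposure and operational impact."""
    score = v.cvss
    if v.internet_exposed:
        score *= 2.0   # remotely reachable flaws dominate OT risk
    if v.actively_exploited:
        score *= 1.5
    if v.process_critical:
        score *= 1.5
    return score

def top_priorities(vulns: list[Vulnerability], n: int = 5) -> list[Vulnerability]:
    """Reduce a site's full vulnerability list to the handful worth fixing first."""
    return sorted(vulns, key=risk_score, reverse=True)[:n]
```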
Third-party access is another growing concern, with 82% of respondents saying at least one cyber attack in the past year came through a supplier. Nearly half said five or more attacks stemmed from third-party connections, yet 63% admit they don't fully understand how these third parties are connected to their CPS environment. Poggioli pointed to this as a critical blind spot. "Legacy access methods and poor visibility are allowing attackers in through the back door," he said.

Even more concerning is the risk from insiders. "You want to be able to trust your team, but someone with inside knowledge can do more damage than an external attacker," Poggioli said. "Even air-gapped environments need constant monitoring."

A cyber attack on Denmark's power grid in 2023 served as a wake-up call. "One operator didn't even know they had the vulnerable firewall in their system," he said. "That's why visibility is so important. You can't secure what you don't know exists."

While preparedness across the logistics sector varies, Poggioli believes the industry is slowly recognising the strategic value of cybersecurity. "It's going to become a point of competitive advantage," he said. "Customers are going to start asking serious questions about cyber security and supply chain integrity."

He drew a sharp distinction between cyber criminals and state-backed actors. "Cyber criminals want fast financial gain, but nation states are more focused on political objectives," he said. "They have better resources and longer timelines. That changes the game."

Poggioli warned that just because no incident has occurred doesn't mean attackers aren't already embedded in critical networks. "There's growing evidence of adversaries nesting in these systems," he said. "My hypothesis is they're preparing for future conflict. If war breaks out, they're already in position to strike."

For logistics firms looking to strengthen their defences, Poggioli said the first step is basic visibility. "Most people I speak to admit they don't know 100% what's out there or how it's connected," he said. "Start with an asset inventory. Once you have that, you can start risk modelling and reduce exposure."

There are signs that resilience strategies are making a difference. According to the Claroty report, 56% of professionals now feel more confident in their CPS systems' ability to withstand cyber attacks than they did a year ago, and 72% expect measurable improvements in the next 12 months.

Still, Poggioli said complacency is not an option. "If you don't know how big the problem is, you won't know how to solve it," he said. "Once you understand the risks, you can act to protect your operations and show the business the value of cyber security."
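Poggioli's "start with an asset inventory" advice maps onto a simple data exercise. The minimal Python sketch below records each OT asset and, crucially, every external path into it, so the supplier blind spot the survey describes becomes a query rather than a guess. Asset and connection names here are invented for illustration, not drawn from any real environment.

```python
# Minimal asset inventory sketch: register each OT asset along with its
# external access paths, then invert the view to see which suppliers can
# reach which assets. Names below are illustrative placeholders.
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    site: str
    external_connections: list[str] = field(default_factory=list)  # VPNs, vendor links

inventory = [
    Asset("packing-line-plc-01", "auckland-dc", ["vendor-vpn-acme"]),
    Asset("cold-store-hvac", "auckland-dc", []),
    Asset("agv-controller-03", "wellington-hub", ["vendor-vpn-acme", "remote-desktop"]),
]

# The blind spot the survey describes: which third parties reach which assets?
by_supplier: dict[str, list[str]] = {}
for asset in inventory:
    for conn in asset.external_connections:
        by_supplier.setdefault(conn, []).append(asset.name)

for supplier, assets in by_supplier.items():
    print(f"{supplier} can reach: {', '.join(assets)}")
```

Once the inventory exists, risk modelling can start from exactly this kind of join: an asset with many external connections and a critical process attached is where exposure reduction pays off first.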


Techday NZ
10-06-2025
Exclusive: SquareX's Audrey Adeline on why the browser is 'the new endpoint'
The browser is the new battleground. That's the message from Audrey Adeline of cybersecurity company SquareX, who has launched a practical Browser Detection and Response Manual to help organisations understand and defend against attacks in what she calls "the most used app on your device."

"Eighty per cent of the time spent on a device is now in the browser," she explained to TechDay during a recent interview. "Yet it's one of the least protected surfaces in cybersecurity."

Unveiled at the RSA Conference (RSAC'25) earlier this year, the manual has struck a chord with security leaders worldwide, selling out quickly and prompting strong feedback. The manual, written by Adeline and Vivek Ramachandran, is titled 'The Browser Security Field Manual'. "We were one of the top-selling books at the RSA bookstore," Adeline said. "A lot of CISOs reached out to us afterwards to say it helped their teams rethink browser security."

Originally from Indonesia, Adeline took an unconventional path into tech. "I grew up in a very traditional economy. Most of my family ran consumer businesses - nobody was in STEM," she said. After studying biochemistry at Cambridge and working in cancer research, she pivoted into consulting and eventually joined Sequoia to evaluate tech companies, including cybersecurity firms.

Her passion for deep tech and research led her to SquareX, where she now leads the Year of Browser Bugs (YOBB) project, uncovering browser-based architectural vulnerabilities each month. These include high-profile exploits such as polymorphic extensions, which can impersonate legitimate browser tools like password managers and crypto wallets. "The danger is users don't realise they're entering credentials into a fake extension," Adeline explained. "These are architectural issues that legitimate browser features enable, and they're much harder to detect or patch."

That urgency drove the creation of the manual. "We kept seeing the same problem - people using the browser constantly, but having very little visibility or protection," she said. "Existing tools just don't give you a clear picture of how the breach occurred."

The manual's first edition is now being followed by a second, set for release at DEF CON and Black Hat in August. It will feature commentary from CISOs at Fortune 500 companies to ground the guidance in real-world enterprise experience. "We didn't want to just make it theoretical," Adeline said. "Each chapter now includes perspectives on actual problems faced by security teams." Access to the manual is currently via a request form, though Adeline said digital availability is expected closer to August.

Developing the manual was not without challenges. "The biggest hurdle was the lack of consolidated resources," she said. "There's research out there, but it's scattered. We had to pull together a lot of primary sources and make it digestible - from beginner concepts to advanced attacks."

Browser-based threats have spiked recently, with attackers targeting the browser as the new endpoint for enterprise data. "Think about it," she said. "We don't download files anymore. Our files, apps, identities - everything is now in the browser. It's where 60 to 70 per cent of enterprise data lives."

Adeline warned that the shift in attacker behaviour is permanent. "It's not just a trend. There's a fundamental change in how we work, and attackers are following the data." To help teams assess their own posture, SquareX has also launched a free browser attack testing tool.
"Seeing is believing," she said. "You can test against 49 different browser-based attacks and see which ones bypass your current solutions." She sees two main approaches to browser defence: dedicated secure browsers, or solutions like SquareX's browser extension, which converts any existing browser into a secure one. "Most organisations can't migrate everyone to a new browser," she said. "Extensions are more practical, and updates are seamless." SquareX positions itself as the EDR for the browser, focusing on detection and response at a granular level. "We're obsessed with user experience. You can't compromise productivity just to get security," she said. The company's design avoids the risks of dedicated browsers, which often lag behind on security patches. "Every time Chrome issues a patch, those browsers need to be updated manually. That creates a gap where zero-days can thrive," she explained. Future plans include a red team edition of the manual and continuous updates as attacks evolve. "I wouldn't be surprised if there are multiple versions by next year," Adeline said. Her advice to security leaders just waking up to the browser as a threat vector is clear: "You need browser-native security to tackle browser-native threats." Adeline believes the industry must go beyond reacting to breaches and start anticipating them. "The best defence is understanding what attackers are doing," she said. "You can't just play catch-up." For her, the inclusion of peer input in the manual is crucial. "Security leaders want to hear from their peers. They need validation that this is a permanent shift, not a passing concern," she said. Asked what's changed to make browsers such a prime target now, Adeline points to a confluence of technology and behaviour. "Chrome has added countless new features like WebAssembly and WebRTC. These make browsers powerful enough to replace local apps," she explained. "Since COVID, we've seen everything move online. Now attackers are simply going where the data is." "The browser is the new endpoint," she said. "It's where we work - and where we're vulnerable."


Techday NZ
22-05-2025
- Business
Exclusive: Informatica launches AI agents to transform data management
Informatica has launched a bold new chapter in enterprise data management, introducing autonomous AI Agent Engineering that promises to redefine how businesses build, connect and manage intelligent AI workflows.

Announced during this year's Informatica World event, the centrepiece of the launch is CLAIRE Agents – a suite of autonomous digital assistants designed to automate and optimise the full spectrum of data management tasks. Accompanying this is AI Agent Engineering, a new service within Informatica's Intelligent Data Management Cloud platform that empowers organisations to build, connect and manage intelligent multi-agent AI systems and compose business applications quickly, securely and at scale.

CLAIRE Agents represent what Informatica describes as "the next evolution in autonomous data management." Unlike traditional automation tools that perform static, rule-based tasks, these agents can reason and make decisions dynamically, based on enterprise-wide data.

Gaurav Pathak, Vice President of Product Management, AI and Metadata, used the metaphor of autonomous driving to explain the shift during a recent interview with TechDay. "Traditional automation is like cruise control – it keeps things going at a steady pace. Agents, on the other hand, are like a self-driving car," he said. "They plan, adapt to changing conditions and navigate complex environments based on goals, not just tasks."

CLAIRE Agents include specialised assistants such as the Data Quality Agent, Data Lineage Agent and ELT Agent, which are capable of monitoring, remediating and optimising data across complex hybrid ecosystems. These features are powered by the metadata system of intelligence in Informatica's Intelligent Data Management Cloud, a context-rich engine that combines human-curated and AI-generated metadata to ground the agents in the specific needs of each organisation. "Without metadata, agents are flying blind," Pathak explained. "It's the map of the world for our AI – it grounds them in the unique semantics and structures of an organisation's data landscape."

The company also unveiled AI Agent Engineering, a platform designed to let customers build and connect their own agents across cloud and on-premises environments. This service aims to address growing enterprise demand to create domain-specific AI tools that can work collaboratively and access trusted data across systems.

Sumeet Kumar Agrawal, VP of Product Management, who leads the new service, added that many customers are now looking to evolve from static business processes to agentic solutions that adapt and scale. "We're seeing a proliferation of agents – every app vendor has their own agents, for example, Salesforce has its own, SAP has its own, etc. – but what's missing is the connective tissue," he explained. "AI Agent Engineering provides the framework to build, connect and manage these agents holistically, so they can solve real end-to-end business problems."

Agrawal added that data is the backbone of this vision. "The reasoning of any agent is only as good as the data it has access to. We provide a clean, trusted data foundation so agents can act with confidence," he said.

Both executives stressed the importance of no-code interfaces in democratising AI adoption across technical and non-technical teams. "Writing code is just 20% of the job – maintaining it, securing it and ensuring performance is the real challenge," said Pathak. "No-code makes AI explainable and manageable for everyone."
CLAIRE Copilot, which enables users to generate complex data pipelines using natural language, is now generally available. First launched in preview earlier this year, it acts as a pair programmer for data engineers and complements the agentic approach by giving users interactive control over tasks while agents handle broader goals autonomously.

Informatica's latest strategy also includes broad ecosystem integration, with support for leading cloud and AI platforms including AWS, Azure, Google Cloud, Snowflake and Databricks. This flexibility, the company says, ensures enterprises can use their preferred AI models while maintaining control over data security and compliance.

Security remains a key concern for enterprises experimenting with generative AI. Informatica says its metadata system enforces access controls and data governance rules at every step. "Agents only access data a user is permitted to see, and we've put strict guardrails in place – for example, they can't issue delete commands," said Pathak. Agrawal added: "Every agent deployment comes with built-in security policies – rate limiting, IP restrictions, authentication protocols – everything an enterprise needs to operate safely."

The announcement has already drawn attention from key Informatica clients. Desigan Reddi, VP IT and Operations at Wescom Financial, described the agent engineering service as "a game-changer". "It enables us to build and orchestrate intelligent workflows securely and at scale – without the need for complex coding," he said. "This no-code, metadata-aware approach aligns perfectly with our vision of making advanced AI accessible and actionable."

As to what the future holds, Informatica's vision is for CLAIRE to become the "front end of data" across the enterprise. "We want users to simply tell CLAIRE what they want – a report, a pipeline, a governance task – and have the agents take care of the rest," said Pathak. Asked whether there's such a thing as too many AI agents, Pathak said, "It's not the number that matters – it's whether they're connected and working together to solve the problem."
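The guardrails Pathak describes - permission-scoped data access plus a hard block on destructive commands - follow a pattern that can be sketched generically. The Python below is a conceptual illustration of that pattern under assumed names, not Informatica's implementation or API.

```python
# Conceptual sketch of the guardrail pattern described above: every data
# action an agent proposes passes through a policy gate that enforces the
# calling user's permissions and refuses destructive operations outright.
class GuardrailViolation(Exception):
    pass

FORBIDDEN_OPERATIONS = {"delete", "drop", "truncate"}

class PolicyGate:
    def __init__(self, user_permissions: set[str]):
        self.user_permissions = user_permissions  # datasets this user may access

    def check(self, operation: str, dataset: str) -> None:
        if operation.lower() in FORBIDDEN_OPERATIONS:
            raise GuardrailViolation(f"agents may not issue '{operation}'")
        if dataset not in self.user_permissions:
            raise GuardrailViolation(f"user lacks access to '{dataset}'")

def run_agent_action(gate: PolicyGate, operation: str, dataset: str) -> str:
    gate.check(operation, dataset)  # enforced before anything executes
    return f"executed {operation} on {dataset}"

gate = PolicyGate(user_permissions={"sales_q3", "customer_master"})
print(run_agent_action(gate, "read", "sales_q3"))    # allowed
# run_agent_action(gate, "delete", "sales_q3")       # raises GuardrailViolation
```

The design point is that the gate sits outside the agent: the agent can reason however it likes, but its actions only take effect through a checkpoint it cannot modify.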


Techday NZ
16-05-2025
- Business
Exclusive: Appian's Marc Wilson on why AI needs process to deliver business value
Artificial intelligence doesn't work in a vacuum. That's the key message from Marc Wilson, founder and Chief Executive Ambassador at Appian. He believes businesses are still struggling to generate value from AI because "they're not integrating it properly into their core operations."

Speaking with TechDay during a recent interview, Wilson offered a blunt assessment of the current AI landscape in enterprise. "One of the biggest misconceptions that most businesses and most organisations have about AI is that it's indistinguishable from magic - that it just shows up and solves everybody's problems," he said.

According to Wilson, too many companies treat AI as an end goal rather than a tool to improve specific outcomes. "I've heard time and time again, senior leaders in organisations basically coming to us and saying, 'I have to deploy AI,' as if that's an end state. The truth is, if you don't look at AI through the lens of value, it's indistinguishable from a science experiment."

Appian's core philosophy is clear: AI works best in process. Wilson emphasised that for AI to drive change, it must be "operationalised" - embedded directly into the workflows that govern how an organisation functions. "For an AI capability to affect change in a positive way, it needs to plug into one of those operational flows," he said.

"A good example here in Australia is our work with Netwealth," Wilson said. "They used Appian to orchestrate how client service requests were handled, embedding AI to classify and route customer emails. They achieved 98% accuracy - and got the project running within minutes."

Wilson highlighted Hitachi's efforts to unify customer and sales data from across its hundreds of operating companies, and Queensland's National Injury Insurance Scheme, which used Appian's generative AI to extract data from documents with 100% accuracy.

Appian also recently launched its new Agent Studio platform, introducing what Wilson described as "agentic AI". Unlike standalone tools that execute isolated tasks, Appian's approach allows AI agents to function as structured contributors within business processes. "With our agentic studio, we're able to tie agentic AI into larger, meatier processes - tasking agents the same way you'd task people or systems," Wilson said. "We're combining multiple agents into an overall journey."

That structured approach, Wilson argued, is essential to scale AI safely and effectively. Without a clear framework, he warned, AI agents risk becoming uncontrolled or ineffective. "More organisations are going to get very frustrated very quickly, because they're just going to have this agent, they expect it to do something, and they'll prod it and hope," he said. "If it's not tied into a structure, there's a lot that can go wrong."

Governance, he added, must be built in from the start. "Governance and structure are going to become increasingly synonymous," he said. "This is what processes you're allowed to call, what data you're allowed to see, and the limits of your actions. I've created a circle that within it, the AI can do lots of things, but I've constrained the inputs and outputs."

Another critical piece is data. AI's performance depends on access to high-quality, integrated information - but that's a challenge when data is spread across disconnected systems. "One of the problems that most organisations have today is that a lot of their data is siloed," Wilson said. "Those silos stop really good AI development and learning."
Appian's solution is its patented data fabric, which allows data to be accessed and written across disparate systems without physically moving it. "It creates a virtualised database, allowing you to consolidate customer data and write back to systems," Wilson said. "The AI capabilities come along with that."

Wilson is clear about the risks of poorly integrated AI. There's the obvious threat of rogue agents making unauthorised decisions, but there's also the quieter failure mode - when organisations fail to realise any return at all. "If you can't integrate it effectively, if you can't bring it to your processes that matter, it's going to be something that people look at in a year or two and say, 'Yeah, that was a lot of hype, and it really didn't deliver.'"

For companies still waiting to see ROI, Wilson had a simple diagnosis: "That's probably an organisation that's trying to stand up AI by itself, looking at it, waiting for it to produce something without having it truly integrated."

His advice? Start small, and start practical. "Identify a core business process and think about how AI can remove friction, add speed, or cut costs. We've seen AI take something that took 50 days down to five hours."

And if it feels a little mundane? That might be a good sign. "Some of the most impactful AI today is going to be boring - and that might be exactly what you want to get started on," Wilson said. "Boring becomes interesting when it drives real value."
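The Netwealth example above - AI classifying and routing customer emails inside a governed workflow - reduces to a simple, widely used pattern: an embedded classification step whose low-confidence results fall back to a person rather than being acted on blindly. In the Python sketch below, the classifier is a stub standing in for a real model call, the queue names are invented, and the confidence threshold is an assumed process-level guardrail rather than anything Appian has described.

```python
# Minimal classify-and-route sketch: an AI classifier embedded as one
# step in a workflow, with a human-review fallback for uncertain results.
from dataclasses import dataclass

@dataclass
class Classification:
    category: str
    confidence: float

ROUTES = {
    "account_change": "client-services-queue",
    "complaint": "escalations-queue",
    "statement_request": "automation-queue",
}

def classify(email_body: str) -> Classification:
    """Stub for the embedded AI step (e.g. an LLM or trained classifier)."""
    if "complaint" in email_body.lower():
        return Classification("complaint", 0.93)
    return Classification("account_change", 0.55)

def route(email_body: str) -> str:
    result = classify(email_body)
    if result.confidence < 0.8:  # process-level guardrail: uncertain -> human
        return "human-review-queue"
    return ROUTES.get(result.category, "human-review-queue")

print(route("I wish to lodge a complaint about fees"))  # escalations-queue
print(route("Please update my address"))                # human-review-queue
```

The point of Wilson's "AI works best in process" argument is visible in the shape of the code: the model is just one function inside a workflow that decides what its output is allowed to trigger.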


Techday NZ
11-05-2025
- Business
Exclusive: How sustainability challenges are putting data centres under pressure
Data centres are expanding at breakneck speed across the globe, driven by the explosive rise of artificial intelligence (AI). But experts are warning that unless sustainability and resilience are prioritised now, the long-term viability of these facilities, and the services they power, could be at serious risk.

"AI is the single most disruptive technology that has the ability to have a positive impact on society in terms of bringing better services to everybody in a more sustainable way," explained David Mudd, Global Head of Digital Trust Assurance at BSI.

Mudd, who has worked across the electronics, telecoms, and digital trust sectors, has been with BSI for nearly eight years. His work focuses on ensuring digital infrastructure, including data centres, is robust, secure, and sustainable.

But while AI holds promise for society, it brings a sharp increase in demand for data storage and processing. "Every time you take a photo or send a message, that's more data being stored," he told TechDay during a recent interview. "Generative AI in particular sucks up vast quantities of data and requires huge processing power - often 1,000 times more than a standard internet search," he explained.

This surge is triggering an unprecedented boom in data centre construction, from massive hyperscale facilities to smaller, localised edge centres. "We're seeing data centres like mini cities, consuming the power of a small town," Mudd said. "But we're also seeing more of them in more places, driven by demand and by national data sovereignty concerns."

That, he added, is where the sustainability challenges become urgent. "Before we even start on efficiency, we have to ask - is there power even available to build these facilities? In some regions, there's already a three-to-five-year waiting list for power connections."

Beyond electricity, water is another growing concern. "We've seen headlines calling AI 'thirsty', and it's not far off. Data centres use vast amounts of water for cooling - either directly or indirectly through the power stations that feed them," Mudd said. Carbon emissions from energy use and construction materials such as concrete are also part of the equation.

The risk, he warned, is that in the rush to meet AI demand, corners could be cut. "There's a real pressure to get data centres up fast. But what gets built today will be around for 30 or 40 years. If shortcuts are taken, we're risking their long-term availability and effectiveness."

That's especially important given how critical data centres have become to modern life. "They're now part of our critical infrastructure. Banking, healthcare, utilities - all rely on them," he explained. "In the UK and Europe, data centres have been formally designated as such."

Climate change poses additional risks. "We've already seen sustained high temperatures in London leading to increased outages," said Mudd. "It's not just hotter places like Dubai where we need to worry. Even relatively temperate areas are seeing extreme events like floods and heatwaves that strain existing infrastructure."

Good location choices and careful design are essential. "You might need to build near a high-tech hub, but is there power? How will the local community respond? What are the climate risks - floods, heat, lightning?" he said. International standards play a key role in getting this right.
"Standards like the European EN 50600 series and its ISO equivalent, ISO/IEC 22237, provide globally agreed best practice across the entire data centre lifecycle - from design and construction to maintenance and eventual end-of-life," Mudd said. "This isn't just one viewpoint - it's 100 countries agreeing on what good looks like." While these standards don't solve everything, they help align stakeholders and enable trust across a global industry. "No one organisation has all the best answers," he added. "Having a common language helps everyone work together more effectively." Reducing water and energy consumption, particularly for AI workloads, requires both conventional and advanced solutions. "From evaporative cooling towers and cold aisle containment to on-chip liquid cooling, there are options at every level," Mudd explained. "Even just having solar panels and wind turbines on site is something every organisation should consider." Still, Mudd cautioned against focusing solely on the data centres themselves. "They're only one part of the picture. We also need to rethink software design, telecom infrastructure and societal expectations." "There's been this idea that data creation, storage and processing are free - just like we used to think of energy. That has to change." According to Mudd, talent shortages are "another hurdle." "The data centre industry is facing the same workforce crunch as the broader ICT sector," he said. "New markets especially are struggling to build that critical mass of expertise. That's why it's vital to engage with universities and young engineers now." His message to those considering a career in data centres is simple: "Without data centres, there would be no AI. If you're helping to design and operate them, you're enabling a smarter, more inclusive, and sustainable society." So what should developers be doing right now? "Look at industry best practice and understand the long-term risks of a short-term mindset," Mudd said. "We've got a one-off chance to get this right for the next 30 years. The opportunity is now."