Gen Z & millennials seek balance, security & AI skills at work


Techday NZ – 15 May 2025

Deloitte's 2025 Gen Z and Millennial Survey has found that younger generations in New Zealand and globally are experiencing shifts in workplace priorities, technology use, and attitudes towards financial security and career development.
The survey, which gathered responses from more than 23,000 individuals across 44 countries, shows that Gen Z and millennials are increasingly seeking a balance of financial reward, personal fulfilment, and well-being in their professional lives. According to the study, these groups are expected to make up 74% of the global workforce by 2030.
Lauren Foster, Partner at Deloitte New Zealand, said: "Instead of chasing corner offices, Gen Z and millennial workforces are looking for careers that pay fairly, align with their values and support their well-being. This shift presents a challenge for employers to rethink what leadership pathways look like and how they can support purpose-driven growth at work."
Findings from the New Zealand snapshot of the survey indicate that financial security remains a significant concern. Sixty-four percent of Gen Zs and 56% of millennials in New Zealand report living paycheck to paycheck, and about half (47% of Gen Zs and 53% of millennials) said they worry they will not be able to retire with financial comfort.
In terms of work-related stress, 47% of Gen Zs and 45% of millennials in New Zealand said they feel stressed or anxious all or most of the time, with almost a third (31% Gen Zs and 29% millennials) identifying their jobs as a contributing factor to these feelings. Foster said: "Continued financial pressures and workplace stress are taking a toll on Gen Z and millennial workers. Many are feeling stretched, both economically and emotionally, but they're also driving a shift in what they expect from employers. They're looking for workplaces that actively support wellbeing, offer flexibility, and create a culture where people can thrive – not just survive."
The report highlights the growing role of Generative AI (GenAI) at work. Usage is on the rise globally, with 74% of Gen Zs and 77% of millennials expecting GenAI to impact their work in the next year. In New Zealand, 36% of Gen Zs and 48% of millennials are already using GenAI in their everyday roles.
Respondents using GenAI report perceived improvements in work quality and work/life balance. Despite these positives, more than six in ten express concern that GenAI could eliminate jobs, and many are seeking roles they see as safe from technology-driven disruption. The survey indicates a strong demand for ongoing training, with many respondents prioritising the development of both technical and soft skills.
Globally, over 80% of Gen Z and millennial respondents believe that soft skills such as empathy and leadership are more important for career advancement than technical skills alone. Foster said: "Gen Zs and millennials are adopting generative AI tools at work and acknowledge the benefits of doing so. However, there's an undercurrent of concern too. They are enjoying the potential of these tools but wary of what they could mean for their own job security and the human side of work."
The survey also explores changing attitudes towards education and leadership aspirations. Only 6% of Gen Zs globally cited reaching a senior leadership position as a primary career goal, though opportunities for learning and development remain among the top reasons for choosing an employer. In New Zealand, the expectation gap between what young workers want from managers and what they experience remains significant. While 57% of New Zealand Gen Zs and 62% of millennials want their managers to mentor them, only 44% of Gen Zs and 38% of millennials say this actually happens.
Attitudes towards higher education are also evolving. Thirty percent of Gen Zs and 37% of millennials in New Zealand chose not to pursue higher education, compared to 31% of Gen Zs and 32% of millennials globally. The cost of tuition was the main concern for New Zealand respondents, with Foster noting: "More young people are questioning the value of traditional higher education, especially as the cost of living rises. The New Zealand snapshot shows the cost of tuition is the main concern for Gen Zs and millennials when it comes to the higher education system – and more so than what was seen globally. Fifty-seven percent of Gen Zs and 49% of millennials in New Zealand were concerned about the cost of tuition compared to 40% of Gen Zs and 38% of millennials globally."
Purpose in work was identified as highly significant, with roughly nine in ten Gen Zs and millennials globally stating that a sense of purpose is important to their job satisfaction and well-being. While some define purpose as making a positive social impact, others focus on earning money, maintaining work/life balance, or acquiring new skills that enable contributions outside of work.
The survey was based on online responses from 510 New Zealanders alongside the global sample, capturing the perspectives of 302 Gen Zs and 208 millennials living and working in New Zealand between October and December 2024.




Related Articles

Most firms overestimate AI governance as privacy risks surge

Techday NZ – a day ago

Kiteworks has released its AI Data Security and Compliance Risk Survey, highlighting gaps between AI adoption and governance maturity in the Asia-Pacific (APAC) region and globally. The survey, based on responses from 461 cybersecurity, IT, risk management, and compliance professionals, reveals that only 17% of organisations have implemented technical controls that block access to public AI tools alongside data loss prevention (DLP) scanning. Despite this, 26% of respondents state that over 30% of the data employees input into public AI tools is private, and 27% confirm this figure specifically for the APAC region.

These findings appear against a backdrop of rising incidents: Stanford's 2025 AI Index Report recorded a 56.4% year-on-year increase in AI privacy incidents, totalling 233 last year. According to the Kiteworks survey, only 40% of organisations restrict AI tool usage via training and audits, 20% rely solely on warnings without monitoring, and 13% lack any specific policies, leaving many exposed to data privacy risks.

A disconnect between adoption and controls

"Our research reveals a fundamental disconnect between AI adoption and security implementation," said Tim Freestone, Chief Strategy Officer at Kiteworks. "When only 17% have technical blocking controls with DLP scanning, we're witnessing systemic governance failure. The fact that Google reports 44% of zero-day attacks target data exchange systems undermines the very systems organisations rely on for protection."

The survey indicates a persistent overconfidence among organisations regarding their AI governance maturity. While 40% of respondents say they have fully implemented an AI governance framework, Gartner's data shows only 12% of organisations possess dedicated AI governance structures, with 55% lacking any frameworks. Deloitte's research further highlights this gap, showing just 9% achieve 'Ready' level governance maturity despite 23% considering themselves 'highly prepared'.

This discrepancy is compounded by industry data indicating that 86% of organisations lack visibility into AI data flows. EY's recent study suggests that technology companies continue to deploy AI at a rapid pace, with 48% already using AI agents and 92% planning increased investment – a 10% rise since March 2024 – under 'tremendous pressure' to justify returns, elevating incentives to adopt AI quickly at the expense of security.

"The gap between self-reported capabilities and measured maturity represents a dangerous form of organisational blindness," explained Freestone. "When organisations claiming governance discover their tracking reveals significantly more risks than anticipated according to Deloitte, and when 91% have only basic or in-progress AI governance capabilities, this overconfidence multiplies risk exposure precisely when threats are escalating."

Legal sector and policy awareness

According to survey data, the legal sector exhibits heightened concern about data leakage, with 31% of legal professionals identifying it as a top risk. However, implementation lags: 15% lack policies or controls for public AI use and 19% rely on unmonitored warnings. Only 23% of organisations overall have comprehensive privacy controls and regular audits in place before deploying AI systems. Within legal firms, 15% had no formal privacy controls but prioritised rapid AI uptake – an improvement over the 23% average across sectors, but still significant in a sector where risk mitigation is fundamental. Thomson Reuters figures support this, reporting that just 41% of law firms have AI-related policies, despite 95% foreseeing AI as central within five years.

Security controls and data exposure in APAC

APAC organisations closely mirror global patterns, with 40% relying on employee training and audits, 17% utilising technical controls with DLP scanning, and 20% issuing warnings with no enforcement. Meanwhile, 11% provide only guidelines, and 12% have no policy in place. This means that 83% lack automated controls, despite the APAC region's position at the forefront of the global AI market.

The exposure of private data follows global trends: 27% report that more than 30% of AI-ingested data is private, 24% report a 6–15% exposure rate, and 15% are unaware of their exposure levels. The slightly better visibility indicated may reflect regional technical expertise. On AI governance, 40% of APAC respondents claim thorough implementation and 41% partial implementation, while 3% are planning to implement controls and 9% have no plans.

Regulatory complexity and cross-border risks

APAC organisations must navigate a complex landscape of national regulations, including China's Personal Information Protection Law, Singapore's PDPA, Japan's APPI, Australia's Privacy Act reforms, India's draft Digital Personal Data Protection Act, and South Korea's PIPA. The survey highlights a 60% visibility gap in AI data flows in the region – particularly challenging given the region's regulatory diversity – which limits organisations' ability to comply with data localisation rules, cross-border data transfer restrictions, and consent requirements. Weak controls leave APAC organisations struggling to monitor compliance with China's data localisation regulations, manage Singapore-Australia digital agreements, and understand how AI tools route data through restricted jurisdictions.

Organisational strategies and gaps

Regarding privacy investment, 34% of organisations employ balanced approaches involving data minimisation and the selective use of privacy-enhancing technologies. Some 23% have comprehensive controls and audits, while 10% maintain basic policies but focus on AI innovation, and another 10% address privacy only when required by law. Meanwhile, 23% have no formal privacy controls while prioritising rapid AI adoption.

Kiteworks recommends that businesses recognise the overestimation of their governance maturity, deploy automated and verifiable controls for compliance, and prepare for increasing regulatory scrutiny by quantifying and addressing exposure gaps. "The data reveals organisations significantly overestimate their AI governance maturity," concluded Freestone. "With incidents surging, zero-day attacks targeting the security infrastructure itself, and the vast majority lacking real visibility or control, the window for implementing meaningful protections is rapidly closing."

C-suite divisions slow GenAI adoption due to security worries

Techday NZ – 3 days ago

A new report from NTT DATA highlights a misalignment among senior executives regarding the adoption and security implications of generative artificial intelligence (GenAI) in organisations globally. The report, "The AI Security Balancing Act: From Risk to Innovation," is based on survey responses from more than 2,300 senior GenAI decision makers, including over 1,500 C-level executives across 34 countries. The findings underscore a gap between the optimism of CEOs and the caution of Chief Information Security Officers (CISOs) concerning GenAI deployment.

C-suite perspectives

The report indicates that 99% of C-suite executives are planning to increase their GenAI investments over the next two years, with 67% of CEOs preparing for significant financial commitments. In comparison, 95% of Chief Information Officers (CIOs) and Chief Technology Officers (CTOs) report that GenAI is already influencing, or will soon drive, greater spending on cybersecurity initiatives. Improved security was named among the top three benefits realised from GenAI adoption in the past year.

Despite these high expectations, a considerable number of CISOs express reservations. Nearly half (45%) of CISOs surveyed shared negative sentiments about GenAI rollouts, identifying security gaps and the challenge of modernising legacy infrastructure as primary barriers. The report also finds differences in the perception of policy clarity: more than half of CISOs (54%) stated that internal GenAI policies are unclear, compared with just 20% of CEOs, suggesting a disconnect between business leaders' strategic vision and the concerns raised by operational security managers.

"As organisations accelerate GenAI adoption, cybersecurity must be embedded from the outset to reinforce resilience. While CEOs champion innovation, ensuring seamless collaboration between cybersecurity and business strategy is critical to mitigating emerging risks," said Sheetal Mehta, Senior Vice President and Global Head of Cybersecurity at NTT DATA, Inc. "A secure and scalable approach to GenAI requires proactive alignment, modern infrastructure and trusted co-innovation to protect enterprises from emerging threats while unlocking AI's full potential."

Operational and skills challenges

The study highlights that, while 97% of CISOs consider themselves GenAI decision makers, 69% acknowledge their teams currently lack the skills needed to work effectively with GenAI technologies. Only 38% of CISOs said their organisation's GenAI and cybersecurity strategies are aligned, compared with 51% of CEOs. Another area of concern is the absence of clearly defined policies for GenAI use: 72% of respondents had yet to implement a formal GenAI usage policy, and just 24% of CISOs strongly agreed their company has an adequate framework for balancing the risks and rewards of GenAI adoption.

Infrastructure and technology barriers

Legacy technology also poses a significant challenge to GenAI integration. The research found that 88% of security leaders believe outdated infrastructure is negatively affecting both business agility and GenAI readiness, and that upgrading systems such as Internet of Things (IoT), 5G, and edge computing is crucial for future progress. To address these obstacles, 64% of CISOs reported prioritising collaboration with strategic IT partners and co-innovation rather than relying on proprietary AI solutions. When choosing GenAI technology partners, security leaders ranked end-to-end service integration as their most important selection criterion.

"Collaboration is highly valued by line-of-business leaders in their relationships with CISOs. However, disconnects remain, with gaps between the organisation's desired risk posture and its current cybersecurity capabilities," said Craig Robinson, Research Vice President, Security Services at IDC. "While the use of GenAI clearly provides benefits to the enterprise, CISOs and Global Risk and Compliance leaders struggle to communicate the need for proper governance and guardrails, making alignment with business leaders essential for implementation."

Survey methodology

The report's data derives from a global survey of 2,300 senior GenAI decision makers. Of these respondents, 68% were C-suite executives, with the remainder comprising vice presidents, heads of department, directors, and senior managers. The research, conducted by Jigsaw Research, aimed to capture perspectives on both the opportunities and risks associated with GenAI across different regions and sectors. The report points to the need for structured governance, clarity in strategic direction, and investment in modern infrastructure to ensure successful and secure GenAI deployments.

Gartner: Generative AI to power 75% of analytics by 2027

Techday NZ – 3 days ago

Gartner has forecast that by 2027, 75% of new analytics content will be contextualised for intelligent applications through generative AI, a shift expected to enable closer connections between insights and actions across business software and processes. According to Gartner, this change signifies a move away from traditional analytic tools towards an era in which AI-driven analytics supports more dynamic and autonomous decision-making.

Georgia O'Callaghan, Director and Analyst at Gartner, outlined the implications of this anticipated evolution: "We're moving from an era where analytic tools help business people make decisions, to a future where GenAI-powered analytics becomes perceptive and adaptive. This will enable dynamic and autonomous decisions that have the potential to transform enterprise and consumer software, business processes and models."

Backing this projection, Gartner cited results from a survey of 403 analytics or AI leaders, conducted from October to December 2024. The survey found that over half of organisations currently use AI tools for generating automated insights and for employing natural language queries within analytics and AI development. However, Gartner noted that present systems are predominantly static, often lacking the capacity for truly dynamic or automated analytics delivery.

Autonomous analytics platforms

Gartner further predicts that within the next two years, augmented analytics platforms will evolve to become autonomous. By 2027, these platforms are expected to fully manage and execute 20% of business processes, enabled by their ability to operate proactively, collaboratively, and within continuously updated contexts. The next phase, as described by Gartner, will see the integration of AI agents and generative AI-driven technologies that can continuously monitor changing conditions and interpret environments such as market shifts, changes in customer behaviour, or supply chain disruptions.

O'Callaghan explained the benefits of this evolution: "Perceptive analytics will use AI agents and other GenAI-fueled technologies to continuously monitor evolving conditions and perceive the target environment, such as market shifts, customer behaviour changes or supply chain disruptions." She added, "Guidance and analysis can then be autonomously adjusted in response, creating a more resilient and responsive analytical infrastructure. As these capabilities emerge and are adopted by organisations, their potential to reshape business operations and drive growth will only continue to expand."

Managing risks

Despite these potential benefits, Gartner's research also draws attention to the risks of increasing reliance on perceptive analytics, particularly regarding the level of autonomous action allowed without human validation. Over-reliance could lead to unforeseen errors, reputational damage, and regulatory scrutiny. One key risk is "agent drift," where AI systems gradually move away from their intended goals due to evolving data or other interactions. To mitigate this, Gartner points to the emergence of guardian agents – systems specifically designed to monitor AI operations and enforce compliance with policies and rules, keeping analytics within safe and approved boundaries.

O'Callaghan highlighted the importance of governance, stating, "Building guardian agents will need to be a key focal point of new governance initiatives for data and analytics leaders, as agentic and perceptive analytics become the standard way of insight delivery across platforms."

Industry response

Gartner analysts continue to provide analysis and recommendations on developments within data and analytics strategies, with particular emphasis on driving business value and best practices for deploying AI responsibly. The research underlines that as generative AI continues to influence analytics platforms, companies will need to adopt new governance models and develop mechanisms to prevent unintended system behaviours, ensuring that the promise of intelligent applications is realised safely and effectively.
