'Tone deaf': US tech company responsible for global IT outage to cut jobs and use AI

The Guardian · 09-05-2025

The cybersecurity company that became a household name after causing a massive global IT outage last year has announced it will cut 5% of its workforce, in part due to 'AI efficiency'.
In a note to staff earlier this week, released in stock market filings in the US, CrowdStrike's chief executive, George Kurtz, announced that 500 positions, or 5% of its workforce, would be cut globally, citing AI efficiencies created in the business.
'We're operating in a market and technology inflection point, with AI reshaping every industry, accelerating threats, and evolving customer needs,' he said.
Kurtz said AI 'flattens our hiring curve, and helps us innovate from idea to product faster', adding it 'drives efficiencies across both the front and back office'.
'AI is a force multiplier throughout the business,' he said.
Other reasons for the cuts included market demand for sustained growth and expanding the product offering.
The company expects to incur up to US$53m in costs as a result of the job cuts.
In March, CrowdStrike reported revenue of US$1bn for the fourth quarter of its 2025 financial year, up 25% on the same quarter a year earlier, along with a loss of US$92m.
In July last year, CrowdStrike pushed out a faulty update to its threat-detection software that brought down 8.5m Windows systems worldwide.
The outage caused chaos at airports and took down computers in hospitals, TV networks and payment systems, as well as people's personal computers.
Aaron McEwan, vice-president of research and advisory at consultancy Gartner, said he was sceptical when companies announced AI efficiencies soon after cutting revenue forecasts, as CrowdStrike had in March.
'I think particularly in the tech sector … it's a way of justifying a reduction in the workforce because [of] a financial issue,' he said. 'So either they're not tracking well financially, or they're trying to send a message to investors that good times are around the corner. So I'm immediately sceptical.'
McEwan said companies were facing pressure to deliver on the big investments made in AI.
'The productivity gains that we expect to see from AI just aren't flowing through.'
Gartner research showed that, across workforces, fewer than 50% of employees use AI in their jobs, and only 8% use AI tools to improve productivity.
Toby Walsh, professor of artificial intelligence at the University of New South Wales, said CrowdStrike's announcement was 'pretty tone deaf' after the outage last year.
'They would have been better redeploying this 5% of people to emergency response and bug fixing,' he said.
Walsh said the market should expect more of these announcements in future.
'It's pretty simple: more profits for companies, less work for workers. But we should learn from the first Industrial Revolution. If we stand up in solidarity, we can use these savings to improve quality and quantity of work for all.'
Niusha Shafiabady, associate professor in computational intelligence at the Australian Catholic University, said AI job replacements were an 'unavoidable reality'.
'No matter what we believe is moral and right, this change will happen. Unfortunately, a lot of people will lose their traditional jobs to AI and technology,' she said.
'If [companies] see that they are saving money by using AI and technology and enhancing their services, they will ask their employees to leave. This is the reality.'
A World Economic Forum report in 2023 found nearly 23% of all jobs globally would change in the next five years due to AI and other macroeconomic trends. While 69m jobs are expected to be created, 83m could be eliminated, a net decrease of 14m jobs, or about 2% of current employment, Shafiabady said.
McEwan said companies – tech companies in particular – would be looking for ways to use AI to reduce workforces over time.
'I have no doubt that there will be the emergence of companies that are able to reduce their workforce, and substantially, because of AI,' he said.
'It'll depend on the type of product that they're selling. But at the moment most companies would be wise to look at how they can use AI to augment their workforce rather than replace.'
Has your job been lost to AI? Get in touch – josh.taylor@theguardian.com


Related Articles

New Midjourney AI Video Generator Tested: Transform Your Photos into Videos

Geeky Gadgets · 41 minutes ago

What if your favorite still image could suddenly spring to life, telling a story through movement and emotion? With Midjourney's new video model, that vision is no longer a distant dream. This innovative tool takes static images and transforms them into short, dynamic animations, opening up a world of possibilities for creators. But here's the catch: while the technology is undeniably exciting, it's not without its growing pains. From resolution limitations to steep costs, this experimental feature is as much a challenge as it is a breakthrough. So, is it worth diving into this new frontier of image-to-video animation, or does it fall short of its ambitious promise?

In this exploration, Thaeyne takes us through the core features of Midjourney's video model, from its automatic and manual animation modes to its creative potential and technical hurdles. You'll discover how this tool can bring your ideas to life, but also where it might leave you frustrated. Whether you're a professional looking to push the boundaries of visual storytelling or an enthusiast curious about the future of animation, this deep dive will help you decide if this experimental technology is the right fit for your creative ambitions. The question is: how far are you willing to go to turn stillness into motion?

Core Features: How the Video Model Works

Midjourney's video model focuses exclusively on image-to-video animation, deliberately excluding text-to-video capabilities for now. It offers two distinct animation modes, each catering to different user needs:

- Automatic Mode: This mode applies predefined motion patterns to your image, making it a quick and accessible option for generating animations without requiring advanced input.
- Manual Mode: For users seeking greater control, this mode enables you to define specific movements, allowing for tailored animations that align with your creative vision.

Additionally, the tool provides two motion intensity settings, low and high. The low-motion setting generates subtle, realistic animations, ideal for maintaining a natural look. In contrast, the high-motion setting creates more dynamic and dramatic effects, though it can sometimes result in exaggerated or unnatural movements. Despite these options, the model occasionally encounters motion errors, particularly when operating in high-motion mode, which can detract from the overall quality of the animation.

Access and Cost: What You Need to Know

The video model is currently accessible only through Midjourney's web portal, with no integration into Discord. While this web-based approach may simplify navigation for some users, it limits accessibility for those accustomed to Discord-based workflows, which have been a hallmark of Midjourney's other tools.

Cost is another critical consideration. Video generation is significantly more expensive than image creation, costing approximately eight times as much. For frequent users, the Pro subscription tier, priced at $60 per month, includes a 'relaxed mode' that allows for unlimited video generation. However, for casual users or those experimenting with the tool, the high costs may present a barrier to regular use, making it less accessible for non-professional projects.
Technical Specifications and Limitations

The videos produced by Midjourney's model are short, typically ranging between 5 and 20 seconds in length. The resolution is capped at 480p, which may not meet the standards required for professional or high-quality projects. To achieve higher resolutions, you'll need to rely on external upscaling tools such as Cupscale or Topaz, adding additional steps to your workflow and increasing the time and effort required to finalize a project.

The tool performs best when animating single-subject images, where it can effectively bring static visuals to life. However, it struggles with more complex scenarios, such as:

- Animating multiple faces, which often results in inaccuracies or distorted movements.
- Generating realistic facial expressions or achieving precise lip-syncing, which remains unreliable.

These limitations underscore the experimental nature of the tool and its current inability to handle intricate visual elements effectively. As such, it is better suited for simpler projects that do not require high levels of detail or precision.

User Experience: Opportunities and Challenges

The video model encourages creative experimentation, allowing you to explore a variety of prompts and animation styles. This flexibility can lead to unique and visually engaging results, particularly for single-subject animations. However, the user experience is not without its challenges. For example:

- Downloading videos can be cumbersome due to limited resolution options and the absence of streamlined export features, which complicates the workflow.
- The tool's performance can be inconsistent, especially when handling complex prompts or operating in high-motion settings, leading to mixed results.

Despite these hurdles, early feedback has been largely positive, particularly regarding the tool's ability to create smooth and visually appealing animations for simpler projects. This suggests that, even in its current experimental phase, the video model holds significant creative potential for users willing to navigate its complexities.

Future Potential and Current Suitability

Midjourney's video model represents a promising step forward in the realm of image-to-video animation, offering you a novel way to bring static images to life. Its creative possibilities are undeniable, but the tool's current limitations, such as low resolution, high costs, and technical challenges, highlight its experimental status. As the technology evolves, future updates may address these shortcomings, potentially improving resolution, reducing costs, and enhancing the tool's ability to handle complex animations. For now, the video model is best suited for users who are willing to embrace its experimental nature and explore its potential for innovative visual storytelling. Whether you are a professional seeking to push creative boundaries or an enthusiast experimenting with new tools, this feature offers a glimpse into the future of animation technology.

Media Credit: Thaeyne

Birmingham artist secures colour palette commission in New York

BBC News · 2 hours ago

A Birmingham artist who turned his love of colour and typography into a business during the Covid lockdown is celebrating his first overseas commission. Mr Barnfield created The Colour Palette Company, and his work can be found across the UK, with towns and cities showcasing colours that are readily associated with local landmarks. Now he has produced a colour palette for the Corning Museum of Glass in New York. Having worked with Birmingham Museums Trust, the Museum of Liverpool and York Museums, this latest work reflects the history and artistry of glass.

"I knew our colour palettes were proving popular, and we have had interest from overseas – then out of nowhere the museum reached out after seeing our work on social media," said Mr Barnfield. "It's a huge honour to collaborate with such a prestigious institution and to bring our colour storytelling to an international institution for the first time."

He added: "It has been quite a journey and one that I'm really enjoying. The Corning Museum of Glass feels like a milestone moment, especially seeing it's one of the largest museum gift shops in the United States."

What iOS 26 Beta Tells Us About the iPhone 17

Geeky Gadgets · 2 hours ago

Apple's Worldwide Developers Conference (WWDC) has once again provided a window into the future of its flagship products. This year, the unveiling of iOS 26 has fueled speculation about the upcoming iPhone 17, hinting at significant advancements in both design and functionality. From a potential redesign featuring innovative materials to enhanced multitasking capabilities, the iPhone 17 could represent a pivotal step forward for Apple's smartphone lineup. Here's a closer look at what these updates might mean for users, drawing on a video from The Apple Circle.

A Bold Redesign: The Liquid Glass Revolution

The iPhone 17 is rumored to introduce an innovative 'liquid glass' design, a material that promises to redefine the device's aesthetic and durability. This advanced material is expected to provide a sleek, fluid appearance while maintaining the robust build quality that Apple is known for. The liquid glass design aligns seamlessly with iOS 26's updated visual language, creating a cohesive and modern user experience. Additional design enhancements are also anticipated:

- Thinner bezels, offering a more immersive, edge-to-edge display experience.
- Refinements to the Dynamic Island feature, which debuted in earlier models, to improve functionality and visual appeal.

While the Dynamic Island may not see drastic changes due to hardware constraints, these subtle updates aim to enhance the overall usability and aesthetic of the device. The combination of these design elements could position the iPhone 17 as one of the most visually striking smartphones in Apple's history.

Reimagining the iPhone Naming Strategy

Apple is reportedly exploring a shift in its naming conventions for the iPhone lineup, which could mark a significant departure from its traditional numerical identifiers. Instead of continuing with names like 'iPhone 17,' the company might adopt simpler titles such as 'iPhone' or 'iPhone Pro.' Another possibility is aligning the iPhone's name with its iOS version, such as 'iPhone 26,' to emphasize the close relationship between hardware and software. This potential rebranding strategy could offer several benefits:

- Streamlining product differentiation, making it easier for consumers to understand the lineup.
- Strengthening Apple's ecosystem identity by highlighting the seamless integration between devices and software.

By simplifying its naming conventions, Apple could reinforce its focus on user experience and ecosystem cohesion, making its products more accessible to a broader audience.

Productivity Features Take Center Stage

iOS 26 is expected to introduce a range of productivity-focused features, positioning the iPhone 17 as a versatile tool for both personal and professional use. These updates could significantly enhance multitasking capabilities, addressing long-standing user demands for greater functionality. Key productivity features rumored for iOS 26 include:

- Split-screen functionality, allowing users to run multiple apps simultaneously for improved multitasking.
- A desktop-like mode, inspired by iPadOS and competitors like Samsung DeX, allowing the iPhone to function as a productivity hub when connected to an external display.

These features could transform the iPhone 17 into a powerful device for work, bridging the gap between mobile and desktop experiences.
By integrating these capabilities, Apple aims to cater to users who rely on their smartphones for a wide range of tasks, from managing workflows to creative projects.

Seamless Hardware-Software Integration

Apple's commitment to seamless hardware-software integration remains a cornerstone of its product philosophy, and the iPhone 17 is expected to exemplify this approach. With iOS 26, the synergy between the device's design and functionality will likely be more evident than ever. Key examples of this integration include:

- The liquid glass design, which complements iOS 26's updated visual language for a unified aesthetic.
- Enhanced multitasking features, such as split-screen functionality, that demonstrate the interplay between form and function.

This cohesive approach ensures that every aspect of the iPhone 17 works together to deliver a seamless and intuitive user experience. By aligning hardware and software development, Apple continues to set itself apart in the competitive smartphone market.

Listening to Customer Feedback

Apple's focus on addressing user feedback is evident in the rumored updates for the iPhone 17. Features like slimmer bezels and advanced multitasking capabilities reflect the company's responsiveness to customer demands. By incorporating these enhancements, Apple not only strengthens its relationship with its user base but also demonstrates its commitment to delivering products that meet real-world needs. This user-centric approach could play a significant role in shaping the iPhone 17 as a device that resonates with a wide range of users.

Expanding the Apple Ecosystem

The updates introduced with the iPhone 17 are likely to have a ripple effect across Apple's broader ecosystem, further enhancing the interconnected experience that defines the brand. For instance:

- The desktop-like mode could integrate seamlessly with macOS, allowing users to transition effortlessly between devices for improved productivity.
- Design consistency across the iPhone, iPad, Mac, and Apple Watch reinforces Apple's ecosystem identity, creating a unified and cohesive user experience.

These integrations highlight Apple's dedication to delivering a seamless experience across its product lineup. By making sure that its devices work together harmoniously, Apple continues to strengthen its ecosystem and provide added value to its users.

What Lies Ahead

As anticipation builds for the iPhone 17, the combination of a liquid glass design, enhanced multitasking features, and a potential rebranding strategy signals a bold evolution for Apple's flagship device. Rooted in customer feedback and ecosystem integration, these updates reflect Apple's ongoing commitment to innovation and user-centric design. While many details remain speculative, the iPhone 17 is poised to set new benchmarks in both functionality and aesthetics, solidifying its place as a leader in the smartphone market.

Source & Image Credit: The Apple Circle
