Expel expands MDR platform to boost email threat detection


Techday NZ, 22-04-2025

Expel has announced the expansion of its managed detection and response (MDR) service to cover email-based threats with new integrations.
The company is integrating its MDR platform with Proofpoint, Abnormal Security, and Sublime Security to strengthen protection against phishing, business email compromise (BEC), and other inbox-based attacks.
With email remaining a frequent entry point for credential theft, malware installation, and unauthorised access, Expel's enhanced solution aims to identify potential threats earlier in the attack lifecycle. This is intended to help customers reduce risk and improve their security posture with more effective detection and response capabilities.
"Identity-based incidents, largely originating from emails, made up 68% of all incidents among Expel customers last year," said Yonni Shelmerdine, Chief Product Officer, Expel.
"Incorporating email threat data enables us to identify and block attacks as soon as they hit the inbox, and gives customers insight into the threat actors working to gain access to their organisation. We're delivering the most comprehensive MDR solution in the market, and these capabilities further solidify that commitment while providing our customers with unparalleled visibility and protection across critical attack vectors."
The expansion comes at a time when security teams are being challenged by a surge in sophisticated email threats, partly driven by the growth of generative artificial intelligence. This increase has resulted in higher volumes of security alerts, putting additional strain on security resources.
Expel has developed its own detections specifically tailored for email security tools and platforms. These proprietary detections are designed to minimise irrelevant alerts and reduce the number of email-based threats that reach end users' inboxes.
The company's approach seeks to strengthen early detection and response capability, which is considered a critical factor for organisations aiming to reduce the likelihood and impact of cyber threats.
Expel's platform integrates data from various email security providers and combines it with contextual information from endpoints, users, and network activity. This enables the system to uncover the full sequence of email-based attack campaigns and take targeted actions to limit potential damage.
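As a toy illustration of that kind of correlation, the sketch below joins an email alert to endpoint and sign-in activity for the same user. All field names and the scoring rule are invented for this example; none of this is Expel's actual implementation.

```python
# Toy illustration of correlating an email alert with endpoint and identity
# context. Field names and the scoring rule are invented for this sketch;
# they are not Expel's implementation.

def correlate(email_alert, endpoint_events, signin_events):
    """Attach endpoint and sign-in activity for the alert's recipient."""
    user = email_alert["recipient"]
    related = {
        "email": email_alert,
        "endpoint": [e for e in endpoint_events if e["user"] == user],
        "signins": [s for s in signin_events if s["user"] == user],
    }
    # Simple severity heuristic: the same user showing endpoint activity or
    # an anomalous sign-in after a phishing alert raises the score.
    score = 1
    if related["endpoint"]:
        score += 2
    if any(s.get("anomalous") for s in related["signins"]):
        score += 2
    related["score"] = score
    return related

alert = {"recipient": "alice", "subject": "Invoice overdue", "verdict": "phish"}
endpoints = [{"user": "alice", "process": "powershell.exe"}]
signins = [{"user": "alice", "anomalous": True}]
print(correlate(alert, endpoints, signins)["score"])  # 5
```

The point of the sketch is only that an email verdict on its own is weak evidence, while the same user appearing in several telemetry sources at once is a much stronger signal.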
Expel continues to build its MDR coverage with what it describes as a technology-agnostic approach, aiming to help customers get more value from existing security investments. The company now offers integrations for over 130 different technology categories, spanning endpoint, cloud, Kubernetes, software-as-a-service, network, SIEM, email, identity, and others.
The expanded MDR service is part of Expel's ongoing efforts to address the security risks associated with the most commonly exploited attack vectors in enterprise environments.


Related Articles

Mirantis unveils architecture to speed & secure AI deployment

Techday NZ, 3 days ago

Mirantis has released a comprehensive reference architecture to support IT infrastructure for AI workloads, aiming to help enterprises deploy AI systems quickly and securely. The Mirantis AI Factory Reference Architecture is based on the company's k0rdent AI platform and is designed to offer a composable, scalable, and secure environment for artificial intelligence and machine learning (ML) workloads. According to Mirantis, the solution provides criteria for building, operating, and optimising AI and ML infrastructure at scale, and can be operational within days of hardware installation.

The architecture leverages the templated and declarative approaches provided by k0rdent AI, which Mirantis says enable rapid provisioning of required resources. This, the company states, leads to accelerated prototyping, model iteration, and deployment, thereby shortening the overall AI development cycle. The platform features curated integrations, accessible via the k0rdent Catalog, for various AI and ML tools, observability frameworks, continuous integration and delivery, and security, all while adhering to open standards.

Mirantis is positioning the reference architecture as a response to rising demand for specialised compute resources, such as GPUs and CPUs, which are crucial for running complex AI models. "We've built and shared the reference architecture to help enterprises and service providers efficiently deploy and manage large-scale multi-tenant sovereign infrastructure solutions for AI and ML workloads," said Shaun O'Meara, Chief Technology Officer, Mirantis. "This is in response to the significant increase in the need for specialized resources (GPU and CPU) to run AI models while providing a good user experience for developers and data scientists who don't want to learn infrastructure."
The architecture addresses several high-performance computing challenges, including Remote Direct Memory Access (RDMA) networking, GPU allocation and slicing, advanced scheduling, performance tuning, and Kubernetes scaling. It also supports integration with multiple AI platform services, such as Gcore Everywhere Inference and the NVIDIA AI Enterprise software ecosystem.

In contrast to typical cloud-native workloads, which are optimised for scale-out and multi-core environments, AI tasks often require the aggregation of multiple GPU servers into a single high-performance computing instance. This shift demands RDMA and ultra-high-performance networking, areas the Mirantis reference architecture is designed to accommodate.

The reference architecture uses Kubernetes and is adaptable to various AI workload types, including training, fine-tuning, and inference, across a range of environments: dedicated or shared servers, virtualised settings using KubeVirt or OpenStack, public cloud, hybrid or multi-cloud configurations, and edge locations. It also addresses the specific needs of AI workloads, such as high-performance storage and high-speed networking technologies, including Ethernet, InfiniBand, NVLink, NVSwitch, and CXL, to manage the movement of the large data sets inherent to AI applications.
Mirantis has identified, and aims to resolve, several challenges in AI infrastructure, including:

• Time-intensive fine-tuning and configuration compared to traditional compute systems
• Support for hard multi-tenancy to ensure security, isolation, resource allocation, and contention management
• Maintaining data sovereignty for data-driven AI and ML workloads, particularly where models contain proprietary information
• Ensuring compliance with varied regional and regulatory standards
• Managing distributed, large-scale infrastructure, which is common in edge deployments
• Effective resource sharing, particularly of high-demand compute components such as GPUs
• Enabling accessibility for users such as data scientists and developers who may not have deep IT infrastructure expertise

The composable nature of the Mirantis AI Factory Reference Architecture allows users to assemble infrastructure from reusable templates across compute, storage, GPU, and networking components, which can then be tailored to specific AI use cases. The architecture includes support for a variety of hardware accelerators, including products from NVIDIA, AMD, and Intel.

Mirantis reports that the AI Factory Reference Architecture has been developed to support the operational requirements of enterprises seeking scalable, sovereign AI infrastructure, especially where control over data and regulatory compliance are paramount. The framework is intended as a guideline to streamline the deployment and ongoing management of these environments, offering modularity and integration with open-standard tools and platforms.

Over 80,000 Microsoft Entra ID accounts hit by major takeover campaign

Techday NZ, 12-06-2025

Proofpoint has identified an active account takeover campaign targeting Microsoft Entra ID users and exploiting the TeamFiltration penetration testing framework. The campaign, which Proofpoint has named UNK_SneakyStrike, has involved attackers gaining unauthorised access to native applications including Microsoft Teams, OneDrive, and Outlook. According to the company's research, since December 2024 this activity has impacted over 80,000 user accounts across hundreds of organisations, resulting in several instances of successful account takeover.

Attack methods

UNK_SneakyStrike deploys the TeamFiltration pentesting framework to carry out its attacks, leveraging the Microsoft Teams API and Amazon Web Services (AWS) servers in multiple geographical regions. The attackers execute user-enumeration and password-spraying attacks to identify and compromise target accounts.

TeamFiltration, first released in January 2021, is a post-exploitation tool originally designed for legitimate penetration testing and risk evaluation of Microsoft 365 environments. The tool automates a variety of tactics, techniques, and procedures (TTPs) associated with account takeover campaigns, including account enumeration, password spraying, and data exfiltration.

The attackers have used TeamFiltration's features to gain persistent access to specific resources and applications. These include "backdooring" via OneDrive, accomplished by uploading malicious files to a user's OneDrive and replacing desktop files with rogue versions, potentially containing malware or macros, for ongoing access.

Proofpoint noted, "TeamFiltration helps automate several tactics, techniques, and procedures (TTPs) used in modern ATO attack chains. As with many security tools that are originally created and released for legitimate uses, such as penetration testing and risk evaluation, TeamFiltration was also leveraged in malicious activity."
Identifying the activity

Proofpoint researchers analysed TeamFiltration's public GitHub documentation and configuration files to identify a rare user agent string, representing an outdated Teams client, being used during suspicious activity. This served as a key indicator for tracking unauthorised uses of the tool. They also observed attempts by attackers to access sign-in applications from devices incompatible with those services, suggesting the use of user agent spoofing to disguise the source of the attacks.

Another indicator was the pattern of attempted access to a defined list of Microsoft OAuth client applications. These applications are capable of obtaining special "family refresh tokens," which attackers can exchange for access tokens to exploit various native Microsoft applications. Proofpoint found that TeamFiltration's most recent client ID list contained some inaccuracies, with incorrect mappings for 'Outlook' and 'OneNote'. Despite this, the tool's configuration closely aligned with a known family of client IDs published publicly by another cyber security research initiative.

AWS infrastructure and behaviour

TeamFiltration requires an AWS account to conduct its simulated attacks. Its password-spraying function systematically rotates through different AWS Regions, and its enumeration features rely either on a disposable Microsoft 365 Business Basic account or, following recent updates, on a OneDrive-based method. Proofpoint stated, "TeamFiltration's enumeration function leverages the disposable account and the Microsoft Teams API to verify the existence of user accounts within a given Microsoft Entra ID environment before launching password spraying attempts. A recent update to the tool's code introduced a OneDrive-based enumeration method, enhancing its enumeration capabilities."
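The user-agent indicator technique described here amounts to a simple log filter: flag any sign-in whose user agent matches a rare, outdated client string. A minimal sketch, where the indicator fragment and event fields are placeholders rather than Proofpoint's published indicators:

```python
# Sketch of the user-agent indicator technique: flag sign-in events whose
# user agent contains a rare, outdated client string. The fragment below is
# a hypothetical placeholder, not Proofpoint's published indicator.

SUSPECT_AGENT_FRAGMENT = "Teams/1.3.00"  # hypothetical outdated Teams build

def flag_suspect_signins(events):
    """Return sign-in events whose user agent contains the suspect fragment."""
    return [e for e in events if SUSPECT_AGENT_FRAGMENT in e.get("user_agent", "")]

events = [
    {"user": "bob", "user_agent": "Mozilla/5.0 Teams/1.6.00"},
    {"user": "eve", "user_agent": "Mozilla/5.0 Teams/1.3.00.1234"},
]
print([e["user"] for e in flag_suspect_signins(events)])  # ['eve']
```

In practice such a match is only a starting point, since user agents can be spoofed; it is the combination with other signals (incompatible devices, known client ID lists) that makes the indicator useful.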
Attacks attributed to TeamFiltration have been observed originating from AWS infrastructure and rotating across multiple AWS regions, with password-spraying attempts systematically spread for wider impact and to hinder detection.

Campaign analysis

Proofpoint began tracking UNK_SneakyStrike as a distinct activity set after differentiating malicious use of TeamFiltration from legitimate penetration testing activity. The main difference was that the attackers operated in indiscriminate, high-volume bursts across many cloud tenants, while security assessments tend to be more targeted and controlled.

The volume of login attempts linked to TeamFiltration saw a marked increase starting in December 2024, peaking in January 2025. Over 80,000 user accounts across approximately 100 cloud tenants were targeted, with multiple cases of account takeover observed.

Patterns and regional targeting

UNK_SneakyStrike activity typically occurs in concentrated bursts, focusing on numerous users within a single cloud environment and then pausing for four to five days.
The apparent strategy varies by organisation size: all users within smaller tenant environments are targeted, while only specific user subsets are selected in larger tenants. The primary sources of malicious login activity were traced to AWS infrastructure in three regions: the United States (42% of IP addresses), Ireland (11%), and Great Britain (8%).

Tool risks and future outlook

Proofpoint noted that penetration testing tools such as TeamFiltration are intended to benefit defensive security operations, but acknowledged their potential for malicious use: "While tools such as TeamFiltration are designed to assist cyber security practitioners in testing and improving defense solutions, they can easily be weaponized by threat actors to compromise user accounts, exfiltrate sensitive data, and establish persistent footholds."

The company expects such advanced tools to become more common among attackers: "Proofpoint anticipates that threat actors will increasingly adopt advanced intrusion tools and platforms, such as TeamFiltration, as they pivot away from less effective intrusion methods."

Proofpoint has provided security indicators, including a list of observed IP addresses and user agent strings, to aid organisations in detecting potential unauthorised access related to this campaign. The company recommends correlating these indicators with additional context and behavioural analytics for accurate detections.
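The concentrated-burst pattern described in this campaign also suggests a simple behavioural heuristic: count distinct targeted users per tenant per day and flag the outliers. A sketch under invented field names and an illustrative threshold (this is not Proofpoint's detection logic):

```python
# Sketch of spotting the "concentrated burst" pattern: many distinct targeted
# users in one tenant on one day. The threshold and field names are
# illustrative, not Proofpoint's detection logic.
from collections import defaultdict
from datetime import date

def burst_tenants(attempts, min_users=50):
    """Return (tenant, day) keys with at least min_users distinct targets."""
    seen = defaultdict(set)
    for a in attempts:
        seen[(a["tenant"], a["day"])].add(a["user"])
    return [key for key, users in seen.items() if len(users) >= min_users]

# 60 distinct users hit in tenant t1 on one day trips the heuristic;
# a single login attempt in tenant t2 does not.
attempts = [{"tenant": "t1", "day": date(2025, 1, 6), "user": f"u{i}"} for i in range(60)]
attempts += [{"tenant": "t2", "day": date(2025, 1, 6), "user": "solo"}]
print(burst_tenants(attempts))
```

A real detection would tune the threshold per tenant size and combine the count with the IP and user-agent indicators above, as the article recommends.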

iFLYTEK wins CNCF award for AI model training with Volcano

Techday NZ, 10-06-2025

iFLYTEK has been named the winner of the Cloud Native Computing Foundation's End User Case Study Contest for advancements in scalable artificial intelligence infrastructure using the Volcano project. The selection recognises iFLYTEK's deployment of Volcano to address operational inefficiencies and resource management issues that arose as the company expanded its AI workloads.

iFLYTEK, which specialises in speech and language artificial intelligence, reported experiencing underutilised GPUs, increasingly complex workflows, and competition among teams for resources as its computing demands expanded. These problems slowed development progress and placed additional strain on infrastructure assets.

With the implementation of Volcano, iFLYTEK introduced elastic scheduling, directed acyclic graph (DAG)-based workflows, and multi-tenant isolation into its AI model training operations. This transition allowed the business to improve the efficiency of its infrastructure and simplify the management of large-scale training projects. Key operational improvements cited include a significant increase in resource utilisation and reductions in system disruptions.

DongJiang, Senior Platform Architect at iFLYTEK, said, "Before Volcano, coordinating training under large-scale GPU clusters across teams meant constant firefighting, from resource bottlenecks and job failures to debugging tangled training pipelines. Volcano gave us the flexibility and control to scale AI training reliably and efficiently. We're honoured to have our work recognized by CNCF, and we're excited to share our journey with the broader community at KubeCon + CloudNativeCon China."

Volcano is a cloud native batch system built on Kubernetes, designed to support performance-focused workloads such as artificial intelligence and machine learning training, big data processing, and scientific computing.
The platform's features include job orchestration, resource fairness, and queue management, intended to maximise efficient management of distributed workloads. Volcano was first accepted into the CNCF Sandbox in 2020 and reached the Incubating maturity level in 2022, reflecting increasing adoption for compute-intensive operations.

iFLYTEK's engineering team cited the need for infrastructure that could adapt to the rising scale and complexity of AI model training. Their objectives were to improve allocation of computing resources, manage multi-stage workflows efficiently, and limit disruptions to jobs while ensuring equitable resource access among multiple internal teams.

The adoption of Volcano yielded several measurable outcomes for iFLYTEK's AI infrastructure. The company reported a 40% increase in GPU utilisation, contributing to lower infrastructure costs and reduced idle periods. It also experienced a 70% faster recovery rate from training job failures, which contributed to more consistent and uninterrupted AI development. The speed of hyperparameter searches, a process integral to AI model optimisation, was accelerated by 50%, allowing the company's teams to test and refine models more swiftly.

Chris Aniszczyk, Chief Technology Officer at CNCF, said, "iFLYTEK's case study shows how open source can solve complex, high-stakes challenges at scale. By using Volcano to boost GPU efficiency and streamline training workflows, they've cut costs, sped up development, and built a more reliable AI platform on top of Kubernetes, which is essential for any organization striving to lead in AI."

As artificial intelligence workloads become increasingly complex and reliant on large-scale compute resources, the use of tools like Volcano has expanded among organisations seeking more effective operational strategies.
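For illustration, a training workload handled by Volcano's scheduler is declared as a Volcano Job manifest along these lines; the job name, image, queue, and resource figures below are placeholders, not iFLYTEK's actual configuration:

```yaml
# Minimal Volcano Job sketch (illustrative only; name, image, queue, and
# resource requests are placeholders).
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: demo-training-job
spec:
  schedulerName: volcano
  minAvailable: 2        # gang scheduling: start only when both pods can run
  queue: default         # queue-based fair sharing between teams
  tasks:
    - replicas: 2
      name: worker
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: trainer
              image: example.com/trainer:latest   # placeholder image
              resources:
                limits:
                  nvidia.com/gpu: 1
```

Gang scheduling via `minAvailable` keeps a multi-pod training job from starting partially and holding GPUs idle while waiting for the rest, one of the underutilisation problems described above.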
iFLYTEK's experience with the platform will be the subject of a presentation at KubeCon + CloudNativeCon China, where company representatives will outline approaches to managing distributed model training within Kubernetes-based environments. iFLYTEK will present its case study, titled "Scaling Large Model Training in Kubernetes Clusters with Volcano," sharing technical and practical insights with participants seeking to optimise large-scale artificial intelligence training infrastructure.
