Latest news with #Xanthorox


Techday NZ
12 hours ago
- Business
Exclusive: How cybersecurity startup Blackveil is targeting AI-driven threats
After 20 years in the IT trenches, Adam Burns had seen enough. Burns, the founder of New Zealand-based cybersecurity startup Blackveil, spent much of his career working for managed service providers – firms tasked with overseeing the IT infrastructure of other businesses. And time and again, he says, he witnessed companies fall victim to the same avoidable cyberattacks. "Each time, I saw the same things going wrong," he said. "The industry was missing something critical." Blackveil was his answer: a company with a mission to protect the "forgotten child of cybersecurity" by focusing on overlooked but essential components of digital defence.

The turning point came last year, after Burns responded to his twelfth cyberattack incident in quick succession. Frustrated by the pattern, he decided to act. "I built a little application, a Python crawler, and stuck it on the internet," he explained. "It ran on the .nz TLD for six weeks and confirmed that over 50% of Kiwi businesses had critical gaps in their cybersecurity." The data, drawn from public domain records, validated Burns' suspicion that weak digital hygiene – like unprotected DNS records – was leaving companies wide open to attack.

From there, Blackveil's reach grew beyond New Zealand. The team expanded their scanning to include Australian businesses and even global Fortune 500 companies. The result? Even the biggest players weren't immune. "These aren't always advanced attacks," Burns said. "It's usually someone forgetting to change a default password, turn on multi-factor authentication, or tidy up an email record."

But the landscape is rapidly evolving, and the rise of AI-powered cyberattacks, particularly tools like Xanthorox, is escalating the threat. Burns described Xanthorox as "ChatGPT for hackers" – a platform capable of generating malware, conducting reconnaissance, and launching tailored phishing campaigns. "You don't need technical knowledge anymore," he said. "You just talk to it in plain language. If something doesn't work, it evolves and tries something else. It's terrifying."

To counter this, Blackveil developed its own AI assistant: Buck. While it doesn't yet fix vulnerabilities directly, it acts as an intelligent guide for businesses, simplifying complex security insights into accessible language. "You log in, scan your domain, and Buck breaks it down for you," Burns said. "You don't have to be a technical guru to understand what's wrong." For now, Buck exists as a standalone agent, but future versions will be fully integrated into Blackveil's platform. "Our goal is to make cybersecurity accessible," Burns explained. "We're lifting the veil – hence the name Blackveil – on a space that's been out of reach for many businesses."

The company's flagship product, Blackvault, is a domain security platform that focuses on prevention rather than reaction. Traditional cybersecurity tools often work in a reactive way, alerting users after something has already gone wrong. Blackvault flips that model by proactively securing digital entry points – what Burns calls "shutting the front door." According to Blackveil's internal data, aligning three critical DNS records – SPF, DKIM, and DMARC – can reduce phishing, spoofing and spam threats by up to 87%. The company promises deployment within two to four weeks for most businesses. "For a small to medium-sized business, the return on investment is huge," Burns said. "This is one of the most cost-effective ways to secure your business."
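That crawl, and the SPF, DKIM and DMARC figures Burns cites, describe checks anyone can run against public DNS records. The sketch below is a minimal illustration of the idea rather than Blackveil's actual tooling; it assumes the third-party dnspython package, and the domain and DKIM selector list are placeholders, since DKIM selectors are chosen by each sending service.

```python
# A minimal sketch of the kind of public DNS hygiene check described above,
# not Blackveil's actual crawler. Assumes the third-party "dnspython" package
# (pip install dnspython); "example.com" and the DKIM selector list are
# illustrative placeholders.
import dns.exception
import dns.resolver

# DKIM selectors are chosen by the sending service, so probing a few common
# ones is only a heuristic.
COMMON_DKIM_SELECTORS = ["default", "selector1", "selector2", "google"]


def txt_records(name: str) -> list[str]:
    """Return the TXT strings published at a DNS name, or [] if none resolve."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except dns.exception.DNSException:
        return []
    return [b"".join(rdata.strings).decode("utf-8", "replace") for rdata in answers]


def check_email_auth(domain: str) -> dict[str, bool]:
    """Report whether SPF, DMARC and (heuristically) DKIM records are published."""
    spf = any(t.lower().startswith("v=spf1") for t in txt_records(domain))
    dmarc = any(t.lower().startswith("v=dmarc1") for t in txt_records(f"_dmarc.{domain}"))
    dkim = any(txt_records(f"{sel}._domainkey.{domain}") for sel in COMMON_DKIM_SELECTORS)
    return {"SPF": spf, "DMARC": dmarc, "DKIM (heuristic)": dkim}


if __name__ == "__main__":
    for record, present in check_email_auth("example.com").items():
        print(f"{record}: {'present' if present else 'missing'}")
```

Run against a domain you own, a missing "v=spf1" or "v=DMARC1" answer is exactly the kind of gap the article describes.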
Despite its focus on the ANZ region, Blackveil operates globally, and the remote-first company has seen growing demand abroad. Headquartered in Tauranga, the business can support international clients without needing to be onsite, although on-the-ground assistance is available in the Bay of Plenty. Burns himself relocated from Auckland a few years ago for a slower pace of life, but remains deeply connected to the broader tech world.

In addition to Blackveil, he developed KiwiCost, a side project offering real-time cost comparisons for people living in or moving to New Zealand. "That one was just me scratching an itch," he said. "But it also helped me practice and refine the design direction for Blackveil." His approach is anything but traditional. "Most IT companies are run by old guys in blue suits," he joked. "I wanted to bring something different – vibrant, creative and approachable."

That includes how the company communicates. On LinkedIn, Burns shares cybersecurity insights with a dose of humour and sarcasm. One of his recent posts – about seemingly mundane email security protocols – went viral, drawing over 100,000 impressions. "People are clearly looking for plain-English guidance," he said. "And they appreciate a bit of personality."

Asked what advice he'd give businesses unsure how to prepare for the evolving threat landscape, Burns had three clear steps: train your staff, get the basics right, and monitor your systems. "Every staff member is a risk if they don't know how to spot bad actors," he said. "Their inbox is their digital passport. If you train them properly and secure your fundamentals, 90% of attacks become impossible." He added: "And after that, monitor everything – because DNS records can be altered by mistake, or worse."

For those in crisis, Blackveil also offers an emergency helpline – 0508 HACKED – designed to provide immediate assistance to compromised businesses. "That line goes straight to my mobile," Burns said. "It's about being there when people need us most." Blackvault is still evolving, with plans to become what Burns calls "the Swiss Army knife of domain security." But his goal remains clear: "We want to make strong cybersecurity achievable for everyone," he said. "Because it's not just big companies under threat anymore – it's all of us."
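Burns' third step, monitoring, lends itself to a similarly small sketch: periodically re-resolve a domain's key records and flag any drift. This is a hypothetical example in the same vein as the crawler above, not Blackveil's Blackvault product; it again assumes the dnspython package, and the watched records and polling interval are arbitrary.

```python
# A hedged sketch of "monitor everything": re-resolve a few DNS records on a
# schedule and report whenever a record set changes. Hypothetical example,
# not Blackveil's tooling; assumes the third-party "dnspython" package.
import time

import dns.exception
import dns.resolver

# (name, record type) pairs to watch; example.com is a placeholder.
WATCHED = [
    ("example.com", "MX"),
    ("example.com", "TXT"),
    ("_dmarc.example.com", "TXT"),
]


def snapshot(name: str, rdtype: str) -> frozenset[str]:
    """Resolve a record set and return it as a comparable set of strings."""
    try:
        return frozenset(r.to_text() for r in dns.resolver.resolve(name, rdtype))
    except dns.exception.DNSException:
        return frozenset()


def watch(interval_seconds: int = 3600) -> None:
    """Poll the watched records and print a notice whenever one changes."""
    baseline = {key: snapshot(*key) for key in WATCHED}
    while True:
        time.sleep(interval_seconds)
        for key in WATCHED:
            current = snapshot(*key)
            if current != baseline[key]:
                print(f"DNS change detected for {key[0]} {key[1]}: {sorted(current)}")
                baseline[key] = current


if __name__ == "__main__":
    watch()
```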


Scientific American
07-05-2025
Criminal AI is Here—And Anyone Can Subscribe
This article includes a reference to violent sexual assault.

Reports of a sophisticated new artificial intelligence platform started surfacing on cybersecurity blogs in April, describing a bespoke system whispered about on dark web hacker forums and created for the sole purpose of crime. But despite its shadowy provenance and evil-sounding name, Xanthorox isn't so mysterious. The developer of the AI has a GitHub page, as well as a public YouTube channel with screen recordings of its interface and the description 'This Channel Is Created Just for Fun Content Ntg else.' There's also a Gmail address for Xanthorox, a Telegram channel that chronicles the platform's development and a Discord server where people can pay to access it with cryptocurrencies. No shady initiations into dark web criminal forums required—just a message to a lone entrepreneur serving potential criminals with more transparency than many online shops hawking antiaging creams on Instagram.

This isn't to say that the platform isn't nefarious. Xanthorox generates deepfake video or audio to defraud you by impersonating someone you know, phishing e-mails to steal your login credentials, malware code to break into your computer and ransomware to lock you out of it until you pay—common tools in a multibillion-dollar scam industry. And one screen recording on its YouTube channel promises worse. The white text on a black background is reminiscent of ChatGPT's interface, until you see the user punch in the request 'step by step guide for making nuke at my basement.' And the AI replies, 'You'll need either plutonium-239 or highly enriched uranium.'

Such knowledge, however, has long been far from secret. College textbooks, Internet searches and educational AIs have imparted it without basement nukes becoming a cottage industry; the vast majority of people, not to mention many nations, obviously cannot acquire the components. As for the scamming tools, they've been in use since long before current AI models appeared. Rather, the screen recording is an advertising stunt that heightens the platform's mystique—as do many of the alarmist descriptions of it in cybersecurity blogs. Although no one has yet proven that Xanthorox heralds a new generation of criminal AI, it and its unknown creator raise crucial questions about which claims are hype and which should elicit serious concern.

A Brief History of Criminal AI

'Jailbreaking'—disabling default software limitations—became mainstream in 2007 with the release of the first iPhone. The App Store had yet to exist, and hackers who wanted to play games, add ringtones or switch carriers had to devise jailbreaks. When OpenAI launched the initial version of ChatGPT, powered by its large language model GPT-3.5, in late 2022, the jailbreaking began immediately, with users gleefully pushing the chatbot past its guardrails. One common jailbreak involved fooling ChatGPT by asking it to role-play as a different AI—one that had no rules and was allowed to write phishing e-mails. ChatGPT would then respond that it indeed couldn't write such material itself, but it could do the role-playing. It would then pretend to be a nefarious AI and begin churning out phishing e-mails.
To make this easier, hackers introduced a 'wrapper'—a layer of software between an official AI model and its users. Rather than accessing the AI directly through its main interface, people could simply go through the easier-to-use wrapper. When they input requests for fake news stories or money laundering tips, the wrapper repackaged their prompts in language that tricked ChatGPT into responding.

As AI guardrails improved, crooks had less success with prompts, and they began downloading an open-source model called GPT-J-6B (commonly referred to as GPT-J), which is not made by OpenAI. The usage license for that system is largely unrestrictive, and the main challenge for someone who wants to use GPT-J is affording a computer system with enough processing power to run it. In June 2023, after training GPT-J on a broad corpus of malware code, phishing templates and compromised business e-mails, one user released WormGPT, which they described as a custom chatbot, and made it available to the public through Telegram. Anyone who wanted to design malicious code, spoof websites, and bombard inboxes just had to pay anywhere from $70 to $5,600, depending on the version and level of access. Two months later, cybersecurity journalist Brian Krebs revealed the creator's identity as Rafael Morais, a then 23-year-old Portuguese man. Morais, citing increased attention, wiped the channel, leaving customers with nothing except what they'd already pulled in from scams.

FraudGPT, DarkBERT and DarkBARD followed, generating malware, ransomware, personalized scam e-mails and carding scripts—automated programs that sequentially test details stolen from credit and debit cards on online payment gateways. Screenshots of these AIs at work spread across the Internet like postcards from the future, addressed to everyone who still believed that cyberattacks require skill. The presence of such AIs 'lowers the bar to enter cybercrime,' says Sergey Shykevich, threat intelligence group manager at the cybersecurity company Check Point. 'You don't need to be a professional now.'

As for the criminals making the bots, these episodes taught them two lessons: Wrapping an AI system is cheap and easy, and a slick name sells. Chester Wisniewski, director and global field chief information security officer at the cybersecurity firm Sophos, says scammers often scam other would-be scammers, targeting 'script kiddies'—a derogatory term, dating to the 1990s, for those who use prewritten hacking scripts to create cyberattacks without understanding the code. Many of these potential targets reside in countries with few economic opportunities, places where running even a few successful scams could greatly improve their future. 'A lot of them are teenagers, and a lot are people just trying to provide for their families,' Wisniewski says. 'They just run a script and hope that they've hacked something.'

The Real Threat of Criminal AI

Though security experts have expressed concerns along the lines of AI teaching terrorists to make fertilizer bombs (like the one Timothy McVeigh used in his 1995 terrorist attack in Oklahoma City) or to engineer smallpox strains in a lab and unleash them upon the world, the most common threat posed by AIs is the scaling up of already-common scams, such as phishing e-mails and ransomware.
Yael Kishon, AI product and research lead at the cyberthreat intelligence firm KELA, says criminal AIs 'are making the lives of cybercriminals much easier,' allowing them to 'generate malicious code and phishing campaigns very easily.' Wisniewski agrees, saying criminals can now generate thousands of attacks in an hour, whereas they once needed much more time. The danger lies more in amplifying the volume and reach of known forms of cybercrime than in the development of novel attacks. In many cases, AI merely 'broadens the head of the arrow,' he says. 'It doesn't sharpen the tip.'

Yet aside from lowering the barrier to becoming a criminal and allowing criminals to target far more people, there now does appear to be some sharpening. AI has become advanced enough to gather information about a person and call them, impersonating a representative from their gas or electric company and persuading them to promptly make an 'overdue' payment. Even deepfakes have reached new levels. Hong Kong police said in February that a staff member at a multinational firm, later revealed to be the British engineering group Arup, had received a message that claimed to be from the company's chief financial officer. The staffer then joined a video conference with the CFO and other employees—all AI-generated deepfakes that interacted with him like humans, explaining why he needed to transfer $25 million to bank accounts in Hong Kong—which he then did.

Even phishing campaigns, scam e-mails sent out in bulk, have largely shifted to 'spear phishing,' an approach that attempts to win people's trust by using personal details. AI can easily gather the information of millions of individuals and craft a personalized e-mail to each one, meaning that our spam boxes will have fewer messages from people claiming to be a Nigerian prince and far more from impersonations of former colleagues, college roommates or old flames, all seeking urgent financial help.

One area where AI truly excels, Wisniewski says, is its use of languages. Whereas targeted people often spotted attempted scams in Spanish or Portuguese because a scammer used the wrong dialect—writing to someone in Portugal with Brazilian Portuguese or to someone in Argentina with Spanish phrasing that was more typical in Mexico—an AI can easily adapt its content to the dialect and regional references of wherever its targets live. There are, of course, plenty of other applications, such as making hundreds of fake website storefronts to steal people's credit card information or mass-producing disinformation to manipulate public opinion—nothing new in concept, only in the vast scale with which it can now be deployed.

Xanthorox: Marketing or Menace?

Xanthorox sounds like a monster from a self-published fantasy novel ('xantho' comes from an Ancient Greek word for yellow, 'rox' is a common rendering of 'rocks,' and the name as a whole vaguely evokes anthrax). But there's no data on how well it works aside from its creator's claims and the screen recordings he has shared. Though some cybersecurity blogs describe Xanthorox as the first AI built from the ground up for crime, no one interviewed for this article could confirm that assertion. And on the Xanthorox Telegram channel, the creator has admitted to struggling with hardware constraints while using versions of two popular AI systems: Claude (created by the San Francisco–based company Anthropic) and DeepSeek (a Chinese model owned by the hedge fund High-Flyer).
Kishon, who predicts that dark AI tools will increase cyberthreats in the years ahead, doesn't see Xanthorox as a game changer. 'We are not sure that this tool is very active because we haven't seen any cybercrime chatter on our sources on other cybercrime forums,' she says. Her words are a reminder that there is still no gigantic evil chatbot factory available to the masses. The threat is the ease with which new models can be wrapped, misaligned and shipped before the next news cycle.

Yet Casey Ellis, founder of the crowdsourced cybersecurity platform Bugcrowd, sees Xanthorox differently. Though he acknowledges that many details remain unknown, he points out that earlier criminal AI didn't have advanced expert-level systems—designed to review and validate decisions—checking one another's work. But Xanthorox appears to. 'If it continues to develop in that way,' Ellis says, 'it could evolve into being quite a powerful platform.' Daniel Kelley, a security researcher at the AI e-mail-security company SlashNext, who wrote the first blog about Xanthorox, believes the platform to be more effective than WormGPT and FraudGPT. 'Its integration of modern AI chatbot functionalities distinguishes it as a more sophisticated threat,' he says.

In March Xanthorox's anonymous creator posted in the platform's Telegram channel that his work was for 'educational purposes.' In April he expressed fear over all the media attention, calling the system merely a 'proof of concept' exercise. But not long afterward, he began bragging about the publicity, selling monthly access for $200 and posting screenshots of crypto payments. At the time of writing, he has sold at least 13 subscriptions, raised the price to $300 and just launched a polished online store that references Kelley's SlashNext blog post like a product endorsement and says, 'Our goal is to offer a secure, capable, and private Evil AI with a straightforward purchase.'

Perhaps the scariest part of Xanthorox is the creator's chatter with his 600-plus followers on a Telegram channel that brims with racist epithets and misogyny. At one point, to show how truly criminal his AI is, the creator asked it to generate instructions on how to rape someone with an iron rod and kill their family—a prompt that seemed to echo the rape and murder of a 22-year-old woman in Delhi, India, in 2012. (Xanthorox then proceeded to detail how to murder people with such an object.) In fact, many posts on the Xanthorox Telegram channel resemble those on 'the Com,' a hacker network of Telegram and Discord channels that Krebs described as the 'cybercriminal hacking equivalent of a violent street gang' on his investigative news blog KrebsOnSecurity.

Staying Safe in the Age of Criminal AI

Unsurprisingly, much of the work to protect against criminal AI, such as detecting deepfakes and fraudulent e-mails, has been done for companies. Ellis believes that just as spam detectors are built into our current systems, we will eventually have 'AI tools to detect AI exploitation, deepfakes, whatever else and throw off a warning in a browser.' Some tools already exist for home users. Microsoft Defender blocks malicious Web addresses. Malwarebytes Browser Guard filters phishing pages, and Bitdefender rolls back ransomware encryption. Norton 360 scans the dark web for stolen credentials, and Reality Defender flags AI-generated voices or faces.
'The best thing is to try to fight AI with AI,' says Shykevich, who explains that AI cybersecurity systems can rapidly catalog threats and detect even subtle signs that an attack was AI-generated. But for people who don't have access to the most advanced defenses, he stresses education and awareness—especially for elderly people, who are often the primary targets. 'They should understand: if someone calls with the voice of their son and asks for money immediately to help them because something happened, it can be that it's not their son,' Shykevich says. The existence of so many AI systems that can be repurposed for large-scale and personalized crime means that we live in a world where we should all look at incoming e-mails the way city people look at doorknobs. When we get a call from a voice that sounds human and asks us to make a payment or share personal information, we should question its authenticity. But in a society where more and more of our interactions are virtual, we may end up trusting only in-person encounters—at least until the arrival of robots that look and speak like humans.