
UK Taps Ukraine Lessons to Plow £1 Billion Into Warfare Systems
The UK will invest more than £1 billion ($1.4 billion) in a new digital targeting system to allow the country's armed forces to pinpoint and eliminate enemy targets more swiftly as part of a forthcoming revamp of Britain's defensive capabilities.
In its strategic defense review, expected to be published in full next week, the UK will also set up a new Cyber and Electromagnetic Command to protect military networks against tens of thousands of cyber attacks a year and help coordinate Britain's own cyber operations, the Ministry of Defence said Thursday in a statement. The command will also lead operations to jam enemy signals to drones and missiles and help intercept military communications.
Related Articles
Yahoo
21 minutes ago
Why is the Palestine Action group being banned by the UK government?
The home secretary is set to ban the Palestine Action protest group and effectively brand it as a terrorist organisation following a break-in at an RAF base. Yvette Cooper is expected to deliver a ministerial statement to Parliament on Monday, in which she will lay out her plans to proscribe the group.

New legislation, which would have to be debated by MPs and peers, will be needed to enact a ban on Palestine Action, whose activists entered the RAF Brize Norton base in Oxfordshire and vandalised two planes. A video of the break-in, shared online by the group on Friday, is being viewed as an embarrassing episode for the Ministry of Defence (MoD), particularly as the two protesters managed to exit the base without being arrested.

Prime Minister Sir Keir Starmer described the vandalism as "disgraceful" and said it is the government's "responsibility to support those who defend us". But some have questioned whether the damage carried out by Palestine Action should meet the threshold for classing it as a "proscribed organisation".

Cooper has decided to proscribe the group, making it a criminal offence to belong to or support Palestine Action. The decision comes after the group posted footage online showing two people inside the base at RAF Brize Norton. One person can be seen riding an electric scooter to an Airbus Voyager air-to-air refuelling tanker and appearing to spray paint into its jet engine. The incident is being investigated by counter-terror police and has prompted a review of security at RAF bases.

The group has staged a series of demonstrations in recent months, including spraying the London offices of Allianz Insurance with red paint over its alleged links to Israeli defence company Elbit, and vandalising Donald Trump's Turnberry golf course in South Ayrshire.
Friday's incident at Brize Norton prompted calls for the group to be banned, but proscription will require Cooper to lay an order in Parliament, which must then be debated and approved by both MPs and peers.

Some 81 organisations have been proscribed under the Terrorism Act 2000, including Islamist terrorist groups such as Hamas and al Qaida. Far-right groups such as National Action, and the Russian private military company Wagner Group, have also been banned. Another 14 organisations connected with Northern Ireland are banned under previous legislation, including the IRA and UDA. Belonging to or expressing support for a proscribed organisation, along with a number of other actions, are criminal offences carrying a maximum sentence of 14 years in prison.

Ministers have said a ban is justified, with defence secretary John Healey describing the vandalism of RAF planes as "totally unacceptable". "These aircraft are used by our military personnel to support security and peace around the world," he said. "This action does nothing to support Gaza or our push for peace and stability in the Middle East."

Shadow foreign secretary Dame Priti Patel said: "There can be no place in a democracy for groups that use violence, sabotage and potential terrorist acts to pursue their political goals. We've called for these groups to be investigated and banned, those responsible to be prosecuted, and any links to foreign agents to be exposed."

Former home secretary Suella Braverman said banning the group is "absolutely the correct decision", writing on X: "We must have zero tolerance for terrorism." Shadow justice secretary Robert Jenrick and Reform UK leader Nigel Farage have also come out in support of a ban.

Lord Walney, who served as the government's independent adviser on political violence, told Sky News the move was "long overdue", claiming that the group had acted as "the enemy within".
"They have terrorised working people for a number of years and there's a number of serious violent charges that are going through the court system at the moment," he added.

Defending its actions, a spokesperson for Palestine Action said: "When our government fails to uphold their moral and legal obligations, it is the responsibility of ordinary citizens to take direct action. The terrorists are the ones committing a genocide, not those who break the tools used to commit it." The spokesperson accused the UK of failing to meet its obligations to prevent or punish genocide, in relation to Israel's onslaught in Gaza.

In a statement on X, the group said: "By making plans to ban us, the British state is effectively saying they value the property used to commit genocide more than the people killed."

Saeed Taji Farouky, a spokesperson for the group, told Times Radio: "The idea that Palestine Action could end up on the same list as groups like ISIS is just absolutely absurd. This is a knee-jerk reaction."

Addressing a crowd of pro-Palestine demonstrators in Whitehall on Saturday, former Scottish first minister Humza Yousaf accused the government of "abusing" anti-terror laws. Human rights group Amnesty International UK said it is "deeply concerned at the use of counter terrorism powers to target protest groups". "Terrorism powers should never have been used to aggravate criminal charges against Palestine Action activists and they certainly shouldn't be used to ban them," it added.

Former justice secretary Lord Charlie Falconer said vandalising aircraft at RAF Brize Norton would not on its own provide legal justification for proscribing the protest group. Appearing on Sky News' Sunday Morning with Trevor Phillips, he said: "I think the question will probably not be what we know about them publicly, but there would need to be something that was known by those who look at these sorts of things that we don't know about.
"They got into the air base, which might suggest they've got some degree of ability to make them dangerous, I don't know. But generally, that sort of demonstration wouldn't justify proscription, so there must be something else that I don't know about."

Keir Starmer says Kneecap Glastonbury performance is not 'appropriate' (The Independent)
RAF base 'targeted in Iran spy plot' (The Telegraph)
Briton arrested for alleged terrorism offences and spying on RAF base in Cyprus (The Guardian)
Yahoo
22 minutes ago
Pre-IPO Anduril Now Worth $30 Billion
Unicorn defense and AI stock Anduril hit its first $1 billion valuation six years ago. Today, Anduril is making $1 billion a year in revenue, its privately owned stock has gone up 30-fold, and it wants to IPO. Anduril is a popular defense stock, but it costs far too much to justify buying.

Once upon a time, unicorn companies -- privately owned, but valued in excess of $1 billion -- were a rare breed. Lately, there seem to be whole herds of them.

Take Anduril, for example. The privately held artificial intelligence and defense company founded by Oculus VR inventor Palmer Luckey first hit the magic $1 billion private market value back in 2019. But Anduril didn't stop there. Growing steadily over the past half-decade, the privately traded defense stock last week raised $2.5 billion in new cash, and the company as a whole is now valued at $30.5 billion, according to a report from Bloomberg.

That's a growth rate any investor would love to get a piece of. And while you cannot invest in Anduril stock yet, you might soon be able to, because Anduril says it's going to IPO.

Anduril chairman Trae Stephens told Bloomberg his company aims to "scale into the largest problems for the national security community." Of particular note, Anduril recently took over a gigantic $22 billion Pentagon augmented reality contract from Microsoft. But Anduril needs cash to reach the scale it wants, "to shore up the balance sheet and make sure we have the ability to deploy capital into these manufacturing and production problem sets that we're working on."

Pre-IPO companies like Anduril have three ways they can do that. They can take out loans (at currently high interest rates). They can hold an IPO and sell their shares to public investors for cash. Or they can sell shares discreetly, in a private stock offering.
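To put that "growth rate any investor would love" in concrete terms, here is a minimal sketch of the implied compound annual growth rate, using only the valuations quoted in the article ($1 billion six years ago, $30.5 billion today); the helper function is illustrative, not from any cited source:

```python
# Implied compound annual growth rate (CAGR) of Anduril's private valuation,
# based on the article's figures: ~$1B six years ago to ~$30.5B today.

def cagr(start_value: float, end_value: float, years: float) -> float:
    """Annualized rate that compounds start_value into end_value over `years` years."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

rate = cagr(1.0e9, 30.5e9, 6)
print(f"Implied CAGR: {rate:.0%}")  # roughly 77% per year
```

A 30-fold rise over six years works out to roughly 77% compounded annually, which is the arithmetic behind the article's enthusiasm.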
This last route is the one Anduril took, but in his interview with Bloomberg, Stephens made it clear he's not ruling out an IPO -- at all. "Long term we continue to believe that Anduril is the shape of a publicly traded company," said Stephens. "We're not in any rapid path to doing that [but] we're certainly going through the processes required to prepare for doing something like that in the medium term."

Stephens and Luckey might want to shift that focus into the short term, however, because circumstances may never be better to make Anduril a popular IPO stock.

It's been only a couple of weeks now since Ukraine launched Operation Spiderweb, deploying more than 100 artificial intelligence (AI)-guided drones from trucks to attack airfields across Russia, causing billions of dollars' worth of damage to military assets -- apparently with no casualties -- for an investment measured in thousands of dollars. The memory of that mission hadn't even faded before Israel launched its own surprise attack on Iran, Operation Rising Lion, last week. While most headlines focus on the exploits of Israel's hundreds of fighter aircraft bombing military and nuclear targets in Iran, Israel's Mossad spy agency apparently also used drones and remotely operated weapons systems to great effect.

These attacks weren't just reminiscent of Operation Spiderweb. They were reminiscent of Anduril's own artificial intelligence drone technology. With both successes fresh in investors' minds right now, there may be no better time to launch an IPO to capitalize on this free publicity.

But let's not get irrationally exuberant here. What's good for Anduril isn't necessarily good for investors. As popular a stock as Anduril might be if it IPOs, that doesn't necessarily mean you should buy it. Consider what a $30.5 billion valuation means for a future publicly traded Anduril stock. According to Luckey, Anduril roughly doubled its 2023 revenue in 2024, making "about a billion" dollars in 2024 sales.
The company isn't believed to be profitable yet, so that doesn't mean much in terms of P/E ratios. But it does mean that Anduril sells for a price-to-sales ratio of about 30.5.

Compare that to alternatives in the "new defense tech" space. AeroVironment (NASDAQ: AVAV), which up until about the time of Russia's 2022 invasion of Ukraine was the biggest name in U.S. drone stocks, costs 7.4 times trailing sales -- one quarter of Anduril's valuation. And AeroVironment is a profitable defense stock, earning about $33 million last year. Karman Holdings (NYSE: KRMN), itself a recent defense stock IPO (that I've argued is also overpriced), is closer to Anduril's valuation at 17.3 times sales, but still only about half as expensive. And again, Karman is already earning profits.

Don't even ask about more traditional defense contractors like General Dynamics, Lockheed Martin, or Northrop Grumman. Combined, those three giants earned more than $13 billion last year, but their P/S ratios range from only 1.6 (GD and Lockheed) to 1.9 times sales. That's way cheaper than any of the new defense tech stocks, Anduril included, and with far longer track records of success.

I'm not here to knock Anduril. As a company, I think it's pretty great, and a superb success story in American business. I have high (if cautious) hopes that Anduril might shake up an entrenched and overly concentrated defense industry that's basically made up of companies like General D, LockMart, and Northrop, and help the Pentagon to spend taxpayer defense dollars more wisely. I just don't think you should invest in Anduril stock. Not at today's valuation, at least.
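For readers who want to check the arithmetic, the price-to-sales comparison above reduces to a one-line formula: valuation divided by trailing annual revenue. A minimal sketch using the approximate figures quoted in the article (the numbers are the article's, not fresh market data):

```python
# Price-to-sales (P/S) ratio: market valuation divided by trailing annual revenue.
# Figures below are the approximate numbers quoted in the article (US dollars).

def price_to_sales(valuation: float, revenue: float) -> float:
    """Return the P/S ratio for a given valuation and trailing revenue."""
    return valuation / revenue

# Anduril: ~$30.5B private valuation on ~$1B in annual sales.
anduril_ps = price_to_sales(30.5e9, 1.0e9)
print(f"Anduril P/S: about {anduril_ps:.1f}x")  # about 30.5x
```

The same function applied to the other names in the article would reproduce the 7.4x (AeroVironment) and 17.3x (Karman) multiples, given their respective valuations and sales.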
Rich Smith has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends AeroVironment and Microsoft. The Motley Fool recommends Lockheed Martin and recommends the following options: long January 2026 $395 calls on Microsoft and short January 2026 $405 calls on Microsoft. The Motley Fool has a disclosure policy. Pre-IPO Anduril Now Worth $30 Billion was originally published by The Motley Fool.


Forbes
26 minutes ago
The Biggest Existential Threat Calls For Philosophers, Not AI Experts
Geoffrey Hinton, Nobel laureate and former AI chief at Google, recently distinguished between two ways in which AI poses an existential threat to humanity. Hinton cites cyberattacks, the creation of viruses, the corruption of elections, and the creation of echo chambers as examples of the first; and deadly autonomous weapons, along with superintelligent AI that realizes it doesn't need us and therefore decides to kill us, as examples of the second.

But there is a third existential threat that neither Hinton nor his AI peers seem to worry about. And contrary to their warnings, this third threat is eroding human existence without reaching any of the media headlines.

The simplest definition of an existential threat is 'a threat to something's very existence'. But to know whether humanity's existence is threatened, we must know what it means to exist as a human. And the AI experts don't.

Ever since Alan Turing refused to consider the question 'Can machines think?', AI experts have deftly avoided defining basic human traits such as thinking, consciousness and creativity. No one knows how to define these things, they say. And they are right. But they are wrong to use their lack of definitions as an excuse for not taking the question of what it means to be human seriously. And they add to the existential threat to humanity by using terms like human-level intelligence when talking about AI.
What Existential Threat Really Means

Talking about when and how AI will reach human-level intelligence, or outsmart us altogether, without having any idea how to understand human thinking, consciousness, and creativity is not only optimistic. It also erodes our shared understanding of ourselves and our surroundings. And this may very well turn out to be the biggest existential threat of all: that we lose touch with our humanity.

In his 1954 lecture, 'The Question Concerning Technology', German philosopher Martin Heidegger said that our relationship with technology puts us in constant danger of losing touch with technology, reality, and ourselves, and that unless we get a better grip on what he called the essence of technology, we are bound to do exactly that.

When I interviewed Neil Lawrence, DeepMind professor of machine learning at the University of Cambridge, for 'An AI Professor's Guide To Saving Humanity From Big Tech' last year, he agreed that Heidegger's prediction has proven to be frighteningly accurate. But instead of pointing to the essence of technology, he said that 'the people who are in control of the deployment of [technology] are perhaps the least socially intelligent people we have on the planet.'

Whether that's why AI experts conveniently avoid talking about the third existential threat is not for me to say. But as long as we focus on them and their speculations about what it takes for machines to reach human-level intelligence, we are not focusing on ourselves and what it takes for us to exist and evolve as humans.

Existential Philosophers On Existential Threats

Unlike AI experts, founders, and developers, the existential philosophy that Heidegger helped pioneer has not received billions of dollars in annual investment since the 1950s. Quite the contrary. While the AI industry has exploded, interest and investment in the humanities have declined worldwide.
In other words, humanity has for decades invested heavily in understanding and developing artificial intelligence, while we have neglected to understand and develop ourselves as humans. But although existential philosophers like Heidegger, Jean-Paul Sartre, and Maurice Merleau-Ponty have not received grants as large as those of their colleagues in computer science departments, they have contributed insights that are more helpful when it comes to understanding and dealing with the existential threats posed by AI. In Being and Nothingness, for instance, Sartre places human consciousness, or no-thingness (néant), in opposition to being, or thingness (être).

Just as different AI experts believe in different ways to reach human-level intelligence, different existential philosophers have different ways of describing human existence. But unlike AI experts, they don't consider the lack of definitions a problem. On the contrary, they consider the lack of definitions, theories and technical solutions an important piece in the puzzle of understanding what it means to be human.

Existential philosophers have realized that consciousness, creativity, and other human qualities that we struggle to define are not an expression of 'something', that is, a core, function, or feature that distinguishes us from animals and machines. Rather, they are an expression of 'nothing'. Unlike other creatures, we humans not only exist, we also question our existence. We ask why and for how long we will be here. We exist knowing that at some point we will cease to exist. That we are limited in time and space. And therefore have to ask why, how and with whom we live our lives.

For existential philosophers, AI does not pose an existential threat to humanity because it might exterminate all humans.
It poses an existential threat because it offers answers faster than humans can ask the questions that help them contemplate their existence. And when humans stop asking existential questions, they stop being human.

AI Experts Agree: Existential Threats Call For Philosophy

While existential philosophers insist on understanding the existential part of existential threats, AI experts skip the existential questions and go straight to the technical and political answers to how the threats can be contained. That's why we keep hearing about responsible AI and regulation: because that's the part that calls for technical expertise. That's the part where the AI experts are still needed.

AI experts know how to design and develop 'something', but they have no idea how to deal with 'nothing'. That's probably what Hinton realized when he retired to spend more time on what he described as 'more philosophical work.' That also seems to be what Demis Hassabis, CEO of Google DeepMind, suggests when he says that 'we need new great philosophers to come about to understand the implications of this.' And that's certainly what Nick Bostrom hinted at in my interview with him about his latest book, Deep Utopia, when he declared that some questions are 'beyond his pay grade'.

What 20th-century existential philosophy teaches us is that we don't have to wait for the AI experts to retire or for new great philosophers to emerge to deal with the existential threats posed by AI. All we have to do is remind ourselves and each other to ask how we want – and don't want – to live our lives before we trust AI to know the answer.