How AI And A Stiff Breeze Vitalize Our Aging Grid

Forbes, 22 May 2025

A worker installing a LineVision DLR sensor on a sub-transmission tower
After decades of flat to declining electrical utility demand, customers are suddenly clamoring for more power. AI data centers have driven that demand, but home electrification and EVs have also boosted the electric load on our aging grid.
Utilities' present infrastructures are based on 10-year-old plans that didn't foresee this surge in demand. Building new power plants or stringing miles of new transmission lines doesn't happen overnight, so utilities are scrambling to do more with less.
Boston start-up LineVision and several European competitors are leveraging a new technology called dynamic line ratings (DLR) to help utilities do more with less. DLR systems allow grid operators to increase the current flowing through high-voltage transmission lines like the ones pictured below.
A transmission line in Waikanae, north of Wellington, New Zealand.
The relationship between current and voltage can be visualized as a conveyor belt carrying boxes of a uniform size. The fixed box size represents voltage, while the speed of the belt represents current. To deliver more boxes (that is, more power) in the same amount of time, you increase the speed of the belt.
The same is true for electrical generation. Generation facilities respond to increased demand by producing more power and sending it through transmission wires at a constant voltage, but higher current.
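In electrical terms, the boxes delivered per unit time are power: P = V × I. A short sketch (using an illustrative 345 kV voltage level and a single-conductor simplification, both assumptions for the sake of the example) shows that serving more load at a fixed voltage means proportionally more current:

```python
# Power delivered is voltage times current: P = V * I.
# At a fixed transmission voltage, serving more demand means pushing more current.

VOLTAGE_KV = 345  # illustrative high-voltage transmission level (assumed)

def current_for_power(power_mw: float, voltage_kv: float) -> float:
    """Current in amperes for a given power at a fixed voltage
    (single-conductor simplification, ignoring three-phase factors)."""
    return power_mw * 1e6 / (voltage_kv * 1e3)

print(current_for_power(500, VOLTAGE_KV))  # ~1449 A
print(current_for_power(750, VOLTAGE_KV))  # 50% more power -> 50% more current
```

The question for grid operators is how much of that extra current a line can carry before it overheats.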
If electrical grids were conveyor belts, it would be easy to just boost the amperage (another word for current, which is measured in amperes) flowing through the lines. However, as current increases, transmission lines heat up and sag due to thermal expansion. Sagging strains transmission towers and, if severe enough, can cause lines to brush against vegetation, sparking fires (read my article PG&E: The First S&P 500 Climate Change Casualty). The sun's heat makes lines sag more on sunny days, while increased airflow on windy days cools the lines and reduces sag.
Industry conservatism and the threat of wildfire-related lawsuits prompt grid operators to keep transmission currents low to prevent excessive line sagging. This conservatism is warranted on hot, still summer days when everyone is cranking up their air conditioning, but such caution often restricts the amount of electricity available to customers.
DLR systems signal grid operators to throttle back on the current on sunny, still days and crank up the current when demand spikes on cool, cloudy, windy days.
A solar powered DLR device from LineVision installed on a transmission tower
Such slight adjustments might seem trivial, but DLR systems enable amazing capacity increases. A brisk wind allows 50-100%+ amperage increases over static assumptions on a case-by-case basis, leading to average grid-wide capacity increases of around 40%. Replicating such increases over the entire 700,000-mile grid network could result in enormous economic benefits.
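A toy steady-state heat balance, loosely in the spirit of the IEEE 738 conductor-rating standard, illustrates why a breeze frees up so much headroom: resistive heating (I²R) must balance convective and radiative cooling minus solar gain. Every coefficient below is an illustrative assumption, not a rated calculation:

```python
import math

def ampacity(wind_mps, t_ambient=35.0, t_max=75.0, solar_wm=900.0):
    """Toy steady-state heat balance: I^2 * R heating balances convective
    and radiative cooling minus solar gain. All coefficients are assumed,
    illustrative values, not an engineering rating."""
    R = 7.3e-5        # conductor resistance per metre at t_max, ohm/m (assumed)
    diameter = 0.028  # conductor diameter in metres (assumed)
    # Convective cooling rises roughly with the square root of wind speed.
    h = 10.0 + 25.0 * math.sqrt(wind_mps)  # toy heat-transfer coefficient, W/m^2/K
    q_conv = h * math.pi * diameter * (t_max - t_ambient)   # W per metre of line
    q_rad = 4.0 * (t_max - t_ambient) * diameter            # toy radiative term
    q_sun = 0.9 * solar_wm * diameter                       # solar heat gain
    q_net = q_conv + q_rad - q_sun
    return math.sqrt(max(q_net, 0.0) / R)  # max current keeping line at t_max

still = ampacity(0.5)   # near-still air
breezy = ampacity(5.0)  # a stiff breeze
print(round(still), round(breezy))
```

Even with these made-up coefficients, raising the wind speed from near-still to a stiff breeze lifts the allowable current by well over half, consistent with the case-by-case gains cited above.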
Increased 'ampacity' (i.e., current capacity) reduces the demand for new generation facilities and transmission lines, clears transmission bottlenecks, increases the grid's capacity for cheap renewable energy, and helps grids re-energize faster after equipment failures.
The pioneer of DLR systems is Ampacimon, a Belgian company spun out of a university in 2010. Its devices are installed directly onto transmission lines, deriving power from the lines' electromagnetic field. The devices sense inputs such as wind-induced vibration and line temperature, then combine these measurements with local weather reports in theoretical models to estimate how much the lines will sag. The results are reported to the grid operator in real time via cellular links.
Heimdall Power, a Norwegian company founded in 2016, applied a twist to Ampacimon's model. Heimdall's devices are installed onto transmission lines by drone and run on harvested electric power. However, rather than relying on theoretical models, Heimdall's devices carry MEMS accelerometers (the same sensors your phone uses to measure movement and orientation relative to the earth). The devices infer from this motion data the sag across the span to which they are attached. You can learn more about Heimdall's solution here.
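The sag such devices estimate follows simple statics. Under the common parabolic approximation, sag grows with the square of the span and inversely with conductor tension, and tension drops as a heated conductor lengthens. A minimal sketch, with assumed, illustrative span, weight, and tension values:

```python
def sag_metres(span_m: float, weight_n_per_m: float, tension_n: float) -> float:
    """Parabolic approximation to conductor sag: sag ~ w * S^2 / (8 * H).
    Thermal expansion lowers the horizontal tension H, so a hotter line sags more.
    All inputs below are illustrative assumptions."""
    return weight_n_per_m * span_m**2 / (8 * tension_n)

# Assumed numbers: a 300 m span, a conductor weighing 15 N per metre.
cool = sag_metres(300, 15.0, 28000)  # higher tension when the line is cool
hot = sag_metres(300, 15.0, 22000)   # tension falls as the line heats and lengthens
print(round(cool, 2), round(hot, 2))
```

The roughly extra metre and a half of sag in the hot case is exactly the margin grid operators guard against when they cap current.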
Ampacimon's devices originally required that power to the lines be shut off for installation, but now both they and Heimdall's devices can be installed on live transmission wires, making installation less disruptive.
LineVision is the only DLR company to implement direct sag and temperature measurements using LIDAR, electromagnetic field sensors, and AI-powered visual imaging. LineVision sensors attach to towers rather than the lines themselves and are powered by solar cells and batteries, enabling the system to estimate line capacity when a grid goes down and is attempting to restart. Sensor installation is quicker and easier on towers than on lines, cutting capital costs.
LineVision's software uses AI to monitor line conditions and integrates local weather forecasts to predict future line conditions, then conveys capacity recommendations to grid operators.
National Grid, a U.K.-based utility with operations in the U.K. and New York State, estimated that LineVision's DLR has generated over £1 billion in transmission grid congestion reductions and upgrade deferrals.
We are putting an enormous burden on our aging grid, whose lines in many regions average more than 30 years old. I believe that power generation and distribution in the post-Climate world will require a complete rethink of the Industrial Revolution paradigm by which we built our present grid, but even a modern, distributed grid will need a strong, efficient transmission network. Dynamic line ratings, powered by modern sensors, AI, and an occasional stiff breeze, offer a critical advantage. Intelligent investors take note.