Runtime and memory requirements for typical formulations of energy system models increase non-linearly with resolution, computationally constraining large-scale models despite state-of-the-art solvers and hardware. This scaling behavior forces modelers to omit detail, which can affect key outputs to an unknown degree. Recent algorithmic innovations employing decomposition have enabled linear increases in runtime and memory use as temporal resolution increases. Newly tractable, higher-resolution systems can be compared with the lower-resolution configurations commonly employed today in academic research and industry practice, providing a better understanding of the potential biases or inaccuracies introduced by these abstractions. We employ a state-of-the-art electricity system planning model and new high-resolution systems to quantify the impact of varying degrees of spatial, temporal, and operational resolution on results salient to policymakers and planners. We find that models with high spatial and temporal resolution produce more realistic siting decisions and improved emissions, reliability, and price outcomes. Errors are generally larger in systems with low spatial resolution, which omit key transmission constraints. We demonstrate that high temporal resolution cannot overcome biases introduced by low spatial resolution, and vice versa. While we see asymptotic improvements to total system cost and reliability with increased resolution, other salient outcomes such as siting accuracy and emissions exhibit continued improvement across the full range of model resolutions considered. We conclude that modelers should carefully balance resolution across the spatial, temporal, and operational dimensions, and that novel computational methods enabling higher-resolution modeling are valuable and can further improve the decision support provided by this class of models.
The expansion of transmission capacity is likely to be a bottleneck that restricts the variable renewable energy (VRE) deployment required to achieve ambitious emission reduction goals. Interconnection and inter-zonal transmission buildout may be displaced by optimally sizing VRE relative to grid connection capacity and by co-locating VRE and battery resources behind the point of interconnection. However, neither of these capabilities is commonly captured in macro-energy system models. We develop two new functionalities to explore the substitutability of storage for transmission and the optimal capacity and siting decisions of renewable energy and battery resources through 2030 in the Western Interconnection of the United States. Our findings indicate that modeling optimized interconnection and storage co-location better captures the full value of energy storage and its ability to substitute for transmission. Optimizing interconnection capacity and co-location can reduce total grid connection and shorter-distance transmission capacity expansion by on the order of 10% at storage penetration equivalent to 2.5-10% of peak system demand. The decline in interconnection capacity corresponds with higher ratios of VRE to grid connection capacity (on average, 1.5-1.6 megawatts (MW) of PV per MW of inverter capacity and 1.2-1.3 MW of wind per MW of interconnection). Co-locating storage with VREs also results in a 10-15% increase in wind capacity, as wind sites tend to require longer and more costly interconnection. Finally, co-located storage exhibits 22-25% higher value than standalone storage in our model setup. Given the coarse representation of transmission networks in our modeling, this outcome likely overstates the real-world importance of storage co-location with VREs. However, it highlights how siting storage in grid-constrained locations can maximize the value of storage and reduce transmission expansion.
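As a stylized sketch of how such co-location and interconnection-sizing functionality can be formulated (our own illustrative notation, not the model's actual implementation), the VRE nameplate capacity and the grid-connection capacity at each candidate site become separate decision variables, with co-located storage charging and discharging behind the same connection:
\[
p^{\mathrm{vre}}_{s,t} + p^{\mathrm{dis}}_{s,t} - p^{\mathrm{ch}}_{s,t} \le \bar{C}^{\mathrm{int}}_{s},
\qquad
p^{\mathrm{vre}}_{s,t} \le cf_{s,t}\,\bar{C}^{\mathrm{vre}}_{s} \quad \forall\, s,t,
\]
where the VRE capacity \(\bar{C}^{\mathrm{vre}}_{s}\) is free to exceed the interconnection capacity \(\bar{C}^{\mathrm{int}}_{s}\). A formulation of this kind lets the optimization overbuild VRE relative to its grid connection (e.g., roughly 1.5 MW of PV per MW of inverter, as found above) and use on-site storage to absorb energy that would otherwise be lost at the interconnection limit.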
Scheduled maintenance is likely to be lengthy and therefore consequential for the economics of fusion power plants. The maintenance strategy that maximizes the economic value of a plant depends on internal factors, such as the cost and durability of the replaceable components and the frequency and duration of the maintenance blocks, and on external factors arising from the electricity system in which the plant operates. This paper examines the value of fusion power plants with various maintenance properties in a decarbonized United States Eastern Interconnection circa 2050. Seasonal variations in electricity supply and demand mean that certain times of year, particularly spring to early summer, are best for scheduled maintenance. This seasonality has two important consequences. First, the value of a plant can be 15% higher than one would naively expect if value were directly proportional to availability. Second, in some cases, replacing fractions of a component in shorter maintenance blocks spread over multiple years is better than replacing it all at once during a longer outage, even though the overall availability of the plant is lower in the former scenario.
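To illustrate the first point with purely hypothetical numbers (ours, not the paper's): if a plant is available for 85% of the year and value were strictly proportional to availability, it would earn a fraction 0.85 of the value of an always-available plant. If instead the 15% of hours spent in maintenance are scheduled during spring weeks that account for only, say, 5% of annual energy value, the realized fraction is about 0.95, i.e.
\[
\frac{V_{\text{realized}}}{V_{\text{naive}}} \approx \frac{1 - 0.05}{0.85} \approx 1.12,
\]
roughly 12% above the availability-proportional estimate, which is the kind of gap the 15% figure above reflects.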
In the coming decades, the United States aims to undergo an energy transition away from fossil fuels and toward a fully decarbonized power grid. There are many pathways that the U.S. could pursue toward this objective, each of which relies on different types of generating technologies to provide clean and reliable electricity. One potential contributor to these pathways is advanced nuclear fission, which encompasses various innovative nuclear reactor designs. However, little is known about how cost-competitive these reactors would be compared to other technologies, or about which aspects of their designs offer the most value to a decarbonized power grid. We employ an electricity system optimization model and a case study of a decarbonized U.S. Eastern Interconnection circa 2050 to generate initial indicators of the future economic value of advanced reactors and of the sensitivity of that value to various design parameters, the availability of competing technologies, and the underlying policy environment. These results can inform long-term cost targets and guide near-term innovation priorities, investments, and reactor design decisions. We find that advanced reactors must cost no more than $5.1-$6.6/W to gain an initial market share (assuming a 30-year asset life and a 3.5-6.5% real WACC), while those that include thermal storage in their designs can cost up to $5.5-$7.0/W (not including the cost of storage). Since the marginal value of advanced fission reactors declines as market penetration increases, break-even costs fall by around 19% at 100 GW of cumulative capacity and by around 40% at 300 GW. Additionally, policies that provide investment tax credits for nuclear energy create the most favorable environment for advanced nuclear fission. Stakeholders and investors should consider these findings when deciding which technologies to pursue for decarbonizing the U.S. power grid.
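For context on how the $/W break-even figures relate to annual system value (a standard capital-recovery relationship, given here as background rather than as text from the study), the break-even overnight cost is the plant's annualized marginal value divided by the capital recovery factor:
\[
C_{\text{break-even}} \approx \frac{V_{\text{annual}}}{\mathrm{CRF}(r,n)},
\qquad
\mathrm{CRF}(r,n) = \frac{r(1+r)^{n}}{(1+r)^{n}-1},
\]
with \(n = 30\) years and \(r\) the 3.5-6.5% real WACC assumed above; a lower WACC maps the same annual value onto a higher allowable capital cost, which is one reason the break-even values are quoted as ranges.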
Corporations and other organizations procure large amounts of carbon-free electricity and often use these procurements to make claims regarding the carbon intensity of their electricity consumption. Although a claim of carbon-free electricity use implies to the public that an organization’s electricity consumption and procurement practices have a near-zero aggregate impact on climate change, this may not be the case. In fact, multiple proposed emission accounting systems offer different definitions of being “carbon-free.” Here, we explore how the carbon-free procurement strategies associated with several of these accounting systems affect emission outcomes at the level of the entire electricity system, accounting for induced changes in both system operations and installed capacity. Focusing on voluntary actors in the western United States, we find that the actions incentivized by hourly accounting of carbon-free electricity consumption, including the uptake of advanced clean energy technologies, are most consistently associated with real system-level emission reductions. In the current U.S. policy environment, procurement strategies that match participants’ demand with carbon-free generation on an annual basis have minimal impact on long-run system-level CO2 emissions. Similar outcomes occur when participants calculate their annual emission impacts using short-run marginal emission rates and attempt to offset these with their procurements. In contrast, matching participants’ demand on an hourly basis with carbon-free generation can drive significant reductions in system-level CO2 emissions while incentivizing advanced clean firm generation and long-duration storage technologies that would not otherwise see market uptake. Greater emission impacts are correlated with increased participant costs. We further find that government-imposed clean electricity standards can increase the effectiveness of all forms of voluntary procurement.
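In stylized form (our notation, not the accounting standards' own language), annual matching requires a participant's procured carbon-free generation \(g_t\) to cover its demand \(d_t\) only in aggregate over the year, whereas hourly matching imposes the requirement in every hour, with storage allowed to shift procured energy:
\[
\text{annual:}\ \sum_{t} g_{t} \ge \sum_{t} d_{t},
\qquad
\text{hourly:}\ g_{t} + p^{\mathrm{dis}}_{t} - p^{\mathrm{ch}}_{t} \ge d_{t} \quad \forall t .
\]
It is the hour-by-hour constraint that pulls clean firm generation and long-duration storage into a participant's portfolio, consistent with the results above.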
Enhanced geothermal systems (EGSs) are an emerging energy technology with the potential to greatly expand the viable resource base for geothermal power generation. Although EGSs have traditionally been envisioned as ‘baseload’ resources, flexible operation of EGS wellfields could allow these plants to provide load-following generation and long-duration energy storage. In this work we evaluate the impact of operational flexibility on the long-run system value and deployment potential of EGS power in the western United States. We find that load-following generation and in-reservoir energy storage enhance the role of EGS power in least-cost decarbonized electricity systems, substantially increasing optimal geothermal penetration and reducing bulk electricity supply costs compared to systems with inflexible EGSs or no EGSs. Flexible geothermal plants preferentially displace the most expensive competing resources by shifting their generation on diurnal and seasonal timescales, with round-trip energy storage efficiencies of 59–93%. Benefits of EGS flexibility are robust across a range of electricity market and geothermal technology development scenarios.
Climate change is expected to intensify the effects of extreme weather events on power systems and increase the frequency of severe power outages. The large-scale integration of environment-dependent renewables during energy decarbonization could increase uncertainty in the supply–demand balance and the climate vulnerability of power grids. This Perspective discusses the superimposed risks of climate change, extreme weather events and renewable energy integration, which collectively affect power system resilience. Insights drawn from large-scale spatiotemporal data on historical US power outages induced by tropical cyclones illustrate the vital role of grid inertia and system flexibility in maintaining the balance between supply and demand, thereby preventing catastrophic cascading failures. Alarmingly, future projections under diverse emission pathways signal that climate hazards, especially tropical cyclones and heatwaves, are intensifying and can have even greater impacts on power grids. High-penetration renewable power systems under climate change may face escalating challenges, including more severe infrastructure damage, lower grid inertia and flexibility, and longer post-event recovery. Looking towards a net-zero future, this Perspective then explores approaches for harnessing the inherent potential of distributed renewables for climate resilience by forming microgrids, alongside holistic technical solutions such as grid-forming inverters, distributed energy storage, cross-sector interoperability, distributed optimization and climate–energy integrated modelling.
The Inflation Reduction Act (IRA) is regarded as the most prominent piece of federal climate legislation in the U.S. thus far. This paper investigates the potential impacts of IRA on the power sector, which is the focus of many core IRA provisions. We summarize a multi-model comparison of IRA to identify robust findings and variation in power sector investments, emissions, and costs across 11 models of the U.S. energy system and electricity sector. Our results project that IRA incentives accelerate the deployment of low-emitting capacity, increasing average annual additions to as much as 3.2 times current levels through 2035. CO2 emissions reductions from electricity generation across models range from 47% to 83% below 2005 levels in 2030 (average of 68%) and from 66% to 87% in 2035 (average of 78%). The higher clean electricity deployment and lower emissions projected under IRA, compared with earlier U.S. modeling, change the baseline for future policymaking and analysis. IRA helps to bring projected U.S. power sector and economy-wide emissions closer to near-term climate targets; however, no models indicate that these targets will be met with IRA alone, which suggests that additional policies, incentives, and private sector actions are needed.
The paper addresses inefficiencies in large-scale energy systems planning models, traditionally formulated as linear programs (LP) or mixed integer linear programs (MILP). These models often struggle with tractability, forcing abstractions that compromise result quality. To overcome this, the authors introduce a novel Benders decomposition approach. The method separates investment from operational decisions and decouples operational time steps using budgeting variables, enabling parallel processing of subproblems while accommodating policy constraints that span time periods. The new approach significantly improves runtime, scaling linearly with temporal resolution, and delivers marked runtime reductions for all MILP and some LP formulations, with the size of the improvement varying with problem size. Beyond energy, the algorithm is applicable to planning problems in domains such as water, transportation, and production processes. Notably, it can tackle large-scale problems that are otherwise intractable. The enhanced resolution achieved through this method reduces structural uncertainty, thereby improving the accuracy of planning and investment recommendations.
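As a minimal, self-contained sketch of the investment/operations split and optimality cuts described here (a toy single-technology problem with illustrative costs and two short operational "weeks", using SciPy's HiGHS interface; it omits the paper's budgeting variables for linking policy constraints and is not the authors' implementation):

```python
# Toy Benders decomposition: one investment variable (capacity x) in the master,
# independent operational LPs per "week" that return duals for optimality cuts.
import numpy as np
from scipy.optimize import linprog

C_INV, C_GEN, C_NSE = 100.0, 2.0, 50.0          # illustrative $/MW, $/MWh, $/MWh
WEEKS = [np.array([30.0, 45.0, 60.0, 50.0]),    # hourly demand, week 1 (MW)
         np.array([20.0, 55.0, 70.0, 40.0])]    # hourly demand, week 2 (MW)

def solve_subproblem(x_bar, demand):
    """Operational LP for one week at fixed capacity x_bar.
    Variables: generation g_t and unserved energy u_t for each hour."""
    T = len(demand)
    c = np.r_[np.full(T, C_GEN), np.full(T, C_NSE)]
    A_dem = np.hstack([-np.eye(T), -np.eye(T)])       # g_t + u_t >= d_t
    A_cap = np.hstack([np.eye(T), np.zeros((T, T))])  # g_t <= x_bar
    res = linprog(c, A_ub=np.vstack([A_dem, A_cap]),
                  b_ub=np.r_[-demand, np.full(T, x_bar)],
                  bounds=(0, None), method="highs")
    slope = res.ineqlin.marginals[T:].sum()           # dQ/dx from capacity duals (<= 0)
    return res.fun, slope

def solve_master(cuts, n_weeks):
    """Master LP over [x, theta_1..theta_W] with accumulated optimality cuts:
    theta_w >= Q_w(x_bar) + slope_w * (x - x_bar)."""
    c = np.r_[C_INV, np.ones(n_weeks)]
    A_ub, b_ub = [], []
    for w, q, slope, x_bar in cuts:
        row = np.zeros(1 + n_weeks)
        row[0], row[1 + w] = slope, -1.0
        A_ub.append(row)
        b_ub.append(slope * x_bar - q)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=(0, None), method="highs")
    return res.x[0], res.fun

x_bar, cuts, best_upper = 0.0, [], float("inf")
for it in range(50):
    subs = [solve_subproblem(x_bar, d) for d in WEEKS]   # parallelizable
    best_upper = min(best_upper, C_INV * x_bar + sum(q for q, _ in subs))
    cuts += [(w, q, slope, x_bar) for w, (q, slope) in enumerate(subs)]
    x_bar, lower = solve_master(cuts, len(WEEKS))        # lower bound on total cost
    if best_upper - lower <= 1e-6 * max(1.0, best_upper):
        break
print(f"Converged after {it + 1} iterations: build {x_bar:.1f} MW of capacity")
```

Each operational subproblem sees the candidate capacity only through its right-hand side, so the weeks can be solved independently (and in parallel), and the duals of the capacity constraints supply the slopes of the cuts that tighten the master's estimate of operational cost.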
The REPEAT Project’s revised analysis examines the climate and energy impacts of legislation passed by the 117th Congress, focusing on the Inflation Reduction Act of 2022 and the Infrastructure Investment and Jobs Act of 2021. Updated with the latest data from 2022, the analysis includes enhanced information on methane emissions in the oil, gas, agriculture, and forestry sectors. It presents three ‘Current Policies’ scenarios (‘Conservative’, ‘Mid-range’, and ‘Optimistic’) to reflect uncertainties in the effectiveness of the laws and in supply chain constraints. It also includes two benchmark scenarios for comparison: a ‘Frozen Policies’ scenario based on policies in place in early 2021, and a ‘Net-Zero Pathway’ scenario aligned with President Biden’s climate goals. The report covers greenhouse gas emissions, clean energy and electric vehicle deployment, fossil fuel use, and their impacts on energy costs, investment, employment, air pollution, and public health. The findings, which are approximate due to uncertainties, are updated regularly and have not been peer-reviewed.