As the availability of weather-dependent, zero-marginal-cost resources such as wind and solar power increases, a variety of flexible electricity loads, or ‘demand sinks’, could be deployed to use intermittently available low-cost electricity to produce valuable outputs. This study provides a general framework to evaluate any potential demand sink technology and assess whether it can be deployed cost-effectively in low-carbon power systems. We use an electricity system optimization model to assess 98 discrete combinations of capital costs and output values that collectively span the range of feasible characteristics of potential demand sink technologies. We find that candidates like hydrogen electrolysis, direct air capture, and flexible electric heating can all achieve significant installed capacity (>10% of system peak load) if their capital costs decline in the future. Demand sink technologies substantially increase installed wind and solar capacity while having little effect on battery storage, firm generating capacity, or the average cost of electricity.
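A minimal sketch of the screening logic behind such a cost/value sweep, treating a candidate demand sink as a price taker against a synthetic hourly price series. The prices, output values, and capital costs below are illustrative assumptions, not the study's inputs, which are co-optimized inside a full system model.

```python
import numpy as np

rng = np.random.default_rng(4)
# Stand-in hourly electricity prices ($/MWh); the study instead derives prices endogenously.
price = np.maximum(rng.normal(35, 30, 8760), 0.0)

def breaks_even(output_value, annualized_capex):
    """Annual operating margin per MW of demand sink vs its annualized capital cost."""
    margin = np.maximum(output_value - price, 0.0).sum()   # run only when output value exceeds price
    return margin >= annualized_capex

for value in (20, 40, 60, 80):                 # $/MWh of electricity consumed
    for capex in (50_000, 150_000, 300_000):   # $/MW-yr of annualized capital cost
        if breaks_even(value, capex):
            print(f"output value ${value}/MWh, capex ${capex / 1000:.0f}k/MW-yr: viable")
```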
As decarbonisation agendas mature, macro-energy systems modelling studies have increasingly focused on enhanced decision support methods that move beyond least-cost modelling to improve consideration of additional objectives and tradeoffs. One candidate is modelling to generate alternatives (MGA), which systematically explores alternative near-optimal solutions without explicit stakeholder elicitation. This paper provides comparative testing of four existing MGA methodologies and proposes a new Combination vector selection approach. We examine each existing method’s runtime, parallelizability, new-solution discovery efficiency, and spatial exploration in lower-dimensional (N ⩽ 100) spaces, as well as spatial exploration for all methods in a three-zone, 8760 h capacity expansion model case. To measure convex hull volume expansion, this paper formalizes a computationally tractable high-dimensional volume estimation algorithm. We find that the random vector approach provides the broadest exploration of the near-optimal feasible region and variable Min/Max provides the most extreme results, while the two tie on computational speed. The new Combination method provides an advantageous mix of the two. Additional analysis is provided on MGA variable selection, in which we demonstrate that MGA problems formulated over generation variables fail to retain cost-optimal dispatch and are thus not reflective of real operations of equivalent hypothetical capacity choices. As such, we recommend future studies utilize a parallelized Combination vector approach over the set of capacity variables for best results in computational speed and spatial exploration while retaining optimal dispatch.
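A minimal sketch of the random-vector MGA pattern on a toy three-technology capacity expansion LP. The technologies, costs, and 10% cost slack are illustrative assumptions; real applications apply the same loop to the capacity variables of a full capacity expansion model.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Toy capacity expansion LP: minimize the cost of a three-technology capacity mix.
c = np.array([60.0, 45.0, 90.0])                  # $/kW-yr for three hypothetical technologies
A_eq, b_eq = np.ones((1, 3)), np.array([100.0])   # total capacity must cover peak demand
bounds = [(0, 80), (0, 60), (0, None)]            # per-technology build limits

base = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
budget = 1.10 * base.fun                          # accept solutions within 10% of least cost

alternatives = []
for _ in range(20):
    w = rng.normal(size=3)                        # random objective over the capacity variables
    res = linprog(w, A_ub=c[None, :], b_ub=[budget],
                  A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    alternatives.append(res.x)

alternatives = np.array(alternatives)
print("near-optimal capacity ranges (min / max by technology):")
print(alternatives.min(axis=0).round(1), alternatives.max(axis=0).round(1))
```

Swapping the random objective w for signed unit vectors gives the variable Min/Max style of search over the same near-optimal budget constraint, and the independent solves parallelize trivially.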
Enhanced geothermal systems (EGS) are one of a small number of emerging energy technologies with the potential to deliver firm carbon-free electricity at large scale, but are often excluded from macro-scale decarbonization studies due to uncertainties regarding their cost and resource potential. Here we combine empirically grounded near-term EGS cost estimates with an experience curves framework, by which costs fall as a function of cumulative deployment, to model EGS deployment pathways and impacts on the United States electricity sector from the present day through 2050. We find that by initially exploiting limited high-quality geothermal resources in the western US, EGS can achieve early commercialization and experience-based cost reductions that enable it to supply up to a fifth of total US electricity generation by 2050 and substantially reduce the cost of decarbonization nationwide. Higher-than-expected initial EGS costs could inhibit early growth and constrain the technology’s long-run potential, though supportive policies can counteract these effects.
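A minimal sketch of the experience-curve relationship assumed in this kind of framework, in which unit cost falls as a power law of cumulative deployment. The initial cost, deployment base, and 15% learning rate below are illustrative assumptions rather than the paper's calibrated EGS values.

```python
import math

def experience_cost(cumulative_gw, initial_cost, initial_gw, learning_rate):
    """Unit cost after cumulative deployment, for a given learning rate
    (the fractional cost reduction per doubling of cumulative capacity)."""
    b = -math.log2(1 - learning_rate)            # experience exponent
    return initial_cost * (cumulative_gw / initial_gw) ** (-b)

for gw in (1, 2, 4, 8, 16):
    cost = experience_cost(gw, initial_cost=8000, initial_gw=1, learning_rate=0.15)
    print(f"{gw:>2} GW deployed -> ~${cost:,.0f}/kW")
```

With a 15% learning rate, each doubling of cumulative capacity multiplies unit cost by 0.85, which is why early deployment of the limited high-quality resources matters for long-run costs.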
Taking aim at one of the largest greenhouse gas emitting sectors, the US Environmental Protection Agency (EPA) finalized new regulations on power plant greenhouse gas emissions in May 2024. These rules take the form of different emissions performance standards for different classes of power plant technologies, creating a complex set of regulations that make it difficult to understand their consequential impacts on power system capacity, operations, and emissions without dedicated and sophisticated modeling. Here, we enhance a state-of-the-art power system capacity expansion model by incorporating new detailed operational constraints tailored to different technologies to represent the EPA’s rules. Our results show that adopting these new regulations could reduce US power sector emissions in 2040 to 51% below the 2022 level (vs 26% without the rules). Regulations on coal-fired power plants drive the largest share of reductions. Regulations on new gas turbines incrementally reduce emissions but lower overall efficiency of the gas fleet, increasing the average cost of carbon mitigation. Therefore, we explore several alternative emission mitigation strategies. By comparing these alternatives with regulations finalized by EPA, we highlight the importance of accelerating the retirement of inefficient fossil fuel-fired generators and applying consistent and strict emissions regulations to all gas generators, regardless of their vintage, to cost-effectively achieve deep decarbonization and avoid biasing investment decisions towards less efficient generators.
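As one stylized illustration of such technology-tailored operational constraints, the sketch below adds an annual capacity-factor limit on a covered unit class to a small linear dispatch problem. The two-unit system, costs, and 40% limit are illustrative assumptions and do not reproduce the finalized EPA rule parameters.

```python
import numpy as np
from scipy.optimize import linprog

T = 168                                                       # one illustrative week of hours
demand = 60.0 + 30.0 * np.random.default_rng(2).random(T)     # MW, stand-in hourly load
cap = {"covered_gas": 80.0, "other_gas": 120.0}               # MW of installed capacity
cost = {"covered_gas": 30.0, "other_gas": 45.0}               # $/MWh variable cost
cf_limit = 0.40                                               # capacity-factor cap on the covered class

# Decision variables: hourly generation of each unit, stacked [covered | other].
c = np.concatenate([np.full(T, cost["covered_gas"]), np.full(T, cost["other_gas"])])
A_eq = np.hstack([np.eye(T), np.eye(T)])                      # hourly demand balance
A_ub = np.concatenate([np.ones(T), np.zeros(T)])[None, :]     # sum_t gen_covered[t] <= CF * cap * T
b_ub = [cf_limit * cap["covered_gas"] * T]
bounds = [(0, cap["covered_gas"])] * T + [(0, cap["other_gas"])] * T

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=demand, bounds=bounds)
covered_cf = res.x[:T].sum() / (cap["covered_gas"] * T)
print(f"covered unit runs at a capacity factor of {covered_cf:.0%}")
```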
Runtime and memory requirements for typical formulations of energy system models increase non-linearly with resolution, computationally constraining large-scale models despite state-of-the-art solvers and hardware. This scaling paradigm requires omission of detail, which can affect key outputs to an unknown degree. Recent algorithmic innovations employing decomposition have enabled linear increases in runtime and memory use as temporal resolution increases. Newly tractable, higher-resolution systems can be compared with the lower-resolution configurations commonly employed today in academic research and industry practice, providing a better understanding of the potential biases or inaccuracies introduced by these abstractions. We employ a state-of-the-art electricity system planning model and new high-resolution systems to quantify the impact of varying degrees of spatial, temporal, and operational resolution on results salient to policymakers and planners. We find that models with high spatial and temporal resolution produce more realistic siting decisions and improved emissions, reliability, and price outcomes. Errors are generally larger in systems with low spatial resolution, which omit key transmission constraints. We demonstrate that high temporal resolution cannot overcome biases introduced by low spatial resolution, and vice versa. While we see asymptotic improvements to total system cost and reliability with increased resolution, other salient outcomes such as siting accuracy and emissions exhibit continued improvement across the range of model resolutions considered. We conclude that modelers should carefully balance resolution on spatial, temporal, and operational dimensions and that novel computational methods enabling higher-resolution modeling are valuable and can further improve the decision support provided by this class of models.
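A minimal sketch of one common lower-resolution abstraction benchmarked in this kind of comparison: clustering the year into a few representative weeks, which shrinks the operational problem at the cost of chronological detail. The synthetic load series and the choice of four clusters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
hours = np.arange(8760)
load = (70 + 20 * np.sin(2 * np.pi * hours / 8760)      # seasonal swing
        + 10 * np.sin(2 * np.pi * hours / 24)           # diurnal swing
        + 5 * rng.standard_normal(8760))                # noise

weeks = load[: 52 * 168].reshape(52, 168)               # one candidate profile per week

# Plain k-means (Lloyd's algorithm) over weekly profiles.
k = 4
centroids = weeks[rng.choice(52, size=k, replace=False)]
for _ in range(50):
    labels = ((weeks[:, None, :] - centroids[None]) ** 2).sum(-1).argmin(1)
    centroids = np.array([weeks[labels == j].mean(0) if np.any(labels == j)
                          else centroids[j] for j in range(k)])

weights = np.bincount(labels, minlength=k)              # real weeks represented by each cluster
print("representative-week weights:", weights,
      "->", k * 168, "modeled hours instead of 8760")
```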
Limited transmission capacity expansion is likely to be a bottleneck restricting the variable renewable energy (VRE) deployment required to achieve ambitious emission reduction goals. Interconnection and inter-zonal transmission buildout may be reduced by optimally sizing VRE relative to grid connection capacity and by co-locating VRE and battery resources behind the point of interconnection. However, neither of these capabilities is commonly captured in macro-energy system models. We develop two new functionalities to explore the substitutability of storage for transmission and the optimal capacity and siting decisions of renewable energy and battery resources through 2030 in the Western Interconnection of the United States. Our findings indicate that modeling optimized interconnection and storage co-location better captures the full value of energy storage and its ability to substitute for transmission. Optimizing interconnection capacity and co-location can reduce total grid connection and shorter-distance transmission capacity expansion on the order of 10% at storage penetration equivalent to 2.5-10% of peak system demand. The decline in interconnection capacity corresponds with greater ratios of VRE to grid connection capacity (an average of 1.5-1.6 megawatts (MW) of PV per MW of inverter capacity and 1.2-1.3 MW of wind per MW of interconnection capacity). Co-locating storage with VRE also results in a 10-15% increase in wind capacity, as wind sites tend to require longer and more costly interconnection. Finally, co-located storage exhibits 22-25% higher value than standalone storage in our model setup. Given the coarse representation of transmission networks in our modeling, this outcome likely overstates the real-world importance of storage co-location with VRE. However, it highlights how siting storage in grid-constrained locations can maximize the value of storage and reduce transmission expansion.
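A minimal sketch of the sizing tradeoff behind these ratios: shrinking the grid connection behind a fixed VRE plant clips some energy but avoids interconnection cost. The solar profile, energy value, and interconnection cost are illustrative assumptions, not values from the study.

```python
import numpy as np

hours = np.arange(8760)
diurnal = np.clip(np.sin((hours % 24 - 6) / 12 * np.pi), 0, None)     # zero at night, peak at noon
seasonal = 0.75 + 0.25 * np.sin(2 * np.pi * hours / 8760 - np.pi / 2)
cf = diurnal * seasonal                     # stand-in hourly PV capacity factors

plant_mw = 100.0
energy_value = 40.0                         # $/MWh for delivered energy
interconnection_cost = 30_000.0             # $/MW-yr of grid connection capacity
full_output = (cf * plant_mw).sum()         # MWh/yr with an unconstrained connection

for grid_mw in (100.0, 90.0, 80.0, 70.0, 60.0):
    delivered = np.minimum(cf * plant_mw, grid_mw).sum()
    clipping_loss = energy_value * (full_output - delivered)
    savings = interconnection_cost * (plant_mw - grid_mw)
    print(f"{plant_mw / grid_mw:.2f} MW PV per MW of grid connection: "
          f"clipping costs ${clipping_loss / 1e3:.0f}k/yr vs ${savings / 1e3:.0f}k/yr saved")
```

The printed comparison shows where, for this synthetic profile, clipping losses begin to outweigh the interconnection savings; that margin is what drives PV-to-inverter ratios above one.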
Scheduled maintenance is likely to be lengthy and therefore consequential for the economics of fusion power plants. The maintenance strategy that maximizes the economic value of a plant depends on internal factors, such as the cost and durability of the replaceable components and the frequency and duration of the maintenance blocks, as well as on external factors related to the electricity system in which the plant operates. This paper examines the value of fusion power plants with various maintenance properties in a decarbonized United States Eastern Interconnection circa 2050. Seasonal variations in electricity supply and demand mean that certain times of year, particularly spring to early summer, are best for scheduled maintenance. This seasonality has two important consequences. First, the value of a plant can be 15% higher than what one would naively expect if value were directly proportional to its availability. Second, in some cases, replacing fractions of a component in shorter maintenance blocks spread over multiple years is better than replacing it all at once during a longer outage, even though the overall availability of the plant is lower in the former scenario.
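A minimal sketch of the first point, comparing the value captured when a maintenance block is scheduled in the cheapest contiguous window of a synthetic price year against the naive availability-proportional estimate. The price shape and six-week outage length are illustrative assumptions, not outputs of the study's 2050 Eastern Interconnection case.

```python
import numpy as np

hours = np.arange(8760)
# Synthetic price shape: cheaper in spring and fall, pricier at summer and winter peaks.
price = 40 + 25 * np.cos(4 * np.pi * hours / 8760) + 10 * np.sin(2 * np.pi * hours / 24)

outage_hours = 6 * 7 * 24                          # six-week maintenance block
window_value = np.convolve(price, np.ones(outage_hours), mode="valid")
start = int(window_value.argmin())                 # cheapest contiguous window of the year

available = np.ones(8760, bool)
available[start:start + outage_hours] = False

captured = price[available].sum()
naive = price.sum() * available.mean()             # value if it scaled with availability alone
print(f"outage starts at hour {start}; captured value is "
      f"{captured / naive - 1:+.1%} relative to the availability-proportional estimate")
```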
In the coming decades, the United States aims to undergo an energy transition away from fossil fuels and toward a fully decarbonized power grid. There are many pathways that the US could pursue toward this objective, each of which relies on different types of generating technologies to provide clean and reliable electricity. One potential contributor to these pathways is advanced nuclear fission, which encompasses various innovative nuclear reactor designs. However, little is known about how cost-competitive these reactors would be compared to other technologies, or about which aspects of their designs offer the most value to a decarbonized power grid. We employ an electricity system optimization model and a case study of a decarbonized US Eastern Interconnection circa 2050 to generate initial indicators of future economic value for advanced reactors and the sensitivity of future value to various design parameters, the availability of competing technologies, and the underlying policy environment. These results can inform long-term cost targets and guide near-term innovation priorities, investments, and reactor design decisions. We find that advanced reactors must cost $5.1-$6.6/W or less to gain an initial market share (assuming a 30-year asset life and 3.5-6.5% real WACC), while reactors that include thermal storage in their designs can cost up to $5.5-$7.0/W (not including the cost of storage). Because the marginal value of advanced fission reactors declines as market penetration increases, break-even costs fall by around 19% at 100 GW of cumulative capacity and by around 40% at 300 GW. Additionally, policies that provide investment tax credits for nuclear energy create the most favorable environment for advanced nuclear fission. Stakeholders and investors should consider these findings when deciding which technologies to pursue for decarbonizing the US power grid.
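A minimal sketch of the break-even logic, converting an assumed annual market value into the overnight capital cost whose annuity it can cover over a 30-year life at the quoted WACC range. The annual value and fixed O&M figures are illustrative assumptions, so the printed numbers will not exactly reproduce the study's $5.1-$6.6/W range.

```python
def crf(wacc, years):
    """Capital recovery factor: converts an overnight cost into an equivalent annual payment."""
    return wacc * (1 + wacc) ** years / ((1 + wacc) ** years - 1)

def break_even_capex(annual_value_per_kw, fixed_om_per_kw, wacc, years=30):
    """Overnight cost ($/kW) at which annualized fixed cost equals annual market value."""
    return (annual_value_per_kw - fixed_om_per_kw) / crf(wacc, years)

for wacc in (0.035, 0.065):
    capex = break_even_capex(annual_value_per_kw=480.0, fixed_om_per_kw=120.0, wacc=wacc)
    print(f"WACC {wacc:.1%}: break-even capex ~ ${capex / 1000:.1f}/W")
```

For a fixed annual value, a higher discount rate lowers the break-even capital cost, which is why the financing assumption matters as much as the value estimate itself.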
We consider electricity capacity expansion models, which optimize investment and retirement decisions by minimizing both investment and operation costs. In order to provide credible support for planning and policy decisions, these models need to include detailed operations and time-coupling constraints and allow modeling of discrete planning decisions. Such requirements result in large-scale mixed-integer optimization problems that are intractable with off-the-shelf solvers. Hence, practical solution approaches often rely on carefully designed abstraction techniques to find the best compromise between reduced temporal and spatial resolution and model accuracy. Benders decomposition methods offer scalable approaches to leverage distributed computing resources and enable models with both high resolution and computational performance. Unfortunately, such algorithms are known to suffer from instabilities, resulting in oscillations between extreme planning decisions that slow convergence. In this study, we implement and evaluate several level-set regularization schemes to avoid the selection of extreme planning decisions. Using a large capacity expansion model of the Continental United States with over 70 million variables as a case study, we find that a regularization scheme that selects planning decisions in the interior of the feasible set shows superior performance compared to previously published methods, enabling high-resolution, mixed-integer planning problems with unprecedented computational performance.
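A minimal sketch of a level-set regularized Benders loop on a toy one-dimensional capacity problem: each iteration adds a cut, computes bounds, and then selects the next trial point closest to the current stability centre subject to a level target, rather than jumping to a potentially extreme master solution. The single-variable setup, cost numbers, and the choice of alpha = 0.5 are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import linprog

# Toy problem: choose capacity x to minimize c*x + Q(x), where
# Q(x) = penalty * E_s[max(d_s - x, 0)] is the expected cost of unserved demand
# across scenarios; Q is convex and piecewise linear, so Benders cuts apply.
c, penalty = 50.0, 200.0
demand = np.array([60.0, 80.0, 120.0])
prob = np.array([0.5, 0.3, 0.2])

def subproblem(x):
    """Return Q(x) and a subgradient of Q at x."""
    shortfall = np.maximum(demand - x, 0.0)
    return penalty * float(prob @ shortfall), -penalty * float(prob @ (demand > x))

cuts, x_stab, best_ub, alpha = [], 0.0, np.inf, 0.5
for it in range(50):
    q, g = subproblem(x_stab)
    best_ub = min(best_ub, c * x_stab + q)                # every trial point gives an upper bound
    cuts.append((q, g, x_stab))

    # Master LP over (x, theta >= 0): its optimum is a valid lower bound.
    A = [[g_k, -1.0] for q_k, g_k, x_k in cuts]
    b = [g_k * x_k - q_k for q_k, g_k, x_k in cuts]
    lower_bound = linprog([c, 1.0], A_ub=A, b_ub=b,
                          bounds=[(0, None), (0, None)]).fun
    if best_ub - lower_bound < 1e-6 * best_ub:
        break

    # Level-set step: move to the point closest to the stability centre whose
    # modeled cost meets a target between the lower and upper bounds.
    level = lower_bound + alpha * (best_ub - lower_bound)
    A_lvl = [[g_k, -1.0, 0.0] for q_k, g_k, x_k in cuts]                # cuts
    b_lvl = [g_k * x_k - q_k for q_k, g_k, x_k in cuts]
    A_lvl += [[c, 1.0, 0.0], [1.0, 0.0, -1.0], [-1.0, 0.0, -1.0]]       # level and |x - x_stab| <= t
    b_lvl += [level, x_stab, -x_stab]
    x_stab = linprog([0.0, 0.0, 1.0], A_ub=A_lvl, b_ub=b_lvl,
                     bounds=[(0, None), (0, None), (0, None)]).x[0]

print(f"capacity ~{x_stab:.1f} MW, cost ~{best_ub:.0f} after {it + 1} iterations")
```

In the paper's setting the analogous step operates over many planning variables and a mixed-integer master problem; the toy above only illustrates the stabilizing role of the level constraint relative to unregularized cutting planes.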
Corporations and other organizations procure large amounts of carbon-free electricity and often use these procurements to make claims regarding the carbon intensity of their electricity consumption. Although a claim of carbon-free electricity use implies to the public that an organization’s electricity consumption and procurement practices have a near-zero aggregate impact on climate change, this may not be the case; indeed, multiple proposed emission accounting systems offer different definitions of being “carbon-free.” Here, we assess how the carbon-free procurement strategies associated with several of these accounting systems affect emission outcomes at the level of the entire electricity system, using a case study of voluntary actors in the western United States and accounting for induced changes in both system operations and installed capacity. We find that in the current US policy environment, procurement strategies that match participants’ demand with carbon-free generation on an annual basis have minimal impact on long-run system-level CO2 emissions. Similar outcomes occur when participants calculate their annual emission impacts using short-run marginal emission rates and attempt to offset these with their procurements. In contrast, we find that matching participants’ demand on an hourly basis with carbon-free generation can drive significant reductions in system-level CO2 emissions while incentivizing advanced clean firm generation and long-duration storage technologies that would not otherwise see market uptake. Greater emission impacts are correlated with increased participant costs. We further find that government-imposed clean electricity standards can increase the effectiveness of all forms of voluntary procurement.
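A minimal sketch of the accounting distinction at the heart of these strategies, contrasting annual and hourly matching scores for a flat load served by a solar procurement sized to cover annual consumption. The profiles are synthetic stand-ins; the study's emission results come from a full capacity expansion model, not from this accounting arithmetic alone.

```python
import numpy as np

hours = np.arange(8760)
demand = np.full(8760, 100.0)                                    # MW, flat participant load
solar = np.clip(np.sin((hours % 24 - 6) / 12 * np.pi), 0, None)  # normalized solar shape

# Size a solar procurement so purchased energy equals consumption over the year.
procured = solar * demand.sum() / solar.sum()

annual_match = procured.sum() / demand.sum()                     # 100% by construction
hourly_match = np.minimum(procured, demand).sum() / demand.sum()
print(f"annual matching score: {annual_match:.0%}")
print(f"hourly (24/7) matching score: {hourly_match:.0%}")
# The gap between the two is the share of consumption still served in hours
# without matched carbon-free generation.
```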