GHG Protocol Consequential Impacts Survey

Ever.green's Response Guide

GHG Protocol Survey deadline: Saturday, January 31, 2026 

Every voice matters: This document contains Ever.green's complete responses to the GHG Protocol's Consequential Impacts survey (52 questions).

You do not have to answer every question to participate.

There are two GHG Protocol surveys. This page is about the consequential impacts survey. We also offer guidance on the main survey about changes to Scope 2 rules.

Our responses reflect our position that the proposed hourly and location matching requirements will make long-term forward contracts (the mechanism everyone agrees drives new renewable energy deployment) more complicated, costly, and sometimes impossible. We're advocating for broader exemptions, stronger legacy protections, and approaches that preserve the mechanisms driving new clean energy deployment. Use these responses as a reference, but submit your own individual feedback by January 31, 2026.

How to use this document

Make your own copy to draft your responses before submitting through the official survey.

Read Ever.green's recommended responses below. Use them as you see fit as you draft your own answers to the survey questions.

Answer all appropriate questions:
If you have a perspective or data to share to inform the revisions, please answer. We left blank a number of questions that were not targeted at Ever.green. If you have limited time, we recommend answering these essential questions.

Character limits:
Open text questions have a 4,000 character maximum. This supersedes the 300-word limit stated on some questions.

Answer formats:
Questions marked "Select only one" allow a single answer. Questions marked "Select all that apply" allow multiple answers.

Official Survey Links

When you're ready, fill out the official survey:

Resources from GHG Protocol for Consequential Impacts survey:

Resources for GHG Protocol Scope 2 changes (separate survey):

Consequential Impacts Survey

General Feedback

18. What potential benefits, challenges, or unintended consequences do you foresee with developing and using consequential accounting methods for electricity-sector actions? Please include any practical considerations (e.g., feasibility, data needs, costs, comparability, clarity of claims). (4,000 characters)

SIGNIFICANT POTENTIAL BENEFITS

Consequential accounting provides a structured way to distinguish between electricity actions that primarily reallocate existing clean supply and those that are more likely to drive changes in the power system. Today, many electricity procurement claims rely on implicit consequential narratives without any consistent or transparent method for assessing system-level impact. Developing consequential methods would replace informal or overstated claims with clearer disclosures.

A standardized consequential framework would also create a stronger analytical foundation for other programs to build on, including target-setting initiatives, claims guidance, and voluntary standards. This reduces fragmentation and avoids the need for each program to independently define impact concepts such as additionality, marginal emissions, or system effects.

CHALLENGES AND PRACTICAL CONSIDERATIONS

The main challenges relate to data availability, counterfactual assumptions, and uncertainty. Estimating marginal emissions or system responses will always involve simplifications and modeling choices. The existence of uncertainty should not be grounds for inaction. Directionally accurate methods, applied transparently and conservatively, are preferable to ignoring consequential impacts altogether.

Another challenge is the breadth of what consequential accounting could be expected to achieve. In practice, consequential accounting is often asked to serve two distinct purposes. One is qualifying whether an action is likely to have a positive system-level impact, for example by enabling new clean electricity generation on grids that are as carbon-intensive as or more carbon-intensive than the reporter’s load. The other is quantifying the avoided emissions from the project enabled, typically expressed as tCO2.

While emissions outcomes should not be ignored, attempting to precisely quantify avoided emissions attributable to individual actors introduces substantial complexity, uncertainty, and risk of false precision. In many cases, consequential methods are more robust when used primarily for qualification and screening purposes rather than producing exact emissions totals. This still improves integrity by distinguishing actions that plausibly contribute to decarbonization from those that do not.

Too often, efforts to define consequential methods are challenged based on edge cases or methodological shortcomings. Perfect certainty is neither achievable nor necessary for these methods to be useful. Similar approaches are already widely used in policy analysis and emissions modeling, despite known limitations.

Costs and complexity also matter, particularly for smaller organizations. Additional challenges include consistent application across projects and transactions and the ability to support auditable analysis. Tests for additionality often rely on private financial models or other non-public information, which can limit transparency and comparability. These realities reinforce the importance of keeping consequential accounting optional and clearly differentiated from attributional Scope 2 reporting.

UNINTENDED CONSEQUENCES AND SAFEGUARDS

The primary risk would arise if consequential methods were used to invalidate attributional reporting. Clear guidance on how consequential accounting is intended to be used alongside attributional accounting is therefore essential.

Other risks include false precision and false confidence. Results should be communicated with appropriate context and disclosures, rather than as definitive or universally comparable figures.

CONCLUSION

Overall, we see great upside and little downside in developing consequential accounting methods for electricity-sector actions. Doing so does not require universal adoption or perfect certainty. It represents a pragmatic step toward recognizing system-level impacts more honestly, while providing a foundation that other programs can refine and build upon over time.

Back to top ↑

Formula for quantifying emissions impacts from electricity projects

This section introduces equations used to quantify consequential emissions impacts from electricity procurement, as a first step toward an emissions impact methodology. It briefly summarizes formulas in existing GHG Protocol guidance and presents the Scope 2 TWG subgroup’s proposed equation, then seeks feedback on its structure (e.g., primary vs. secondary effects, reporting period).

19. Referencing Section 6.1 in the Consequential Electricity-Sector Emissions Impacts document, is the proposed Scope 2 TWG subgroup formula appropriate for quantifying emissions impacts from electricity projects? 

Please refer to the structure of the formula itself, and save comments on methodological details, such as marginal emission rates or eligibility requirements, for following sections of the survey.

Select only one:

Yes

No

20. Please explain your answer to question 19

INDUCED EMISSIONS ARE MISSING
The proposed formula is incomplete and inappropriate as a standalone method for quantifying emissions impacts from electricity projects, especially when applied to energy storage. The formula accounts only for avoided emissions (marginal emission rate × procured electricity), ignores induced emissions (equally important for determining net impact), and does not make clear how additionality is considered.

The omission of induced emissions is particularly problematic for energy storage. Storage introduces new load to the grid when it charges and avoids emissions when it discharges. Because storage shifts energy temporally, its net emissions impact depends on differences in marginal emissions between charging and discharging periods.

If the formula only counts avoided emissions, such as during discharging hours, it will systematically overstate the benefit of storage unless it also subtracts emissions caused during charging. Without accounting for charging-induced emissions, storage that increases net grid emissions can nonetheless appear as a source of avoided emissions under the proposed formula, creating a material credibility risk for emissions impact claims.

For example, a battery that charges at night on a coal-heavy grid and discharges during the day when solar generation is already abundant may increase overall emissions. Under the proposed formula, it would be credited for avoided emissions during discharge hours without accounting for induced emissions during charging.

Even with “hourly matched” market-based procurement, energy storage is not required to demonstrate that clean electricity used to charge is physically deliverable at the time and location of charging. Furthermore, the proposed standard for physical deliverability under the market-based method revisions is insufficient to demonstrate that batteries are actually charged using zero-emission electricity. If claims are intended to assert that storage was charged with 100 percent clean electricity, a substantially stricter standard would be required.

Absent such proof, charging typically induces fossil generation, even during periods when renewables are producing electricity. Unless the marginal generator at the time of charging is a zero-emission resource, additional load causes fossil-fueled generators to ramp to meet demand.

This is why methods such as the Marginal Impact Method explicitly calculate both induced and avoided emissions and determine net impact:

 - Induced emissions: Charging MWh × Marginal Emission Rate (charge)
 - Avoided emissions: Discharging MWh × Marginal Emission Rate (discharge)
 - Net impact: Avoided − Induced

This structure is consistent with GHG Protocol Project Guidance and CDM methodologies, which assess changes relative to a baseline and include both increases and decreases in emissions. The proposed formula departs from this precedent and risks systematic bias if used in isolation.
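The net-impact structure described above can be sketched as follows. This is an illustrative sketch only; the MWh quantities and marginal emission rates are hypothetical, and a real analysis would use observed dispatch data and time- and location-specific marginal rates.

```python
# Sketch of the net-impact structure (avoided minus induced) described
# above. All figures below are hypothetical, for illustration only.

def net_emissions_impact(charge_mwh, discharge_mwh,
                         charge_rates, discharge_rates):
    """Net impact (tCO2) = avoided - induced, summed over periods.

    Rates are marginal emission rates (tCO2/MWh) for the hours in which
    the storage asset charged or discharged.
    """
    induced = sum(m * r for m, r in zip(charge_mwh, charge_rates))
    avoided = sum(m * r for m, r in zip(discharge_mwh, discharge_rates))
    return avoided - induced

# A battery charging 10 MWh overnight on a coal-heavy margin
# (~0.9 tCO2/MWh) and discharging 9 MWh (after losses) into midday
# hours where solar is already on the margin (~0.2 tCO2/MWh):
net = net_emissions_impact([10.0], [9.0], [0.9], [0.2])
# net = 1.8 - 9.0 = -7.2 tCO2: the battery increased net emissions,
# which a formula counting only avoided emissions would not detect.
```

A formula that reported only the avoided term (9 MWh × 0.2) would credit this battery with 1.8 tCO2 of benefit while it caused 9.0 tCO2 of induced emissions.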

ADDITIONALITY IS GATING
Additionality must be an explicit eligibility or gating condition for use of the formula. We acknowledge that additionality may be handled elsewhere, but the formula as written could be misused without consideration of additionality. The formula cannot be meaningfully applied to short-term or spot market instruments, such as an unbundled $1 REC, to infer consequential impact. Without additionality, the formula risks attributing system-level emissions impacts to actions that do not plausibly influence generation, dispatch, or investment decisions.

IN SUMMARY
To ensure environmental integrity and applicability to modern grid-interactive technologies such as storage and flexible demand, any consequential accounting approach must account for both electricity consumed and electricity delivered or displaced, evaluated against time- and location-specific marginal emissions, and applied only where additionality can be reasonably demonstrated.

21. Should the quantification of emissions impacts from electricity projects consider secondary effects in addition to primary effects?

Select only one:

Yes

No

22. If you answered “yes” to question 21, please provide additional context for what kinds of secondary effects should be considered, and how these may be quantified.

Yes, secondary effects should be included in consequential accounting when they are methodologically robust, standardized, and materially influence the net emissions impact of electricity projects. Primary effects, meaning emissions induced by electricity consumption and avoided through electricity generation or delivery, must always be included. These reflect the direct physical interaction between projects and the power system and are essential for any credible consequential method.

In addition, a limited set of secondary effects should be required where they can be quantified consistently and transparently across projects. In particular, long-term capacity and dispatch effects, such as build-margin or future displacement of fossil generation, are legitimate secondary effects that meaningfully influence the climate impact of electricity procurement and grid-interactive technologies. These effects can often be modeled using standardized data and assumptions and are important for capturing the structural impact of long-term clean energy investments.

By contrast, more speculative or highly assumption-dependent secondary effects, such as full life-cycle emissions, rebound effects, or indirect market responses, should remain optional. These may be appropriate for project-specific studies or lifecycle analyses, but they are not suitable as required components of routine Scope 2 consequential accounting due to their variability, lack of standardization, and limited auditability.

This tiered approach ensures that consequential accounting reflects the most important drivers of real-world emissions outcomes while remaining practical, transparent, and comparable across organizations and technologies.

23. If you answered “no” to question 21, please provide additional explanation for why secondary effects should not be considered.

Left blank

24. Should the emissions impacts of electricity projects be calculated and reported each reporting year, or should the emissions impacts for the entire lifetime of a project be reported once at the outset of the project?

Select only one:

Reported each year

Reported once for the lifetime of the project

25. Please explain your answer to question 24

Emissions impacts should be reported each year on an ex post basis. This ensures consistency with the core structure of the GHG Protocol, including the location-based and market-based Scope 2 methods, both of which are reported annually. Annual reporting preserves comparability across companies and time periods and allows impacts to reflect real grid conditions rather than one-time forecasts.

The emissions impact of electricity actions changes over time as grids decarbonize, transmission expands, and dispatch patterns evolve. Annual reporting captures these dynamics directly, rather than embedding long-term assumptions that introduce uncertainty and create incentives for optimistic or inconsistent modeling.

This is particularly important for grid-interactive technologies such as energy storage and flexible demand, whose emissions impacts depend on when and where electricity is consumed and delivered. Using observed operational data with marginal emissions rates provides a transparent and auditable way to estimate year-over-year impacts.

Organizations that wish to evaluate or disclose longer-term or project-lifetime impacts may do so through separate project-level or scenario-based analyses. However, such estimates should not replace standardized annual reporting for consequential Scope 2 emissions impacts, which is essential for maintaining credibility, comparability, and alignment with corporate GHG inventories.

Back to top ↑

Treatment of Additionality

This section introduces approaches for assessing additionality, the principle that claimed emission reductions or avoided emissions must result from actions that would not have occurred otherwise. It summarizes common tests used in existing programs and seeks feedback on which approaches are most appropriate, feasible, and rigorous for future GHG Protocol guidance. This input will help the AMI TWG refine additionality criteria as part of its continued work on avoided-emissions accounting, consistent with ISB direction.

26. For each of the provided additionality tests, indicate which tests should be included (required or optional) in a framework designed to assess additionality for renewable energy projects?

For these questions, “required” indicates a mandatory test, such that all projects must pass the test in question to be eligible. “Optional” indicates that a test can be used to demonstrate additionality, but is not mandatory. For optional tests, projects have the choice of which tests they use to demonstrate additionality.

Marked as REQUIRED:

Regulatory test

Timing test

Financial analysis test

Marked as OPTIONAL:

Positive list

Marked as NOT Required:

Barrier test

Common practice test

Performance standard

Contractual/tenor test

First-of-its-kind test

27. For the additionality tests you selected as required or optional, please provide commentary detailing why each should be included.

Additionality should be grounded in financial causality. A project or operational change should be considered additional only if the reporting entity’s action is financially material to whether it occurs or continues. This is the only criterion that directly tests whether corporate activity caused a change in the real world.

The core framework should therefore require three elements: a regulatory screen, a timing screen, and a financial analysis. The regulatory test ensures that actions required by law are not credited. The timing test ensures that the intervention occurred before or at the point when financial decisions were made. The financial analysis test then evaluates whether the company’s commitment was large enough, long enough, and sufficiently reliable to change the project’s economic outcome.

Contract structure and price both matter for financial materiality. A long-term contract at a low price may be immaterial, while a short-term contract at a high price could be decisive. There is no single contract length or instrument that guarantees additionality. What matters is whether the net present value or expected return of the project meaningfully changes because of the company’s commitment.

Positive lists can play a useful supporting role by identifying contract types and market contexts that are almost always financing-relevant, such as long-term bundled PPAs or long-term forward REC contracts in regions with limited policy support. When well defined, these can reduce transaction costs by allowing projects to qualify without full financial modeling. However, positive lists must be high-bar, narrowly scoped, and periodically updated as markets evolve, since arrangements that were once financially decisive can become business-as-usual over time.

Other tests such as common practice, first-of-its-kind, barrier, or performance standards are not reliable substitutes for financial causality. These approaches are often subjective, easily gamed, or poorly aligned with how projects are actually financed. They may provide contextual information, but they do not determine whether a corporate action changed a real investment or operational decision.

In summary, additionality in consequential accounting should be based on whether a corporate commitment is financially material and timely, with regulatory safeguards to avoid crediting mandated activity. Positive lists and contract features can help streamline this assessment, but they should always be anchored to the underlying question of whether the action changed what happened in the power system.

28. For each of the provided additionality tests, please indicate which tests are feasible to implement.

Select all that apply:

Regulatory test

Timing test

Financial analysis test

Barrier test

Common practice test

Positive list

Performance standard

Contractual / tenor test

First-of-its-kind test

None (no tests are feasible)

29. Please provide additional context or information on which tests are or are not feasible to implement. 

Several of the proposed additionality tests are feasible to implement in practice, provided they are defined clearly and applied consistently. Note that “feasible” does not mean they are adequate on their own as a test for additionality.

The regulatory test is feasible as a binary screen based on public law and policy. It can be used to exclude projects that are mandated by statute, regulation, or compliance programs.

The timing test is feasible when it is defined in relation to objective project milestones, such as contract execution, financial close, or final investment decision. When anchored to these milestones, it provides a practical way to test whether a corporate action plausibly influenced a project. One circumstance where a timing test can produce a false negative is when a PPA or other revenue contract is signed later to replace expected revenue that fell through after construction began.

The financial analysis test is feasible and widely used in project finance. It requires comparing baseline and post-intervention economics, such as IRR, NPV, or DSCR, to determine whether a contract or investment was financially material. While it requires access to project-level data, it can be implemented with standardized documentation and auditability. We have done just that at Ever.green across numerous projects and developers.
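The kind of before-and-after comparison described here can be sketched in a few lines. The cash flows, discount rate, and contract uplift below are hypothetical assumptions chosen for illustration; a real financial materiality assessment would use the project's actual pro forma.

```python
# Hypothetical sketch of a financial materiality test: does a corporate
# commitment change the project's NPV enough to flip the investment
# decision? All figures are illustrative assumptions, not real data.

def npv(rate, cashflows):
    """Net present value; cashflows[0] occurs at t=0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

capex = -100.0                             # upfront cost, $M (assumed)
merchant_only = [capex] + [9.0] * 15       # uncontracted revenue forecast
with_contract = [capex] + [12.0] * 15      # revenue with the corporate PPA

rate = 0.08                                # assumed discount rate
baseline = npv(rate, merchant_only)        # NPV-negative without contract
contracted = npv(rate, with_contract)      # NPV-positive with contract

# If the project fails the investment hurdle without the commitment and
# clears it with the commitment, the commitment is plausibly decisive.
decisive = baseline < 0 <= contracted
```

The same structure applies whether the relevant metric is NPV, IRR, or DSCR: the test compares project economics with and without the reporting entity's commitment, not the contract's length or form in isolation.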

Positive lists are feasible as a way to streamline this process when certain contract types and market contexts are known to be financing-relevant. For example, long-term bundled PPAs or long-term forward REC contracts in specific markets can reasonably be presumed to be financially material. To remain credible, positive lists must be narrowly defined and periodically updated as market conditions change.

The contractual or tenor test is also feasible, since contract length, structure, and price are objective and easily documented. Long-term offtake or revenue agreements are the primary mechanism through which electricity projects obtain financing. However, contract length alone is not a good test for additionality and would be a poor substitute for a financial analysis test.

First-of-its-kind and common practice tests are also feasible in principle, as they rely on observable market and deployment data. However, in mature electricity markets these tests will often fail and provide limited information about whether a given corporate action caused a project to proceed. They are therefore of limited relevance for evaluating the additionality of voluntary procurement intended to accelerate, rather than merely initiate, clean energy deployment.

By contrast, barrier tests and performance standards are less feasible for routine use because they depend heavily on subjective judgments, context-specific assumptions, or benchmarks that are difficult to maintain consistently across regions and over time.

30. Please list any additionality tests not already included here that should be considered as part of an additionality framework for renewable energy projects. Please explain why each test should be considered.

We do not see a need for any other additionality tests beyond those already included in the framework. The existing tests, particularly regulatory, timing, and financial materiality, are sufficient to assess whether a corporate action plausibly caused a project or operational change to occur.

However, there is a need for clearer guidance on how to quantify emissions impacts once additionality has been established, especially for grid-interactive and dispatchable assets such as energy storage or flexible demand. In these cases, emissions impacts may arise from changes in operational behavior rather than from asset construction alone.

Approaches such as dynamic baselining can play a valuable role in impact quantification by comparing observed operation against a modeled counterfactual, for example revenue-maximizing dispatch versus emissions-optimized dispatch. This can help quantify incremental emissions reductions attributable to a reporting entity’s intervention, particularly for behavioral or operational changes.

That said, dynamic baselining should not be treated as a standalone additionality test. It does not establish causality, but rather measures the magnitude of impact once causality has been demonstrated through other tests. We therefore recommend that such methods be addressed within consequential accounting or impact quantification guidance, rather than being incorporated as additionality gates.

Maintaining a clear separation between tests for causality and methods for impact measurement will help preserve rigor, comparability, and auditability while still allowing the framework to recognize meaningful operational emissions reductions.

31. Should regional differences be considered in additionality tests (e.g. different combinations of additionality tests would be relevant or appropriate for different regions)?

Select only one:

Yes

No

Unsure, depends on details

32. If you answered "yes" to question 31, please explain your answer, referencing specific examples of regions that warrant different kinds of tests.

While we answered “no”, we want to provide some context.

Additionality tests should be structurally standardized and applied consistently across regions. Varying the tests themselves by geography would significantly reduce comparability across reporting entities and over time, and would materially increase implementation and audit complexity for companies operating in multiple markets.

Regional differences are better addressed through the inputs and evidence used to apply a common set of tests, rather than by changing the tests themselves. For example, regulatory context, market maturity, revenue expectations, and financing conditions can and should inform how a regulatory, timing, or financial materiality test is evaluated in a given region. However, the underlying logic of those tests should remain consistent.

Maintaining a uniform test structure ensures that additionality claims are comparable across geographies, reduces opportunities for gaming or regulatory arbitrage, and keeps the framework feasible for widespread adoption. This approach also aligns with the global role of the GHG Protocol, which is to provide consistent accounting guidance while allowing for region-specific data and context within that structure.

33. Should the level of rigor in additionality tests be applied differently depending on the type of claim an organization wants to make? (e.g. association vs. causal claim)

Select only one:

Yes

No

34. If you answered "yes" to question 33, please explain, citing the kinds of claims organizations should be able to make given different approaches to additionality tests.

Consequential accounting should be reserved for actions that plausibly cause a change in real-world emissions outcomes. Creating separate categories of “causal” and “associative” claims, each with different rigor requirements, would undermine the purpose of a consequential metric and create avoidable confusion.

Associative claims, such as claims based on unbundled EACs without evidence of financial or operational causality, do not demonstrate that an organization’s action changed generation, dispatch, or investment decisions. Allowing such claims into a consequential framework, even with reduced rigor, would make the metric vulnerable to gaming and blur the distinction between real impact and symbolic association.

Instead, consequential accounting should apply a single, consistent standard that requires plausible causality, net emissions accounting (including both induced and avoided emissions where relevant), and transparent, verifiable methods. Actions that do not meet this threshold can still be disclosed through other channels, but they should not be presented as consequential emissions impacts.

Maintaining one clear standard protects the credibility of consequential accounting, supports comparability across reporting entities, and ensures that the metric distinguishes between actions that genuinely influence emissions outcomes and those that do not.


Lastly, in designing this framework, it is also important to balance rigor with feasibility. Consequential accounting that relies on highly bespoke, expensive, or project-specific analyses risks limiting participation to a small subset of organizations and reducing overall impact. A standardized approach that is practical to implement at scale, even if it involves conservative assumptions and some residual uncertainty, is preferable to a theoretically perfect framework that is rarely used. Transparency, consistency, and auditability are more important than eliminating every possible edge case.

Back to top ↑

Marginal Emission Rates

This section introduces approaches for determining marginal emission rates, the emission factors that represent how changes in electricity generation or consumption affect grid emissions. It outlines existing methodologies for both operating and build margin calculations and requests feedback on which approaches are most appropriate, credible, and feasible to use in avoided-emissions accounting. This input will inform the AMI TWG’s continued development of consistent, sector-agnostic methods in line with ISB direction.

35. Which methodology or methodologies are appropriate for quantifying the operating margin emissions impacts of renewable energy projects?

Select all that apply:

SCED – fuel on the margin

SCED – locational

Scenario modeling

Heat-rate/LMP

Statistical

Capacity factor based

Difference-based

None

36. Which methodology or methodologies for quantifying the operating margin emissions impacts of renewable energy projects are not appropriate?

Select all that apply:

SCED – fuel on the margin

SCED – locational

Scenario modeling

Heat-rate/LMP

Statistical

Capacity factor based

Difference-based

None

37. Please provide any additional explanations or further details regarding which operating margin methodologies are or are not appropriate.

Several proposed methodologies are not appropriate for routine consequential emissions accounting because they rely on assumptions or proxies that do not reliably reflect causal marginal impacts.

(1) Scenario modeling is not well suited for routine accounting. While useful in research and long-term policy analysis, it depends on complex counterfactuals and numerous assumptions about how the grid would have evolved absent an intervention. Differences in modeling choices, baselines, and scenarios undermine comparability across entities and make results difficult to audit or replicate.

(2) Capacity-factor-based methods oversimplify grid operations by assuming that low-utilization generators are marginal. In practice, marginal generators change hour by hour based on system conditions, transmission constraints, and unit commitment decisions. Capacity factor is a poor proxy for marginal response and does not reflect how modern power systems actually dispatch resources.

(3) Difference-based methods attempt to infer marginal impacts by comparing total system emissions before and after an intervention. This approach fails to isolate causality, since grid emissions vary due to many confounding factors such as weather, outages, fuel prices, and demand fluctuations. Without robust controls, observed differences cannot be credibly attributed to a specific action.

By contrast, methods that rely on observed dispatch behavior and marginal response are more appropriate for consequential accounting.

Among these, SCED-based methods should be prioritized. Locational SCED approaches are preferred where available, as they capture congestion and nodal conditions and identify the generator that actually responds to a change in load or generation. Fuel-on-the-margin SCED approaches provide a useful alternative where locational detail is not feasible.

Statistical methods based on observed grid behavior can serve as a fallback where SCED sensitivity analysis is unavailable, provided they are well specified, transparent, and validated against observed outcomes.

Overall, a clear hierarchy should favor methods that use observable data, establish a causal link between the intervention and generator response, and are feasible to implement consistently across regions and reporting entities. This approach balances rigor with scalability and reduces the risk of both false precision and systematic bias.

38. Which methodology or methodologies are appropriate for quantifying the build margin emissions impacts of renewable energy projects?

Select all that apply:

Recent capacity additions

Policy scenario

Capacity expansion modeling

Average emission rate

None

39. Which methodology or methodologies for quantifying the build margin emissions impacts of renewable energy projects are not appropriate?

Select all that apply:

Recent capacity additions

Policy scenario

Capacity expansion modeling

Average emission rate

None

40. Please provide any additional explanations or further details regarding which build margin methodologies are or are not appropriate.

The most appropriate methodology for estimating build margin emissions factors is the recent capacity additions approach. This method is grounded in observed investment behavior rather than modeled futures. By using empirical data on the fossil generation that has most recently entered operation, it provides a transparent and credible proxy for what new clean energy is most likely to displace over time.

Using observed build data improves consistency and auditability, reduces reliance on speculative assumptions, and better aligns emissions factors with real-world market outcomes. It also supports comparability across reporting entities and regions, which is essential for standardized accounting.

By contrast, policy scenarios and capacity expansion models rely heavily on assumptions about future policy, market design, fuel prices, and technology costs. Results from these models can vary widely based on inputs and modeling choices, making them difficult to standardize and unsuitable for routine emissions accounting.

Similarly, average emissions rates do not reflect marginal displacement. By blending baseload, intermediate, and peaking resources, they obscure which fossil generation is actually avoided by new clean energy additions and dilute the signal needed to estimate consequential impacts.

For these reasons, recent capacity additions should be the primary, and preferably sole, methodology endorsed for build margin estimation in consequential electricity accounting.

41. How could GHG Protocol assess these models’ applicability to different types of projects? Factors that could affect applicability may include, but are not limited to, project size, shape, and capacity factor.

Rather than developing different build margin models for different project types, the GHG Protocol should prioritize a single, standardized build margin dataset that is applied consistently across all projects. Differences in project characteristics should be reflected through how build margin and operating margin impacts are weighted, not through the use of different underlying models.

Using a uniform build margin dataset supports comparability, consistency, and transparency across reporting entities and technologies. When different projects apply different build margin models, variation in results is driven as much by methodological choices as by real differences in impact, making results difficult to compare and audit.

Project-specific characteristics such as size, capacity factor, dispatchability, and operational flexibility are better addressed through weighting between build margin and operating margin impacts. For example, dispatchable or flexible resources may have a higher operating margin component, while non-dispatchable generation may rely more heavily on build margin effects. This approach preserves a common analytical foundation while still allowing emissions impacts to reflect meaningful differences in how projects interact with the grid.
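The weighting described above reduces to a convex combination of operating margin and build margin impacts over a single shared dataset. The weights and emissions figures in this sketch are hypothetical placeholders, not values proposed by Ever.green or the GHG Protocol.

```python
# Sketch of combining build margin (BM) and operating margin (OM) impacts
# through project-level weights over one shared BM dataset.
# All numbers are illustrative placeholders.

def combined_impact(om_impact_t: float, bm_impact_t: float, om_weight: float) -> float:
    """Weighted emissions impact in tCO2; OM and BM weights sum to 1."""
    assert 0.0 <= om_weight <= 1.0
    bm_weight = 1.0 - om_weight
    return om_weight * om_impact_t + bm_weight * bm_impact_t

# A dispatchable resource might carry a higher operating-margin weight...
dispatchable = combined_impact(om_impact_t=900.0, bm_impact_t=600.0, om_weight=0.75)
# ...while non-dispatchable generation leans more on the build margin.
non_dispatchable = combined_impact(om_impact_t=900.0, bm_impact_t=600.0, om_weight=0.25)
print(dispatchable, non_dispatchable)
```

Because both projects draw on the same underlying BM and OM values, any difference in their results comes from the weights, i.e., from how each project actually interacts with the grid, rather than from divergent models.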

To ensure scientific integrity, the selected build margin model should be empirical and backward-looking, rely on observed capacity additions, minimize reliance on speculative assumptions, and be transparent and reproducible using public data. Among the methodologies considered, recent capacity additions best meet these criteria and provide a credible proxy for long-run displacement of fossil generation.

In contrast, tailoring build margin models by project type, or relying on policy scenarios or capacity expansion models, would increase complexity, reduce comparability, and introduce unnecessary subjectivity. A single, centralized build margin dataset combined with clearly defined project-level weighting rules offers a more robust, scalable, and auditable approach for consequential electricity accounting.

42. What other types of emission rates or metrics may be appropriate for assessing the emissions impacts of projects?

A critical metric missing from many existing frameworks is a net marginal emissions metric that explicitly distinguishes between emissions induced by electricity consumption and emissions avoided through electricity procurement or generation.

Consequential emissions accounting should reflect both sides of this equation. Electricity consumption has a causal emissions impact at the margin, just as electricity generation or procurement can avoid emissions at the margin. Any framework that measures only avoided emissions, without accounting for induced emissions from load, captures only part of the system-level impact and risks overstating climate benefits.

This distinction is especially important for dispatchable and grid-interactive technologies such as energy storage, where emissions outcomes depend entirely on when electricity is consumed versus when it is delivered back to the grid. However, the same logic applies more broadly. All electricity consumers induce emissions through load, and changes in load shape, timing, or magnitude can materially affect system emissions.

Failing to account for induced emissions creates blind spots in several common scenarios, including:

- Procuring clean electricity while operating electricity-intensive processes during carbon-intensive hours,
- Increasing total electricity consumption while claiming avoided emissions from procurement,
- Shifting load in ways that unintentionally increase system emissions.

To address this, consequential metrics should calculate both:
- Induced emissions, based on electricity consumed and the marginal emissions rate at the time and location of consumption, and
- Avoided emissions, based on electricity delivered or displaced and the marginal emissions rate at the time and location of displacement.

These components should then be combined to determine net emissions impact. Including both induced and avoided marginal emissions provides a more complete, causal, and decision-relevant picture of emissions impact. It enables more credible disclosures, supports better-informed procurement and operational decisions, and reduces the risk that accounting frameworks reward actions that look beneficial on paper but do not deliver real-world emissions reductions.
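A minimal sketch of the net metric described above, assuming hypothetical marginal rates and volumes: induced emissions from consumption minus avoided emissions from delivery, each valued at the marginal rate for its own hour and location.

```python
# Minimal sketch of a net marginal emissions metric: induced emissions from
# consumption minus avoided emissions from delivery or displacement.
# All rates and MWh figures are hypothetical.

def net_impact(consumed, delivered):
    """Each argument is a list of (mwh, marginal_rate_tco2_per_mwh) pairs.

    Returns net tCO2: positive = net induced, negative = net avoided.
    """
    induced = sum(mwh * rate for mwh, rate in consumed)
    avoided = sum(mwh * rate for mwh, rate in delivered)
    return induced - avoided

# A storage asset charging in a clean hour and discharging in a dirty hour:
charge = [(10.0, 0.2)]    # 10 MWh consumed at 0.2 tCO2/MWh
discharge = [(8.5, 0.6)]  # 8.5 MWh delivered (after losses) at 0.6 tCO2/MWh
print(net_impact(charge, discharge))  # negative: a net emissions reduction
```

The same function run on an avoided-only basis (empty `consumed` list) shows how omitting induced emissions overstates the benefit, which is the blind spot the text warns about.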

Separately, it is important to maintain a clear distinction between attributional Scope 2 inventories and consequential metrics within GHG accounting, to preserve transparency and avoid conflating different types of claims. At the same time, if consequential results cannot meaningfully influence how progress is evaluated or decisions are made, there is a risk that the framework becomes informational rather than motivational. Clear separation in accounting does not preclude the use of consequential results by companies, programs, or stakeholders to assess ambition, quality, or progress toward climate goals.

43. What is the maximum appropriate level of spatial granularity for marginal emission rates?

Select only one:

Country

Grid region

Balancing area

Zonal

Nodal

44. Please provide context regarding your answer to question 43.

The appropriate spatial granularity for consequential emissions accounting should balance accuracy with feasibility and comparability. Where available, finer spatial resolution, including nodal data, can improve the accuracy of marginal emissions estimates by better reflecting congestion, redispatch, and the generator that responds to a change in load or generation. These effects are inherently local and cannot always be captured at broader regional levels.

At the same time, nodal resolution should not be required universally. Data availability, implementation burden, and the need for comparability across regions and reporting entities warrant a hierarchical approach. Coarser spatial resolutions, such as balancing authority or grid region, should be permitted where nodal data are unavailable or impractical to implement.

Importantly, allowing finer internal resolution does not require that reported results lose comparability with other Scope 2 methods. Comparability should be preserved through standardized aggregation, disclosure, and reporting conventions, rather than by constraining all analyses to the coarsest common denominator. This approach mirrors how other accounting systems handle differences in underlying data resolution while maintaining consistent, comparable outputs.

Finally, consequential accounting should not be constrained by the deliverability boundaries used in the Market-Based Method. Those boundaries were developed for attributional accounting and reflect practical compromises rather than marginal system behavior. Consequential accounting serves a different purpose and should be allowed to use more granular information where it materially improves causal attribution, while still maintaining clear separation from attributional Scope 2 inventories.

45. What is the maximum appropriate level of temporal granularity for marginal emission rates?

Select only one:

Annual

Monthly

Daily

Hourly

Sub-hourly

46. Please provide context regarding your answer to question 45.

Hourly temporal granularity is the appropriate default for consequential emissions accounting because marginal emissions vary materially over time. Generator dispatch, congestion, fuel switching, and renewable availability all change on an hourly basis, and coarser temporal resolutions obscure these dynamics and can materially misstate emissions impacts.

Using hourly data improves alignment with actual grid operations and reduces systematic error that arises when emissions rates are averaged over longer periods. This becomes increasingly important as grids incorporate higher shares of variable renewable energy, where diurnal and seasonal patterns drive large swings in marginal emissions.
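The systematic error from averaging can be illustrated with a toy example. All rates and load values here are hypothetical: a load concentrated in low-marginal-rate midday hours looks worse under a flat average rate than under hourly rates.

```python
# Illustrative comparison of hourly vs period-average marginal rates applied
# to the same load profile. All rates and load values are hypothetical.

hourly_rates = [0.65, 0.55, 0.30, 0.25, 0.55, 0.70]  # tCO2/MWh over 6 hours
load = [1.0, 1.0, 4.0, 4.0, 1.0, 1.0]                # MWh, concentrated midday

hourly_emissions = sum(l * r for l, r in zip(load, hourly_rates))
average_rate = sum(hourly_rates) / len(hourly_rates)
flat_emissions = sum(load) * average_rate

# The flat-rate estimate overstates emissions for load concentrated in
# low-marginal-rate hours; the gap is the systematic error described above.
print(round(hourly_emissions, 3), round(flat_emissions, 3))
```

The direction of the error flips for load concentrated in high-rate hours, which is why coarse averaging can both reward and penalize the wrong behavior.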

At the same time, hourly granularity should not be treated as an absolute requirement in all contexts. A hierarchical approach is appropriate, where hourly data are used wherever available and reliable, and coarser temporal resolutions are permitted as a fallback when data limitations or implementation constraints exist. This approach preserves feasibility and comparability across regions while still prioritizing accuracy.

Importantly, allowing hourly granularity for consequential accounting does not require that results lose comparability with attributional Scope 2 methods. Comparability should be maintained through standardized aggregation and reporting conventions, rather than by constraining consequential analysis to the same temporal resolution used in allocation-based inventories.

In summary, hourly emissions factors should be the preferred temporal resolution for consequential accounting, with clear guidance for fallback approaches where hourly data are unavailable, ensuring a balance between causal accuracy, scalability, and consistency across reporting entities.


Build and operating margin weights

This section addresses how to balance or weight operating margin and build margin impacts when estimating the emissions effects of electricity projects. It summarizes existing approaches used in GHG Protocol and other programs, along with additional concepts raised by the Scope 2 TWG subgroup, and seeks feedback on which weighting methods are most appropriate and practical. Responses will guide the AMI TWG in developing consistent approaches for combining short-term and long-term grid impacts in future avoided-emissions methodologies, consistent with ISB direction.

Ever.green did not respond to questions 47–52.

Ever.green case studies

Bishopville Solar (SC) — High-Impact RECs
Ever.green helped unlock 28 MW of solar power by connecting a mission-driven developer with corporate buyers, securing the long-term revenue needed to finance this project and bring it online.

Ralston Family Farms (AR) — Clean Energy Tax Credits
Ever.green matched Ralston Family Farms with a buyer and guided the deal from start to finish, making it easy for both sides to benefit from clean energy tax credits.