[RFC] Gauge Framework Revamp Status Report


One of the Maxis' initiatives for Q1 2024 is to revamp the currently implemented gauge framework. Therefore, the Maxis have been re-evaluating the current Gauge Framework as described in the forum here: [BIP-57] Introduce Gauge Framework v1

The initial framework stated the following:

“This “Gauge Framework v1” aims to deliver a simple and objective rating criteria that the Balancer community can use to filter all current and future gauges. There are two phases of analysis - first using a “weight” factor with a “market cap” factor to derive an “overall” factor. All gauges scoring below a certain threshold would proceed to the second phase which would apply a “revenue” factor, helping those small pools which are generating significant revenue reach the threshold. If a pool remains below the threshold after phase 2 it will undergo a mandatory migration to a new gauge with a 2% or 5% maximum cap on the emissions it can receive.”

We approached the framework by stepping back and asking questions like:

  • “Where are our biggest pain points?”
  • “What are the most difficult elements for partners to onboard?”
  • “Where can we improve our processes?”
  • “Does the framework v1 still apply and if not, what do we need to change?”

You can check out our findings in this Miro Board:

  • Analysis of the current status quo
  • Problem statement analysis: what are the biggest pain points?
  • Solution brainstorming: what would an ideal solution look like?

We took the following learnings from Gauge Framework v1:

  • Framework v1 was essential to migrate to a new and more consistent system
  • The old framework worked well for analyzing already existing pools / gauges and assigning caps to them. It is, however, difficult to use it to assign caps to new pools
  • We now have the core pool framework in place that adds another layer of decision making when applying for a gauge

Problem Statement

The gauge framework is a central component of the customer journey at Balancer. Therefore, it is important to analyze any changes to its setup in a holistic fashion. We identified the following issues:

  • Issues with partner onboarding: it’s cumbersome and they need constant help from the Maxis to get components set up (be it pools, gauges, caps etc)
  • Technical issues: problems with token or pool whitelisting, not having the correct gauge set up which leads to issues with staking on the front-end
  • There is no clear guideline or consistent execution for setting initial gauge caps. What rules apply to a new pool where no liquidity has been onboarded yet?
  • Oftentimes it is not clear to partners when rewards will flow on which network, what it means to apply for a gauge, and how to navigate our ecosystem

Approach to Improve the Current Setup

Based on these learnings, we identified the following work streams for improving the current system:

  • We need good and clear documentation for all our processes
    • We provide tooling through automation and good documentation so anybody can work with our infrastructure - “Balancer becomes easy to use”
    • We improve our internal processes so when a partner launches it “just works”
  • We need concise partner onboarding documentation so partners can decide which product fits them best and see what our protocol has to offer (gauges, core pool status, fee recycling, Aura, voting incentives). This includes consistent documentation and a consistent approach from BD to integrations.
  • We have to revise framework v1 and make it as simple and robust as possible - no dynamic caps in the initial phase.
  • We work on better data: clear data products so partners understand how fees flow, how our economic engine works and how they can benefit from the ecosystem (Aura, Balancer Fee Flywheel)
  • Consistent monitoring and removal of outdated gauges needs to be automated: we introduce an automatic approach to label gauges that can be removed. We introduce a clear procedure on how that happens - gauge removals will be as clear as adding them for all parties involved.
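As a rough illustration of the automated labeling described above, the offboarding job could look like the following Python sketch. All names here (`Gauge`, `flag_stale_gauges`, `votes_for_epoch`) are hypothetical placeholders, not existing Balancer tooling, and the lookback window ("XY weeks") is still to be decided:

```python
# Hypothetical sketch of the proposed gauge-offboarding job.
# Gauge, flag_stale_gauges, and votes_for_epoch are illustrative
# placeholders, not real Balancer infrastructure.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Gauge:
    address: str
    label: str


def flag_stale_gauges(
    gauges: list[Gauge],
    votes_for_epoch: Callable[[str, int], float],
    current_epoch: int,
    lookback_weeks: int,  # the "XY weeks" threshold, still tbd
) -> list[Gauge]:
    """Return gauges that received zero votes over the whole lookback window."""
    stale = []
    for gauge in gauges:
        recent_votes = [
            votes_for_epoch(gauge.address, current_epoch - w)
            for w in range(lookback_weeks)
        ]
        # A gauge is flagged only if it got no votes in every recent epoch.
        if all(v == 0 for v in recent_votes):
            stale.append(gauge)
    return stale
```

Flagged gauges would then enter the documented removal procedure rather than being removed automatically, keeping a human step in the loop.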

Improvement Work Streams

Each work stream is listed below with its lead(s), peer reviewers, minimal goal, and nice-to-haves:

  • Partner Onboarding Documentation
    • Lead(s): Xeonus
    • Team / Peer Reviewers: Marketing Team and BizDev
    • Minimal goal: Align the Partner Notion Pack with a basic onboarding document covering Balancer pool types, core pools, gauges, voting incentives etc. It should be something we can share with partners so they know how to leverage Balancer and see clear upsides of using our protocol
    • Nice to have: Show reward flow and timelines across chains; a more fleshed-out incentive simulator like the one for Aura (Aura Analytics)
  • Gauge Onboarding Automation
    • Lead(s): Xeonus & Tritium
    • Team / Peer Reviewers: Xeonus / Gosuto
    • Minimal goal: Introduce checks for end-to-end integrity such as subgraph gauge settings, the front-end working, and rewards streaming. If a partner is onboarded with a gauge, it “just works”
    • Nice to have: Snapshot vote automation based on the forum post and multi-sig ops PRs
  • Data Insights
    • Lead(s): Xeonus
    • Team / Peer Reviewers: Beets Integrations team / Ardo
    • Minimal goal: Provide meaningful insights into core pools such as fees earned and voting incentives placed
    • Nice to have: Protocol-wide stats and endpoints (APIs)
  • Gauge Offboarding Automation
    • Lead(s): Xeonus & Tritium
    • Team / Peer Reviewers: Tritium / Gosuto
    • Minimal goal: A job that runs at a given interval, evaluates all active gauges, and flags gauges that didn’t receive any votes for XY weeks (tbd)
    • Nice to have: Clear docs on how this will be done; making partners aware of this change (also part of the Partner Pack); a dashboard showing historical votes and a “leaderboard”, including whether a gauge is about to be offboarded / below the threshold
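As a rough illustration of the end-to-end integrity checks named in the Gauge Onboarding Automation work stream, a check routine could look like the sketch below. The field names and data shape are assumptions made for illustration only, not the actual Balancer subgraph or front-end interfaces:

```python
# Hypothetical sketch of end-to-end gauge onboarding checks.
# The gauge dict fields (in_subgraph, frontend_staking_enabled,
# expects_rewards, reward_rate) are illustrative assumptions.
def check_gauge_setup(gauge: dict) -> list[str]:
    """Return human-readable failures; an empty list means the gauge 'just works'."""
    failures = []
    if not gauge.get("in_subgraph"):
        failures.append("gauge missing from subgraph")
    if not gauge.get("frontend_staking_enabled"):
        failures.append("staking not available on the front-end")
    if gauge.get("expects_rewards") and gauge.get("reward_rate", 0) <= 0:
        failures.append("rewards are not streaming")
    return failures
```

Running such checks automatically before (and after) a gauge goes live would surface the whitelisting, staking, and reward-streaming issues described in the problem statement before a partner hits them.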

Architectural Decision Record

Improving our current system while striving for simplicity has implications for how we approach these topics. We therefore align on and provide decisions for the revised architecture:

Omit Implementation of Optimistic Voting

We had initial discussions about introducing optimistic voting (you can review the current discussions here: [RFC] Optimistic approval for Gauge adds). The idea of optimistic voting was to aggregate all gauge votes into one Snapshot vote with a veto system: only gauges the community explicitly objected to would be voted out; otherwise they would be “optimistically” voted in.

We decided to put this initiative on hold for the following reasons:

  • Optimistic votes assume we have guard rails in place to deny malicious adds. We currently don’t have a good system or way to track that
  • Ecosystem participants would stop looking at gauges, and there is a higher probability that “bad” gauges get voted in.
  • A single Snapshot vote per gauge provides a clearer way of approving them. Additionally, partners often apply for a set of gauges, which would also cause challenges in the optimistic voting design
  • Properly implementing optimistic voting would require significant resources and dev work, which we don’t think is worth the effort right now
  • The impact of “voter fatigue” does not outweigh the other issues raised above and therefore doesn’t justify implementing optimistic voting
  • Gauge management should remain a process with proper discussion and human interaction; optimistic voting would result in less engagement.

Omit Dynamic Gauge Cap Management

Although initially intrigued by the idea, the Maxis identified serious issues with automatically and dynamically managing gauge caps, for the following reasons:

  • We need better data about earned fees, gauge and pool performance to even start to define KPIs on how gauge caps would be set in the future (relating to Phase 2 of Framework v1)
  • We have only very rarely had issues with partners over how gauge caps are set
  • Protocols holding veBAL to vote for their gauges would be very negatively impacted by dynamic caps resulting in making veBAL unattractive for them
  • There is a high likelihood that BAL emissions would be lost if caps were changed and entities did not adjust to the new maximum vote weight in time
  • The system would be hard to manage and the positive impact would be minimal
  • Dynamic gauge management would be rather challenging with v3 on the horizon: the Maxis would need to manage both v2 and v3 deployments.

Next Steps

The Maxis will work on these proposed improvement work streams to stabilize the current framework. This includes:

  • Improved Partner Onboarding - streamlined Partner Pack
  • Improved onboarding experience, including automation wherever possible
  • Simplified guidelines for setting initial gauge caps
  • Improved data insights

We encourage the community to give feedback on this approach and on whether we should change certain aspects of this revamp; otherwise, the Maxis will proceed with the outlined work streams to improve our current setup.