[BIP-57] Introduce Gauge Framework v1

Your remark about engineering random snapshots to avoid gameability raises an important question: how is the framework that @solarcurve introduces (or one of your models, if that turns out to be better) to be enforced every week?

I have yet to see an overview of all gauges with their respective scores, let alone such an overview which is fully automated.


It’s a bit different with Solarcurve’s model, as it’s somewhat hard to game the market cap and revenue factors consistently; it would require a lot of money. But hey, if it’s +EV it will likely happen lol

In my models I guess TVL can be faked a bit more

I think he meant that DAOs might have to change their pool composition first to get above a weight factor of 5, and later change their pool again if they want to qualify as a core pool.
For example if Badger changed to a 50/50 pool they would be in uncapped territory, but if they want their pool to also be a core pool, they would need to relaunch it again once the core pools go live. Even though I doubt they would head in that direction since they seem to like the WBTC pairing.

I assume the pools would have to be relaunched to be a core pool, right? This wouldn’t be an issue if you can just launch your pool with a rETH or wstETH pairing right now and enable the core functionality later.

Upon introduction all gauges will be assessed (see here). Those below the threshold will be migrated. Going forward it is up to any interested community member to bring forth a proposal to cap or uncap a gauge if the data has changed to justify that action. In the v1 of this framework there is no automatic promotion or demotion of gauges, because designing an automated system for that is a complex task (and I’m not certain it’s worth doing).

As for the resources needed to manage it: none, really, as the only “management” is handled by interested community members. Most of the gauges being migrated to a 2% cap have no realistic expectation of ever exceeding that in my view. It’s hard to envision a massive amount of uncapping gauge proposals.

Expectations are that existing gauges below the threshold will be migrated (timeline to be announced when it’s available). New gauges will be subject to this framework to determine if they’ll enter with a 2% cap or uncapped. If capped, in a month they’ll earn a revenue factor and a proposal can be made to uncap at that point if justified. LPs will need to unstake from existing gauges and re-stake in new ones for gauges that are migrating to a cap. This is a one-time migration, as we can uncap a gauge via governance action (no migration required).
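To make that flow a bit more concrete, here is a rough sketch of the decision logic as I read it. The function and field names are made up for illustration and the exact threshold test lives in the linked assessment, so treat this as a reading aid rather than the actual mechanism:

```python
# Illustrative sketch only; names and the threshold test are assumptions,
# not the real implementation.

GAUGE_CAP = 0.02  # 2% cap applied to gauges below the threshold

def assess_gauge(meets_threshold: bool) -> float | None:
    """Return the vote cap applied to a gauge, or None if it enters uncapped."""
    return None if meets_threshold else GAUGE_CAP

def can_propose_uncap(months_since_capped: int, data_has_changed: bool) -> bool:
    """A capped gauge earns a revenue factor after roughly a month; only then
    can a community member bring an uncap proposal, and only if the data
    justifies it."""
    return months_since_capped >= 1 and data_has_changed
```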

Wow this is an incredible post! Sincerely appreciate you taking the time here. It is true that we can only implement this quickly because we’re leaving it as a manual process with a single cap and changes are likely to be infrequent. Your system sounds amazing but would require a huge overhaul I think. Distributing excess votes to other pools proportionally is not possible with Curve’s design from what I know. However, this could be implemented on Aura’s side since they use snapshot and already do this for pools that get under 0.5% of the vote. Probably worth exploring this avenue through Aura if their share of veBAL continues to grow. Again, great post! A lot to think about.
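For anyone curious what “distributing excess votes to other pools proportionally” could look like off-chain, here is a minimal sketch of the general idea, assuming a 0.5% floor like the one mentioned above. This is purely illustrative and is not Aura’s actual implementation:

```python
def redistribute(votes: dict[str, float], min_share: float = 0.005) -> dict[str, float]:
    """Drop pools under `min_share` of the total vote and hand their votes
    to the remaining pools pro rata. Illustrative only."""
    total = sum(votes.values())
    kept = {pool: v for pool, v in votes.items() if v / total >= min_share}
    kept_total = sum(kept.values())
    # Scale the surviving pools so they absorb the dropped share proportionally.
    return {pool: v * total / kept_total for pool, v in kept.items()}

# Example: the 0.4% pool is dropped and its votes flow to the others pro rata.
print(redistribute({"A": 60.0, "B": 39.6, "C": 0.4}))
```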

It is buried in the original post but here you go.

I don’t think any pool that will be migrating if this passes could simply adjust to 50/50 and reach a factor of 5. But yes, a new pool is required for core pool status. This would be the case with or without this framework. Anyone wanting to go this route should continue waiting for the new factories, then remake their pool. In the interim, if they’re migrated to a capped gauge, that’s a separate issue and there is no need to remake their pool, because no pool currently being migrated would qualify for uncapped with a simple weight change.


Temple and Badger would be the two cases where remaking as a 50/50 would be enough to be uncapped.
I took those numbers from the spreadsheet you posted but also double-checked their mcap. (Badger’s supply is wrong on CoinGecko but should be fixed soon.)

Badger would actually prefer to move to a 40/40/20 pool with graviAURA, I think, so then I guess we would also have to wait for AURA to hit 50 million in AUM, and then, if we are both right around there, maybe have to constantly deal with governance to decide whether we get a cap or not. It sounds quite unappealing :slight_smile:

It is an interesting scenario to think about. Badger requests a 40/40/20 gauge to replace our 80/20 gauge. Badger has about 60 million in mcap.

Aura currently has about 30 million in MCAP, but that is growing fast with pretty solid flows of new tokens being minted.

So basically, depending on when this happens, we will end up with one gauge or the other. Then if the market tanks and Badger falls below 50 million in mcap, we have to deal with that, or with AURA. If we start with a 2% cap because of AURA, then we basically have to wait till AURA hits 50 million (or maybe 55 with this multiplier on a 40/40/20), and then maybe we’re also riding that line for a while. If the market tanks by 30%+ again, tokens at 50-70 million would be under 50 million until the market peaks back up.
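Just to illustrate the boundary problem (assuming a flat 50 million market cap threshold; the exact multiplier for a 40/40/20 pool is still unclear to me):

```python
THRESHOLD = 50_000_000  # assumed market cap threshold for an uncapped gauge

def above_threshold_after_drawdown(mcap: float, drawdown: float) -> bool:
    """Check whether a token stays over the threshold after a market drop."""
    return mcap * (1 - drawdown) >= THRESHOLD

# A ~60M token only needs a ~17% drop to fall back under 50M:
print(above_threshold_after_drawdown(60_000_000, 0.30))  # False
print(above_threshold_after_drawdown(60_000_000, 0.15))  # True
```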

@solarcurve, getting into specifics because this is the perfect messy situation: how would you suggest this is handled in governance? I really don’t want to keep having Badger-related governance every month.


I can’t promise proposals won’t be made or won’t go to a vote, since our current system presents a very low bar for that to happen. Any reasonable delegate/voter would not entertain capping one month, uncapping the next, etc. Such a change should be carefully considered and expected to have staying power. We could add a stipulation that any proposal to cap or uncap a gauge, if successful, cannot be revoked until three months have passed (and then, only if the data has changed to justify it). I think this would address your concern?

To some degree, if it came with a bit of leeway in situations where it was expected to change.

For example, if this passed tomorrow and we had to get a new gauge: Badger is working on fixing our AUM on CoinGecko, and AURA should, with any luck, hit 50 million soon. If there was some give, and we could argue that we’re headed towards uncapped and so should just get there a bit early, that’d be cool.

If it meant we ended up stuck at 2% for 3 months and AURA hit 50 million in 3 weeks, it’d kinda suck. This goes for any larger-cap token paired with graviAURA.

There will be a lot of movement around this 50 million number in the coming months, and as I keep saying, 2% really hurts… especially if you fall from unlimited to 2% and have already locked enough influence to vote for more than that.

Let’s do a little poll and see if any lurkers want to chime in:

What do you think the market cap threshold for the voting cap should be?
  • 25 million
  • 50 million
  • 75 million


What do you think the cap should be on pools that don’t make the cut?
  • 2%
  • 5%
  • 7.5%
  • 10%


Would you support giving snapshot voters (veBAL hodlers) the choice to select between a 2% and a 5% cap on smol pools?
  • Yes
  • No


With your 3rd question, do you mean case by case, so for the first one voters might pick 5%, the next coin gets a 2% cap, and so on?

I think he means putting that question to a vote to determine the global cap limit. I personally think it would be better to settle on a cap % through discussion and consensus-building here rather than a take-it-or-leave-it vote.


No, I meant just letting snapshot voters decide how high the limit should be, within some range, really.

I’d be happy with 2% or 5% as options. I’d vote for something with >5%. I’d vote no on 2%.

I changed the poll to be more clear and reset it.


Agreed, I like when polls have range options rather than just a simple yay/nay.


I’d vote for 3% if it was there. Idk why, but 2% seems very tight and 5% seems too much lol


I ask you to prove me wrong regarding the following statement.

If we consider:
a) that 3rd party bribes are revenue as it reflects directly in veBAL and vlAURA holders APR
b) the core pool bribes are negative revenue (i.e. should be deducted as an expense in the cost-efficiency calculations),
then last round:

  1. all of the core bribes were losing money or around breakeven at best (i.e. direct distribution would have yielded more)
  2. stETH, stMatic, and MaticX were only ok because of the 3rd party bribes. They would be cost-inefficient if only core bribes were used
  3. the most cost-efficient bribes were all the pools that received third party bribes and didn’t receive the core bribes
  4. all the pools that didn’t receive any bribes but received emissions were way below (3) cost-efficiency wise

Cost-efficiency of emissions defined as:
$ spent on emissions in the pool / revenue coming from the pool (including fees, staking rewards, and veBAL + vlAURA bribes)

If you can’t prove that the statement above is wrong, then the current proposal undercuts the most profitable avenue veBAL and vlAURA holders have for earning yield from distributing emissions.
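To make the accounting explicit, here is my definition rendered in code. The numbers below are placeholders for illustration, not last round’s actual figures:

```python
def cost_efficiency(emissions_usd: float,
                    fees_usd: float,
                    staking_rewards_usd: float,
                    third_party_bribes_usd: float,
                    core_pool_bribes_usd: float) -> float:
    """Cost-efficiency of emissions as defined above:
    $ spent on emissions / revenue from the pool, where 3rd party bribes
    count as revenue (assumption a) and core pool bribes are deducted as
    an expense (assumption b). Lower is better under this definition."""
    revenue = (fees_usd + staking_rewards_usd
               + third_party_bribes_usd - core_pool_bribes_usd)
    return emissions_usd / revenue

# Placeholder numbers, purely illustrative:
print(cost_efficiency(100_000, 20_000, 5_000, 60_000, 30_000))  # ~1.82
```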

Another thing to consider is that core bribe inefficiency hits veBAL holders twice:

  1. what would be their profit becomes their expense
  2. it dilutes the bribe market, making it less likely that 3rd parties that make emissions more cost-efficient populate it at scale

The current proposal, by disincentivizing voting for cost-efficient emissions, would make the bribe market even less competitive.

While I realize that there is a demand in the community for removing the largest BAL investor from the equation, I would prefer if the matters were separated. And it can be done in a simple manner by capping the pools based on cost-efficiency.

I still believe that the most impactful way of supporting the ecosystem comes from locking BAL and AURA, but we can agree to disagree there.

What evades me is why veBAL and vlAURA holders would want to vote for a proposal that results in lesser yield for them.


I can’t get past these two statements, unfortunately, so it’s probably not fair for me to address the rest of your post. I can see why you could consider 3rd party bribes to be revenue. I’m very confused about considering core pool bribes negative revenue. You’re saying we are spending emissions AND spending on core pool bribes?

Your argument is to maximize short-term bribe revenue for vlAURA or veBAL holders, which arguably has no lasting (or very little) long-term benefit for the protocol. It’s a clear conflict: veBAL and vlAURA holders want to maximize their own yield in the short run while the protocol suffers from value extraction in the long run.

I guarantee you that most of the veBAL or vlAURA holders who are actually in support of this realise the long-term consequences of promoting revenue through BAL emissions. In the end, veBAL yield increases with revenue, which is more sustainable than short-term bribing revenue that the protocol doesn’t directly benefit from.

Your definition of cost-efficiency, when you include bribes, is from a veBAL/vlAURA holder perspective, not a protocol perspective; you’re effectively promoting a reduction in alignment between what’s best for the protocol and what’s best for veBAL holders.

I’m not against bribes and I agree there is value to them, but once again it’s not something the protocol should maximize for.


I see your cost-efficiency calculation there, which is one way to look at it, but I’m distracted by how you seem to be making core pools out to be a net negative.

Maybe I’m missing something, but from the last bribing round (just Aura, though I’m not sure why veBAL would be different):

  • with core pools total bribes = $318,542.74
  • vlAura votes controlled = 3,873,552.96
  • $ / vlAura = 0.082

  • without core pools total bribes = $266,760.74
  • assume people would vote for remaining pools with bribes, vlAura votes controlled = 3,873,552.96
  • $ / vlAura = 0.069 (-16.3% less than with core pools)

Unless you are assuming more 3rd parties are going to enter the ecosystem to fill that gap, but I don’t think there is really any way of knowing that.

Bigger picture, it looks like the core pools on Aura controlled about 2.8% of gauge votes. I don’t see anything on the remaining list (that had bribes) where 2.8% more emissions would garner a higher return for veBAL or vlAura holders broadly. Sure, maybe better for LPs in certain pools, but bribes give access to more holders (at least in my eyes, and recurring ones that grow with the protocol are even better). Not everyone is going to want to LP in a pool with high emissions.
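For transparency, the arithmetic above reduces to the following quick check (the totals are the figures quoted in the bullets; this is just a sanity check, not new data):

```python
total_bribes_with_core = 318_542.74
total_bribes_without_core = 266_760.74
vlaura_votes = 3_873_552.96

with_core = total_bribes_with_core / vlaura_votes        # ~0.082 $/vlAURA
without_core = total_bribes_without_core / vlaura_votes  # ~0.069 $/vlAURA
print(f"{with_core:.3f} vs {without_core:.3f} "
      f"({(without_core - with_core) / with_core:.1%} change)")  # ~-16.3%
```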

Great! Will this spreadsheet be updated continuously on a daily or weekly basis? And is there any chance we can get access to columns B-M there too? Currently it is only possible to see the formula, but not the underlying data unfortunately.

Hey everyone, this is a very long & complex discussion. May I suggest a recap to enable people to get up to speed faster?

The main point of contention seems to be the gauge limit for smaller non-core pools. Honestly this limit makes a ton of sense. Ideally a gauge system should reward exceptionally high revenue makers, limit non-virtuous actors without outright banning them, and offer space for new entrants. I believe this framework is a healthy experiment around the concept of gauges, aiming to optimize the efficiency of the system.

The 2% cap for smaller pools makes sense with one exception: if a bootstrapping protocol uses Balancer as its main liquidity venue, it might stunt their growth (this is especially a problem because the Balancer LBP → gauge path is a very smooth experience). It could be interesting to have a decreasing cap from 10% to 2% over the first 3 months of a pool’s existence. Wdyt @solarcurve?
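A rough sketch of what that could look like, assuming a simple linear decay over 90 days (the schedule shape is my assumption, not part of the suggestion itself):

```python
def bootstrap_cap(pool_age_days: int,
                  start_cap: float = 0.10,
                  end_cap: float = 0.02,
                  ramp_days: int = 90) -> float:
    """Vote cap that decays linearly from 10% at launch to 2% after ~3 months."""
    if pool_age_days >= ramp_days:
        return end_cap
    return start_cap - (start_cap - end_cap) * pool_age_days / ramp_days

print(bootstrap_cap(0))    # 0.10
print(bootstrap_cap(45))   # 0.06
print(bootstrap_cap(120))  # 0.02
```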

For the record, I am in favor of this proposal. The point brought up by Tritium about bribes seems moot to me, since most bribes on the veBAL layer are paying almost a 50% premium right now, and all the pools receiving $20k+ of bribes are either core pools or major players with over 50M market cap (logical, because most protocols will not spend more than 2.5% of their supply per year on this). If you’re worried about over-paying and going over the 2% threshold, just use Quest.

Now, my main concern is about execution. DeFi protocols are at their best when everything is on-chain and we minimize manual/governance interactions. Is there any chance we can implement a regulation mechanism directly on-chain? I think this will be the biggest success factor for such a framework.

PS: This is an important proposal because it sets a precedent that Balancer DAO can modify the veModel for users who locked under different terms, and the communication around these changes has to be very clear.
