[BIP-57] Introduce Gauge Framework v1

pretty sure if someone used their massive voting power in a gov’t setting to go against the will of the people repeatedly they would be voted out (these days not so sure kek). please don’t spin this as me suggesting we ban addresses.

simple question: is the community happy that 1/3 of emissions are being directed for the clear advantage of a single party? a party that has shown repeatedly that they are pretty much only interested in clawing back the $11 million in losses they hold (mostly unrealized, I assume). mind you, this is because of the investment decisions they made (also a reminder, the whole market is down).

instead of trying to work closely with the protocol that they are so heavily invested in, they would rather corner the emissions for themselves. Yes, they haven’t sold BAL to date, which is cool, but what happens when they feel they have recouped their losses? This party has highlighted the very issues with the system that this proposal is trying to fix.

this whole back and forth between badger and ourselves is a bit odd to me because badger is clearly a huge beneficiary of the way things exist now. maybe 2% isn’t the correct number, but 1) it can be changed by governance at any point really and 2) as a veBAL and vlAURA holder I think the system could end up in a much better place for everyone. i don’t feel like i’m being rugged or that the game is being changed on me; the hope is that the system is being iterated for the better (and it can be changed again if it isn’t correct), but maybe i’m biased. if there are other large holders out there that feel like they are being screwed over, feel free to speak up.

2 Likes

The main reason to filter caps by market cap rather than instituting a global cap is simply that larger cap assets tend to generate more protocol revenue than small cap assets. However, it is a fair point that a global cap that can only be lifted once certain revenue milestones are hit may be necessary in the future. This could be a v2 of this framework. For now I believe v1 addresses the immediate situation adequately.

We might have several ‘venture-grade’ projects using Balancer 80/20s, but these are not contributing anything meaningful to our bottom line. We’ve got to aim bigger than this kind of PMF imo.

Any new gauge will get a revenue factor after a month, so this doesn’t seem like much of a problem to me?

It comes down to how much of our emissions we want to expose to pools that cannot generate protocol revenue. At a 5% cap, only 6-7 of these pools need to hit the cap before we have not materially changed the reality we’re facing today. At a 2% cap, that number would have to more than double, which is a far less likely outcome. Initially I suggested a 1% cap, but that doesn’t offer enough opportunity to achieve a realistic revenue factor. I think 2% is a happy middle ground that protects our emissions from going to pools that cannot generate protocol revenue while also allowing pools to prove that they CAN generate protocol revenue, and thus become uncapped.
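To make the arithmetic concrete, here’s a quick back-of-the-envelope sketch (the pool counts are illustrative):

```python
# Back-of-the-envelope: share of total emissions exposed when n
# non-revenue pools all hit a per-gauge cap (weights are % of emissions).
def exposure_pct(cap_pct, n_pools):
    return cap_pct * n_pools

print(exposure_pct(5, 7))   # 35 -> 7 capped pools at 5% soak up ~35% of emissions
print(exposure_pct(2, 7))   # 14 -> the same 7 pools only reach 14% at a 2% cap
print(exposure_pct(2, 17))  # 34 -> ~17 pools must hit a 2% cap to match the 5% case
```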

That is what the Badger core guy posted and what I was quoting. vlAURA bribe ROI is over 100% for voters and over 100% for bribers. Thus, it would be a rational economic decision for Badger to vote and collect other bribes while also bribing for the Badger pool up to the 2% cap. However, he said that if this framework passes, Badger would stop all bribing activity. That is an irrational decision based not on the economics of the system but on emotion.

This should be explored in a separate proposal as it is not really on topic for this framework.

However, he said if this framework passes Badger would stop all bribing activity.

I haven’t read anything like that in the thread. Are you referring to this quote?

If so, I believe that’s not what he implied.
Last round Badger bought 4.37% of emissions by spending ~$75k on bribes, so it would be irrational not to change their bribing behavior and scale it down.
For other DAOs the rule would also stop them from increasing their bribing activity.
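As a rough sketch of why scaling down would be the rational move - assuming vote share bought scales more or less linearly with bribe spend, which is my assumption, not something established in this thread:

```python
# Figures from the post; assumes vote share scales ~linearly with bribe spend.
spend_last_round = 75_000   # ~$75k in bribes
share_bought_pct = 4.37     # % of emissions bought

cost_per_pct = spend_last_round / share_bought_pct   # ~$17.2k per 1% of emissions
rational_spend_at_2pct_cap = 2 * cost_per_pct        # ~$34k; spend past the 2% cap buys nothing
print(round(cost_per_pct), round(rational_spend_at_2pct_cap))
```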

The same would apply to locking BAL and AURA by the DAOs and graviAURA self-voting pools.
Self-voting pools and pools that DAOs vote for are spared from the inefficiency of the bribing ecosystem, i.e. if you’re ok with getting vlAURA or veBAL exposure, over the long run it’s better to vote for the pool you wish to support than to harvest the bribes.

1 Like

how many DAOs with sub $50M mcap have enough veBAL/vlAURA to hit the 2% cap? I think the answer is zero. Even if this changes, constant dilution of voting power from incentive printing means constant buy pressure is required to maintain 2% of voting power.

how many DAOs with sub $50M mcap are bribing enough to hit the 2% cap? only Badger, I believe.

Noting that even if these answers change in the future, a DAO could simply vote or bribe for a pool that can generate revenue and pass the threshold for an uncapped gauge. Sooner or later the ROI on bribes will go negative unless BIP-19 is repealed, regardless of whether this framework is approved. DAOs will stop bribing once that happens, and many will stop before that if they’re farming with POL and only bribing with excess profits.

Name me another DEX that would even approve a gauge for the average sub $50M mcap token on Ethereum? Or a DEX that would allocate more than 2% of incentives towards such a token (if there is no gauge system)?

Balancer remains the best and effectively the only option for these DAOs to get incentives on their token, even if this framework passes.

2 Likes

Name me another DEX that would even approve a gauge for the average sub $50M mcap token on Ethereum?

The only other DEX with gauges on Ethereum I’m aware of is Curve.
Perhaps I don’t understand this question correctly, but there are plenty of gauges there for tokens with less than $50M market cap:
RAI, tBTC, tBTC2, sEUR, sdCRV, sdFXS, sdANGLE, RSV, btrfly, LFT, CADC, USDK, DUSD, SDT, PWRD, pBTC, EURN, PAL, oBTC, SILO, TOKE, Badger, rETH, agEUR, apeUSD, oUSD, ibBTC, pUSD, sETH, mUSD, most if not all of Fixed Forex tokens

Or a DEX that would allocate more than 2% of incentives towards such a token (if there is no gauge system)?

I believe this should be the case with most DEXs on other chains and L2s. For instance, on Quickswap I’ve counted 30% of the rewards being allocated above 2% for “such tokens.”

how many DAO’s with sub $50M mcap are bribing enough to hit the 2% cap? only Badger I believe.

Currently, yes, which frankly raises the question of why the parameter was chosen this way.
Nonetheless, it takes time for the bribe market to mature. It took months for Convex.

The model is disincentivizing pool2 tokens in three ways:
a) in how the revenue is calculated
b) in the market cap factor
c) in the pool composition factor

If (a) were calculated differently and (c) weren’t a factor, it would arguably already make things more bearable, because the same thing wouldn’t be punished twice.

Sooner or later the ROI on bribes will go negative unless BIP-19 is repealed, regardless if this framework is approved or not.

This, in a way, is an extension of our disagreement about the fundamentals of DEX bribing markets.
For me, the ideal scenario is when there is a vibrant bribing ecosystem with multiple outside actors (a.k.a. “3rd parties”) competing for BAL and AURA emissions.
The path you seem to prefer leads to the ecosystem dominated by Balancer and AURA bribes, which would be a self-bribing ecosystem.

Balancer has a competitive advantage to become the home for pool2 tokens because of the 80/20 type pools since they allow for a different equation wrt cost-efficiency of incentives used by the DAOs.

For example, imagine if FRAX were launched today, chose an 80/20 pool on Balancer for their governance token, and directed a part of their emissions to bribe for it. If that imaginary FRAX were a success, I believe it would add a lot of value to the Balancer ecosystem, just like the real FRAX is adding to Curve/Convex. Under the proposed model, however, the imaginary FRAX wouldn’t choose to launch on Balancer in the first place.

2 Likes

The vast majority of these are mintable tokens which would be exempt from the 2% cap. Of the remainder, many were above a $50m mcap when they got their gauge. If I’m not mistaken, none of these gauges get even close to 2% of CRV emissions.

L2s are a different story - Velodrome also sends more than 2% of emissions towards small caps, I’m sure. I doubt Sushi allocates >2% of emissions towards <$50M mcap tokens.

We’ll agree to disagree. Counting bribe revenue, which is unreliable and earns Balancer DAO nothing, is not a metric we need to optimize for in my view.

I’d be interested to hear more about why you think this way. FRAX is a freely mintable token so it would be exempt from the cap. They could have started with FXS/FRAX 80/20, capped at 2%, plus FRAX/bbaUSD uncapped. Directed large bribes towards FRAX/bbaUSD, grown FRAX usage, thus growing FXS market cap and generating FXS trading revenue → reached uncapped status → the rest is history.

If anything, the next FRAX would be crazy not to launch with us.

1 Like

on the FRAX example, how much liquidity does a new project need to start with? 2% of emissions gets you somewhere around 1-2mm of liquidity if i had to guess, and that’s only if there are interested LPs. if not many people are swapping the token on a regular basis, there is a point of diminishing returns in terms of the amount of liquidity in a pool.

you can find yourself in a situation where a gauge has 5% of emissions, the APR of the pool is 150% or more, liquidity is 500k and volume is 12k a day, and it stays this way for months if not forever. that type of scenario doesn’t seem like an efficient use of emissions.

as solar mentioned, there are other ways for projects to get involved in an uncapped way, and if projects gain interest their other constrained pools can see their way out of the microcap tier.

2 Likes

Consider the fact that larger cap coins may see Balancer as too high of a risk, given that the rules keep changing and there seems to be a desire to govern in ways that restrict large holders to only operating in certain ways based on the ever-changing environment. Whatever we do, we need to stop changing it all the time, especially in ways that remove abilities that were once there. If this 2% rule goes through, for example, it may be quite hard to make any ROI on graviAURA or on our choice to build on Balancer (like us, FRAX builds, not just launches pools).

Granted, based on some external voting we have been making quite good money right now, so Badger seems to be making a good ROI on the Balancer engagement, but the vast majority of that comes from some free whale, which is not something the next FRAX, or any builder, can count on.

Could you? I think at 5% of emissions and 500k in liquidity you’d have a much higher than 150% ROI, which would encourage accumulation, pushing up both trading volume and liquidity locked.

wMATIC/MASTIC has $12 million in AUM and a 24% ROI. So wouldn’t that pool with 500k in it be yielding 24 × 24%, i.e. 576%?
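The scaling assumed here is that APR moves inversely with TVL when the dollar value of emissions flowing to a pool is fixed; a minimal sketch:

```python
# APR scales inversely with TVL when the emissions (in $) to a pool are fixed:
# the same rewards spread over less liquidity yield a proportionally higher APR.
def scaled_apr(base_apr, base_tvl, new_tvl):
    return base_apr * base_tvl / new_tvl

print(scaled_apr(0.24, 12_000_000, 500_000))  # 5.76, i.e. 576%
```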

Everything in this BIP seems well analysed and thought out except the 2%. People seem to not really understand what 2% means. I encourage you to do a bit more of this analysis at both 2% and 5% and present it.

I’ll also try to get something together this week, but I’m quite busy taking some much needed time off at the end of the week. If this governance can wait a few weeks, I can take the time to get deep into the data when I am back. Otherwise, maybe someone else could?

I’d also love to hear some representatives of other DAOs speak up. I’ll try to ping a few more people to engage in this discussion next week.

When I say ROI here I am talking about return on investment on the cost to design, build, architect, audit, market and support a new DeFi Product.

1 Like

I can point to the many requests from the community for a framework if you’d like. To ask for a framework on the one hand, then complain about things changing too often when a framework is presented, is a bit disingenuous imo. The 2% vs 5% question has been addressed by me several times. 5% exposes too much of our emissions to pools that cannot generate protocol revenue, period.

1 Like

Something I have been wondering.
Why was $50M mcap chosen? Is there any data that supports that number, or was it picked because it’s a nice round number and “feels” right? Why not $30M or $100M?
Not saying it’s too high or low, I was just interested in the reasoning behind it, especially since this can change fairly quickly depending on overall market conditions. Sorry if I missed it.

A framework is better than singling out coins, or this constant noise that we currently have, which I do think is holding DAOs back from engaging.

My hope is that once this framework is done, we can let the system stay the system and I think more people will join and the ecosystem will grow.

One cannot predict alternate futures, but based on a number of my conversations I do think this whole ecosystem would have a lot more engagement if governance wasn’t so busy changing the rules. I also think going a few months without any noise about changing the rules would help demonstrate that the intention is to stay the course.

You are correct, this goes against some of what I said before: “try 5%, if that doesn’t work, 2%.” I have convinced myself that we need to decide something and leave it as is. I retract my prior statements along the lines of “eh, just change it again if it doesn’t work.” Too much is changing; we need to be stable and predictable with governance if we want people to engage.

2 Likes

If you go too low this framework loses its effectiveness and is not worth the hassle. If you go too high you introduce needless friction & governance overhead when adding new gauges that will most likely have a revenue factor above 1 (vote to approve gauge, month later vote to uncap it). Plus the reality that crypto market caps can go up 100% or down 50% in any given month.

I think a higher number could be considered but I wouldn’t go any lower than $50M personally. If you look at the list of gauges ordered by weight and mcap factor there’s a clear dividing line in the 5-7.5 area where below that you run into a lot of small project 80/20 pools that don’t generate revenue and above that you’re running into BAL pools which we don’t want to cap obviously. Could go with $75M but I imagine those who don’t like this framework would prefer $50M over that. Also open to hearing other ideas or analysis on this point.

2 Likes

To be clear, I’m more Beets and LexDAO than Badger, Balancer, Aura or anything else. I’m also a DAO nerd at heart. Since October of last year, I have enjoyed playing around with and learning about innovative Balancer tech over on Fantom. And ever since I first heard solarcurve talk about them back in November of last year, I was and still am super excited for boosted pools. More recently, I wandered over here and I’ve enjoyed jumping into the fray with the Balancer, Aura, and Badger ecosystems. I love the collaboration and multiple levels of building. And now with Beets/Bal OP entering the arena, I’m hoping the fact that I’ve been somewhat paying attention will pay off and y’all ETH maxis won’t run complete circles around this ol’ country lawyer.

I think my general issue with this proposal is that it’s confusing. The more I thought about it, the more it seems to make sense, but I have one main point and a few pieces of constructive criticism.

Main Point: Doesn’t it make sense to implement this proposal a few weeks after the boosted / core factory metastable and weighted pools roll out, so folks can reconfigure pools in such a manner where the new pools potentially qualify as a core pool and are also configured so they are consistent with this framework (assuming it passes)? This is the ol’ measure twice, cut once motto.

From the Balancer perspective, veBAL launched within the last 5 months, allocating 100% of all Bal emissions to its gauges, was way too loose with initial gauge approvals for 80/20 pools (from the Beets gauge perspective, I know this problem too well) and for pools for which more diligence was required, and now would like to walk the system back and tweak it. That’s fair - I understand it. You don’t want to be stuck with a system you designed that is now not generating nearly as much revenue as was intended while Bal emissions are being diluted.

That’s OK. They can be changed. Just try to do so with a gentle touch, in a way where you don’t throw the baby out with the bath water. Unless the intention is in fact to alienate, balance the approach in a way that tries not to specifically single out other projects and ecosystems that are building the second, third, and fourth layers on and around Balancer.

Constructive criticism - while this proposal has A LOT of information, it doesn’t have much in terms of the reasoning behind the specific parameters that it recommends. My main piece of constructive criticism is to explain the reasons for what this proposal intends to do and why those things make sense - not only in terms of the stated purpose, but in terms of the reasoning behind the parameters the proposal wants to apply.

  • What are the pros? What are the cons?

  • Where is the reasoning that explains why each of the parameters is either important or significant?

  • Why does this proposal use an old weighting factor that is difficult to understand and is also from June of 2020, when Balancer was in its infancy? What about that weighting factor makes it particularly helpful to apply here in the Phase 1 analysis?

  • Why does this proposal use a market cap factor? Why does it then use only the lowest score from the pool and exclude the others? Why do you believe this approach to this particular data is better than other possible approaches?

  • Why use Coingecko as the confirming data source for market cap? Is Coingecko a reliable source of this data in all cases (no)? If it’s not reliable in a certain case, then what?

  • Why does this proposal use an overall Phase 1 market factor of 5 or higher as the critical threshold? Are there other thresholds that were rejected? Why? Why is this one better?

  • Why was it decided to apply these parameters to all the existing (not new) gauges at once, moving all of those that fall under an overall factor of 5 to the B team at the same time, and then make each subsequent application of these rules subject to separate governance proposals and snapshots? Will the subsequent applications be dealt with on an ad-hoc, pool-by-pool basis? Will they be aggregated monthly? Quarterly? Semi-annually?

  • Doesn’t this proposal have a strong possibility of dramatically increasing the administrative workload of the Balancer DAO? If that happens, how will Balancer DAO process the increased workload? If you don’t think this is likely to increase the administrative workload, why not?

One other piece of constructive criticism: Balancer, IMHO, has a lot of governance voting - maybe too much. The sheer number, content, and consequences of the many proposals are difficult to track and follow. It’s a full-time job to keep up. And if you look at the people who participate in this forum - it’s basically a very vocal and energized, but very small, minority. Might be worthwhile to hardcode some of this framework into an organizational structure as opposed to having separate votes all the time that still wind up creating a de facto Senate nonetheless (b/c no one besides the attention-paying minority has any clue what’s going on - either intentionally bc they’re ignoring what’s going on or bc they just don’t have the bandwidth to follow).

Anyway, keep fighting the good fight. I love what Balancer is doing. But this is really a hard proposal to understand, particularly without pieces of information and background reasoning like those referenced above.

2 Likes

I’m not seeing the relationship between core pools and this framework. I’m not sure there is a single gauge that would be affected by the proposed framework that plans on transitioning to a core pool? Even if there were, being a core pool has no bearing on how this framework assesses the pool, other than most likely a core pool would earn more revenue. Maybe you could explain more about how you see core pools and this gauge framework being related?

The system was designed by the community. The community continues to iterate and propose ways to improve upon it. I’m not entirely sure what you’re trying to imply but I’m not proposing to walk back anything. Who is the “you” here?

Please point out in the proposal where anyone is singled out. The intention is not to alienate anyone. Anyone can build whatever they want but it should be obvious it is being built for them to profit first, otherwise why spend the time and money? Just because something is built does not mean Balancer has any obligation to accept it - unless a proposal is made to Balancer governance to build it and that proposal passes.

Pros: Small cap pools that generate near zero income cannot syphon off a large share of our emissions
Cons: Those small cap pools generating near zero income that are above the proposed 2% cap will see an immediate reduction in their eligible emissions. Only a con if you happen to be an LP in such a pool.

I’m almost certain I’ve covered this but anyway.

Weight factor: In general, off-weighted pools generate less protocol revenue than equal weighted pools.

Market cap factor: In general, pools with lower market cap tokens generate less protocol revenue than pools with higher market cap tokens.

Revenue factor: For exceptions to the above two generalizations this factor assesses a pool’s percentage of Balancer’s total revenue so that pools that CAN generate protocol revenue are not subject to the emissions cap.

In the original draft of this framework another weight factor calculation was used that was far more punishing to off-weighted pools. The first round of private feedback from the community raised this as a concern, as well as the fact that the calculation was difficult to understand. Given the Balancer community used the old weight factor Fernando created for many months during v1 liquidity mining, it seemed like a proven approach to adopt for this framework. Even to me the calculation is hard to understand, but I’d encourage you to play around with the Google sheet a bit and you’ll see how different weights affect the output.
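Since the exact calculation lives in the linked sheet, here is only an illustration of the general shape (equal weights score highest, skewed weights lower), in the style of the ratio factor from v1 liquidity mining - treat the precise formula as an assumption:

```python
# Illustration only: a ratio-style weight factor in the spirit of Balancer v1
# liquidity mining for a two-token pool with normalized weights w1 + w2 = 1.
# The framework's actual calculation is in the linked Google sheet.
def weight_factor(w1, w2):
    return 4 * w1 * w2  # peaks at 1.0 for 50/50 and falls as weights skew

print(weight_factor(0.50, 0.50))  # 1.00
print(weight_factor(0.80, 0.20))  # 0.64
print(weight_factor(0.98, 0.02))  # ~0.08 (pool2-style 98/2)
```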

In general, pools with small cap tokens generate less protocol revenue than pools with large cap tokens. The lowest score is taken because that is going to be the limiting factor in most cases for assessing the pool’s revenue generating capability. I have no doubt there are other possible approaches but the chosen one seems more than justifiable to me. Revenue generation has extreme fluctuations so applying a rigid threshold against every gauge would throw up a lot of false positives (and be a ridiculously huge hassle). Thus, we must apply some common sense filters. We know small cap pools generate less protocol revenue than large cap pools on average. Boom, market cap factor.

If Coingecko is not accurate for a particular token and that token is requesting either a new uncapped gauge or a promotion to an uncapped gauge, it would be necessary for the project to update Coingecko’s data. I believe that for all gauges that will migrate to a 2% cap, Coingecko has the correct data, or the FDV of the token is below $50M, so we know the circulating mcap is as well.

A threshold of 5 equates to a $50M mcap token in a 50/50 pool with WETH or similar. Defining a “small cap” market cap is extremely difficult considering crypto market caps can go up 100% or get cut in half in any given month. Adjusting this threshold in the future might be required if market conditions materially change. When assessing the overall factor for all gauges (as you can see in the spreadsheet linked in the proposal) there’s a clear dividing line in the 5 to 7.5 area. Below that you start hitting many 80/20 pools that don’t generate protocol revenue, and above it you start running into pools with BAL as the smallest token, which we wouldn’t want to cap. We settled on 5 to be as generous as possible in allowing small cap pools to reach that number if they could attain even a minor revenue factor. Going higher, to 7 or 7.5, could be considered though if anyone is in favor of that.
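Piecing the numbers together (a $50M token in a 50/50 pool landing exactly on 5, and the overall factor being the lowest mcap factor raised to the weight factor, as described later in this thread), one consistent reading is that the mcap factor is market cap in units of $10M. A sketch under those assumptions, reusing the illustrative weight factor above:

```python
THRESHOLD = 5

def overall_factor(lowest_mcap_usd, weight_factor):
    # Assumption: mcap factor = market cap / $10M, so a $50M token in a
    # 50/50 pool (weight factor 1.0) lands exactly on the threshold of 5.
    mcap_factor = lowest_mcap_usd / 10_000_000
    return mcap_factor ** weight_factor

print(overall_factor(50_000_000, 1.00))   # 5.0  -> right at the threshold
print(overall_factor(50_000_000, 0.64))   # ~2.8 -> the same token 80/20 falls below 5
print(overall_factor(200_000_000, 0.64))  # ~6.8 -> a big enough mcap clears it anyway
```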

The decision to migrate all of them at the same time is so we can all move on from this and there’s no clear way to determine which ones should migrate first, second, etc. The fairest introduction of this framework is a mandatory, simultaneous migration for capped gauges. Post introduction, if the community believes capping or uncapping a particular gauge becomes warranted based on the data (mcap or revenue has changed since framework introduction) anyone is free to present a proposal for that.

There are a million reasons why such data could change - some would justify a change in status, some wouldn’t. Exploring this further might be a good idea for v2 of this framework, but for now I think the only reasonable approach is to kick the decision to governance. These proposals will be handled on an ad-hoc, pool-by-pool basis, though I do not expect we will see many, to be honest. Almost all the gauges moving to uncapped have no realistic expectation of ever gaining more than 2% of emissions in the first place.

I see little reason to expect a dramatic increase in administrative workload. When assessing new gauges the weight and mcap factors can be applied in maybe 60 seconds to determine if a gauge is entering at a 2% cap or uncapped. Most gauges that are migrating to a 2% cap have no realistic expectation of hitting that cap in my opinion - thus, I would not expect many (if any) proposals to promote a gauge to uncapped. Caveat being a huge change in market conditions that sends market caps far higher or lower. Even in this scenario, it is up to the community to propose gauges change status. If no one cares then no proposals will happen. Unless there’s a reason to take the time to do a proposal my guess is nobody will.

If governance is overwhelming I suggest you consider delegating your voting power in the Delegate Citadel. I have said repeatedly in every DAO update for the last 8 months that I encourage more people to participate in governance and become delegates. It is true we have a lot going on - one way to solve that is for the community to start voting down more proposals. Another way is to delegate some powers to other entities to manage things like treasury, gauge approval, etc. We could also start only allowing votes every two weeks instead of every week - or even every month.

I’m not so sure having very active governance is a bad thing though. I understand most people don’t have the time you and I do to follow everything but following everything is not really required unless you’re going to be in here arguing on every forum thread. Honestly I think Balancer governance is in a pretty good spot - it’s fairly easy to get things done in a timely fashion and there is healthy discourse whenever something controversial arises. Controversy is always solved with a timely vote and we all move on together.

If you’re angry things are changing too much, start voting against making changes. Or change the rules of governance so only certain things can be proposed or certain people can propose them (or whatever). I take pride that anyone can make a proposal to Balancer and it will go to a vote as long as the specification is clear and able to be implemented. Many other places this is simply not the case.

1 Like

I’ll respond to the rest when I have a moment, but I’m not sure what would give the impression that I’m angry at all. I thought the tone of my post was fairly middle-of-the-road if not relatively positive. I apologize if my post came off otherwise - it was not my intention. It mainly took me about 3 days to work/think through the proposal, and some of that process could have been easier if the info was a bit more up front or part of the initial package.

I thought it was clear that this was a collective “you,” where I was referring to the community, not to any person or community member in particular.

There is also a difference between active governance and numerous snapshots. You can have the former without necessarily the latter. I’ll comment more on this when I can.

1 Like

Ah sorry, didn’t mean to imply you specifically were angry. More just in general: if people think we are changing too much, just start voting no to changes. Simple :)

Your post was completely fair and entirely reasonable imo.

I would also echo that the core pools and boosted pool developments have no overlap with this proposal. A change like what is being suggested here is necessary (in my eyes) regardless. This is one reason why I used the USDC/ETH pool in one of my posts above. Back when that pool was getting more emissions, the ecosystem benefited far more than it did from the smaller cap pools that ended up getting a larger % of veBAL votes later on.

1 Like

If you really think voters know best, why don’t you let them continue to drive emissions distributions without arbitrary caps?

1 Like

Can some clarity be added about how often the inputs to this model will be refreshed and caps reassessed? Market cap and volume (a proxy for fee revenue) are highly volatile metrics. Will one rough month kick a token into a capped voting situation? If there’s growth in valuation for a token, will the cap automatically be removed? What resources are going into managing this new system, what are the costs there, and what should be the expectations for projects, voters, and LPs around how this is implemented?

Wow what a thread, I love it lol!

Tbh, I’m a bit confused by the uproar against this proposal. I think it’s completely fair to restrict the allocation of rewards a pool can earn based on their contribution to the protocol, why would you want anyone to be able to extract significant amounts of value from the protocol in return for little to no benefit?

While I agree that bribes bring value to an ecosystem, I don’t think bribes are something a protocol should optimise for in the long-run. Who knows how long they last?

In my opinion bribes just make the principal-agent problem worse in DAOs.

Optimising for revenue in the long run should be the number one goal, and I think incentivizing positive behavior that increases revenue and Balancer’s competitiveness is what BAL emissions should be used for.

How do we maximize protocol revenue?

  1. Increase trading volume
  2. Increase the absolute value of pools with yield bearing assets

1.1. Increasing trading volume:

(Base level) Provide more efficient swaps → trades get routed through aggregators → trading volume increases → more fees

Metrics to achieve this:
TVL and assessing optimal fee structures

2.1 Increase the absolute value of pools with yield bearing assets
Incentivize pools to utilize yield bearing assets + BIP-19 → increased fees

Metrics to achieve this:
TVL of yield bearing assets

With these ideas in mind I thought of an alternative model that may reduce some frictions based on voting for gauge allowances, caps etc.

An alternative model:
I like solarcurve’s model as it’s pretty simple and straightforward. However, I think constantly adjusting caps and gauges will get annoying, at least for existing pools.

Alternatively, we can move to a more programmatic approach similar to dYdX’s LP reward formula:

(I wish I had some time to do modelling; hopefully someone who is better at math than myself can point out a better form or highlight issues with these formulas)

Model 1:

Max_gauge_weight = pool_score = (Overall_factor^a) x (Revenue_factor^b) x (TVL^c)

Where,
a + b + c = 1 (Constant returns to scale)
Overall_factor = lowest_mcap_factor^(weighting_factor)
Revenue_factor = as described, does it include rev generated if it’s a boosted pool? (e.g. aDAI)
TVL = Total value locked in pool

Max_gauge_weight determines the maximum amount of BAL emissions a pool is eligible for and is a function of inputs that are beneficial for Balancer.

Rationale:
Firstly, I think including a TVL metric on top of a market cap metric makes sense. Large protocols might be on Balancer but have low TVL, making swaps more efficient elsewhere. TVL in its base form just increases the efficiency of swaps, and could push smaller projects to focus on Balancer with the majority of their liquidity as an easy metric to improve.

Secondly, a model like this provides a quick feedback loop between improving a pool’s performance and its eligibility for more rewards, reducing the constant re-evaluation of pool caps.

Thirdly, using a Cobb-Douglas function with constant returns to scale rewards protocols in proportion to the inputs they provide and how the DAO values those inputs. The DAO will have full control over the a, b, c parameters and can place heavier weights on protocol rev etc.
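To make the shape of Model 1 concrete, a minimal sketch - the exponents are placeholders the DAO would set, and TVL would need normalizing (e.g. to a share of protocol TVL) so raw units don’t dominate, a nuance I flag at the end:

```python
def pool_score(overall_factor, revenue_factor, tvl, a=0.4, b=0.4, c=0.2):
    # Model 1: Cobb-Douglas score with constant returns to scale.
    # a, b, c are placeholder weights; tvl should be normalized
    # (e.g. pool TVL / protocol TVL) so raw dollar units don't dominate.
    assert abs(a + b + c - 1) < 1e-9, "constant returns to scale requires a+b+c = 1"
    return (overall_factor ** a) * (revenue_factor ** b) * (tvl ** c)

# e.g. a pool with overall factor 5, revenue factor 1.2, and 1% of protocol TVL
print(pool_score(5.0, 1.2, 0.01))
```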

Model 2:
Max_gauge_weight = Pool_Score = (Overall_factor^a) x (Revenue_factor^b) x (TVL^c) x (Bribe_revenue^d)

Where,
a + b + c + d = 1 = Constant Returns to Scale
Bribe_revenue = Amount of bribes in $ value earned by the pool per epoch

Rationale:
This is the same as Model 1 but at least tries to capture the value bribes bring to the ecosystem. Once again, the DAO/community can decide on its weight/value.
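And the Model 2 variant, extending the sketch above with the bribe term (again, placeholder exponents):

```python
def pool_score_v2(overall_factor, revenue_factor, tvl, bribe_revenue,
                  a=0.35, b=0.35, c=0.2, d=0.1):
    # Model 2: same Cobb-Douglas shape plus bribe revenue per epoch
    # as a fourth, DAO-weighted input.
    assert abs(a + b + c + d - 1) < 1e-9
    return (overall_factor ** a) * (revenue_factor ** b) * (tvl ** c) * (bribe_revenue ** d)
```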

Model 1 + 2 - New entrants:
We can also introduce a scaling variable that is dependent on time, which allows new entrants to have automatic access to x% of emissions so that they can have time to build up some exogenous liquidity.

After x amount of time, the scaling variable turns off or decays.
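One possible shape for that scaling variable - the floor, grace period, and half-life below are all placeholders:

```python
def max_gauge_weight(model_score, epochs_live, floor=0.02, grace=8, half_life=4):
    # Hypothetical new-entrant boost: guarantee `floor` (e.g. 2% of emissions)
    # for `grace` epochs, then decay the guarantee with the given half-life
    # so the model score takes over.
    if epochs_live <= grace:
        boost = floor
    else:
        boost = floor * 0.5 ** ((epochs_live - grace) / half_life)
    return max(model_score, boost)
```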

How it works:
Similar to dYdX, we would need per-minute random snapshots of each metric over the entire epoch to 1) avoid gaming and 2) incorporate changes in metrics. At the end of the epoch, max_gauge_weights are calculated and used for the following epoch to determine the maximum BAL emissions a pool can achieve. All additional votes will be distributed to other pools in proportion to their vote weight (similar to Convex).
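A sketch of the random-sampling idea (epoch length and sample count are placeholders):

```python
import random

def epoch_average(metric_at_minute, epoch_minutes=7 * 24 * 60, n_samples=500):
    # Sample the metric at randomly chosen minutes across the epoch so a pool
    # can't game its score by spiking TVL or volume just before a known snapshot.
    minutes = random.sample(range(epoch_minutes), n_samples)
    return sum(metric_at_minute(m) for m in minutes) / n_samples
```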

Considerations & Structural changes:
Clearly such a change would require significant engineering work; however, I think this is a suitable alternative when thinking about future options, e.g. implementing the formula + random sampling.

Selecting the correct weights is also very important, as we still don’t want to run into the same issue where a pool earning $x for Balancer is eligible for $y in rewards (where y >> x), but I think for the most part this is mitigated with such variables.

There are also some nuances that need to be worked out. Is the score proportional to the rest of Balancer? Does it just simply scale? Do we normalize it in a certain way, etc.?

6 Likes