The approval of the WBTC/DIGG/graviAURA gauge passed – albeit with a notable amount of dissent (49.62%) – over objections that DIGG is not a suitable asset to incentivize. This gauge now receives nearly 31% of all BAL emissions and, given that DIGG's total market capitalization is barely over $3M, cannot and will not return anywhere near the amount spent on it to the Balancer protocol. While the “core pools” proposal will eventually help mitigate this, Balancer holders should not sit idly by and countenance such an egregiously wasteful allocation of BAL.
If approved, the DAO Multisig 0x10A19e7eE7d7F8a52822f6817de8ea18204F2e4f will call grantRole on the Authorizer 0xA331D84eC860Bf466b4CdCcFb4aC09a1B43F3aE6 with the following arguments:
I definitely understand where you’re coming from with this. The problem is there are a lot of pools the CREAM whale could vote for that are quite a bit worse than this one, imo. If this proposal passes, he’ll move to one of those, and this process repeats for the next few months as we play whack-a-mole.
We could pass a framework as many have suggested and include in there the automatic killing of a gauge that violates the framework (no vote required). Then we only vote once and the whack-a-mole can happen autonomously without bothering voters.
Downside to that approach is it would almost certainly end up killing some gauges we’d rather not have killed.
For now I favor doing nothing. Voting for this pool is far better than voting for the CREAM pool as this pool can actually generate non-zero fees. There are some other pools he could vote for that would be far worse.
As the creator of the DIGG pool and the requestor of its gauge, my intent was to create a sustainable primary liquidity pool for DIGG in the Balancer ecosystem. The vision for the pool’s long-term prospects remains unchanged in my view.
DIGG and its gauge are not the root cause of the issue that requires addressing here, but I understand it’s an easy target for everyone from every angle.
Which pool’s gauge will be next on the chopping block if/after you succeed in killing DIGG’s?
I fully support dialogue to address the root-cause issues and improve the sustainability of the Balancer ecosystem.
Can we please just take a month or two and try to get some sort of a sensible framework together before we keep coming with pitchforks after gauges?
I’ll try to work on something ASAP.
It is also worth noting that CoinGecko is wrong: all but 8 of the 573 DIGG in existence are currently in supply, so the market cap is actually about $7.5 million. It wasn’t $7.5 million before this whale started accumulating, but the point is that it is not hard for a whale to pump a token to a $5 million market cap.
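To make the arithmetic explicit (a minimal sketch; the per-DIGG price here is back-derived from the figures quoted above, not an independent data point):

```python
# Back-of-envelope market cap check using the figures quoted above.
total_digg = 573         # total DIGG in existence (per the post)
locked_digg = 8          # all but 8 are said to be circulating
circulating = total_digg - locked_digg   # 565 DIGG

market_cap = 7_500_000   # ~$7.5M claimed market cap
implied_price = market_cap / circulating # implied $ per DIGG

print(f"Circulating supply: {circulating} DIGG")
print(f"Implied price: ${implied_price:,.0f} per DIGG")  # ≈ $13,274
```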
This is not a good indicator. It is not so simple to figure out what good indicators are. It takes time, conversation, research, analysis, more time and more conversation.
@Xeonus started doing some of this in the forum post to kill the cream gauge. I suggest we refocus on a more data driven approach to defining a framework that actually meets sensible objectives. Here are the objectives I can decipher so far:
- More than 10-20% of the vote going to anything but a megacap pair with huge potential TVL is bad.
- Revenue is important.
- Blackholes, and things leading to them, are problematic.
- Coins need to have active teams and development.
I define a complete blackhole as voting for a pool with more veBAL value than it has in AUM over a sustained period of time (more than a round or two). It becomes a true blackhole if the emissions earned are used to increase the vote.
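This definition could be sketched as a simple check (a hedged illustration only; the function and argument names are made up, and "rounds" here means consecutive voting rounds):

```python
def is_blackhole(vebal_value_per_round, pool_aum_per_round, min_rounds=2):
    """Flag a pool as a blackhole if, for at least `min_rounds` consecutive
    recent rounds, the USD value of veBAL voting for it exceeds its AUM.

    Both arguments are lists of USD values, most recent round last.
    """
    recent = list(zip(vebal_value_per_round, pool_aum_per_round))[-min_rounds:]
    if len(recent) < min_rounds:
        return False  # not enough history to call it "sustained"
    return all(vebal > aum for vebal, aum in recent)

# Example: veBAL value voting exceeds pool AUM for two straight rounds.
print(is_blackhole([4e6, 5e6, 6e6], [7e6, 4.5e6, 5e6]))  # True
```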
As @Tritium mentioned, a framework would be much more ideal. Using a community-decided framework to kill gauges will set a standard not only for the community but also for the protocols that wish to enable gauges.
They will be aware of the criteria that can be used to kill a gauge, and this will reduce tension, as people will understand that we are following a framework rather than acting on bias or emotional decisions.
Did you get to think of anything in the meantime @Tritium?
Another factor I think we should add is a maximum APR. Anything much higher than 100% is a very wasteful way of deploying LM. Maybe 150% could be a threshold; currently the DIGG pool is at 292%.
This framework would force whales (and here I really mean whales in general, not just the CREAM whale) to also contribute votes that benefit the broader ecosystem through common-goods pools (like wsETH, bb-a-USD, etc.), where LM is distributed to a wide number of LPs, not just the whales voting.
I had proposed a starting point for a basic framework here:
I was basing it more on % of veBAL voting. We could also do ROI, but that fluctuates a lot more, is harder to calculate as a fixed number, and in the end a whale could just add more deposits and still capture more veBAL.
For example, how would you apply this 100% ROI rule to the DIGG pool? Including AURA or not? Max boost or min boost? Taken using what measurements? At what time? Remember that the ROI of a small pool can quickly be lowered just by depositing more assets (which whales can do).
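To illustrate why an APR rule is gameable (a toy sketch with invented numbers): LM APR is roughly the annualized value of emissions over pool TVL, so a whale can halve a pool's headline APR simply by doubling its deposits, while still capturing the same emissions.

```python
def lm_apr(weekly_emissions_usd: float, pool_tvl_usd: float) -> float:
    """Rough liquidity-mining APR: annualized emissions value / pool TVL."""
    return weekly_emissions_usd * 52 / pool_tvl_usd

# Invented numbers: $15k/week of BAL flowing to a $2.5M pool.
print(f"{lm_apr(15_000, 2_500_000):.1%}")  # 31.2%
# The whale deposits another $2.5M: the APR halves, but the whale
# still captures the same $15k/week of emissions.
print(f"{lm_apr(15_000, 5_000_000):.1%}")  # 15.6%
```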
I would rather discuss general frameworks than more individual requests to kill gauges though, so maybe we can resume a conversation over here: https://forum.balancer.fi/t/rfc-simple-framework-for-vebal-gauge-voting. Get something implemented, and then apply it consistently? If the DIGG gauge gets killed with nothing else done, we just move onto the next one and spend all our time doing this over and over again.