[Proposal] Balancer LPs rewards analytics

The proposal in one sentence

An analytics platform for Balancer liquidity providers' mining rewards.

(There is a limit of 2 links per forum post so please look at: https://github.com/jakub-wojciechowski/limestone/blob/master/balancer-proposal.md to see the original version with all the references)


The full analytics platform will consist of:

  1. Review of the current rewards calculation scripts with proposed optimisations
     • Abstract data providers enabling different (faster) data sources like pre-fetched data, batch transactions, the Graph or BigQuery caching
  2. Prepare a mock data set to enable testing of the rewards scripts
  3. Prepare micro-service architecture to synchronise data continuously and offer real-time rewards calculations
  4. Implement decentralised backups of calculated rewards, so they are securely persisted for future analytics, research or disputes process
  5. Design and implement the rewards explorer UI to offer users a way to browse, visualise and analyze their rewards
  6. Offer accurate predictions of the expected APY of user funds across all the pools based on most recent data and trends
  7. Rewards simulator - ability to execute a dry-run of rewards update proposals to see how they affect the BAL distribution
  8. Rewards API - a service providing real-time rewards estimates, so other protocols can display user rewards on their websites
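The abstract data providers mentioned in point 1 could be sketched roughly as follows. This is a minimal Python sketch under my own assumptions; the class and method names are illustrative, not the actual implementation:

```python
from abc import ABC, abstractmethod

class DataProvider(ABC):
    """Common interface so rewards scripts can swap data sources
    (live archive node, pre-fetched files, Graph/BigQuery caches)."""

    @abstractmethod
    def get_pool_balances(self, pool_id: str, block: int) -> dict:
        """Return the token balances of a pool at a given block."""

class PrefetchedProvider(DataProvider):
    """Serves answers from an in-memory snapshot instead of RPC calls."""

    def __init__(self, snapshot: dict):
        self._snapshot = snapshot

    def get_pool_balances(self, pool_id: str, block: int) -> dict:
        return self._snapshot[(pool_id, block)]

# The calculation code only sees the DataProvider interface, so a slow
# archive-node provider can be swapped for a cached one transparently.
provider = PrefetchedProvider({("0xpool", 100): {"WETH": 10.0, "DAI": 4000.0}})
print(provider.get_pool_balances("0xpool", 100))
```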


What problem it solves

  1. Improve the efficiency & transparency of the rewards mining process

Balancer spends approximately $3,000,000 every week rewarding users for providing liquidity to the protocol. This is a very serious commitment that affects all of the protocol's stakeholders. Therefore, it would be very beneficial to increase the level of community oversight of the process, encouraging more developers to run the scripts, experiment with the data and verify the results of the scripts run by the Balancer team.

When I first ran the code it took me 6 hours to process a single snapshot on my laptop, and I needed to resolve multiple technical issues to connect to an archive node efficiently. This could discourage not only non-technical users but also seasoned developers from following this procedure.
Based on my initial attempts at refactoring the code, it is possible to improve the efficiency of data fetching. This could attract more people to interact with the scripts directly, increase the chances of discovering errors early and encourage further innovation and process improvements.

  2. Offer users better tools to track their rewards

Users need to wait one week to see their mining rewards. Although there are some tools offering rewards rate predictions, they do not take into account the history of pool and user balances, offering just a rough estimate based on a single point in time. The tool under development will give a real-time view of accumulated funds, with a split per pool and rich data about return rates.

Instant feedback and accurate metrics may attract a group of liquidity providers who constantly look for the best market rates (aka yield farmers) by offering them a similar experience to other protocols.
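To illustrate the kind of estimate involved, here is a naive pro-rata sketch in Python. It is deliberately simplified under my own assumptions: it ignores Balancer's fee and wrap adjustment factors and assumes balances stay constant over the week, which is exactly what the proposed tool would improve on:

```python
def estimate_weekly_bal(user_liquidity: float, pool_liquidity: float,
                        pool_weekly_bal: float) -> float:
    """Naive estimate: the user's share of pool liquidity times the
    pool's weekly BAL allocation (ignores fee/wrap adjustment factors)."""
    return pool_weekly_bal * user_liquidity / pool_liquidity

def estimate_apy(weekly_bal: float, bal_price: float,
                 user_liquidity: float) -> float:
    """Annualise the weekly BAL value relative to the deposited funds,
    compounding weekly."""
    weekly_return = weekly_bal * bal_price / user_liquidity
    return (1 + weekly_return) ** 52 - 1

# Example: $10,000 in a $1,000,000 pool that receives 1,000 BAL per week,
# with BAL priced at $20:
bal = estimate_weekly_bal(10_000, 1_000_000, 1_000)  # 10 BAL per week
apy = estimate_apy(bal, 20.0, 10_000)                # roughly 180% APY
```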

  3. Enable the community to make informed decisions about rewards structure

The structure of mining rewards is the most widely discussed topic on the Balancer Discord, and updates to the rewards parameters are the most popular type of governance proposal. However, there are a lot of unchecked assumptions and subjective opinions that could harm the decision process.
The project aims to build a tool that simulates the results of a proposal, showing how it would affect the rewards. All users will see precisely how each proposal would change their individual benefits and the global distribution.

  4. Provide tools for other protocols to integrate the rewards mechanism

Balancer is a popular choice for other protocols, such as UMA or mStable, to reward users for providing liquidity for their tokens. However, the weekly rewards distribution period makes real-time bonus reporting hard and requires additional effort to integrate with internal incentivisation schemes.

This project plans to build a public API so integrators can easily query current rewards and display them on their dashboards. This could encourage new protocols to choose Balancer as a partner and increase the deposited funds.
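To make the integration concrete, a consumer of such an API might do something like the following. The endpoint and response schema are hypothetical, since the proposal does not specify them yet:

```python
import json

# Hypothetical response body for a query like GET /rewards/<address>;
# the real schema is not yet defined in the proposal.
sample_response = json.dumps({
    "address": "0xabc",
    "perPool": [
        {"pool": "0xdef", "bal": "12.34"},
        {"pool": "0x123", "bal": "0.66"},
    ],
})

def total_estimated_bal(raw: str) -> float:
    """Sum the per-pool BAL estimates from an API response."""
    data = json.loads(raw)
    return sum(float(p["bal"]) for p in data["perPool"])

print(total_estimated_bal(sample_response))
```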

  5. Prepare an infrastructure for other data-analytics solutions

Apart from delivering useful tools, this project has a very strong R&D component focused on data processing and analytics. We will explore different paths to identify the most efficient data extraction and caching tools and patterns for organising the information.

Other grants and projects building analytics solutions for the Balancer protocol could use the research on data architecture and the deployed infrastructure. Having advanced analytics solutions could encourage professional traders and liquidity providers to use the protocol and create a competitive advantage over other exchanges.


The exact timeline depends on the number and scope of updates to the current rewards system, as the current model is open to changes by governance proposals.
Therefore, I'm estimating the duration as a range: it should take 3-5 months in total.

I estimate the costs at ~$40k.


The project will be open source under MIT license and will be publicly available at this address on the release: https://github.com/jakub-wojciechowski/bal-rewards-analytics

What has been built so far?

There is a prototype of the tool available at …
It shows the rewards from the last 3 weeks. However, it needs to be updated to reflect the recent changes in the whitelisted tokens format.

The code is fully open source …


Team members and their roles in the project and backgrounds

Jakub Wojciechowski
Role: lead developer, data-architecture, ETL implementation
I am a computer science graduate of Warsaw University, now living in London. I've worked as a software engineer in the fintech and insurance industries and progressed to a team lead role. I joined the blockchain space by co-founding Alice, where I designed and deployed a custodian stable currency (the first to go through a regulatory sandbox in the UK), an in-house transaction relayer and a decentralised impact investment protocol. I've presented at multiple conferences including Devcon and launched the Warsaw Smart Contract Coding Club (over 100 members). I've also worked as a smart-contract auditor at Zeppelin Solutions.

Some of my blockchain projects (links in the original proposal on github)

  • Transparent donations platform, used by Greater London Authority and presented during Devcon3

  • Fixed-rate lending protocol, winner of EthLondon 2020

  • Impact investing protocol with pure web3 UX presented during EthCC 2020 Paris

  • Multi-agent simulation of Token Curated Registries presented during EthCC 2019 Paris

  • Binding library for Gnosis Conditional Tokens presented during Dappcon 2019

  • Open source blockchain explorer

  • Decentralised IoT oracle protocol

  • Margin trading platform for the Synthetix protocol (Winner of New York Blockchain Week 2020)

Alex Suvorau
Role: full-stack developer
Background: Alex is a very talented developer with more than 2 years of experience in the blockchain space. We have worked on a few projects before and won multiple hackathon bounties together.


Yes please.

I only skimmed as I'm a bit short on time, but my main thought is: can we make sure that we can query historical data and the reward formula versions? It would be very nice to visualise the trends as well as measure the effects of the version changes (ideally we should be able to plot, for a single pool, the reward trajectory under different reward formula versions).

Short answer: Yes, that’s the purpose of the rewards simulator.

It should quickly compute the rewards distribution for a set of parameters, calculated over a selected period and split per-users and pools.

Longer answer: It’s an excellent question as it highlights the difference between the current architecture of rewards computation and the proposed change.
Currently: calculate_step_1 -> fetch_data -> calculate_step_2 -> fetch_data …
New model: fetch_data -> index_and_store_data -> calculate

So the main difference is decoupling data fetching (which takes 99% of the scripts' execution time) and persisting indexed data so it can be reused for other calculations.

This will allow us to have cleaner code for the rewards calculation that is independent of the data-fetching logic, and to optimise the two parts separately. However, the main benefit is data reusability, enabling better analytics like comparing different rewards proposals and preparing rewards reports per pool/asset/user/period.
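The decoupled model can be sketched like so. This is illustrative Python under my own assumptions; the function names mirror the pipeline stages above, and the data shapes are made up for the example:

```python
def fetch_data(blocks):
    """Stage 1: pull raw chain data once; this is the expensive step.
    A real implementation would query an archive node or a subgraph."""
    return {b: {"0xpool": {"alice": 60.0, "bob": 40.0}} for b in blocks}

def index_and_store(raw, store):
    """Stage 2: persist indexed snapshots so later runs skip fetching."""
    store.update(raw)
    return store

def calculate(store, weekly_bal):
    """Stage 3: pure computation over cached data; cheap to re-run with
    different reward parameters (the basis of the rewards simulator)."""
    totals = {}
    per_snapshot = weekly_bal / len(store)
    for snapshot in store.values():
        for users in snapshot.values():
            pool_total = sum(users.values())
            for user, balance in users.items():
                totals[user] = totals.get(user, 0.0) + per_snapshot * balance / pool_total
    return totals

# Fetch and index once, then re-run the calculation as often as needed.
store = index_and_store(fetch_data([1, 2]), {})
result = calculate(store, 145_000)
print(result)  # the same store can be reused with other parameters
```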

Yep that’s really nice. Any thoughts on how you’d expose it? Via API or something like extending the balancer subgraph?

The API is part of the proposal (point 8), so one should be able to query the rewards in a convenient way.
I'm still hesitating between the Graph and BigQuery for the data layer. The Graph offers a lot out of the box, but it doesn't handle more complex aggregate queries or mixing with off-chain data. BigQuery is much more flexible and efficient, but it will require writing our own indexers.

Will you build a web interface with charts and everything? If not, will you charge people who use your API to build such a website?

Hi Robin, there is a prototype of web interface already:
(I expect it to go through a few iterations based on feedback to become more user friendly.)

Regarding your second question: the api will be open and I don’t intend to charge for using it. If the project is sponsored by Balancer I don’t expect to generate additional revenue from users.

I had a look and am a bit confused - what am I seeing?

Also - did you have further thoughts on graph vs bigquery?

Hi @tongnk,
In the current UI you can see the history of the accumulation of your rewards, calculated per block. There are two modes: cumulative (where you can see how the rewards increased during the week) or per-snapshot (where you can see the allocation dynamics; the value may drop when you withdraw funds). The data is limited to 3 weeks, as I need to update the calculation scripts and cover extra costs for the Ethereum archive node to continue the project. Obviously, this is just a prototype and the interface is going to be improved. Currently, I'm using the Arweave chain to store data, which is great for security and transparency, but it will require speed optimisation before it is convenient for mainstream users.

Regarding your second question: I'm going to start with the Graph, as it is the fastest tool for prototyping because you get the indexing infrastructure out of the box. However, I expect some issues with the performance of advanced filtering, data joins and merging with off-chain pricing data, so it's likely that the project will evolve towards BigQuery. That's the part of the research that is going to be sponsored by the Balancer team through this grant.

Didn’t someone already propose an analytics dashboard? Wouldn’t it be best to join both ideas together and make one ultimate dashboard/analytics panel?


Maybe we should sync up in discord/TG and have a chat (tongnk in Discord). We’re building out https://dashboard.balancer.community/ and the second phase is around historical pool analytics.

Potentially an idea would be: if we can leverage your subgraph to display historical returns, it would be a win-win for the community:

  • Developers can build on top of your hard work in calculating BAL rewards
  • Community can view historical returns for pools over time in a single place

The dashboard UI is only a small part of the project (~10%), so it makes sense to focus on the API and integration with other community tools. The real challenge is to extract the historical data and serve it fast in a convenient format.
Sure @tongnk, I'm happy to have a chat with you on Discord.

Want to add me in discord? I’m there as tongnk

It was nice to talk to you @tongnk. I really liked your project and I hope that the dashboard will keep growing and providing more and more useful data for users. Great job guys!

It’s good to note that there could be a synergy between our projects, as the amount of data available on the Balancer subgraph is limited, especially regarding historical entries and rewards.