My skillset is in product, so I usually have to partner with the necessary people. As you can imagine, building an SDK vs. building an onboarding flow for LPs requires vastly different skillsets and roles, which is why it has been hard to simply put a $ figure on this.
The other very hard problem is that, as with most things in product, you don't have a very concrete deliverable but rather an outcome that you want to achieve. A good example here is around OKRs. Let's say we commit to building out an onboarding process in 1-2 months, and in month 1 we build and deploy a very early MVP (minimum viable product) of the onboarding. We then find that users actually need more education: the onboarding helps, but they don't understand the underlying product. Because we committed to delivering the onboarding, we would continue delivering it even though the data shows this is not the ideal path. If instead we measure against a tangible outcome, the team is incentivised to optimise for that outcome rather than for a pre-committed deliverable.
Another point to consider here is time horizons. A very simple example: if we were to take a very short time horizon (let's say 3 months), it would automatically rule out some projects we should be doing for the longer term, such as SEO, which industry experience suggests takes at least 6 months of dedicated work before we see decent organic results.
However, I do understand the need to keep everything accountable given I am an unknown from a quality-of-work perspective. What I think could work is splitting this into 2 distinct grants: one being a discrete piece of work that proves the quality of the work, and the second being ongoing optimisation.
1. Experimentation infrastructure and reporting:
This can be a very discrete piece of work. We can set up the infrastructure necessary for any future growth team to understand what the usage looks like. The key things that we would have to do here include:
- Front-end, event-based analytics (see the sketch after this list)
- Consumption analytics (ideally the univision team will plug it into some form of DB so we can measure everything, given this is the key metric we need to report against)
- Any ancillary analytics we may need to run
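
To make the front-end event analytics item a bit more concrete, here is a minimal sketch of what a tracking helper could look like. The endpoint, event names, and `track` function are illustrative assumptions rather than an existing Balancer implementation; the actual tooling and where events land would be agreed with the core team.

```ts
// Minimal sketch of a front-end event tracking helper (illustrative only).
// The endpoint, event names and payload shape are assumptions to show the idea,
// not an existing Balancer implementation.

type AnalyticsEvent = {
  name: string; // e.g. "swap_submitted", "add_liquidity_clicked"
  properties?: Record<string, string | number | boolean>;
  timestamp: number; // client-side timestamp in ms
};

const ANALYTICS_ENDPOINT = "https://analytics.example.com/events"; // placeholder

// Fire-and-forget event send; failures are swallowed so tracking never breaks the UI.
export async function track(
  name: string,
  properties?: AnalyticsEvent["properties"]
): Promise<void> {
  const event: AnalyticsEvent = { name, properties, timestamp: Date.now() };
  try {
    await fetch(ANALYTICS_ENDPOINT, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(event),
    });
  } catch {
    // Never let analytics errors surface to the user.
  }
}

// Example usage from a swap flow:
// track("swap_submitted", { tokenIn: "WETH", tokenOut: "BAL", source: "ui" });
```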
Ideally this grant allows the community to see the quality of the work and to understand a bit more about the methodology I am proposing. Some of the things I would love to learn more about for point 2 below include:
- What does user behaviour look like for LPs?
- What does user behaviour look like for traders?
  - Do these traders use the UI, or is their volume routed through other channels?
  - What do MAUs look like? (see the sketch after this list for how these could be derived)
  - What does retention look like for users?
  - What is the average value per user? Can we segment these users?
  - What is the volume of automated vs. manual trades?
- How often is documentation viewed?
  - How often do people actually build on top of the documentation?
- What is the current Balancer ecosystem/landscape like? (This is probably research I need to do regardless, but it will likely be useful for the core team.)
  - What applications are being built on top?
  - Are these open or closed?
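
To make the MAU and retention questions a bit more concrete, below is a rough sketch (continuing the TypeScript used above) of how they could be derived once events land in a store. The `StoredEvent` shape and the use of wallet addresses as a proxy for users are assumptions for illustration; in practice this would more likely be a SQL query or dashboard on top of whatever DB the events end up in.

```ts
// Sketch of deriving MAU and simple month-over-month retention from stored events.
// The event shape is a hypothetical export from the analytics store described above.

type StoredEvent = { userAddress: string; timestamp: number }; // ms since epoch

const monthKey = (ts: number) => new Date(ts).toISOString().slice(0, 7); // "YYYY-MM"

// MAU: distinct active addresses per calendar month.
export function monthlyActiveUsers(events: StoredEvent[]): Map<string, number> {
  const usersByMonth = new Map<string, Set<string>>();
  for (const e of events) {
    const key = monthKey(e.timestamp);
    if (!usersByMonth.has(key)) usersByMonth.set(key, new Set());
    usersByMonth.get(key)!.add(e.userAddress);
  }
  const mau = new Map<string, number>();
  for (const [month, users] of usersByMonth) mau.set(month, users.size);
  return mau;
}

// Retention: share of month-M users who are also active in the following month.
export function monthOverMonthRetention(
  events: StoredEvent[],
  month: string,
  nextMonth: string
): number {
  const activeIn = (m: string) =>
    new Set(events.filter((e) => monthKey(e.timestamp) === m).map((e) => e.userAddress));
  const current = activeIn(month);
  const next = activeIn(nextMonth);
  if (current.size === 0) return 0;
  const retained = [...current].filter((u) => next.has(u)).length;
  return retained / current.size;
}
```

One known caveat with this approach: addresses are only a proxy for users (one person can use many wallets), which is worth keeping in mind when reading any MAU or retention figure.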
Costs + timelines depend here; I'll just assume that I have to recruit and find someone:
- Time:
  - Recruitment: say 2-4 weeks
  - Implementation: say 2-4 weeks
- Cost: depends again on who I can find and their motivations, so I would say either 10K USD in BAL vested, or 5K BAL vested + 10K USD
  - The other cost consideration would be costs associated with any tooling + storage. I assume the core team would want control over this rather than an outside team managing it. Happy to maintain it and just pass on the costs separately.
2. Experimentation, Optimisation and Growth:
I would propose that this be an ongoing grant with a couple of key metrics in place. Trading volume, MAUs, and some combination of the two would probably be useful.
Usually how I run these is with some form of quarterly product council, where key stakeholders agree on the goals of the business and the metrics we should measure them by. How the team then achieves those goals is up to the team.
I will preface this by saying that these ideas will probably change drastically once we understand the behaviours from (1), but the ideas I listed before would fall under here:
- Better documentation
- SDKs
- SEO
- Content/guides
- Functionality like onboarding flows, etc.
Cost: I would say a 10K equivalent split between BAL/USDC could work here. If costs are unspent, we could either return the funds or roll them over to the next month.
Timing: we could run this quarterly, with a review at the end of each quarter to either continue or halt the work.