[RFC] Orb Collective's Path Forward - Extended Integrations Team Plan

This is an extension of the main Orb Collective post, focused specifically on the engineers of the Integrations Team.

Transition Plan | August 2023 and Beyond

The Integrations Team is interested in extending collaboration with the Balancer DAO beyond July 31; however, as previously noted, we require some time to assess the viability of establishing an independent Service Provider (SP). Alternatively, we are considering a transition to one or more existing ecosystem SPs. In any case, to avoid disruptions, all transitions will be promptly organized and put into effect on August 1. The objective is to ensure a seamless transition with no downtime, and to this end, we suggest publishing our transition proposal no later than July 1.

In the meantime, we respectfully request to retain our existing funding through July 31 in order to minimize disruption to our ongoing work and allow ample time to establish transition plans/infrastructure, discuss with the community, and vote on new funding. We are committed to providing valuable services to the Balancer DAO and smoothly transitioning to the next chapter of the Integrations Team.

Current Work | May-July 2023

We unanimously want to continue contributing to Balancer through Orb Collective for the duration of Orb’s current funding period, which ends July 31. We look forward to getting back to work next week and refocusing after what has been a turbulent, distracting, and stressful few days.

We are very much open to hearing feedback from the community as to our remaining roadmap, but we’ve prepared a preliminary plan of action for the DAO’s review. The items below are listed in approximate priority order, and we intend to tackle them roughly one by one in short sprints in order to optimize for not leaving any dangling work behind on July 31. The list reflects our understanding of the Integrations Team’s historical duties as well as feedback that we’ve already heard from ecosystem contributors about short-term priorities.

Dev UX

One of our key takeaways from the onsite in Barcelona was that the majority of the ecosystem still believes Balancer’s developer experience could use improvement, and we echo this sentiment. We haven’t always been clear enough in our priorities to devote the time and care required to produce great documentation, example code, or test suites. Here we seek to rectify that by naming “Dev UX” our highest priority through July 31.


Goals: Provide third-party developers with the deep knowledge they need to scale Balancer integrations beyond what is possible with manual partner support.

Description: Hopefully it seems obvious that a great developer experience starts with great documentation. We have seen strides taken in the recent docs overhaul, but these were largely structural and cosmetic, whereas we still feel that there is some quality content missing in certain areas. Below we provide a rough overview of where we feel there is content concretely missing from the docs, but of course we are open to feedback as well.

  • Pricing BPT: Provide solutions for various pool types, including complex compositions such as bb-a-USD or even pools built on top of bb-a-USD. Where necessary, focus specifically on the lending market collateral use case (see “Partner Engineering” below).
  • Internal decimal scaling and Rate Providers: We have some docs on Rate Provider contracts, but none on how they are utilized within the Balancer system, including how both the rate and decimal scaling propagate through pool math libraries. We have seen lots of confusion from partners on this topic and believe it needs clarification.
  • Relayers: The existing docs focus too much on the canonical Batch Relayer and not enough on the capabilities or design principles of custom relayers. Docs like these would have saved Cron Finance time and money in their quest to build TWAMM on Balancer.
  • Solidity code snippets for swaps/joins/exits: Existing documentation around these standard Balancer functions focuses on the SDK, whereas some partners seek to automate these behaviors in smart contracts. There are many possible points of confusion, including token ordering, userData encoding, and price limits. We can also provide guidance on how to conduct sandwich-free joins and exits without price oracles, which can be useful for many Yearn vault-style integrations.
  • Scaffold: We believe that the scaffold-balancer project led by the BeethovenX team is crucial to providing developers with the tooling they need for rapid prototyping. Our documentation suite should treat scaffold-balancer as a core offering and include a usage guide.
  • Managed Pools: We have heard plenty of feedback and have no intention of continuing endless development work on Managed Pool Controllers, but that puts the onus on third-party developers to create their own custom integrations. As such, we believe that the docs would benefit greatly from a much more thorough treatment of Managed Pools, which are a core Balancer product harboring hidden complexity. The Integrations Team already has firsthand experience dealing with various gotchas of Managed Pool development and can transform this insight into concise and readable documentation for third parties. The existing documentation is paltry and was intended as a placeholder.
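To give a flavor of the decimal scaling and Rate Provider topic above, here is a minimal Python sketch of how a pool can normalize every balance to an 18-decimal fixed-point value, folding in both the token's decimals and an 18-decimal rate. This is an assumption-laden model for intuition only, not Balancer's actual implementation; the function names are invented for illustration.

```python
WAD = 10**18  # 18-decimal fixed-point base


def scaling_factor(token_decimals: int) -> int:
    # Upscale a raw token amount to 18 decimals (e.g. USDC's 6 -> 10**12).
    return 10 ** (18 - token_decimals)


def upscale(raw_amount: int, token_decimals: int, rate: int = WAD) -> int:
    # Pool math operates on normalized balances: the raw amount, upscaled
    # to 18 decimals, then multiplied by the Rate Provider's 18-decimal rate.
    return raw_amount * scaling_factor(token_decimals) * rate // WAD


# 100 USDC (6 decimals) at rate 1.0 -> 100e18 internally
assert upscale(100_000_000, 6) == 100 * WAD
# 1.0 of an 18-decimal token at rate 2.0 -> 2e18 internally
assert upscale(10**18, 18, rate=2 * WAD) == 2 * WAD
```

The point for integrators is that both factors propagate through the pool math, so values read directly from token contracts must be normalized the same way before being compared against pool internals.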

Example Contracts

Goals: Provide third-party developers with ready-made templates so they can build from 0 to 1 in 1/10th the time.

Description: Beyond written documentation, another pillar of a good developer experience is semi-production-ready examples. These offer developers a chance to examine hypothetical real-world code while leaving the Integrations Team and the Balancer DAO unencumbered by the need for audit funding. Such examples can spark inspiration or even serve as templates allowing developers to copy/paste their way to finished products. Here are a few examples that we feel are sorely needed in the short term:

  • Custom pools: In the past, this idea has been expressed as a guide to “create a custom AMM in 10 minutes.” It’s not really possible in 10 minutes, but there is plenty of guidance to be offered by some basic templates in this area. The inheritance structure of “official” Balancer pools can be somewhat confusing and obfuscate the intentions behind each design decision. Developers need to know which pieces are necessary, which will make their lives easier, and which are highly specific to a given use case that doesn’t apply to them. They need help with things like decimal scaling, preminted BPT, owner-only actions, protocol fees, recovery mode, and pool math.
  • BPT as collateral: Partners would benefit from seeing a few basic examples of price oracles for BPT on lending markets. See “Partner Engineering” below.
  • Managed Pool Controllers: Here, the Integrations Team intends to close out work that we’ve already almost finished, rather than starting any new efforts related to Managed Pools. It’s important to note that this is entirely different from the big project we worked on for several months last year; instead, it is just a few simple examples of working Managed Pool Controllers to show partners some different ways that they can go about designing their products and where exactly the dangers are.
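To make the “BPT as collateral” item concrete, one commonly cited approach prices BPT from the pool invariant and trusted external prices rather than from spot reserves, since a swap cannot move the invariant the way it moves spot balances. Below is a simplified Python sketch of this “fair reserves” idea for a two-or-more-token weighted pool; it is illustrative only, and a production oracle must additionally handle rates, decimal scaling, protocol fees, and read-only reentrancy.

```python
from math import prod


def fair_bpt_price(balances, weights, prices, total_supply):
    # Weighted-pool invariant: V = prod(B_i ** w_i), unchanged by swaps
    # (up to fees), so it is far harder to manipulate than spot reserves.
    invariant = prod(b**w for b, w in zip(balances, weights))
    # Fair pool value given trusted prices p_i: V * prod((p_i / w_i) ** w_i).
    fair_value = invariant * prod((p / w) ** w for p, w in zip(prices, weights))
    return fair_value / total_supply


# 50/50 pool holding 100 of each $1 token, 200 BPT outstanding -> $1 per BPT
assert abs(fair_bpt_price([100.0, 100.0], [0.5, 0.5], [1.0, 1.0], 200.0) - 1.0) < 1e-9
```

For a 50/50 pool this reduces to the familiar 2·sqrt(k·p1·p2)/supply formula, which stays correct even if an attacker skews the reserves just before the oracle read.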


Scaffold-Balancer

Goals: Provide third-party developers with a UI-enabled test harness for rapid prototyping.

Description: The BeethovenX team champions the ongoing scaffold-balancer project, which we view as a critical component of the developer experience; however, in working with the project so far, we have observed that there is still work to be done to better generalize the UI component generation to all use cases. We would also like to help contribute many of our example smart contracts listed above so that developers can play with these in the context of the scaffold.

Partners: The BeethovenX team.

Partner Engineering

Beyond our ongoing internal projects, the Integrations Team’s core focus goes to external partner integrations (it’s in the name). These are expected to be fluid, as partners come and go, but here we provide a summary of our current opportunities. Many of these were sourced by the Balancer Maxis, who have provided our partnership insights for the last several months. The Integrations Team does not expect to take on any new projects whose scope extends beyond July 31, unless and until a firmer transition plan is in place.

BPT as Collateral

Goals: Create external demand for BPT, develop business relationships with the lending sector, and improve the visibility of Balancer as a premier AMM platform.

Description: One of the most frequently requested integrations in recent months is “BPT as collateral.” Various money market leaders in DeFi want to enable their users to borrow against liquidity positions in lieu of standard tokens. Support is required to ensure that partners pay close attention to best practices when pricing BPT. There are complexities around understanding decimal scaling, Rate Providers, and composability, as well as protecting against read-only reentrancy.

Partners: There is ongoing work with Aave that was already started by the Integrations Team and needs to be finalized. There are at least two more partners - Interest Protocol and Sturdy Finance - actively integrating today (with support from Integrations), and we anticipate more in the future. Midas has already wrapped up their integration.

Cron Finance

Goals: Showcase novel AMM offerings on top of Balancer to highlight the core mission statement of providing a platform for others to build upon. Generate protocol fees by driving volume through peripheral Balancer pools to rebalance the TWAMM.

Description: Cron Finance recently built a TWAMM on top of Balancer. The product is live, but further support is required. There is ongoing work to establish arbitrage tooling via flash swaps to drive price parity between TWAMM pools and other Balancer pools. The Integrations Team has already provided support with the core TWAMM code review and, more recently, with the UX-enhancing relayer.

Partners: Cron Finance.

B.Protocol Boosted Pool

Goals: Create a liquidity sink for Liquity LUSD on Balancer.

Description: Evaluate the feasibility of a Boosted Pool built around B.Protocol’s Liquity integration. If deemed viable, it will require a token wrapper, which would likely be developed externally but would also require vetting by the Integrations Team and may even require a custom Linear Pool integration, depending on its design.

Partners: B.Protocol.

Small-scope Engineering Services

Goals: Improve Balancer market share by offering opportunities to create new pools or judge novel grant submissions.

Description: In addition to more targeted and longer-term partnership opportunities, the Integrations Team also provides on-demand engineering services to Balancer ecosystem contributors like the Balancer Maxis and Balancer Grants. The scope of this work primarily tends toward code review but also includes deployments and occasional development, especially as it pertains to custom Rate Providers for Composable Stable Pools. One code review from Grants is already underway for a balpy-related project.

Partners: Various.


Hey @rabmarut thanks so much for putting this together. This is a lot closer to what the community has been asking for in terms of deliverables and transparency on what you guys are doing than any other post in the past year.

Do you think these domains are realistic to finish? The terms and definitions are rather broad. From my working experience in an agile setup, it would be great to define clear acceptance criteria / when work is considered done. It would also be nice to see who is doing what and by when. That makes it much more tangible for the community, like:

  • Deliverable: Example contracts for Balancer Product XY
  • Acceptance criteria: GitHub repo with XY code examples ready to use for product Z
  • Responsible dev: YZ
  • FTE: 1
  • Time to completion: XY weeks

All in all I greatly appreciate your more detailed breakdown. I am in full support of you guys continuing your valuable work!


echoing @Xeonus
lack of specified deliverables, objective measurements, criteria, and timelines has been a major issue with all past updates from the integrations team and remains an issue here too.

i do highly suggest revising each item to the bullet points @Xeonus pointed out, only committing to what you can deliver, but standing accountable to that delivery.

integrations team is a valued part of the ecosystem and we’re happy to see them looking to continue their work, albeit with more accountability and better management.


Hi @Xeonus, thanks for your feedback. I’ll try to explain in more depth how I view this post and our internal processes and you can let me know what you think.

We also adhere to an agile methodology. We plan our sprints around pretty traditional two-week blocks, and within these blocks we strive for the level of specificity you outlined. This post, however, is not intended as a sprint plan; it is a broad-strokes roadmap that covers 2.5 months of work. The agile methodology is centered around iterative adaptation, so it is considered futile to plan a quarter’s work with the same granularity as a two-week sprint.

This post (if/when consensus is established with the community around the specific items therein) will serve as our compass, to which we will align each two-week sprint from now through July 31. We will do our best to follow the compass, but sometimes priorities shift, or a highly urgent task (e.g., read-only reentrancy mitigation) must be inserted in the middle of the roadmap. Perhaps a team member falls ill, or one of the roadmap items sprawls beyond its original scope. Agile intentionally accounts for these unknowable possibilities in its structure by optimizing for short durations.

With that being said, if you think any of the items above are too broad to be reasonably evaluated upon completion, I’m happy to iterate with the community until we achieve a more agreeable specification. I’m simply hesitant to get into the weeds as far as who will be completing a given task, or exactly when it will be delivered on the calendar. It depends on too many factors. The goal here is simply to accomplish the full list within the next 2.5 months. We think that’s achievable, and you can judge our output on the other end to decide whether we made a reasonable effort and deserve continued funding as an engineering team.

A logical follow-up question to my comments here might be, “so why don’t you publish the details of each two-week sprint for the community’s scrutiny?” If this is desirable for the DAO, then I think that’s a discussion that should be had for the future, but as long as Orb Collective exists, I intend to honor its existing structure.

Orb is set up like a traditional company (very similar to BLabs); employees implicitly agree to a few conditions when they are onboarded. They understand that they are accountable to company management for the work they produce, but at no point do they agree to be held accountable as individual contributors to the DAO. They don’t even have to be involved in the DAO if they don’t want to be; they are shielded from externalities by management and given every opportunity to simply focus on their individual work. If they fall behind momentarily, the team is here to pick them up; they are not under a public magnifying glass at all times.

So, out of respect for the implicit agreements that are in place with all of Orb’s employees, I don’t feel comfortable exposing individual contributors to scrutiny from the DAO. My team is accountable to me, and I’m accountable to the DAO. We outline this roadmap for the next 2.5 months and provide monthly status updates for the DAO to judge our progress and make adjustments if needed. Internally, we operate on a two-week sprint cadence for maximum flexibility with high granularity, and we strive to maximize focus during those short periods to optimize productivity.

To the best of my knowledge, all of the above also reflects how @nventuro, John (can’t find his forum account), and @gareth run their teams. And, if I had to guess, it’s probably how @danielmk runs his team. I firmly believe that we are making a reasonable effort towards transparency, but I’m also happy to hear any input to the contrary.


Thanks @rabmarut. I think it makes sense to run your team like that from that perspective.

However, the Foundation/OpCo, as the contractor, could/should be the one requesting more detailed information, and we can discuss whether it is appropriate for a veBAL holder to publicly demand a more granular update. From what I understand, our framework is quite ambiguous about whether those demands should come through the Foundation. Quite honestly, if I was a regulatooor, I’d rather see them in a public forum. But I digress.

Nonetheless, if you have the intention of continuing as an SP through Orb, maybe you can raise the legal implications of sunsetting the company and the benefits for the ecosystem of keeping it (from the regulatory perspective).

are you committing to every bullet point provided above? can the DAO judge the performance in due time based on delivery of each item?

The DAO will have to exercise some reasonable judgment regardless of how we define it. If I say yes, we are committing to each and every bullet point above, then it’s possible the DAO will decide that failure to complete just one item is equivalent to total failure. But actually, that’s a much better outcome than constructing a far-too-easy list of items that we will complete in one month and then twiddling our thumbs for the remaining funding period.

So, this list is meant to reflect approximately 2.5 months’ work. Yes, it is my hope that we will complete the entire list, and we should be judged partially on how close we get. But it’s all within reason; just as we have a duty to the DAO to execute the work to the best of our abilities, the DAO has a duty to judge our work fairly and not on arbitrary metrics. If I think this is 2.5 months’ work, then any small degree of variability will result in it taking more or less time. If we end up missing a few items, you’ll have to decide how severely you think we failed.

In other words, the pressure to establish more and more concrete metrics leads to a different pressure to create trivially easy roadmaps to guarantee execution of the metrics. We’re trying to be honest here about how much work we can handle and not take that other path, which is bad for the DAO.


I’m sorry if I’m misinterpreting or reading too much into it, but here and there I’ve been seeing comments like the one above that seem to lean in the direction of imposing what I feel is excessive management overhead onto veBAL holders.

IMHO, the DAO should not be judging the performance of SPs based on a previously agreed upon list of deliverables. That’s what grants are for. And even then, veBAL holders don’t vote on each and every grant - they delegate most of the work to a committee.

The governance process I would like to participate in is one where veBAL holders approve SPs based on their leaders’ reputation/track-record and the SP’s proposed mission and give them autonomy to execute for a fixed amount of time and at a fixed cost.

One might argue there’d be a third step in that process whereby veBAL holders judge SPs based on how they use that autonomy over that time period. But the way I see it, that is really just a sub-step of the approval process if the SP applies for another round, because funding is not retroactive nor conditional on month-to-month governance approval.

Autonomy is a key factor to motivation, and I don’t think Balancer can attract the best minds in the tech industry if we don’t give them that. I appreciate the Integrations Team efforts towards being transparent and further detailing how they plan to accomplish their mission, but more importantly I trust they’ll responsibly make changes to those plans along the way if necessary.


Well said and very much agree. The one thing I would say is that along with this autonomy should come a certain amount of reporting/transparency requirements. It may make sense for the DAO to step in before the end of an approved term/contract if the agreed-upon reporting/transparency is not being delivered.

Transparency, more than anything else, is the thing that sets us apart from CeFi. How we deal with that in terms of SPs doing work/getting budgets is an important thing to think about, but also a sensitive topic that needs time, and is perhaps best not figured out in an open forum with a bunch of anons writing text walls.

I very much agree that demands for retroactive transparency at this time don’t add much; more important is to think about how this is built into future SP agreements. It should be done in a way that is very clear for everyone, finds the right balance between respecting team autonomy and time, and ensures that the community/governance has some way to measure whether service providers are performing.


this actually has nothing to do with autonomy and everything to do with accountability.

no one is telling the integrations team or any other SP what they should be doing. i didn’t write the list, the integrations team did so, presumably and hopefully with their own autonomy.
i’m just asking if this is a list of things the team is willing to commit to or a list of ‘things we’d like to maybe spend some time on’. if it’s the latter, it’s fine – not like the DAO is voting on this – but it should be presented as such.

i’m sorry if this is too harsh for you to hear sir: performance is not measured by input, it’s measured by output.
now the question is, do we have the culture of output, delivery and impact or the culture of best effort and participation trophies?

i too want to see that world, but if you’ve been keeping up with the forum, unfortunately the mood isn’t too favorable for suspending accountability, and the said reputation has already been banked on. i for one do not think the team’s reputation is tarnished, but that doesn’t mean we can’t hold each other accountable just because one of us has a good reputation.

now, i don’t think it’s too harsh to simply ask: is this a laundry list of things the team would spend time on regardless of outcome, or is this autonomously proposed list something the team is committing to and willing to stand accountable for?

i’m not undermining anyones autonomy – i didn’t write the list.
i’m not judging the output – i didn’t say this was too little; cut it in third for all i care.
but i will ask if the team is willing to take accountability for their autonomy.


appreciate your willingness to stand by the proposal.

by the inherent nature of the DAO, i cannot speak on its behalf but only as a part of it. i can assure you that the community is not here to sabotage your work or undermine your performance with arbitrary metrics, unfair evaluations, or nitpicking of small variations.

the community supports you, roots for you, and wants you to succeed. i for one only ask for accountability and measurement of output rather than input, which you’ve so graciously been willing to accept and stand by.

this is not just important in the context of the unfortunate quarrels happening now; it is important for the culture of the DAO and the community we are building. thank you for being willing to stand accountable and for not mixing it up with micromanagement or an imposition on your autonomy, which it certainly has not been.