RAF1 Round Analysis: Project Distribution, Voting Behavior, and Quadratic Voting

This report analyzes the results of Obol’s first Retroactive Funding (RAF) round, leveraging publicly available data. Key findings address project categorization biases, voter engagement, and quadratic voting (QV) mechanics. Recommendations aim to refine future rounds for alignment with Obol’s strategic priorities.

1. Project Categorization & Ambiguity

Initial Challenges

Projects were grouped into overlapping categories (e.g., Infra & Tooling vs. DVT Integrations), creating ambiguity. For instance:

  • Dappnode (Infra & Tooling) and Stereum (DVT Integrations) both provide the same service, yet were placed in different categories.
  • restake.watch (Security & Monitoring) and Guide to Setting Up a Distributed Lido… (DVT Integrations) would better fit Community & Education.

Revised Categorization

To reduce ambiguity, categories were consolidated into:

  1. Infra & Tooling (Security, DVT, Infrastructure)
  2. Community & Education (Awareness, Advocacy)
  3. Research & Development

Distribution Insights

  • Infra & Tooling received 67% of total votes despite having roughly the same number of projects as Community & Education.
  • This suggests a voter preference bias toward technical infrastructure, potentially crowding out community- or research-focused initiatives.

2. Funding Distribution & Voting Behavior

Top Projects by Allocation

  • The top 7 projects (all Infra & Tooling) received more than a third of total votes.
  • Only 2 out of 15 top-funded projects were DVT/Obol-specific.

While there’s value in investing in the broader Operator Ecosystem, we must recognize that resources are finite. Therefore, the Collective might consider optimizing fund allocation to ensure that Obol-specific projects and strategic priorities receive the necessary support.

Long-Tail Distribution

  • 20% of projects received <20K votes.
  • 14 projects (<10% of total) secured less than 10% of half of the votes cast.

These are signs that the eligibility criteria were likely too loose: about 25% of projects should not have made it into the round.

Delegate Engagement

  • Average votes per delegate: 10 projects.
  • Median votes per delegate: 9.
  • 37.25% of delegates (19/51) voted for ≤5 projects.

Most delegates appear not to have spent much time reviewing the projects. This suggests potential apathy, time constraints, or a lack of clear voting guidelines.

In all fairness, timelines for this round were tight.

3. Quadratic Voting (QV) Effectiveness

This first round used Quadratic Voting, a system where the number of people voting matters more than the raw vote totals, offering some protection for less popular opinions, as described in Gitcoin’s blog post.

QV and QF are sometimes used interchangeably, but it’s important to note that they aren’t the same. The algorithms are almost identical, but because QF distributes actual money from a matching pool, it must account for additional factors.
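To make the distinction concrete, here is a minimal sketch (illustrative only, not the exact formulas used in this round): under QV, a voter spending c credits contributes √c effective votes, while under QF a project’s matching is proportional to the square of the sum of square roots of individual contributions.

```python
import math

def qv_tally(credit_spend):
    """Quadratic voting: a voter spending c credits on a project
    contributes sqrt(c) effective votes (cost of n votes = n^2)."""
    return sum(math.sqrt(c) for c in credit_spend)

def qf_match(contributions):
    """Quadratic funding: a project's matched amount is proportional
    to the square of the sum of square roots of contributions."""
    return sum(math.sqrt(c) for c in contributions) ** 2

# Under both schemes, many small voters outweigh one whale:
whale = [100]          # one voter, 100 credits
crowd = [1] * 100      # 100 voters, 1 credit each
assert qv_tally(crowd) > qv_tally(whale)   # 100 > 10
assert qf_match(crowd) > qf_match(whale)   # 10000 > 100
```

Both mechanisms reward breadth of support over depth of individual spending, which is the property the round was relying on.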

QV vs. Linear Allocation

  • Vote distribution closely aligns with linear allocation (purple line), indicating minimal QV impact.
  • Small QV “bump” for projects with 0–100K votes (mostly Community & Education).
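The “bump” can be reproduced with a toy example. The vote totals below are hypothetical, and the project-level square-root transform is an assumption about the simplistic formula described later in this section:

```python
import math

# Hypothetical vote totals for illustration (not the real round data)
votes = {"infra_a": 400_000, "infra_b": 250_000,
         "edu_a": 60_000, "edu_b": 40_000}

def shares(weights):
    """Normalize weights into fractional allocation shares."""
    total = sum(weights.values())
    return {k: v / total for k, v in weights.items()}

linear = shares(votes)
# Simplistic project-level QV transform: compress totals with a square root
quadratic = shares({k: math.sqrt(v) for k, v in votes.items()})

# The sqrt compression lifts small projects' share (the observed "bump")
assert quadratic["edu_b"] > linear["edu_b"]
assert quadratic["infra_a"] < linear["infra_a"]
```

Because the compression only reshuffles a few percentage points between large and small projects, the overall ranking stays close to linear, which matches what the chart shows.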

Limitations of Implementation

  • It appears a very simplistic QV formula was used, lacking safeguards (e.g., sybil resistance, minimum thresholds).
  • A simple Connection-Oriented Cluster Matching (COCM) algorithm was tested with the dataset but deemed ineffective due to sparse project clusters. For a larger round, COCM should be considered.
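As a rough illustration of why cluster matching matters, the sketch below implements a heavily simplified cluster-based match. This is not the full COCM algorithm (which also weighs connections between clusters); it only shows the core discounting idea:

```python
import math

def cluster_match(cluster_contribs):
    """Simplified cluster-based matching sketch (not full COCM):
    contributions are square-rooted per cluster, so voters who move
    together inside one cluster count roughly as a single larger
    voter rather than as many independent ones."""
    return sum(math.sqrt(sum(c)) for c in cluster_contribs.values()) ** 2

# Five coordinated 1-credit voters sharing one cluster...
same_cluster = {"cluster_a": [1, 1, 1, 1, 1]}
# ...versus five independent voters in separate clusters
independent = {f"c{i}": [1] for i in range(5)}
assert cluster_match(independent) > cluster_match(same_cluster)  # 25 > 5
```

With very few clusters (as in this round), nearly every voter ends up in their own cluster and the discount barely triggers, which is consistent with the test being deemed ineffective here.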

4. Key Takeaways & Actionable Recommendations

  • Eligibility Criteria:

    • Host category-specific rounds (e.g., DVT Innovation or Community Growth).
    • Refine evaluation metrics and the target “project profile” (e.g., projects that are already well funded cannot apply).
    • Implement a pre-screening committee to filter low-impact proposals.
  • Delegate Engagement:

    • Publish clear round-specific priorities in line with SQUAD goals to guide voters.
    • Extend timelines so participants have more time to prepare, apply, and vote.
  • Quadratic Voting Enhancements:

    • Test minimum vote thresholds to automatically exclude low-impact projects from the distribution.
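A minimum-threshold filter could be as simple as the following sketch; the 1% cutoff and the renormalization rule are hypothetical choices, not a proposal from the round data:

```python
def apply_min_threshold(votes, threshold_pct=1.0):
    """Drop projects below threshold_pct of total votes, then
    renormalize the remaining distribution (hypothetical rule)."""
    total = sum(votes.values())
    kept = {k: v for k, v in votes.items()
            if v / total * 100 >= threshold_pct}
    kept_total = sum(kept.values())
    return {k: v / kept_total for k, v in kept.items()}

votes = {"a": 5000, "b": 3000, "c": 30}  # "c" is below 1% of the total
alloc = apply_min_threshold(votes)
assert "c" not in alloc                      # filtered out
assert abs(sum(alloc.values()) - 1.0) < 1e-9  # shares still sum to 1
```

The cutoff level itself would need calibration against real round data so it trims the long tail without punishing small but legitimate projects.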

This report was made with ʚ♡ɞ by

       _     _        
__   _(_)___| |_ __ _ 
\ \ / / / __| __/ _` |
 \ V /| \__ \ || (_| |
  \_/ |_|___/\__\__,_|
 * · + vista.wtf + · *

P.S.: Yes, this is the soft announcement for the team that will work with me on this delegation and more things.


@Vista.wtf (Welcome to the Collective!), @enti

Thanks so much for putting this together—really appreciate the time, thoughtfulness, and structure of this analysis. It’s clear how much effort went into not just looking at the data, but also drawing out meaningful insights and actionable recommendations. This is exactly the kind of reflection that strengthens future rounds.

A few thoughts from my side:

Categorization & Structure

Agree that several projects could reasonably fall into multiple categories—or might’ve fit better elsewhere. The categories in RAF1 were primarily used for visual organization and surface-level context, not to drive voting logic. Delegates ultimately showed a strong preference for Infra & Tooling, which makes sense to me given the tangible impact infrastructure projects tend to have on the operator ecosystem.

That said, the ambiguity clearly created some confusion. For future rounds, a more structured approach could help—maybe broad, high-level categories supported by tags or thematic tiers. Something closer to how SQUAD Goals are structured might give participants a clearer mental model, and help align projects and voters more effectively.

Funding Distribution & Strategic Alignment

One of the standout takeaways here is how few top-funded projects were directly related to DVT or Obol. While it’s valuable to support the wider ecosystem, it’s also important to think about how to ensure protocol-aligned initiatives don’t get crowded out—especially when funds are limited.

Delegate Behavior

The delegate engagement data was revealing. Seeing that around 40% of voters selected only 5 or fewer projects highlights an opportunity to better support delegates with clearer information, context, and expectations.

Beyond timeline constraints, this might also speak to the need for:

  • Delegate briefings or round summaries (quick videos, written primers, etc.)
  • Evaluation frameworks tied to SQUAD goals
  • Optional working groups or thematic voting clusters to help distribute attention

Improving the delegate experience is key to surfacing stronger signals across the board. This has to be a priority, always!

Quadratic Voting (QV) Reflections

It’s been helpful to see how it played out. The fact that outcomes tracked closely with a linear distribution suggests that QV on its own doesn’t shift dynamics meaningfully unless paired with additional safeguards and context.

This connects nicely to Vitalik’s recent PUB-OS post, where he notes that legitimacy in public goods systems often comes less from mechanisms alone and more from clear, shared purpose. Without strong coordination norms or identity primitives, QV risks being a thin layer of math on top of the same outcomes.

Some things worth testing in future rounds:

  • Minimum vote thresholds to filter low-impact projects
  • Sybil resistance measures or identity-weighted voting
  • Blended models (QV + ranked choice or role-weighted participation)

Big Picture

This kind of deep-dive community analysis is exactly what makes mechanisms like RAF useful—not just as a funding tool, but as a coordination experiment. Really appreciate the clarity and rigor you brought to this write-up.

Looking forward to building on this for RAF2. It will be great to develop the next iteration with these lessons in mind!
