[Request for Feedback] Obol Governance Analytics & Delegate Scoring Parameter

Hi everyone,

I’m Varit (v3naru.eth) from Curia Lab. Over the past few weeks we’ve been building a governance analytics platform for Obol to give delegates, token-holders, and contributors clearer, near real-time insight into governance dynamics.

:seedling: Why we’re building this

  • Advance Obol’s 2025 SQUAD Goals
    The Collective’s roadmap calls for sustained delegate participation, transparent governance, and a clear incentives system. I believe reliable analytics are the foundation for measuring those targets and keeping incentives fair.

  • Empower data-driven decision-making
    With clear turnout, impact, and governance metrics, delegates and token-holders can evaluate initiatives (and each other’s contributions) on substance rather than speculation.

  • Pro bono, community-first
    Curia is building this with no funding request. We simply aim to contribute a transparent analytics layer that will benefit the entire Obol ecosystem, and we’d love the community’s fingerprints on the final design.

We’re sharing this work-in-progress early because we’d like your input before shipping the first MVP at the beginning of June 2025. Feedback is especially welcome on two fronts:

  1. Delegate parameters – definitions of “active,” “inactive,” and “ghost” delegates, plus the delegate scoring model.
  2. UX/UI + Metrics – layout, usability, and any metrics you feel are missing.

:magnifying_glass_tilted_left: Draft Delegate Parameters

Active / Inactive / Ghost
=========================

• Look-back window: 84 days ≈ 4 governance cycles  
• Voting-turnout threshold: 65 %

Active   → ≥ 65 % participation  
Inactive → < 65 % participation  
Ghost    → delegated voting power but 0 votes cast
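
As a minimal sketch of the rule above (the function and field names are our own illustration, not the platform’s actual API), the classification could look like this:

```python
def classify_delegate(votes_cast: int, eligible_votes: int) -> str:
    """Classify a delegate over the 84-day look-back window.

    votes_cast / eligible_votes are hypothetical inputs: how many
    proposals the delegate voted on vs. how many they could have
    voted on. Assumes the delegate holds delegated voting power.
    """
    if votes_cast == 0:
        return "Ghost"  # delegated voting power but 0 votes cast
    participation = votes_cast / eligible_votes
    return "Active" if participation >= 0.65 else "Inactive"
```

For example, a delegate who voted on 7 of 10 eligible proposals (70%) would be Active, while 6 of 10 (60%) would fall just under the threshold and be Inactive.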

Forum Score

A composite index that recognises off-chain discussion quality and presence.

Proposal Score   = (Prop_Initiated  * 0.5)
                 + (Prop_Discussed  * 0.3)
                 + (Prop_Like_Rec   * 0.1)

Engagement Score = (User_Topic_Int  * 0.7)
                 + (User_Post_Count * 0.4)
                 + (User_Like_Rec   * 0.2)

Activeness Score = (User_Day_Visited * 0.07)
                 + (User_Time_Read   * 0.06)

Forum Score = [ (Max_Score * 1)
              + (Proposal Score * 1)
              + (Engagement Score * 0.5)
              + (Activeness Score * 0.5) ]
              / (Σ Weights × Max_Score)
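
To make the composite concrete, here is a sketch of how these formulas combine (our reading: the Σ Weights × Max_Score denominator normalizes the entire sum; all parameter names are placeholder translations of the draft metrics):

```python
def forum_score(prop_initiated, prop_discussed, prop_like_rec,
                user_topic_int, user_post_count, user_like_rec,
                user_day_visited, user_time_read, max_score=100):
    """Composite Forum Score per the draft formulas (illustrative only)."""
    proposal = prop_initiated * 0.5 + prop_discussed * 0.3 + prop_like_rec * 0.1
    engagement = user_topic_int * 0.7 + user_post_count * 0.4 + user_like_rec * 0.2
    activeness = user_day_visited * 0.07 + user_time_read * 0.06
    # Component weights from the draft: Max_Score 1, Proposal 1,
    # Engagement 0.5, Activeness 0.5 -- summed, then normalized.
    sum_weights = 1 + 1 + 0.5 + 0.5
    numerator = (max_score * 1 + proposal * 1
                 + engagement * 0.5 + activeness * 0.5)
    return numerator / (sum_weights * max_score)
```

Note that under this reading a delegate with zero activity still scores Max_Score / (Σ Weights × Max_Score), i.e. a floor of 1/3, which may be worth revisiting.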

Delegate Score

Delegate Score = (Number_of_Votes_Score    * 0.05)
               + (Participation_Rate × 100 * 0.60)
               + (Voting_Impact_Score      * 0.10)
               + (Voting_Threshold_Score   * 0.05)
               + (Forum_Score              * 0.20)
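
The weights above sum to 1.00, so the result stays on a 0–100 scale if each component is 0–100. A minimal sketch, assuming Participation_Rate is a 0–1 fraction (hence the × 100) and the other inputs are 0–100 scores:

```python
def delegate_score(num_votes_score, participation_rate,
                   voting_impact_score, voting_threshold_score, forum_score):
    """Weighted Delegate Score composite; weights sum to 1.00.

    participation_rate is a 0-1 fraction (hence the x100); the other
    inputs are assumed to already be on a 0-100 scale.
    """
    return (num_votes_score          * 0.05
            + participation_rate * 100 * 0.60
            + voting_impact_score    * 0.10
            + voting_threshold_score * 0.05
            + forum_score            * 0.20)
```

A delegate at the maximum in every component would score 100; one who only votes half the time, with nothing else, would score 30.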

A full metric glossary is available here → Notion doc


Interactive Prototype

Explore the Figma demo here: link


:raising_hands: How you can help

  • Thresholds – Are the 65 % turnout line / 84-day window sensible?
  • Delegate Parameter/Score Weighting – Which behaviors deserve more or less influence?
  • Missing metrics – What metrics would be useful?

Feel free to leave comments or feedback here. We’ll iterate continuously and share an updated spec before the MVP goes live early next month.

Thanks in advance for your time and insight!

Hey, I really like the direction of this dashboard — great work.

I’d suggest adding a section to highlight the most active delegates over the last 30 and 90 days, ranked exclusively by Forum Score. This could be especially valuable for new or upcoming delegates looking for role models or examples of impactful engagement.

A couple of thoughts on the Engagement Score formula:

  • In my view, contributing to existing discussions adds more value to governance than just starting new threads. The depth and consistency of replies could better reflect meaningful engagement.
  • Also, what about increasing the influence of likes received specifically from other delegates? If that’s possible to filter, it could help reduce noise and avoid manipulation from casual users or bots.
  • I’d explore the idea of normalizing variables like User_Topic_Int and Prop_Initiated by the total number of topics or proposals created during the cycle. For example:
Engagement Score = (User_Topic_Int / Total_Topics * 70)  # simplifying 100 * 0.7
                 + (User_Post_Count * 0.5) 
                 + (User_Like_Rec * 0.3)

A similar logic could apply to Prop_Initiated and Prop_Discussed, making the score more reflective of relative contribution rather than just raw totals.

Do you have any scenario simulations with current values? For example, what does it take for n delegates to achieve 100%, and how time-consuming would it be for a delegate to try to reach 100%?

Thanks for the RFF Varit,

Cross-posting our feedback on Tally’s proposal here.

In addition, to answer your question about the 84-day window: we think this parameter is too restrictive and discourages new participation in the ecosystem, as a new delegate would need close to 3 months of flawless activity to reach their maximum score. We think 2 full voting cycles (~42 days) is a more reasonable timeframe to assess the commitment of new delegates until they reach their max score.

Hi @Ariiellus @Jose_StableLab!

Thanks a ton for the thoughtful feedback! Below I’ve quoted each of your suggestions and shared how we’re thinking about them.

  • We’re already building exactly that. A rolling X-day leaderboard (based on Forum Score or other attributes) is in the works and should go live within the next 1-2 weeks!
  • We see the merit in emphasizing thoughtful replies. How much extra weight do you think replies should carry versus new-thread creation?
  • Great idea. To filter likes by source, we’d need either (a) users verifying and linking their delegate wallet to their forum profile (we plan to open a thread for this), or (b) delegate accounts explicitly tagged in the forum (similar to Scroll’s setup). Once either is in place, we can enable delegate-only like-weighting right away.
  • Right now, we use a percentile method to normalize raw activity metrics (e.g., Prop_Initiated, User_Topic_Int, User_Day_Visited, etc.): each metric is converted into a percentile score to ensure fair comparisons across users. Specifically, we use the inclusive cumulative percentile method, meaning a delegate who ranks at the top in a given category will receive a score of 100.
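
For clarity, here is a minimal sketch of the inclusive cumulative percentile method described above (our own illustration, not the platform’s actual code): each delegate’s score is the share of delegates whose raw value is at or below theirs, scaled to 0–100.

```python
def percentile_scores(values):
    """Inclusive cumulative percentile: for each raw value, the fraction
    of values <= it, scaled to 0-100. The top-ranked value scores 100."""
    n = len(values)
    return [100 * sum(1 for w in values if w <= v) / n for v in values]
```

With three delegates at raw values [10, 20, 30], the top delegate scores exactly 100, and ties all receive the same (inclusive) score.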

Thanks for reposting the detailed feedback on Tally’s delegate-compensation proposal. We’re on the same page that the original 84-day look-back is a bit long; three months of perfect attendance is a high bar for newcomers.

In Tally’s latest revision the Delegate Reputation Score (DRS) look-back window has already been shortened to 63 days (three voting cycles). That change should make the system more responsive while still rewarding sustained participation.

Appreciate the constructive dialogue!