I’m Varit (v3naru.eth) from Curia Lab. Over the past few weeks we’ve been working on a governance analytics platform for Obol to give delegates, token-holders, and contributors clearer, near real-time insight into Obol governance dynamics.
Why we’re building this
Advance Obol’s 2025 SQUAD Goals
The Collective’s roadmap calls for sustained delegate participation, transparent governance, and a clear incentives system. I believe reliable analytics are the foundation for measuring those targets and keeping incentives fair.
Empower data-driven decision-making
With clear turnout, impact, and governance metrics, delegates and token-holders can evaluate initiatives (and each other’s contributions) on substance rather than speculation.
Pro bono, community-first
Curia is building this with no funding request. We simply aim to contribute a transparent analytics layer to the Obol Collective that will benefit the entire ecosystem, and we’d love the community’s fingerprints on the final design.
We’re sharing this work-in-progress early because we’d like your input before shipping the first MVP at the beginning of June 2025. Feedback is especially welcome on two fronts:
Delegate parameters – definitions of “active,” “inactive,” and “ghost” delegates, plus the delegate scoring model.
UX/UI + Metrics – layout, usability, and any metrics you feel are missing.
Draft Delegate Parameters
Active / Inactive / Ghost
=========================
• Look-back window: 84 days ≈ 4 governance cycles
• Voting-turnout threshold: 65%
Active → ≥ 65% participation
Inactive → < 65% participation
Ghost → delegated voting power but 0 votes cast
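As a minimal sketch of how these draft rules could be applied (assuming per-delegate vote counts are already tallied over the look-back window; the function and variable names below are illustrative, not the actual pipeline):

```python
# Sketch of the draft Active / Inactive / Ghost classification.
# votes_cast and eligible_votes are assumed to be counted over the
# 84-day look-back window (~4 governance cycles); names are illustrative.

LOOKBACK_DAYS = 84          # ~4 governance cycles
TURNOUT_THRESHOLD = 0.65    # 65% participation cut-off

def classify_delegate(votes_cast: int, eligible_votes: int, has_delegation: bool) -> str:
    """Return 'active', 'inactive', or 'ghost' for one delegate."""
    if has_delegation and votes_cast == 0:
        return "ghost"       # delegated voting power but 0 votes cast
    if eligible_votes == 0:
        return "inactive"    # nothing to vote on in the window
    turnout = votes_cast / eligible_votes
    return "active" if turnout >= TURNOUT_THRESHOLD else "inactive"

# Example: 5 of 6 eligible votes ≈ 83% turnout -> "active"
print(classify_delegate(votes_cast=5, eligible_votes=6, has_delegation=True))
```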
Forum Score
A composite index that recognises off-chain discussion quality and presence.
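As a purely illustrative sketch (the components and weights below are placeholders, not our final formula), a composite of this kind might combine normalized forum signals like so:

```python
# Hypothetical sketch of a composite Forum Score: a weighted sum of
# normalized (0-100) forum activity components. Component names and
# weights are illustrative only.

FORUM_WEIGHTS = {
    "topics_created": 0.20,
    "replies_posted": 0.35,
    "likes_received": 0.20,
    "days_visited":   0.25,
}

def forum_score(components: dict[str, float]) -> float:
    """components maps each metric name to a 0-100 normalized value."""
    return sum(FORUM_WEIGHTS[name] * components.get(name, 0.0)
               for name in FORUM_WEIGHTS)

print(forum_score({"topics_created": 40, "replies_posted": 90,
                   "likes_received": 70, "days_visited": 100}))  # -> 78.5
```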
Hey, I really like the direction of this dashboard — great work.
I’d suggest adding a section to highlight the most active delegates over the last 30 and 90 days, ranked exclusively by Forum Score. This could be especially valuable for new or upcoming delegates looking for role models or examples of impactful engagement.
A couple of thoughts on the Engagement Score formula:
In my view, contributing to existing discussions adds more value to governance than just starting new threads. The depth and consistency of replies could better reflect meaningful engagement.
Also, what about increasing the influence of likes received specifically from other delegates? If that’s possible to filter, it could help reduce noise and avoid manipulation from casual users or bots.
I’d explore the idea of normalizing variables like User_Topic_Int and Prop_Initiated by the total number of topics or proposals created during the cycle, e.g. dividing User_Topic_Int by the total topics created in that cycle (see the sketch below).
A similar logic could apply to Prop_Initiated and Prop_Discussed, making the score more reflective of relative contribution rather than just raw totals.
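As a rough sketch of what I mean (names like total_topics are just placeholders for the per-cycle counts):

```python
# Sketch of the suggested per-cycle normalization: divide each raw count
# by the total number of topics/proposals created in that voting cycle.
# Variable names are hypothetical stand-ins.

def normalize(raw_count: int, cycle_total: int) -> float:
    """Share of the cycle's activity attributable to one delegate."""
    return raw_count / cycle_total if cycle_total else 0.0

user_topic_int_norm = normalize(raw_count=3, cycle_total=12)   # 0.25 of topics engaged
prop_initiated_norm = normalize(raw_count=1, cycle_total=4)    # 0.25 of proposals initiated
```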
Do you have any scenario simulations with current values? (For example, what would it take for n delegates to achieve 100%? How time-consuming would it be for a delegate to try to achieve 100%?)
Cross-posting our feedback on Tally’s proposal here.
In addition, to answer your question about the 84-day window: we think this parameter is too restrictive and discourages new participation in the ecosystem, since a new delegate would need close to three months of flawless activity to reach their maximum score. We think two full voting cycles (~42 days) is a more reasonable timeframe for assessing the commitment of new delegates before they reach their max score.
Thanks a ton for the thoughtful feedback! Below I’ve quoted each of your suggestions and shared how we’re thinking about them.
We’re already building exactly that. A rolling X-day leaderboard (based on Forum Score or other attributes) is in the works and should be live within the next 1-2 weeks!
We see the merit in emphasizing thoughtful replies. How much extra weight do you think replies should carry versus new-thread creation?
Great idea. To filter likes by source, we’d need either (a) users to verify and link their delegate wallet to their forum profile (we plan to open a thread for that) or (b) delegate accounts explicitly tagged on the forum (similar to Scroll’s setup). With either in place, we can enable delegate-only like-weighting right away.
Right now, we use the percentile method to normalize raw activity metrics (e.g., Prop_Initiated, User_Topic_Int, User_Day_Visited). Each metric is converted into a percentile score to ensure fair comparisons across users. Specifically, we use the inclusive cumulative percentile method, meaning a delegate who ranks at the top in a given category receives a score of 100.
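For illustration, here is a minimal pure-Python sketch of the inclusive cumulative percentile idea (not our production code):

```python
# Inclusive cumulative percentile normalization: each delegate's percentile
# is the share of delegates whose raw value is less than or equal to theirs,
# so the top-ranked delegate in a category scores 100.

def inclusive_percentiles(values: list[float]) -> list[float]:
    n = len(values)
    return [100.0 * sum(v <= x for v in values) / n for x in values]

prop_initiated = [0, 1, 1, 3, 5]          # raw counts for five delegates
print(inclusive_percentiles(prop_initiated))
# -> [20.0, 60.0, 60.0, 80.0, 100.0]
```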
Thanks for reposting the detailed feedback on Tally’s delegate-compensation proposal. We’re on the same page about the original 84-day look-back being a bit long; three months of perfect attendance is a high bar for newcomers.
In Tally’s latest revision the Delegate Reputation Score (DRS) look-back window has already been shortened to 63 days (three voting cycles). That change should make the system more responsive while still rewarding sustained participation.