Community-Based Evaluation

We hesitated to adopt the proposed open community model, fearing dozens of CRT complaints and that the Sponsor team would be swamped. Ironically, that’s exactly what happened.

And so I am once again proposing a model proven in real communities for millennia: collective observation, transparency, and shared responsibility - enhancing fairness, engagement, and accountability.

Ancient communities, whether organized by location or by activity, constantly observed one another. They knew their members so well that they could judge someone’s condition or effort from a distance. Permanent members behaved properly, meeting the requirements essential for the community’s survival. Honesty mattered - not only out of care for the group, but because moral outrage would not allow some to work hard while others did little yet claimed an equal share. This single moral principle was enough.

Here are three things we lack today.

1. Openness:
For example, if someone doesn’t do their shift properly - say, washes the dishes badly - the whole community suffers. And who washed the dishes today? We don’t know; it’s “confidential.” Ridiculous. A community of spies, really.

You’ve often said that openness breeds hostility - but I completely disagree. It’s the lack of transparency that fuels hostility. When you don’t know who gave you a low score in the dark, you invent stories: maybe they’re crazy, malicious, or something else - and you want them out. Meanwhile, the truth could be simple: a post got deleted by a service, and no one is really at fault. But without openness, the truth never surfaces - or surfaces too late.

If you do know who evaluated you, and you can openly discuss it, ask for justification, and hear the community’s feedback - the problem vanishes. Everyone understands who thought what, what needs to be clarified, or what should be improved to prevent future issues. Even the evaluator who gave a low score gets a chance to calmly reconsider, and maybe admit: “I overreacted” or “I missed the links.”

Openness forces people to take responsibility for their actions, words, and scores. It builds morale, unites the team, and discourages freeloading.

2. Community-wide decision-making

Currently, evaluations are conducted by 2–3 people, and in case of conflict - 5 people. This means that a person’s work is seen by only 2–3 (5 in conflicts) members of the community?! That is not objective.

But the working model assumes that decisions regarding those suspected of misconduct are made by the entire community.
Yes, we have rotation, but even if judges are randomly chosen, objectivity is still not guaranteed. Think about how you would want to make important decisions in your own life - you’d want to examine the problem multiple times and from different perspectives. Our communities used to assess the actions of wrongdoers or suspects the same way - everyone looked from different angles. Some saw one thing, others saw another.

Maybe 2 people say: “You’re bad, leave!” (score 1), while 3 say: “You need to significantly improve” (score 2). But there could be a dozen others who didn’t participate in CRT or the expert evaluation that day, and they might say: “Wait! They tried their best but failed. I’ve had cases where my links were deleted by that same service without my consent too… and so on” (for example).

In our current system, 3 people - an expert evaluation - is enough to justify a score if there are no objections. But in an open community model, objections either don’t arise or dissipate naturally in discussion, where complainants get answers to their questions - if the scores are fair.

  • We could create a Discord channel for score disputes, where disagreements would resolve themselves because people would find answers to their questions.

No one will sound the alarm over every little issue and demand a full community meeting. But if objections remain even after explanations are heard in the designated channel, then something went wrong. In that case, a random group of 5 anonymous CRT members is not the solution. The real solution is a full assembly - a general vote. We know how to run votes, and with the voting bot it would be a real pleasure. But let’s not get into implementation yet.
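
The general-vote step could be sketched as a simple tally. This is a hypothetical illustration only - the quorum threshold, the strict-majority rule, and all names here are my assumptions, not an agreed design:

```python
from collections import Counter

def tally_vote(votes: dict[str, str], total_members: int, quorum: float = 0.5) -> str:
    """Tally a general-assembly vote. `votes` maps member -> chosen option.

    Assumed rules: the vote is valid only if turnout meets the quorum,
    and the winning option needs a strict majority of the votes cast.
    """
    if len(votes) < quorum * total_members:
        return "no quorum"
    counts = Counter(votes.values())
    option, n = counts.most_common(1)[0]
    if n * 2 > len(votes):  # strict majority of votes actually cast
        return option
    return "no majority"

# Example: 6 of 10 members vote; 4 support keeping the disputed score.
ballots = {f"member{i}": "keep score" for i in range(4)}
ballots |= {"member4": "revise score", "member5": "revise score"}
print(tally_vote(ballots, total_members=10))  # -> keep score
```

A voting bot would only need to collect the ballots and run something like this tally; the rules themselves stay a community decision.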

3. Automation

Relying on automation as it is designed today means relying on the judgment of 2–3 people. Building automation on top of community-wide voting is a completely different approach.

Thank you.

I have some ideas - genuinely my own - which, it turns out, have already been used in some DAOs. As I suggest and practice myself, I first tried to invent them and only then checked how others had done it.

The current circumstances, as I see them, open up new models of interaction between ambassadors that could yield even more interesting results in the future. I will outline these ideas for you, still in a raw and very compact form.

1: Participation without obligation

  • Ambassadors are not here for payouts but because they share the project’s vision.
  • The main thing is selection: find those who want to build and support the project.
  • No strict rules, no fear of “being kicked out,” because there is nothing to punish with.
  • Studies(1) show that adding small monetary rewards can undermine volunteers’ intrinsic motivation - when volunteers were paid, they actually contributed less.

2: Symbolic rewards

  • Rewards remain and the distribution system should stay as is,
    but I will describe the reward model (“Base + Recognition”) in the next message.
  • We can add more symbols: badges, roles, voices in decision-making.
  • The point is recognition for merits, not direct financial payout.

3: Peer evaluation without punishments

  • Peer review works only to highlight the best contributions.
  • Symbolic rewards are not a “salary,” but tokens of respect.
  • If you didn’t review or weren’t reviewed, you simply miss the chance to be noticed.
  • Rewards can be given not only for work but also for quality feedback.
  • In the old (current) system, Google Sheets scores look like a “judgment,” with risks of conflict.
  • In the new system, recognition is natural and essentially conflict-free.

4: Experiment with Peer-to-Peer Recognition Tokens

  • Each ambassador gets a small monthly pool of recognition tokens.
  • Tokens can be given to colleagues for valuable contributions.
  • Rules: you cannot give tokens to yourself, only to others.
  • This eliminates “cheating”: no tokens = nothing noticeable done.
  • Tokens are not “judgments,” but gestures of gratitude.
  • Not receiving tokens = neutral, not a penalty.
  • Tokens can also reward feedback quality.
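
As a sketch of how the rules above could be enforced, here is a minimal model. The pool size, names, and interface are assumptions for illustration, not a proposed implementation:

```python
class RecognitionPool:
    """Monthly pool of peer-to-peer recognition tokens (all numbers are placeholders)."""

    def __init__(self, members: list[str], monthly_pool: int = 10):
        self.balance = {m: monthly_pool for m in members}   # tokens left to give this month
        self.received = {m: 0 for m in members}             # tokens received from peers

    def give(self, giver: str, receiver: str, amount: int = 1) -> None:
        if giver == receiver:
            raise ValueError("cannot give tokens to yourself")  # the anti-cheating rule
        if self.balance[giver] < amount:
            raise ValueError("monthly pool exhausted")
        self.balance[giver] -= amount
        self.received[receiver] += amount  # final: gifts cannot be appealed or revoked

# Example: Alice thanks Bob for a fix and Carol for quality feedback.
pool = RecognitionPool(["alice", "bob", "carol"])
pool.give("alice", "bob", 2)
pool.give("alice", "carol", 1)
print(pool.received)  # -> {'alice': 0, 'bob': 2, 'carol': 1}
```

Not receiving tokens simply leaves a member’s count at zero - neutral, not a penalty, exactly as described above.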

5: Advantages of this system

  • The Conflict Resolution Team is eliminated, because once recognition tokens are given, that’s it - they cannot be appealed or challenged, so the overall number of conflicts decreases.
  • There may occasionally be cases of not entirely fair evaluation, but these will be isolated and without significant consequences. Negligible.
  • There will be fewer cases of deliberate underrating or overrating, because the receiver gains nothing material from it, and the evaluator would only be pleasing himself.
  • No bureaucracy.

6: Motivation

  • In a small community (70–100 people), trust and friendship can appear (but there is math to prevent cheating with recognition tokens).
  • For many, the real reward is respect and the feeling of being useful (we have to find those people).
  • Social recognition replaces formal control. This is important.
  • Value distributes automatically - visible and useful contributions are recognized.

7: Hope for the Future as a Motivator

  • While there are no explicit promises, we can remind ambassadors that:
    • Their contributions now are laying the foundation.
    • If the project succeeds, their early involvement could become very valuable.
  • This hope, combined with current recognition, is a strong motivator.
  • Research(2) on community tokens shows: early contributors remain engaged because they develop a sense of belonging and expect a share in future success.

References

(1) Frey, B. &amp; Goette, L. (1999) - “Does Pay Motivate Volunteers?”
Deci, E. (1971) - “Effects of Externally Mediated Rewards on Intrinsic Motivation”

  • Key finding: Monetary rewards undermined intrinsic volunteer motivation—when volunteers were paid, they actually worked fewer hours.
  • Notable strength: Utilized a unique dataset from Switzerland with strong empirical backing on volunteer hours.

(2) Li, C. &amp; Palanisamy, B. (2019) - “Incentivized Blockchain-based Social Media Platforms: A Case Study of Steemit”

  • Key finding: Empirical analysis of 539 million operations by 1.12 million Steemit users (2016–2018) revealed misuse patterns in reward systems (e.g., bots, curation networks).
  • Notable strength: Vast dataset encompassing over a million active users provides strong empirical grounding.

(3) Li, Y.; Chen, J. &amp; Xu, H. (2025) - “What Keeps Them Invested? Social Identity and Group Formation in Blockchain”

  • Key finding: Social reward, positive outlook, and investment level significantly predict “identity fusion” with crypto communities—a measure of engagement and belonging.
  • Notable strength: Mixed-method study: 26 interviews plus a quantitative survey of 111 crypto users.

Here are the conclusions I see so far:

  1. The selection of ambassadors should lean heavily toward the candidate’s motivation. This is crucial for onboarding.
  2. In the peer-review process, the guiding principle should not be deadlines, rules, or rewards, but rather the genuine desire (if present) to contribute the best one can to the project in their own way.
  3. Instead of expulsion, I propose introducing a Dormant status for ambassadors who are going through a creative block, temporarily unable to work, or disengaged from peer review.
  4. The difference in “compensation” between Active and Dormant will be minimal, almost unnoticeable, but still present. I expect this to generate natural outcomes that will serve as a foundation for refining our path forward.
  5. I anticipate that we will have people who, even without rewards, will want to come up with something exciting and impactful. And indeed, the belief that the project will thrive, and that they themselves will be the driving force behind progress, is the strongest, most genuine motivator - never petty, never shallow, but the best one.
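
To illustrate conclusion 4 - a minimal, almost unnoticeable difference between Active and Dormant - here is a toy calculation. Every number in it (the base amount, the Dormant factor, the token value) is a placeholder for illustration, not a proposed value, and the “Base + Recognition” split itself is still to be described:

```python
# Hypothetical figures only - chosen to show the *shape* of the model,
# not its actual parameters.
BASE = 100            # symbolic base reward per period
DORMANT_FACTOR = 0.9  # "minimal, almost unnoticeable" reduction for Dormant status

def reward(status: str, recognition_tokens: int, token_value: int = 5) -> float:
    """Base + Recognition: a small status-dependent base plus peer recognition."""
    base = BASE if status == "active" else BASE * DORMANT_FACTOR
    return base + recognition_tokens * token_value

print(reward("active", 4))   # -> 120
print(reward("dormant", 4))  # -> 110.0
```

The point of the small gap is exactly what the conclusion says: it is present, so status matters, but it is too small to turn status into a punishment.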