GOR — Evaluation DAO #407
Comments
What's not clear
I would rephrase it to "designed around the principle of least privilege".
Seeing "authority security" next to each other is a bit ambiguous for a non-native English speaker like me. |
Here are my initial thoughts, @moul; let me know if it is worth pursuing... One way to measure and numerically quantify contributions is by defining a contribution metric. The metric would map a contribution type to a discrete number of tokens. The first group of contributions could be approved and merged PRs for reported issues.
The second group of contributions could be approved proposals.
Alternatively, GNOSH could be rewarded based on whether the approved proposal required a simple or super majority.
The third group of contributions that could be rewarded is voting, so any vote, yes or no, would earn an equal number of GNOSH. |
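The metric described above can be sketched as a simple lookup table. This is only an illustration of the idea, not an official design: the contribution categories and the token amounts below are placeholder assumptions.

```go
package main

import "fmt"

// ContributionType names one category of rewardable work.
type ContributionType string

// Hypothetical categories matching the three groups proposed above.
const (
	MergedPR         ContributionType = "merged_pr"
	ApprovedProposal ContributionType = "approved_proposal"
	Vote             ContributionType = "vote"
)

// rewardTable maps each contribution type to a discrete number of
// tokens. The amounts are placeholders, not proposed values.
var rewardTable = map[ContributionType]int{
	MergedPR:         100,
	ApprovedProposal: 50,
	Vote:             1, // every vote, yes or no, earns the same amount
}

// Reward returns the GNOSH amount for a contribution type,
// or 0 if the type is unknown.
func Reward(t ContributionType) int {
	return rewardTable[t]
}

func main() {
	fmt.Println(Reward(MergedPR)) // 100
	fmt.Println(Reward(Vote))     // 1
}
```

A real metric would likely need finer-grained categories (for example, distinguishing simple-majority from super-majority proposals, as suggested above), but the shape stays the same: a deterministic map from contribution type to token amount.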
To make the challenge more manageable, we've isolated a simpler part that may be a more appropriate place to get started. Please check out #728. However, working on the complete task is still worthwhile. |
I think this is a great start, and I plan to expand on it a bit further if you'd like to bounce some ideas around! I had an extremely similar idea after first hearing the concept, so I think you're spot on.

It seems to make more sense for a contributor to select the contribution type they believe theirs falls under, based on a predetermined reward scale, as opposed to each contribution being judged individually and uniquely by the Evaluation DAO. This way the Evaluation DAO only has to choose between yes (we agree with your selected contribution type and corresponding reward) and no. No could then be split further into rejecting the contribution outright (reject), rewarding even more than asked (upgrade), or still rewarding but at a lower point on the GNOSH scale (downgrade). Large feature commits that are already at the top of the reward scale wouldn't have a valid upgrade option, and the lowest-tier contributions, like a minor change or even a very short published article, wouldn't have downgrade available to them.

This would need to be run by all the devs to make sure nothing is being missed and to gauge difficulty, but I currently see that last part being implemented in one of two ways. There would be separate scales similar to the ones you have above, but with even more categories: small, medium, or critical bug fixes; short, medium, or large documentation (maybe based on text count); different feature or commit sizes (though I'm not sure basing this on lines of code would work, since that doesn't always correlate with the efficiency or difficulty/complexity of the code being contributed); etc.

In the first option, upgrade and downgrade would simply move the contribution type up or down one level, but if a contributor severely misjudges, their contribution would need to be rejected altogether and the process restarted so they can reselect their type. I'm assuming they would work out the misclassification in GitHub comments or wherever else first, so they can feel sure the second time. In the second option, a new proposal would be started when a misclassification takes place, allowing the DAO to reassign that contribution to whichever contribution type and reward tier they feel best matches it. This second method would add extra workload for the DAO participants (extra research and voting time), let alone for the devs implementing it, plus an extra wait period for the reclassification proposal itself. Depending on the reclassification proposal time, it could potentially take longer than rejecting and resubmitting a contribution with a new classification if it's more than one level away, which would not be ideal. In that case, it might make more sense to stick with the first option, which is strictly yes, no, upgrade, and downgrade.

Going to keep going through this; just thinking out loud at the moment. Would love your thoughts on all of this as well! |
@moul we should change the Evaluation DAO description here, now that the Evaluation DAO is strictly writing reviews that are added to user profiles. Also, after talking to @jaekwon, it seems the Evaluation DAO will retain the power to add new contributors to every tier besides the top tier, even though each tier will now largely be governed by the tier above it. And I am guessing that means Tier 1 will be self-governing, since there is no tier above it? |
Note: this issue will be updated to keep track of changes in rules.
Context
Related slides:
Part of ContributorsDAO/WorxDAO (#872)
Problem Description
In order to ensure a fair and transparent distribution of rewards in the Game of Realms competition, an Evaluation DAO is needed. The Evaluation DAO will evaluate contributions during phase two of Game of Realms and attribute rewards accordingly. Using a DAO will allow us to scale the review process and let members vote on the best contributions for the platform.
Implementation of the Evaluation DAO is the only step the New Tendermint core team must approve because of its crucial role in the competition and the platform's future. Once the DAO is in place, DAO members will collectively review some previous and all further contributions.
Important features:
What we look for in submissions / suggestions on what could work for addressing the challenge
What wins points