Create a better incentive system to discourage model copies #71
In my opinion, the biggest problem is that every model receives emissions, which encourages cloning, and the best miners are not properly rewarded. Therefore, I propose that only roughly the top 10% of miners earn, rather than everyone.
Thanks for your input.
I'm not convinced that 10% is the optimal solution, but I haven't come across any other subnet where every miner receives emissions. For example, in SN37 only the top 1 and 2 receive emissions, which isn't necessarily ideal either. On the other hand, if every miner earns something, it could encourage cloning and discourage the top miners, for whom the emissions are relatively small.
In the case of managing emissions, I think this may be better addressed by adjusting the temperature, which other miners have requested as well.
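To make the trade-off concrete, here is a minimal sketch (not the subnet's actual incentive code) contrasting a softmax temperature with a hard top-k cutoff; the score values, function names, and the five-miner example are hypothetical.

```python
import numpy as np

def softmax_emissions(scores: np.ndarray, temperature: float) -> np.ndarray:
    """Split a unit of emission across miners with a softmax over their scores.

    Lower temperature -> emission concentrates on the top miners, so copies of
    a mid-ranked model earn almost nothing; higher temperature -> the split
    flattens and every miner earns something.
    """
    z = (scores - scores.max()) / temperature  # shift by the max for numerical stability
    w = np.exp(z)
    return w / w.sum()

def top_k_emissions(scores: np.ndarray, k: int) -> np.ndarray:
    """Alternative: pay only the top-k miners, proportional to their scores."""
    cutoff = np.sort(scores)[-k]
    w = np.where(scores >= cutoff, scores, 0.0)
    return w / w.sum()

# Hypothetical scores for five miners; miner 1 is a near-copy of miner 0.
scores = np.array([0.90, 0.89, 0.75, 0.60, 0.40])
print(softmax_emissions(scores, temperature=0.20))  # relatively flat split
print(softmax_emissions(scores, temperature=0.02))  # nearly winner-take-all
print(top_k_emissions(scores, k=2))                 # only the top miners earn
```

The same lever (temperature) can therefore be tuned to reduce the payoff of cloning without introducing a hard cutoff.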
Although the safeguards currently in place (the 3% time penalty and model safetensors hash checking) help guard against blatant model copying, there are still potential avenues of exploitation.
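For reference, below is a minimal sketch of an exact-hash style check like the one mentioned above, assuming hypothetical helper names and a (miner_id, commit_block, path) submission record; an exact byte-level hash only catches verbatim copies, which is exactly the limitation being discussed.

```python
import hashlib
from pathlib import Path

def safetensors_sha256(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash a model's .safetensors file in chunks to avoid loading it into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def first_uploader_wins(submissions: list[tuple[str, int, Path]]) -> dict[str, bool]:
    """Flag exact duplicates, crediting only the earliest submission.

    `submissions` holds (miner_id, commit_block, file_path) tuples; the earlier
    commit block takes precedence when two files hash identically.
    """
    seen: dict[str, int] = {}
    flagged: dict[str, bool] = {}
    for miner_id, block, path in sorted(submissions, key=lambda s: s[1]):
        h = safetensors_sha256(path)
        if h in seen:
            flagged[miner_id] = True   # exact byte-level copy of an earlier model
        else:
            seen[h] = block
            flagged[miner_id] = False
    return flagged
```

A scheme like this cannot catch near-copies (lightly fine-tuned or perturbed weights), which is why the requirements below focus on detection accuracy rather than exact matching.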
The requirements for a solution that can accurately detect model copying are the following:
Further notes on tuning other subnet properties:
It is desirable to keep the temperature within roughly its current range. Although the current number of participants in the subnet may be small, this can change in the future. Furthermore, there are already discussions with validators regarding the emissions curve and its current level.
It is desirable to keep a relatively low registration cost. A low registration cost can encourage newer participants, and there is already an implied cost for miners who are training and evaluating their models offline.
Additional discussion points are welcome. This may not result in a concrete solution, but it should at the very least provide discourse toward a way forward.