NuCypher’s sampling algorithm is already more sophisticated than selection weighted purely by stake size – it reconciles unpredictable user preferences (policy duration and threshold) with a sybil-resistant, incontrovertible network state (all staked tokens, their lock durations, and the workers to which they pertain). This is a strong foundation on which to experiment with other selection conditions.
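To make the discussion concrete, here is a minimal sketch of stake-weighted sampling of the kind described above – selection probability proportional to stake scaled by lock duration. The field names (`address`, `stake`, `lock_periods`) are illustrative assumptions, not NuCypher's actual data model:

```python
import random

def sample_workers(workers, n):
    """Sample n distinct workers, each draw weighted by an
    'effective stake': stake size scaled by remaining lock duration.

    `workers` is a list of dicts with hypothetical keys 'address',
    'stake', and 'lock_periods' -- illustrative names only.
    """
    pool = list(workers)
    chosen = []
    for _ in range(n):
        # Recompute weights over the remaining pool so each worker
        # is selected at most once.
        weights = [w['stake'] * w['lock_periods'] for w in pool]
        pick = random.choices(pool, weights=weights, k=1)[0]
        chosen.append(pick)
        pool.remove(pick)
    return chosen
```

Extra selection conditions (quality scores, compatibility filters) could then be folded in as additional multiplicative factors on the weights, rather than as a separate mechanism.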
Changing selection conditions is a lighter, lower-risk nudge towards good behaviour than slashing, which from an economic (and psychological) perspective is less repeatable and uni-directional. In other words, the probability of a given worker being assigned a policy could continually adjust over time, in both a positive and a negative direction; conversely, a worker can only be slashed so many times, and there’s no mechanism to reverse a slash. An imperfect selection rule is therefore less likely to cause an exodus of workers.
Performance indicators that could be incorporated into worker selection:

- **service quality**
  - measurable via: % correct re-encryptions, % ignored re-encryption requests (i.e. no answer within a globally agreed time-box), and median time to answer a re-encryption request. Regardless of which measure or combination defines service quality, it makes sense to weight recent activity as more indicative of quality than older activity, so some kind of ageing function can be utilised
- **service compatibility**
  - based on specified user preferences, e.g. geographical proximity (measured as latency) or capacity (e.g. for a particularly high throughput). Rather than selecting exact workers by address, a user could get the set of workers that best matches their stated requirements
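One way to realise the ageing function mentioned under service quality is an exponentially decayed average of per-request outcomes. This is a sketch under stated assumptions – the event format, the half-life default, and the function name are all illustrative, not part of any existing protocol:

```python
import math

def quality_score(events, now, half_life=7.0):
    """Exponentially age-weighted service quality in [0, 1].

    `events` is a list of (timestamp_in_days, success) pairs for
    re-encryption requests: a correct re-encryption counts as 1, an
    incorrect or ignored one (no answer within the time-box) as 0.
    Recent events dominate via exponential decay with the given
    half-life in days -- an assumed parameterisation.
    """
    decay = math.log(2) / half_life
    num = den = 0.0
    for t, ok in events:
        w = math.exp(-decay * (now - t))  # weight halves every half_life days
        num += w * (1.0 if ok else 0.0)
        den += w
    return num / den if den else 0.0
```

With a 7-day half-life, a failure from ten days ago carries roughly a third of the weight of a success today, so a previously unreliable worker can recover its score through sustained good service – the bidirectional adjustment discussed above.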
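Service compatibility could similarly be sketched as a filter-and-rank step over stated user requirements. Here latency is used as the single example preference; the field names and threshold-based matching are assumptions for illustration only:

```python
def best_matching_workers(workers, max_latency_ms, n):
    """Return the n workers best matching a user's stated requirement
    -- here, a maximum acceptable latency. The 'latency_ms' field is
    hypothetical; real measurements would need an ungameable source.
    """
    # Keep only workers meeting the requirement, then rank by fit.
    eligible = [w for w in workers if w['latency_ms'] <= max_latency_ms]
    return sorted(eligible, key=lambda w: w['latency_ms'])[:n]
```

Note this returns a set of workers matching the stated requirements rather than letting the user pick exact addresses, in line with the compatibility idea above.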
Open questions:

- Will rewards dwarf the incentive of a higher selection probability?
- How do we generate statistics on worker performance in a cheap and ungameable way?
- How does the selection algorithm handle price differentials between workers (i.e. a free market)?
- Can stakers with negatively scored workers simply spin up new ones?
- What other factors could be inputs into the selection algorithm?
- Given the imperfect attribution associated with worker downtime, might selection-based punishments be fairer than slashing?