Make a final decision on first approach to slashing #2475
Some previous discussions about this topic: …
Some interesting reading: …
Some proposals from my end to be discussed: …
Just to make sure I'm following, we are only debating power slashing here, right?
Some notes: …
I think the most reasonable take is to: …
I see this as a simple and elegant solution that disincentivizes power hoarding and largely mitigates the potential harm that powerful yet lazy, malicious or faulty validators could cause to network liveness and oracle service quality.
For more color on the proposal above: …
I'm a bit confused by this statement. Unless I misunderstand the code, block consolidation happens at the beginning of the next epoch, not at the end of the current epoch. This also implies that slashing needs to happen at that time and not earlier. This is not an issue for data requests and block mining. Block mining is already postponed through a …
Sorry, I was speaking off the top of my head. It is exactly as you mentioned. I was speaking from the perspective of protocol design rather than the particular implementation in witnet-rust. The logic is still the same. The bottom line is that coin ages get adjusted upon consolidation, just like the UTXO set or any other data structure that belongs to the chain state.
No worries, just wanted to make sure we were completely on the same page as this is a pretty important decision and has a significant impact on the consensus algorithm.
Note that this is currently not implemented for the simple reason that, in my mind, it is a much more severe penalization mechanism. My goal with the slashing was purely to preserve chain liveness with as little impact as possible. If we go the route of slashing everyone with a higher power too, we wander into the territory of potentially slashing nodes that are being censored, DoS'd, etc. I consider that more contentious, and I wanted to steer clear of it for the initial wit/2 integration.
Then your suggestion is to apply slashing to everyone in the …?
That is what I have currently implemented, yes, but solely for the sake of (game-theoretic) simplicity. I'm not opposed to the other option, I just wanted to keep my patch as uncontentious as possible. The one thing I'm still not sure about is whether I like the slash-proportional-to-rank option, because slashing a high-power node by, e.g., 1/3 of its power feels a bit futile (at least on my artificial testnets).
I had this same concern, and that's why the article about the Witnet P2P Bucketing System was linked above. After resolving multiple questions and possible flaws in the current bucketing system, we came to agree that it should be considered strong enough not to auto-limit ourselves when thinking about possible slashing solutions.
Slashing in different proportions wipes out any attempt by an attacker to feed validators with exactly the same staked amount (the most naive way to take over the …).
I would need to assess the tests or simulations being done. A priori, I don't believe that reducing the power of a rank-3 validator by a third would be futile. What matters most is not how much power is reduced in absolute terms, but how much it is reduced in relation to higher-ranked validators.
What about increasing …? No matter the slashing approach we ultimately decide upon, there can always be the chance of an attacker taking over the …: (a) settle a higher value for the …
Because there can always be the chance of an attacker taking over the "…"
I'm inclined to agree with the article; I just wanted the current patch to be non-contentious. If no one objects to slashing stakers …
It does, but at the cost of having more epochs where one or more validators can refuse a block. I'm not really against it, but compared to fully slashing all validators the consequence is significant, and I'm mainly wondering about the actual slashing ratios.
Note that you are not reducing it to a third, but rather reducing it by a third (otherwise a rank 3 validator would be punished harsher than a rank 2 validator).
Certainly not against it, as that was part of my initial proposal; I'm just not sure how to implement it, because that changes the …
Agreed, I actually meant "by a third" too, and just fixed it above.
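The "by a third" vs. "to a third" distinction above can be made concrete with a small hedged sketch (illustrative only, not witnet-rust code): slashing a validator at rank r by 1/r of its power penalizes rank 1 fully, rank 2 by a half and rank 3 by a third, so a lower rank is never punished harder than a higher one.

```rust
/// Reduce `power` *by* 1/rank, i.e. keep (rank - 1)/rank of it.
/// Rank 1 is the most powerful proposer of the epoch.
/// Names and integer arithmetic are assumptions for illustration.
fn slash_by_rank(power: u64, rank: u64) -> u64 {
    power - power / rank
}
```

For a 900-power validator this leaves 0 at rank 1, 450 at rank 2 and 600 at rank 3; reducing *to* 1/rank instead would leave rank 3 with 300, i.e. punished harder than rank 2.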
I'd be more concerned about the number of epochs where all validators are prone to refuse a block. As a way to minimize periodic situations where all validators refuse a block, sequential prime numbers (…
Not really. Block proposers in epoch …
From what I can tell, this is true, if we're saying …
Under that perspective, there would be a base replication factor constant and a separate effective replication factor that grows progressively as epochs are completed without a consolidated block, and goes back to the value of the base constant as soon as a block is consolidated. Is this what you are suggesting, @guidiaz?
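The base vs. effective replication factor idea could be sketched as follows. All names, the base value and the +1-per-epoch growth step are assumptions for illustration (the thread also floats prime-number growth), not actual witnet-rust code.

```rust
const BASE_REPLICATION_FACTOR: u32 = 4; // hypothetical base constant

struct ReplicationTracker {
    epochs_without_block: u32,
}

impl ReplicationTracker {
    fn new() -> Self {
        ReplicationTracker { epochs_without_block: 0 }
    }

    /// Called once per epoch; returns the effective replication factor
    /// to use for the next epoch.
    fn on_epoch_end(&mut self, block_consolidated: bool) -> u32 {
        if block_consolidated {
            // Snap back to the base constant on consolidation.
            self.epochs_without_block = 0;
        } else {
            // Widen eligibility while the chain is stalled.
            self.epochs_without_block += 1;
        }
        BASE_REPLICATION_FACTOR + self.epochs_without_block
    }
}
```

With a base of 4, two empty epochs yield effective factors 5 and 6, and the first consolidated block drops it back to 4.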
This may be interesting (and workable) as an approach. I was not sure how we could increment the …
Yes, that is what we discussed before on Telegram as a way to tackle the issue where there are more than the base …
Nodes that are synchronizing do not validate anything related to eligibility, as they cannot compare block candidates, etc.
About witnessing power-slashing

Because block proposers are sovereign to decide which dr-commit transactions to include, if any at all, there is no practical way to power-slash reluctant witnesses. However, we should definitely take care to reset the coin age of all the different "witnessing identities" for which at least one valid dr-commit transaction gets included in a block (a "witnessing identity" being any unique validator-withdrawer pair in the Stake Tracker).
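The coin-age reset described above could look roughly like this. A "witnessing identity" is modeled as a (validator, withdrawer) pair; the map layout and all names are hypothetical, not taken from the actual Stake Tracker.

```rust
use std::collections::HashMap;

/// (validator, withdrawer) pair, i.e. a "witnessing identity".
type Identity = (String, String);

/// On block consolidation, restart the coin-age counter of every identity
/// that had at least one valid dr-commit included in the block.
/// `coin_age_start` maps each identity to the epoch its age counts from.
fn reset_witnessing_coin_ages(
    coin_age_start: &mut HashMap<Identity, u64>,
    committers_in_block: &[Identity],
    consolidation_epoch: u64,
) {
    for id in committers_in_block {
        if let Some(start) = coin_age_start.get_mut(id) {
            *start = consolidation_epoch;
        }
    }
}
```

Identities with no included commit keep their old age-start epoch and thus keep accruing coin age.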
About validation of dr-commit transactions

From a block validator's point of view, the following rules should apply when deciding whether the dr-commit transactions contained within a block proposal are valid: (a) they have to refer to a not-yet-committed and not-yet-expired DR. Should any "witnessing identity" get its …
About validation of dr-tally transactions

I partially concur with the idea proposed in #2446 about letting a block proposer include dr-tally transactions embedding a specific error (to-be-named) if a DR included within the same block requires a number of witnesses greater than the number of entries in the Stake Tracker, but adding this extra condition: … On another note, instead of naming this specific tally error "too many witnesses", I'd rather suggest naming it …, because of: (a) coherence with the …; (b) "too many witnesses" could give the wrong idea that the data requester is trying to require an "illegal" number of witnesses, whereas the scarcity of witnesses may just be temporary and eventually self-amended in the short term. Finally, I think we should decide whether the block proposer should earn a double or a single fee: one mining fee for including the DR tx and another for including the DR tally tx as well, or just one mining fee for including both (as if including the former without the latter would invalidate the whole block being proposed). What do you think, guys, @aesedepece @drcpu-github @Tommytrg?
I remember there was a reason for the inclusion of commit transactions to be essentially random, but I cannot remember what it was and I don't know if this still holds up. I wonder if we can require the included commits to be ranked by power (and then we can introduce slashing). Of course, that only works under the assumption that no one is censoring nodes (but we are assuming that anyway).
I'm not quite sure, but this sounds like you are (partially) referring to this PR? Note that this simply takes into account the number of stakers, not their coins. I don't think taking the number of coins into account is relevant, though, since there is a minimum stakeable amount of 10k coins and it seems unlikely we'll see DRs requesting that kind of collateral as long as they have to pay 1/125 of the requested collateral as a reward to each witness.
I wasn't directly referring to that PR, but yeah, stating a similar idea, I guess. In fact, the one proposed in the PR is even better, because the tally transaction would be included together with the DR transaction itself, which is even more efficient. However, I'd strongly suggest the condition to be "not enough entries in the Stake Tracker with at least as much stake (i.e. …)".
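The stricter condition suggested here could be sketched as follows; the stake threshold, names and counting logic are placeholders for whatever the spec ends up mandating, not actual Wit/2 code.

```rust
/// True if the Stake Tracker holds at least `num_witnesses` entries with
/// stake greater than or equal to `min_stake` (a hypothetical threshold).
fn enough_eligible_witnesses(stakes: &[u64], min_stake: u64, num_witnesses: usize) -> bool {
    stakes.iter().filter(|&&s| s >= min_stake).count() >= num_witnesses
}

/// A proposer may pair the DR with an error tally only when the DR is
/// currently unsolvable for lack of eligible witnesses.
fn may_include_error_tally(stakes: &[u64], min_stake: u64, num_witnesses: usize) -> bool {
    !enough_eligible_witnesses(stakes, min_stake, num_witnesses)
}
```

Counting only entries above the threshold, rather than all Stake Tracker entries, is what distinguishes this from the plain entry count used in the PR.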
About witnessing eligibility

(Summarizing a few ideas and conclusions after in-depth chatting with @aesedepece.) Based on what's currently stated in the Wit/2 WIP, some variants are proposed below for your consideration, aiming to:
(…) For a commitment transaction to be considered for inclusion in a block, it MUST hold that: (a) it refers to a not-yet-committed data request, (b) the referred DR's round is 3 or less, and (c) the committer (i.e. validator-withdrawer pair) holds a stake (i.e. …
Where: …
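The inclusion rules (a)–(c) above could be encoded roughly as below. The round limit comes from the text, while the stake threshold is left as a parameter because its exact definition is elided in the quote; none of this is actual Wit/2 validation code.

```rust
/// Minimal stand-in for the DR state a validator would inspect.
struct DataRequestState {
    committed: bool,
    round: u8,
}

/// Rules (a)-(c) for considering a commitment transaction includable.
fn commit_is_includable(
    dr: &DataRequestState,
    committer_stake: u64,
    required_stake: u64, // placeholder threshold, unspecified above
) -> bool {
    !dr.committed                            // (a) DR not yet committed
        && dr.round <= 3                     // (b) round is 3 or less
        && committer_stake >= required_stake // (c) committer holds enough stake
}
```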
Draft document as of Oct. 15th, proposing multiple ideas on Witness Slashing and other topics, as well as a possible deflationary issuance model, support for fast-forwarding superblock proofs, and enhanced DR result errors: …
More specific proposal covering just the Witness Slashing implementation as of v2.0 (including an approach to eventual support for "capable witnessing committees"): …
Concerning the current lack of a Validator Slashing implementation, the potential approaches are described herein: …
Currently, we don't have a slashing mechanism in Witnet 2.0 branches. We have to discuss the different proposals and decide whether to implement one for the first release candidate or for upcoming ones.