Fix random ordering of miners in response windows #1102
Conversation
Force-pushed from e91762e to 93926f1
So the other issue I've fixed here is that transactions sent in good faith (i.e. anything with a `responsePossible` call) could fail if two miners sent the transaction / the last allowed transaction at the same time, and the miner whose transaction failed would then stop mining. I've also upgraded our estimateGas calls to ethers v5 syntax (they weren't working before). The only thing here I'm still a little suspicious of is that if the estimateGas call fails for such a transaction, it'll still be sent, fail, and then potentially keep being sent.
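As a rough sketch of that flow (not the repo's actual miner code): in ethers v5 the estimate lives on the `estimateGas` namespace of a contract instance. The `repCycle` object, the `trySubmit` helper, and the argument list here are hypothetical; `confirmNewHash` stands in for any method guarded by `responsePossible`.

```js
// Hypothetical sketch: guard a mining transaction with an ethers v5
// gas estimate, so a race with another miner doesn't crash the client.
async function trySubmit(repCycle, ...args) {
  let gasEstimate;
  try {
    // ethers v5 syntax: contract.estimateGas.<method>(...)
    gasEstimate = await repCycle.estimateGas.confirmNewHash(...args);
  } catch (err) {
    // Estimation reverted, e.g. because another miner's transaction
    // landed first. Skip this attempt rather than stopping mining.
    // (Per the caveat above: sending anyway would fail, possibly repeatedly.)
    return null;
  }
  // Pad the estimate and submit.
  return repCycle.confirmNewHash(...args, { gasLimit: gasEstimate.mul(2) });
}
```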
Force-pushed from 0745107 to a0c0e80
Force-pushed from a0c0e80 to 0dedc59
Even if this build is green, please don't merge; this implementation still has issues.
Force-pushed from ae55253 to 6eb225c
Force-pushed from 6eb225c to 6f841ed
Okay, well, I guess this is better (and an easier-to-grok fix) than it was, but I'm still not happy with it. By using the address in the hash, we randomise the ordering for each mining cycle (which is good), but the ordering is known in advance. Someone could create colonies or install/uninstall extensions such that the next mining cycle address is advantageous for them in terms of ordering. I can't figure out a clean solution, though. Using the timestamp has the same issue, except it's now based on miners choosing a judicious time to close a phase and move on to the next one. I'm open to suggestions.
Realistically, would changing extensions / creating colonies allow people to explicitly set the ordering, or does it just amount to getting a new random hash? If these actions just change a 256-bit hash, then in practice it is not manipulable.
It essentially lets them pick the ordering of miner addresses (which are known).
But in a "random shuffle" way, not in a "pick and choose" way. Still not ideal, but I want to clarify.
Right, but they can keep deploying until they get an ordering that they like.
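To make the worry concrete, here is a hypothetical sketch of that grinding loop (JavaScript, ethers v5). It assumes the ordering hash combines a publicly known per-cycle input with an address the attacker can influence, which is the premise of the comment above; the names are illustrative only, not the contract's actual code.

```js
const { Wallet, BigNumber, utils } = require("ethers");

// Hypothetical ordering hash: a known per-cycle input combined with an
// address. If the attacker can influence the address, they can grind.
function orderingHash(knownInput, address) {
  return BigNumber.from(
    utils.solidityKeccak256(["uint256", "address"], [knownInput, address])
  );
}

// Keep generating fresh addresses until one hashes below a rival's,
// i.e. until we would be allowed to respond before them.
function grindForEarlySlot(knownInput, rivalHash, maxTries = 10000) {
  for (let i = 0; i < maxTries; i += 1) {
    const wallet = Wallet.createRandom();
    if (orderingHash(knownInput, wallet.address).lt(rivalHash)) {
      return wallet;
    }
  }
  return null;
}
```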
Would using the timestamp instead of the stage help then, since they couldn't know the other hash inputs in advance? Currently the other inputs are fixed per-cycle, so using the timestamp would make it harder to game. Although I suppose that then allows miners to pick the ordering by choosing a mining time, which may be worse.
This was the issue with the original implementation in the PR and caused me to rework it 😄 I'm pretty convinced there's no elegant solution here (though this implementation, or the one with timestamps, is better than the current implementation).
I suppose the improvement would be using a value which is set at the end of the previous stage but which cannot be manipulated. If it was a dispute, I would say maybe the JRH state? But in general maybe there isn't anything?
I suppose one thing you could do would be to use the timestamp, but truncated to a low resolution, e.g. at the minute level. This would limit the degrees of freedom a miner has to choose a timestamp, to the point where waiting a minute for a different order would yield only a small benefit (compared to another miner who would just mine the transaction immediately).
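A quick sketch of the truncation idea (illustrative only; the real check would live in the contract, and the one-minute resolution is just the example from the comment):

```js
// Hypothetical: truncate timestamps to minute resolution before they
// feed the ordering hash. Everyone mining within the same minute gets
// the same ordering, so delaying buys at most one reshuffle per minute.
const RESOLUTION = 60; // seconds; assumed granularity

function truncatedTimestamp(timestamp) {
  return timestamp - (timestamp % RESOLUTION);
}

// e.g. truncatedTimestamp(1700000042) === 1700000040
```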
Is this still a draft?
The issue here is that they can simply manipulate the address they're mining from, knowing in advance what the ordering is going to be.
This is a nice idea. I could believe there's a sweet spot here. Too long, and the ordering is predictable in advance (so they change the address they're mining from); too short, and there's no benefit. But in the middle... I'll give this some thought. This is still a draft because I want to explore these issues.
Force-pushed from 6f841ed to 0a32c3f
Un-marking as draft in its current state. I can't quite make your 'lower resolution' idea work to my satisfaction, so I think we should review / merge, create an issue for the remaining problems, and go from there.
Force-pushed from 0a32c3f to 8bcf179
Follow-on to #842
Until now, the ordering of miners was essentially done thusly: each miner had a hash (fixed for a given miner and stage), and if that hash was bigger than the target, `responsePossible` returned false. The timestamp passed in determines `target`, and so the response window 'opens' over time. However, the hash is fixed for the same miner at the same stage in every cycle, so out of a set of miners, the ordering is going to remain fixed for every e.g. `confirmNewHash` stage.

I have replaced it with a hash that changes for every response window. Note that we don't need `_stage`, which I originally introduced to make sure every stage had a different ordering, but which isn't actually needed: every stage has a different `_responseWindowOpened`, which is adequate for that purpose, as does each instance of a particular stage, fixing the underlying issue.

To ease the transition, I've added a 'dummy' function for miners that still takes the `_stage` as an argument and transparently passes through to the new implementation. We can deploy new miners (with the slightly altered call to accommodate the new overload), which will then keep working when we deploy the new contract. We can then deploy new miners again that only query the correct `responsePossible`, and then remove the old `responsePossible` in another deployment.

Currently a draft as I need to sort out the flickeriness of the test (which is actually pointing to another issue in the miner, I think), but we're in a good enough spot to start discussion on this fix.
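For illustration, a minimal JavaScript/ethers v5 model of the mechanism described above. The hash inputs (`_responseWindowOpened` plus the miner's address) follow the description, but the window length and the exact target formula are assumptions, not the contract's actual code:

```js
const { BigNumber, utils } = require("ethers");

const RESPONSE_WINDOW = 600; // seconds; assumed window length
const MAX_UINT256 = BigNumber.from(2).pow(256).sub(1);

// Hypothetical model of the new scheme: the per-miner hash now mixes in
// _responseWindowOpened, so the ordering reshuffles every window.
function responsePossible(responseWindowOpened, minerAddress, now) {
  const hash = BigNumber.from(
    utils.solidityKeccak256(
      ["uint256", "address"],
      [responseWindowOpened, minerAddress]
    )
  );

  // The target sweeps from 0 up to MAX_UINT256 as the window elapses,
  // so the window 'opens' over time: low-hash miners may respond first,
  // and by the end of the window everyone may respond.
  const elapsed = Math.min(Math.max(now - responseWindowOpened, 0), RESPONSE_WINDOW);
  const target = MAX_UINT256.mul(elapsed).div(RESPONSE_WINDOW);

  // If the hash is bigger than the target, responding isn't yet allowed.
  return hash.lte(target);
}
```

Under this model, the same set of miners gets a fresh ordering in every response window, which is the intent of the fix.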