Fix random ordering of miners in response windows #1102

Merged
merged 12 commits into develop from fix/randomise-miner on Dec 16, 2022

Conversation

@area commented Nov 4, 2022

Follow-on to #842

Until now, the ordering of miners was essentially done thusly:

if (uint256(keccak256(abi.encodePacked(minerAddress, _stage))) > target) {

If the hash is bigger than the target, responsePossible returns false. The timestamp passed in determines the target, and so the response window 'opens' over time. However, the hash is fixed for the same miner at the same stage in every cycle, so for a given set of miners the ordering remains fixed for every instance of, e.g., the confirmNewHash stage.

I have replaced it with

if (uint256(keccak256(abi.encodePacked(minerAddress, _responseWindowOpened))) > target) {

which changes for every response window. Note that we no longer need _stage, which I originally introduced to make sure every stage has a different ordering: every stage has a different _responseWindowOpened, which is adequate for that purpose - as does each instance of a particular stage, fixing the underlying issue.
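For context, here is a minimal sketch of how a check like this can gate responses over an opening window. The contract name, window duration, and exact target formula below are illustrative assumptions, not the contract's actual code:

pragma solidity ^0.8.0;

contract ResponseWindowSketch {
    // Assumed window length, for illustration only.
    uint256 constant RESPONSE_WINDOW_DURATION = 20 minutes;

    function responsePossible(uint256 _responseWindowOpened, address minerAddress) public view returns (bool) {
        if (block.timestamp < _responseWindowOpened) {
            return false;
        }
        uint256 elapsed = block.timestamp - _responseWindowOpened;
        if (elapsed >= RESPONSE_WINDOW_DURATION) {
            // Window fully open: any miner may respond.
            return true;
        }
        // The target sweeps from 0 towards type(uint256).max as the window opens.
        uint256 target = elapsed * (type(uint256).max / RESPONSE_WINDOW_DURATION);
        // Hashing the miner address with the window's opening time re-shuffles the
        // ordering of miners for every response window.
        if (uint256(keccak256(abi.encodePacked(minerAddress, _responseWindowOpened))) > target) {
            return false;
        }
        return true;
    }
}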

To ease the transition, I've added a 'dummy' function for miners that still takes _stage as an argument and transparently passes through to the new implementation. We can deploy new miners (with the slightly altered call to accommodate the new overload), which will then keep working when we deploy the new contract. We can then deploy new miners again that only query the correct responsePossible, and finally remove the old responsePossible in another deployment.
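Roughly, the transitional overload could look like the following fragment (inside the same contract as the sketch above; the parameter type used for the stage is an assumption):

// Transitional overload: keeps the old signature that currently-deployed miners call,
// ignores the stage argument, and forwards to the new implementation.
function responsePossible(uint256 /* _stage */, uint256 _responseWindowOpened, address minerAddress) public view returns (bool) {
    // The stage no longer feeds the hash; _responseWindowOpened already differs
    // per stage and per window, so we simply forward to the new implementation.
    return responsePossible(_responseWindowOpened, minerAddress);
}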

This is currently a draft as I need to sort out the flakiness of the test (which I think is actually pointing to another issue in the miner), but we're in a good enough spot to start discussing this fix.

@area area added the bug label Nov 4, 2022
@area area force-pushed the fix/randomise-miner branch 2 times, most recently from e91762e to 93926f1 on November 10, 2022 11:32
@area area marked this pull request as ready for review November 11, 2022 10:17
@area commented Nov 11, 2022

The other issue I've fixed here is that transactions sent in good faith (i.e. anything guarded by a 'responsePossible' call) could fail if two miners sent the transaction / the last allowed transaction at the same time, and the miner whose transaction failed would then stop mining.

I've also upgraded our estimateGas calls to ethers v5 syntax (they weren't working before). The only thing here I'm a little suspicious of now is that if the estimateGas call fails for such a transaction, it'll still be sent, fail, and then potentially keep being sent.

Resolved review threads: helpers/test-helper.js, packages/reputation-miner/ReputationMiner.js, packages/reputation-miner/ReputationMinerClient.js
@area commented Nov 30, 2022

Even if this build is green, please don't merge, this implementation still has issues.

@area area marked this pull request as draft November 30, 2022 17:31
@area commented Dec 8, 2022

Okay, well, I guess this is better (and an easier-to-grok fix) than it was, but I'm still not happy with it. By using the address in the hash, we randomise the ordering for each mining cycle (which is good), but the ordering is known in advance. Someone could create colonies or install/uninstall extensions such that the next mining cycle's address is advantageous for them in terms of ordering.

I can't figure out a clean solution, though. Using the timestamp has the same issue, except it's now based on miners choosing a judicious time to close a phase and move on to the next one.

I'm open to suggestions.

@kronosapiens commented

Realistically, would changing extensions / creating colonies allow people to explicitly set the ordering, or does it just amount to getting a new random hash? If these actions just change a 256-bit hash then in practice it is not manipulable.

@area commented Dec 9, 2022

It essentially lets them pick the ordering of miner addresses (which are known).

@kronosapiens commented

But in a "random shuffle" way, not in a "pick and choose" way. Still not ideal, but want to clarify.

@area commented Dec 9, 2022

Right, but they can keep deploying until they get an ordering that they like.

@kronosapiens commented

Would using the timestamp instead of the stage help then, since they couldn't know the other hash inputs in advance? Currently the other inputs are fixed per-cycle. Using the timestamp would make it harder to game. Although I suppose then that allows miners to pick ordering by choosing a mining time, which may be worse.

@area commented Dec 12, 2022

Although I suppose then that allows miners to pick ordering by choosing a mining time, which may be worse.

This was the issue with the original implementation in the PR and caused me to rework it 😄

I'm pretty convinced there's no elegant solution here (though either this implementation or the one with timestamps is better than the current implementation).

@kronosapiens commented

I suppose the improvement would be using a value which is set at the end of the previous stage but which cannot be manipulated. If it was a dispute I would say maybe the JRH state? But in general maybe there isn't anything?

@kronosapiens commented

I suppose one thing which you could do would be to use the timestamp, but truncated to a low resolution, e.g. at the minute level. This would limit the degrees of freedom a miner would have to choose a timestamp, to the point where waiting a minute for a different order would yield a small benefit (compared to another miner who would just mine the transaction immediately).
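A minimal sketch of this truncation idea; the one-minute resolution and the helper shape are illustrative assumptions, not anything from the PR:

pragma solidity ^0.8.0;

contract TruncatedOrderingSketch {
    // Illustrative resolution: the ordering can change at most once per minute.
    uint256 constant ORDERING_RESOLUTION = 1 minutes;

    function orderingHash(address minerAddress) public view returns (uint256) {
        // Round the timestamp down to the nearest minute before hashing, limiting how
        // often a miner can re-roll the ordering by choosing when to submit.
        uint256 truncated = block.timestamp - (block.timestamp % ORDERING_RESOLUTION);
        return uint256(keccak256(abi.encodePacked(minerAddress, truncated)));
    }
}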

@kronosapiens commented

Is this still a draft?

@area commented Dec 13, 2022

I suppose the improvement would be using a value which is set at the end of the previous stage but which cannot be manipulated. If it was a dispute I would say maybe the JRH state? But in general maybe there isn't anything?

The issue here is that they can simply manipulate the address they're mining from, knowing in advance what the ordering is going to be.

I suppose one thing which you could do would be to use the timestamp, but truncated to a low resolution, e.g. at the minute level. This would limit the degrees of freedom a miner would have to choose a timestamp, to the point where waiting a minute for a different order would yield a small benefit (compared to another miner who would just mine the transaction immediately).

This is a nice idea. I could believe there's a sweet spot here. Too long, and the ordering is predictable in advance (so they change the address they're mining from). Too short and there's no benefit, but in the middle... I'll give this some thought.

This is still a draft because I want to explore these issues.

@area area marked this pull request as ready for review December 14, 2022 15:49
@area commented Dec 14, 2022

Un-marking as draft in its current state. I can't quite make your 'lower resolution' idea work to my satisfaction, so I think we should review / merge, create an issue for the remaining problems and go from there.

@kronosapiens kronosapiens merged commit ce1cdb1 into develop Dec 16, 2022
@kronosapiens kronosapiens deleted the fix/randomise-miner branch December 16, 2022 01:27