core, les: implement general atomic db #5
Closed
This PR introduces an atomic database implementation. I still keep the original format, but add some additional fixes to it.

The reason I don't choose your child database idea is that in practice we only assign a single db handler to each module that needs to maintain data. With the child database idea, or your original idea of passing a reader and a writer, the code in other modules breaks: a module sometimes needs to use an externally given handler and sometimes its own db handler. For example, if we update some data in the token sale module and pass a handler (rw) to the payment module, we ask the payment module to write data through the given handler to ensure atomicity. However, the payment module also has its own db APIs, and it still needs to read/write some data through its own db handler, e.g. when the payment sender creates a lottery, issues new cheques, etc.
So my general idea is: we define only a single db (the atomic database) and pass it to each module. The atomic database is where we apply the magic: whenever data needs to be written atomically, call openTransaction.
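To make the shape of the idea concrete, here is a minimal sketch (not the actual code in this PR): the `store` interface and the `AtomicDatabase`, `New`, `Put` and `Commit` names are placeholders I made up for illustration, and only `openTransaction` corresponds to the call mentioned above.

```go
package atomicdb

import "sync"

// store is a placeholder for the underlying key-value backend.
type store interface {
	Put(key, value []byte) error
}

// AtomicDatabase is the single db handler handed to every module. Normal
// writes go straight to the backend; once a transaction is open, writes
// are buffered in one shared batch and flushed together.
type AtomicDatabase struct {
	mu    sync.Mutex
	db    store
	batch map[string][]byte // pending writes; nil when no transaction is open
}

func New(db store) *AtomicDatabase {
	return &AtomicDatabase{db: db}
}

// openTransaction switches the handler into buffering mode; subsequent
// Puts accumulate until Commit flushes them.
func (a *AtomicDatabase) openTransaction() {
	a.mu.Lock()
	defer a.mu.Unlock()
	if a.batch == nil {
		a.batch = make(map[string][]byte)
	}
}

// Put writes through directly, or into the pending batch if a transaction
// is open. The mutex is the lock protecting the batch.
func (a *AtomicDatabase) Put(key, value []byte) error {
	a.mu.Lock()
	defer a.mu.Unlock()
	if a.batch != nil {
		a.batch[string(key)] = value
		return nil
	}
	return a.db.Put(key, value)
}

// Commit flushes the buffered writes. A real implementation would hand
// them to the backend's own batch so they reach disk atomically.
func (a *AtomicDatabase) Commit() error {
	a.mu.Lock()
	defer a.mu.Unlock()
	for k, v := range a.batch {
		if err := a.db.Put([]byte(k), v); err != nil {
			return err
		}
	}
	a.batch = nil
	return nil
}
```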
Regarding your concerns:

> The problem is that the atomicity is required for each client separately, while the data belonging to different clients might be accessed simultaneously.

It's true that we can only guarantee atomicity over all write operations together, not per client, but it's okay. We might accumulate a lot of data in the batch and make it large, but as long as atomicity is not required all the time, we can find a suitable moment to flush it out. Regarding performance, a lock is needed to protect the batch, but I think that's acceptable: updating incentive data has no strict performance requirement, and correctness is the most important thing.
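As a usage illustration of that tradeoff, continuing the hypothetical sketch above (module behaviour, keys and values are made up): the token sale and payment modules share one handler, their writes pile up in the same batch together with anything other clients write in the meantime, and everything is flushed at a single commit point.

```go
// settleSale shows two modules writing through the same shared handler
// inside one transaction; nothing reaches the backend until Commit.
func settleSale(db *AtomicDatabase) error {
	db.openTransaction()

	// Token sale module records its own state.
	if err := db.Put([]byte("sale:state"), []byte("sold")); err != nil {
		return err
	}
	// Payment module writes through the very same handler, e.g. a new cheque.
	if err := db.Put([]byte("payment:cheque:1"), []byte("cheque-data")); err != nil {
		return err
	}

	// Writes issued by other clients in the meantime simply join the batch;
	// everything is flushed to the backend together here.
	return db.Commit()
}
```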