Gracefully recover from nested value-key marshalling #97
Conversation
In certain scenarios, the previous behaviour of the value-key merger may have resulted in value-keys being marshalled repeatedly. This was the underlying cause of the issue captured in #94 and corrected in #95. The work here adds to those corrections by gracefully repairing such records: they are unmarshalled recursively until values with the expected key prefix are reached, then de-duplicated as necessary. The fix is applied opportunistically, in that such values are repaired whenever they are detected during reads and writes. Additional tests simulate nested marshalling of value-keys and assert that the resulting merge is as expected for both the merge-older and merge-newer cases.
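For illustration, here is a minimal Go sketch of the recovery idea described above. The prefix and codec (`valueKeyPrefix`, `marshalValueKeys`, `unmarshalValueKeys`) are hypothetical placeholders, not this repository's actual API: nested marshalling is unwrapped recursively until every entry carries the expected prefix, and duplicates are dropped while preserving insertion order.

```go
package main

import (
	"bytes"
	"fmt"
)

// Hypothetical prefix and codec for this sketch; the real ones live in the store.
var valueKeyPrefix = []byte("/v/")

// In this sketch a "marshalled" value-key list is just the entries joined by a
// record separator; the actual encoding in the repository differs.
func marshalValueKeys(vs [][]byte) []byte  { return bytes.Join(vs, []byte{0x1e}) }
func unmarshalValueKeys(b []byte) [][]byte { return bytes.Split(b, []byte{0x1e}) }

// flattenValueKeys unwraps nested marshalling recursively until every entry
// carries the expected key prefix, de-duplicating while preserving the
// first-seen (insertion) order.
func flattenValueKeys(vs [][]byte) [][]byte {
	seen := make(map[string]struct{})
	var out [][]byte
	var walk func([][]byte)
	walk = func(vs [][]byte) {
		for _, v := range vs {
			if bytes.HasPrefix(v, valueKeyPrefix) {
				if _, dup := seen[string(v)]; !dup {
					seen[string(v)] = struct{}{}
					out = append(out, v)
				}
				continue
			}
			// Not prefixed: treat the entry as a list that was marshalled
			// again by mistake and recurse into its contents.
			parts := unmarshalValueKeys(v)
			if len(parts) == 1 && bytes.Equal(parts[0], v) {
				continue // cannot unwrap any further; skip rather than loop forever
			}
			walk(parts)
		}
	}
	walk(vs)
	return out
}

func main() {
	// Simulate a record whose value-keys were marshalled twice, producing nesting.
	inner := marshalValueKeys([][]byte{[]byte("/v/a"), []byte("/v/b")})
	nested := [][]byte{inner, []byte("/v/b"), []byte("/v/c")}
	for _, v := range flattenValueKeys(nested) {
		fmt.Println(string(v)) // prints /v/a, /v/b, /v/c exactly once each
	}
}
```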
Workload 0 benchmark results:
Codecov Report
Additional details and impacted files

@@            Coverage Diff             @@
##             main      #97      +/-   ##
==========================================
+ Coverage   63.47%   63.61%   +0.13%
==========================================
  Files          15       15
  Lines        2779     2803      +24
==========================================
+ Hits         1764     1783      +19
- Misses        753      757       +4
- Partials      262      263       +1
// If the value is already pending deletion, report it as present.
if _, pendingDelete := v.deletes[string(value)]; pendingDelete {
	return true
}
// Otherwise scan the recorded merges in insertion order.
for _, x := range v.merges {
How many elements do you expect to be in v.merges? Would it make sense to have an additional hashtable to avoid full scan?
The reason for not using a map to store merges is that we need to maintain the insertion order. Iteration order over a map in Golang is non-deterministic, I'm afraid.
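As a quick illustration of that point (a toy example, not code from this repository): ranging over a slice replays merge operations in the order they were appended, whereas the Go runtime deliberately randomises map iteration order between runs. The mergeOp type below is a hypothetical stand-in for whatever v.merges holds.

```go
package main

import "fmt"

// mergeOp is a hypothetical stand-in for the entries appended to v.merges.
type mergeOp struct{ key, value string }

func main() {
	ops := []mergeOp{{"k1", "a"}, {"k2", "b"}, {"k3", "c"}}

	// Slice: iteration always follows insertion order, so merges are
	// re-applied deterministically.
	for _, op := range ops {
		fmt.Println("slice:", op.key, op.value)
	}

	// Map: iteration order changes from run to run, so the order in which
	// merges would be applied is no longer deterministic.
	byKey := make(map[string]string, len(ops))
	for _, op := range ops {
		byKey[op.key] = op.value
	}
	for k, v := range byKey {
		fmt.Println("map:", k, v)
	}
}
```

If the linear scan ever matters in practice, one option is to keep the slice for ordering and maintain an auxiliary map index alongside it for O(1) membership checks, at the cost of keeping the two in sync.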
Looks good! Just one minor comment.