Fix transaction too big issue in restore #957
Conversation
When a restore happens, we batch entries together and perform a batched write. The existing implementation would batch 1000 entries at a time. A batch of 1000 entries can overflow the transactional limits in certain circumstances (e.g., when the table size is too small). This commit fixes the issue: we flush the entries once the batch reaches its maximum capacity.
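For illustration, here is a minimal, self-contained sketch of the flush-on-capacity idea described above. The names (`loader`, `maxBatchCount`, `maxBatchSize`, `send`) echo the diff snippets quoted later in this thread, but the types are simplified stand-ins, not the actual backup.go code.

```go
package main

import "fmt"

// Entry is a simplified stand-in for badger's Entry type.
type Entry struct {
	Key, Value []byte
}

func (e *Entry) estimateSize() int64 {
	return int64(len(e.Key) + len(e.Value))
}

// options mirrors the two transactional limits discussed in this PR.
type options struct {
	maxBatchCount int64 // max number of entries per batch
	maxBatchSize  int64 // max total size (bytes) per batch
}

type loader struct {
	opt         options
	entries     []*Entry
	entriesSize int64
}

// send would hand the batch to an async batch writer; here it just
// simulates the flush. Note that it allocates a fresh slice instead of
// reusing the old backing array (see the aliasing discussion below).
func (l *loader) send() error {
	fmt.Printf("flushing %d entries (%d bytes)\n", len(l.entries), l.entriesSize)
	l.entries = make([]*Entry, 0, l.opt.maxBatchCount)
	l.entriesSize = 0
	return nil
}

// set flushes the pending batch before adding e would exceed either limit.
func (l *loader) set(e *Entry) error {
	sz := e.estimateSize()
	if int64(len(l.entries))+1 >= l.opt.maxBatchCount ||
		l.entriesSize+sz >= l.opt.maxBatchSize {
		if err := l.send(); err != nil {
			return err
		}
	}
	l.entries = append(l.entries, e)
	l.entriesSize += sz
	return nil
}

func main() {
	l := &loader{opt: options{maxBatchCount: 3, maxBatchSize: 1 << 20}}
	for i := 0; i < 7; i++ {
		key := []byte(fmt.Sprintf("key-%d", i))
		if err := l.set(&Entry{Key: key, Value: []byte("v")}); err != nil {
			panic(err)
		}
	}
}
```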
Reviewable status: 0 of 1 files reviewed, 1 unresolved discussion (waiting on @ashish-goswami and @manishrjain)
backup.go, line 186 at r1 (raw file):
Previously, ashish-goswami (Ashish Goswami) wrote…
We should not do this, as it will modify the entries slice already passed to db.batchSetAsync.
Nice catch. Thanks! Fixed it.
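The hazard being flagged here is Go slice aliasing: if the loader truncates and reuses the slice it just handed to the asynchronous writer, later appends overwrite entries the writer may still be reading. A minimal demonstration (`batchSetAsync` here is a hypothetical stand-in, not badger's API):

```go
package main

import (
	"fmt"
	"time"
)

// batchSetAsync keeps a reference to the slice and reads it later,
// the way an asynchronous batch writer would.
func batchSetAsync(batch []string) {
	go func() {
		time.Sleep(10 * time.Millisecond)
		fmt.Println("writer sees:", batch)
	}()
}

func main() {
	entries := []string{"a", "b"}
	batchSetAsync(entries)

	// Buggy reuse: entries[:0] keeps the same backing array, so this
	// append overwrites index 0 and the writer may print "c" for "a".
	// (This is also a data race; `go run -race` would flag it.)
	entries = entries[:0]
	entries = append(entries, "c")

	// Safe pattern: allocate a fresh slice after handing one off:
	//   entries = make([]string, 0, cap(entries))

	time.Sleep(50 * time.Millisecond)
}
```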
Reviewable status: 0 of 1 files reviewed, 2 unresolved discussions (waiting on @ashish-goswami, @jarifibrahim, and @manishrjain)
backup.go, line 135 at r2 (raw file):
```go
throttle    *y.Throttle
entries     []*Entry
entriesSize int64
```
uint64??
Reviewable status: 0 of 1 files reviewed, 2 unresolved discussions (waiting on @ashish-goswami and @manishrjain)
backup.go, line 135 at r2 (raw file):
Previously, ashish-goswami (Ashish Goswami) wrote…
uint64??
opt.maxBatchSize is of type int64. I've used int64 here so that we don't have to cast it before comparing it to opt.maxBatchSize.
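A small illustration of the rationale: keeping both sides of the comparison as `int64` avoids a conversion at every size check (the `maxBatchSize` variable below is a stand-in for the option field):

```go
package main

import "fmt"

func main() {
	var maxBatchSize int64 = 10 << 20 // opt.maxBatchSize is int64

	// entriesSize declared as int64 compares directly:
	var entriesSize int64 = 5 << 20
	fmt.Println(entriesSize >= maxBatchSize)

	// A uint64 counter would need a conversion on every comparison:
	var entriesSizeU uint64 = 5 << 20
	fmt.Println(int64(entriesSizeU) >= maxBatchSize)
}
```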
Reviewable status: 0 of 1 files reviewed, 3 unresolved discussions (waiting on @ashish-goswami, @jarifibrahim, and @manishrjain)
backup.go, line 167 at r2 (raw file):
```go
if int64(len(l.entries))+1 >= l.db.opt.maxBatchCount ||
	l.entriesSize+estimatedSize >= l.db.opt.maxBatchSize {
	if err := l.send(); err != nil {
```
nitpick :P just return it
Reviewed 1 of 1 files at r2.
Reviewable status: all files reviewed, 3 unresolved discussions (waiting on @ashish-goswami and @jarifibrahim)
Reviewable status: all files reviewed, 3 unresolved discussions (waiting on @ashish-goswami and @Sch00lb0y)
backup.go, line 167 at r2 (raw file):
Previously, sch00lb0y (balaji) wrote…
nitpick :P just return it
@Sch00lb0y I can't return it from here. We want to return only if there's an error; otherwise we want to continue.
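To make the control-flow point concrete, here is the contrast, continuing the loader sketch from the top of the thread (`over` abbreviates the two-limit capacity check quoted above):

```go
// Wrong: `return l.send()` would end set() even when the flush
// succeeds, so the current entry would never be appended.
if over {
	return l.send()
}

// Right: return early only on error; on success, fall through so the
// entry is added to the freshly emptied batch.
if over {
	if err := l.send(); err != nil {
		return err
	}
}
l.entries = append(l.entries, e)
```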
(cherry picked from commit f19b4c1)
Fixes #820