wait for storage backup to finish to start new backup #12064
Conversation
Kudos, SonarCloud Quality Gate passed!
```diff
@@ -474,6 +474,12 @@ export class NativeEditorStorage implements INotebookStorage {
         return this.savedAs.event;
     }
     private readonly savedAs = new EventEmitter<{ new: Uri; old: Uri }>();

+    // Keep track of if we are backing up our file already
+    private backingUp = false;
```
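A minimal sketch of how a flag like this could make a new backup wait for the in-flight one, matching the PR title. The class, the `writeContents` helper, and the 100ms poll interval are all illustrative assumptions, not the actual `NativeEditorStorage` code:

```typescript
import { Uri } from 'vscode';

// Illustrative only: `writeContents` stands in for the real hot-exit write.
class BackupGuard {
    private backingUp = false;

    public async backup(file: Uri, contents: string): Promise<void> {
        // Wait for any in-flight backup to finish before starting a new one;
        // the 100ms poll interval is an arbitrary choice for this sketch.
        while (this.backingUp) {
            await new Promise((resolve) => setTimeout(resolve, 100));
        }
        this.backingUp = true;
        try {
            await this.writeContents(file, contents);
        } finally {
            this.backingUp = false;
        }
    }

    private async writeContents(_file: Uri, _contents: string): Promise<void> {
        // Real implementation would serialize the notebook and write it to disk.
    }
}
```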
Another way to do this might be to cancel the previous save (by creating a merged cancellation token source) and debounce a new one every time a request comes in while the previous one isn't finished.
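For what that alternative could look like, here is a rough sketch. The 250ms debounce delay and the `performWrite` helper are assumptions for illustration, not actual extension code:

```typescript
import { CancellationToken, CancellationTokenSource, Uri } from 'vscode';

// Illustrative only: debounce delay and performWrite are assumptions.
class CancellingBackup {
    private cts: CancellationTokenSource | undefined;
    private timer: NodeJS.Timeout | undefined;

    public queueBackup(file: Uri, contents: string): void {
        // Cancel whatever save is still in flight...
        this.cts?.cancel();
        this.cts?.dispose();
        this.cts = new CancellationTokenSource();
        const token = this.cts.token;

        // ...and debounce the new one so rapid edits coalesce into one write.
        if (this.timer) {
            clearTimeout(this.timer);
        }
        this.timer = setTimeout(() => void this.performWrite(file, contents, token), 250);
    }

    private async performWrite(file: Uri, contents: string, token: CancellationToken): Promise<void> {
        if (token.isCancellationRequested) {
            return;
        }
        // Real implementation would write to disk, checking the token between steps.
    }
}
```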
But then you might end up with half-finished writes.
I wonder if part of the problem is we're writing the entire file every time. We could just do updates? That's a lot more complicated though. Probably have to keep a file stream open and seek around to the write spot and stuff.
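As a point of reference, a bare-bones seek-and-patch write with Node's `fs/promises` API could look like the sketch below. The notion of a known byte `offset` is hypothetical; a typical notebook edit shifts everything after it, which is part of why this is a bigger work item:

```typescript
import { open } from 'fs/promises';

// Hypothetical seek-and-patch write: overwrite only a changed byte range.
// Offsets are only stable for fixed-size regions, so this is a sketch of
// the idea, not a drop-in replacement for rewriting the whole file.
async function patchRegion(path: string, offset: number, bytes: Buffer): Promise<void> {
    const handle = await open(path, 'r+');
    try {
        // Write `bytes` at `offset`, leaving the rest of the file untouched.
        await handle.write(bytes, 0, bytes.length, offset);
    } finally {
        await handle.close();
    }
}
```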
Yeah, I thought about cancelling as well, but my worry was what you mentioned about the writes. The big time sink is the file write, and cancelling it partway through seems dicey, since you could be left with a half-written file.
Long term, not writing the entire file seems like the right solution. I could open a new issue for that? I think it's a bigger work item. But I'd agree that writing a 70MB file a few times a second as I type in a markdown cell shouldn't happen. With some of these dense plot types, I think it's not too hard to generate pretty big ipynbs.
Yeah, the only real solution here would be to not write out the entire contents each time. I'll enter an issue for that.
For #12058

`package-lock.json` has been regenerated by running `npm install` (if dependencies have changed).