indexeddb: accept any db schema version up to 99 #3812
Conversation
We weren't updating the database schema version immediately after the v10 -> v11 migration. This was fine in practice, because (a) for now, there is no v12 migration so we ended up setting the schema version immediately anyway; (b) the migration is idempotent. However, it's inconsistent with the other migrations and confusing, and is about to make my test fail, so let's clean it up.
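The stamping pattern being restored here could be sketched roughly like this (a hypothetical simplification; the real migration code runs inside an IndexedDB upgrade transaction):

```rust
// Hypothetical sketch: each migration step stamps the schema version as soon
// as it completes, rather than relying on a later step (or a final write at
// the end) to do so. Because the v10 -> v11 migration is idempotent, the old
// behaviour happened to be harmless, but this shape is consistent.
fn migrate(mut version: u32) -> u32 {
    if version < 11 {
        // ... perform the (idempotent) v10 -> v11 migration here ...
        version = 11; // stamp immediately, consistent with the other steps
    }
    // A future v11 -> v12 step would follow the same pattern.
    version
}
```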
Codecov Report: all modified and coverable lines are covered by tests ✅

```
@@           Coverage Diff            @@
##             main    #3812    +/-  ##
========================================
+ Coverage   84.03%   84.05%   +0.02%
========================================
  Files         259      259
  Lines       27202    27202
========================================
+ Hits        22859    22865       +6
+ Misses       4343     4337       -6
```
```rust
/// with `indexedDB.open(name, version)`, we explicitly check the version
/// ourselves.
///
/// It is expected that we will use version numbers that are multiples of 10 to
```
I appreciate the proposal. Thanks for working on this!
I wonder if 10 versions before a breaking change is enough. Might 100 be a safer interval? It doesn't hurt, and it gives more room for backward-compatible new versions.
What do you think?
My main thinking here was that, if we find ourselves getting close to 20 without a breaking change, we can always bump `MAX_SUPPORTED_SCHEMA_VERSION` to 29, provided we do so before we actually hit 20.
You might be right though. Version numbers are cheap (we've got 4 billion of them to play with...)
I've made this change in 41dfa88
```rust
/// as is version 30, but versions 21-29 are all backwards compatible with
/// version 20. In other words, if you divide by 10, you get something
/// approaching semver: version 20 is major version 2, minor version 0.
const MAX_SUPPORTED_SCHEMA_VERSION: u32 = 19;
```
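The divide-by-10 analogy can be made concrete with a small sketch (the helper names here are hypothetical; the actual check only needs the `<=` comparison):

```rust
const MAX_SUPPORTED_SCHEMA_VERSION: u32 = 19;

// "Major" component under the divide-by-10 scheme: versions 10..=19 are all
// major version 1, and a bump to 20 signals a breaking change.
fn major(version: u32) -> u32 {
    version / 10
}

// A database is usable if its schema version does not exceed the maximum
// version this build promises to understand.
fn is_supported(version: u32) -> bool {
    version <= MAX_SUPPORTED_SCHEMA_VERSION
}
```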
To avoid confusion around "the interval (10, as you suggest, or 100, as I suggest) minus 1" and so on, we may want to use a `const` expression here:
```rust
use std::num::NonZeroU32;

// The value we must update.
// Note: this is `NonZeroU32` for better safety, otherwise we might have 0 in `MAX_…`.
const NUMBER_OF_BREAKING_CHANGES: NonZeroU32 = unsafe { NonZeroU32::new_unchecked(2) };

// The value we use in the code.
// Note: the `saturating_sub` should not be necessary because of `NonZeroU32`, but we never know!
const MAX_SUPPORTED_SCHEMA_VERSION: u32 =
    (NUMBER_OF_BREAKING_CHANGES.get() * 100).saturating_sub(1);
```
Honestly, I'm not sure I find this any clearer. The problem is that it's a bit hard to wrap my head around what `NUMBER_OF_BREAKING_CHANGES` actually means here, and how it relates to the individual schema versions that we still have to grapple with in the migration logic.
We can still improve this in the future if we find `MAX_SUPPORTED_SCHEMA_VERSION` difficult to understand or to update.
Please rebase your commits before merging. Otherwise it looks good, thanks!
... so that next time we make a non-breaking change to the schema, it doesn't break rollback
(Force-pushed from 41dfa88 to e4a2335.)
I'm not sure this is actually going to help very much in practice, because changes to the format of the serialized objects (such as that in #3795) are also backwards-incompatible :(.
Currently, each time we make an indexeddb schema change, we break backwards compatibility with any existing applications. In other words, if someone updates their Element install, then decides to roll back, all existing sessions are broken because the old application can't cope with the new schema.
We can do better than this by being more tolerant in the database schema versions we accept. If we promise not to break backwards compat without bumping the schema version to ~~20~~ 100, then we can accept all schema versions up to ~~19~~ 99.

(Also contains a warmup commit which fixes something that becomes a problem in my new test.)
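A minimal sketch of the acceptance rule this PR introduces (the names and exact values here are illustrative; the actual open logic lives in the crate's IndexedDB store):

```rust
// SCHEMA_VERSION is what this build writes; MAX_SUPPORTED_SCHEMA_VERSION is
// the highest version it will accept, leaving 12..=99 free for future
// non-breaking schema changes that an older build can still read.
const SCHEMA_VERSION: u32 = 11;
const MAX_SUPPORTED_SCHEMA_VERSION: u32 = 99;

#[derive(Debug, PartialEq)]
enum OpenOutcome {
    Ok,             // schema is current, or newer but still compatible
    NeedsMigration, // older schema: run migrations, then stamp SCHEMA_VERSION
    TooNew,         // written by a future build that broke compatibility
}

fn check_version(found: u32) -> OpenOutcome {
    if found > MAX_SUPPORTED_SCHEMA_VERSION {
        OpenOutcome::TooNew
    } else if found < SCHEMA_VERSION {
        OpenOutcome::NeedsMigration
    } else {
        OpenOutcome::Ok
    }
}
```

With this shape, rolling back after a non-breaking bump (say, from schema 12 back to a build that writes 11) leaves the session usable, which is exactly the scenario described above.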