hotfix(migrations) fix broken upstream migrations #3159
Conversation
There exist two problems with the behavior of upstream object migrations introduced in the 0.12.0 release:
- First, the DDL migrations to add both the healthcheck and hashing column definitions were required before executing the function-level migrations that fill in default object data. This is a result of the current DAO implementation, which requires that all Lua-schema definitions exist as column definitions in the underlying data store, even if the DAO update call does not reference the column in question.
- Second, the functional definitions load each row via a direct underlying DB call (as opposed to a DAO find_all()); this resulted in "table" schema types being represented as literal JSON strings, instead of Lua table types, by the C* driver. The Postgres implementation does not suffer from this, as the underlying representation of table data in Postgres-backed schemas is Postgres' JSON type, which is automagically deserialized to a Lua table upon retrieval. As the C* implementation offers no such behind-the-scenes transformation, a direct load of rows containing "table" schemas results in an incompatible data type when iterating over the returned rows.

The fix in this commit is to use the abstract DAO to load upstream rows when leveraging C*.

Fixes #3156.
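For context, a minimal sketch of the DAO-based loading approach described above, written in the shape of a Kong function-level migration. It assumes the migration callback receives the DAO factory as its third argument (as Kong's migration runner of that era passed it); the healthchecks field and the default value shown are illustrative, not the exact contents of the commit:

```lua
{
  name = "2017-11-07-192100_upstream_healthchecks_2",
  up = function(_, _, dao)
    -- Load upstreams through the abstract DAO instead of a raw C* query,
    -- so "table"-typed fields arrive as Lua tables rather than JSON strings.
    local rows, err = dao.upstreams:find_all()
    if err then
      return err
    end

    for _, upstream in ipairs(rows) do
      if upstream.healthchecks == nil then
        -- Illustrative default only; the real migration fills in 0.12's defaults.
        local _, uerr = dao.upstreams:update({
          healthchecks = { active = {}, passive = {} },
        }, { id = upstream.id })
        if uerr then
          return uerr
        end
      end
    end
  end,
},
```

Going through find_all() lets the DAO layer perform the JSON-to-Lua-table deserialization that a direct Cassandra read skips.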
33967e0 to ce3a7fd
},
{
name = "2017-11-07-192100_upstream_healthchecks_2",
name = "2017-10-27-134100_consistent_hashing_2",
If the names are lexically sorted, then this one ends up before the other one. The design was like that, but I don't know whether that is still the case.
It would be a nasty hurdle for upgrading 0.12.0 to 0.12.1 if we change the names of the migrations, as they would then be run again. Not sure what happens in that case.
They are still run in the order they are defined:
https://github.com/Kong/kong/blob/master/kong/dao/factory.lua#L271
Yes, they are actually loaded here: https://github.com/Kong/kong/blob/master/kong/dao/factory.lua#L193
but they are not sorted after loading. So all seems well.
Indeed, they were never sorted by their name - migration names are purely informative.
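To make that concrete, a simplified sketch of definition-order execution (not the actual runner in kong/dao/factory.lua linked above): the migration list is a plain Lua array walked with ipairs, so the name field never participates in any sorting:

```lua
-- Simplified illustration: migrations run in the order they are defined.
local migrations = {
  { name = "2017-11-07-192100_upstream_healthchecks_2", up = function() end },
  { name = "2017-10-27-134100_consistent_hashing_2",    up = function() end },
}

for _, migration in ipairs(migrations) do
  -- ipairs preserves array order; migration.name is never compared or sorted.
  local err = migration.up()  -- Kong-style convention: return an error on failure
  if err then
    error("migration " .. migration.name .. " failed: " .. tostring(err))
  end
end
```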
(It is worth noting that Cassandra offers no JSON support comparable to that of PostgreSQL, so indeed, it is not the role of the driver to offer such opinionated serialization.)
Agreed, certainly no problem there. Just explaining the situation :)
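For illustration of the driver difference discussed here (not part of the fix itself, which avoids the raw query path): Postgres hands back the JSON column already deserialized to a Lua table, while a raw Cassandra read returns the literal JSON text and would need an explicit decode, for example with lua-cjson. The row contents below are hypothetical:

```lua
local cjson = require "cjson.safe"

-- Hypothetical row from a raw C* query: the "table"-typed field arrives
-- as the literal JSON text stored in the underlying text column.
local row = { healthchecks = '{"active":{"concurrency":10}}' }

local healthchecks = row.healthchecks
if type(healthchecks) == "string" then
  healthchecks = cjson.decode(healthchecks)  -- normalize to a Lua table
end

assert(type(healthchecks) == "table")  -- now iterable, as in the Postgres case
```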
Manually merged!