feat: dump ddl changes into sql file instead of executing it on database #4757
Comments
I'm also very much interested in this feature. We started a spike to implement this. Knex offers a lot of tooling to manage migrations, but we want to keep the benefit of LB model metadata to build/update the schema. So basically, in our use case, we use knex as a migration script runner (and for more complex update needs, which are not that frequent). The problem with the LB migrate script is that we end up in the final schema state straight from the beginning, which may break data manipulation in the various migration scripts we'll have to run. Whereas having access to the SQL queries made by the ORM would allow us to run migrations against those, ensuring the right schema at the right time.
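To make the intended workflow concrete, here is a minimal sketch of the kind of knex migration we have in mind, assuming the DDL emitted by LB is available as plain SQL strings (the file name, table, and column below are invented for illustration):

```js
// migrations/20200101000000_add_order_status.js -- hypothetical file name
// Replays DDL captured from the ORM, then runs a data fix that relies on
// the intermediate schema state being in place at this point in history.
exports.up = async function (knex) {
  // DDL statement captured from LoopBack's auto-migration (example only)
  await knex.raw('ALTER TABLE "order" ADD COLUMN "status" VARCHAR(64)');
  // data manipulation that must run against this intermediate schema
  await knex('order').whereNull('status').update({status: 'pending'});
};

exports.down = async function (knex) {
  await knex.raw('ALTER TABLE "order" DROP COLUMN "status"');
};
```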
@ludohenin, thanks for your comment. Would you be interested in contributing to this feature?
@dhmlau definitely. I would need the appropriate pointers and design considerations though, as I guess this will touch juggler.
@bajtos, could you please shed some light on this? thanks. |
As was pointed out, we are discussing a "proper" Database Migration Management Framework in GitHub issue #487. Let's keep the discussion here focused on a short-term workaround.

The simplest solution that comes to my mind is to leverage the `before execute` hook offered by connectors. A mock-up implementation improving the schema-migration code:

```
if (operation in ds && typeof ds[operation] === 'function') {
  debug('Migrating dataSource %s', b.key);
+ ds.connector.observe('before execute', connectorCommandsObserver);
  await ds[operation](options.models);
+ ds.connector.removeObserver('before execute', connectorCommandsObserver);
} else {
  debug('Skipping migration of dataSource %s', b.key);
}
```

Observer implementation - this needs to take into account different connectors, so I am sharing a code snippet that won't work out of the box:

```js
function connectorCommandsObserver(ctx, next) {
  if (ctx.req && ctx.req.sql) {
    // if the connector is SQL:
    const {req: {sql, params}} = ctx;
    // TODO: merge parameterized SQL with params
    // into a single SQL statement string
    console.log(sql, params);
  } else if (ctx.collection) {
    // if the connector is MongoDB:
    const {collection, req: {command, params}} = ctx;
    // TODO: format this as a MongoDB client call
    console.log(collection, command, params);
  }
  // TODO: support other connectors
  next();
}
```
The code above is a very rough example. For a real implementation, we need to find a nicer way that also supports different connectors. This may require a lot of effort to design right and then implement in all parts that need changes.

I think for the first iteration, it's probably better to introduce some sort of an extension point that will allow LB users to subscribe to commands executed by the connector, and then leave it up to each application to convert that data into whatever format they need. For example, we can introduce a new field in the migration options:

```
if (operation in ds && typeof ds[operation] === 'function') {
  debug('Migrating dataSource %s', b.key);
  let observer;
+ if (options.commandObserver) {
+   observer = function(ctx, next) { options.commandObserver(ctx, ds, b.key); next(); };
+   ds.connector.observe('before execute', observer);
+ }
  await ds[operation](options.models);
+ if (observer) ds.connector.removeObserver('before execute', observer);
} else {
  debug('Skipping migration of dataSource %s', b.key);
}
```

Anyhow, just a few ideas to get you started.
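To illustrate what an application-side consumer of such an extension point could look like, here is a rough sketch; the `commandObserver` field is the hypothetical option from the snippet above, and the output file name is made up:

```js
const fs = require('fs');

const options = {
  existingSchema: 'alter',
  // hypothetical extension point discussed above
  commandObserver: (ctx, ds, dataSourceName) => {
    if (ctx.req && ctx.req.sql) {
      // NOTE: ctx.req.params still needs to be merged into the statement
      fs.appendFileSync(
        'migration.sql',
        `-- datasource: ${dataSourceName}\n${ctx.req.sql};\n`,
      );
    }
    // TODO: handle non-SQL connectors (e.g. MongoDB) separately
  },
};

// then, from the migrate script:
// await app.migrateSchema(options);
```

The application stays in control of the output format: it could just as well collect the statements in memory and diff them against an existing schema dump.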
This would require more changes and could be potentially tricky. We need to add a "dry run" mode on top of the observer behavior described above, which will almost certainly require changes in all connectors (MySQL, PostgreSQL, MSSQL, Oracle, DB2, etc., but also MongoDB). I am also not sure if the incremental migration will work when the partial changes are not applied to the database along the way.
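Purely as a sketch of what such a dry-run hook might look like (this is not how the connectors are implemented today; the `dryRun` option and the exact wiring are assumptions, and the real `execute` also notifies an `after execute` hook):

```js
const {SQLConnector} = require('loopback-connector');

// Hypothetical, simplified patch inside the shared SQL connector base class;
// each connector would still need to be checked for compatibility.
SQLConnector.prototype.execute = function (sql, params, options, callback) {
  const self = this;
  const context = {req: {sql, params}, options};
  this.notifyObserversOf('before execute', context, function (err) {
    if (err) return callback(err);
    if (options && options.dryRun) {
      // dry run: the statement was reported to observers above,
      // but it is never sent to the database
      return callback(null, []);
    }
    self.executeSQL(sql, params, options, callback);
  });
};
```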
This issue has been marked stale because it has not seen activity within six months. If you believe this to be in error, please contact one of the code owners, listed in the
This issue has been closed due to continued inactivity. Thank you for your understanding. If you believe this to be in error, please contact one of the code owners, listed in the
Suggestion
Add an option to `npm run migrate` to dump the resolved database structure changes into an SQL file instead of executing them on the database.
Use Cases
I think it's quite dangerous to run the migration on a production database directly. I would like to review the necessary changes before they are executed, or apply the SQL file manually.
Currently I need to clone the database, generate a DDL file from it, run the migration on the cloned database, generate a DDL file again, and compare the two to know what exactly the migration is doing.
Acceptance criteria
TBD - will be filled by the team.