
dbapi: migrate to Postgres #430

Closed
peterthomassen opened this issue Aug 21, 2020 · 2 comments

@peterthomassen (Member)

We increasingly need DBMS features that MySQL/MariaDB does not support, but Postgres does, such as UUID column types (generally), time duration column types, IP range column types, arrays of these things (multiple IP ranges), and the like (for #347).

This issue is to track what's related to migrating dbapi to Postgres. A draft is in branch 20191106_postgres_dbapi.

Some points:

  • Our 16 migrations can't be applied on a fresh Postgres installation, as they include migrations that convert primary key columns from integer via a text datatype to UUID (native in Postgres, text-based in MySQL). These migrations are not Postgres-compatible. We should just squash all migrations so that old schema states are not unnecessarily reproduced in the new Postgres database, and then import the current data (which does not need any transformation if the import format is compatible).
  • A compatible import format can indeed be achieved using Django's management commands for export/import. The following steps are necessary:
    1. On MySQL, do (~1 minute in production):
      docker-compose exec api python manage.py dumpdata contenttypes --indent=4 --natural-foreign > contenttype.json
      docker-compose exec api python manage.py dumpdata --exclude contenttypes --exclude desecapi.Donation --exclude desecapi.AuthenticatedAction --exclude desecapi.AuthenticatedBasicUserAction --exclude desecapi.AuthenticatedUserAction --exclude desecapi.AuthenticatedActivateUserAction --exclude desecapi.AuthenticatedChangeEmailUserAction --exclude desecapi.AuthenticatedResetPasswordUserAction --exclude desecapi.AuthenticatedDeleteUserAction --exclude desecapi.AuthenticatedDomainBasicUserAction --exclude desecapi.AuthenticatedRenewDomainBasicUserAction --indent=4 --natural-foreign > everything_else.json
      
    2. Switch to Postgres installation, copy the exported files into the api container, and do (~5-6 minutes on my laptop):
      docker-compose exec api bash
      python manage.py migrate
      python manage.py sqlflush | python manage.py dbshell
      python manage.py loaddata contenttype.json
      python manage.py loaddata everything_else.json
      
    After this, the API works out of the box.
  • I'm unsure what to do on the public-facing side while the migration is running. We must avoid any changes in the database, including in the pdns databases, to avoid inconsistency between export and import. However, the api container must be running so that we can run the above commands. I think we should therefore stop the www container (it will look like a downtime), or configure it to send 503. (It would do that automatically if the api container was down ...) How to achieve that? @nils-wisiol (Reject traffic with iptables? hm...)
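The exclude list in step 1 is long enough to be error-prone when retyped; a small helper along these lines (a sketch for illustration, not part of the draft branch) could generate the flags from the model list given above:

```shell
#!/bin/sh
# Sketch only: builds the --exclude flags for the dumpdata call in step 1.
# The model names are exactly the ones listed in this issue.
EXCLUDED_MODELS="contenttypes
desecapi.Donation
desecapi.AuthenticatedAction
desecapi.AuthenticatedBasicUserAction
desecapi.AuthenticatedUserAction
desecapi.AuthenticatedActivateUserAction
desecapi.AuthenticatedChangeEmailUserAction
desecapi.AuthenticatedResetPasswordUserAction
desecapi.AuthenticatedDeleteUserAction
desecapi.AuthenticatedDomainBasicUserAction
desecapi.AuthenticatedRenewDomainBasicUserAction"

# Emit "--exclude <model> " for each model, suitable for command substitution
build_exclude_args() {
  for model in $EXCLUDED_MODELS; do
    printf '%s %s ' --exclude "$model"
  done
}

# Usage (inside the api container):
#   python manage.py dumpdata $(build_exclude_args) --indent=4 --natural-foreign > everything_else.json
```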
@peterthomassen (Member, Author)

The current Postgres docker image is Postgres 12, while our Python image is based on Debian buster and comes with a Postgres 11 client. That's mostly fine, but some commands don't work inside Django's dbshell. We should switch the api parent image to Alpine, which in its current version has a Postgres 12 client (and is preferable anyway for its leanness).
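A minimal sketch of that parent-image switch, assuming a python:3-alpine base and Alpine's postgresql-client package (exact package and tag names would need checking against the Alpine release actually used):

```dockerfile
# Sketch: switch the api image from a Debian buster base to Alpine,
# whose current release ships a Postgres 12 client.
FROM python:3-alpine

# postgresql-client provides psql, which Django's dbshell invokes
RUN apk add --no-cache postgresql-client
```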

@nils-wisiol (Contributor)

> I'm unsure what to do on the public-facing side while the migration is running. We must avoid any changes in the database, including in the pdns databases, to avoid inconsistency between export and import. However, the api container must be running so that we can run the above commands. I think we should therefore stop the www container (it will look like a downtime), or configure it to send 503. (It would do that automatically if the api container was down ...) How to achieve that? @nils-wisiol (Reject traffic with iptables? hm...)

We can docker-compose stop api, which results in a 502(!) Bad Gateway after lengthy timeouts. The website is still available; the login form says (again after a lengthy timeout) "Something went wrong at the server, but we currently do not know why. The support was already notified." Not sure why it's not an immediate 503.
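If the www container is nginx-based (an assumption here), an explicit maintenance rule would give an immediate 503 instead of the slow 502 from a timed-out upstream; this is only a sketch of the idea, not tested against the actual www config:

```nginx
# Sketch: return 503 for API requests during the migration window,
# instead of waiting for an upstream timeout against a stopped api container.
location /api/ {
    return 503;
}

# Optional: serve a static maintenance notice as the 503 body
error_page 503 /maintenance.html;
location = /maintenance.html {
    root /usr/share/nginx/html;
    internal;
}
```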
