
[performance] Make the migration middleware faster, second attempt #13018

Merged
2 commits merged into ansible:devel from migration_detector_again on Feb 21, 2024

Conversation

AlanCoding (Member)

SUMMARY

I was looking at cProfile data from requests and found myself balking at these numbers again (a sketch of how such per-request profiles can be captured follows this description).

  • old, without this change, a typical timing for this middleware: 0.04 s
  • new, when it re-computes the cached value: 0.01 s
  • new, when it doesn't re-compute: 0.0005 s
ISSUE TYPE
  • Bug, Docs Fix or other nominal change
COMPONENT NAME
  • API
ADDITIONAL INFORMATION

I tried this before in #5239, but this revised approach is more standard.
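
For context on where the numbers above come from: per-request profiles like these can be captured with a small, debug-only profiling middleware. The class below is a hypothetical sketch for illustration only, not part of this PR or of awx.

```python
# profiling_middleware.py -- hypothetical debug helper, not part of this PR.
import cProfile
import io
import pstats


class ProfileRequestMiddleware:
    """Print a cProfile summary for every request (debug use only)."""

    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        profiler = cProfile.Profile()
        profiler.enable()
        response = self.get_response(request)
        profiler.disable()

        out = io.StringIO()
        pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(20)
        print(out.getvalue())  # or send to a logger
        return response
```

Temporarily adding such a class to MIDDLEWARE is enough to see per-request cumulative times like the ones listed above.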

@@ -199,9 +201,21 @@ def process_request(self, request):
        request.path_info = new_path


@memoize(ttl=20)
def is_migrating():
    last_applied = MigrationRecorder(connection).migration_qs.order_by('-applied').only('name').first().name
AlanCoding (Member, Author) commented on the line above:

This is a stripped-down query compared to what we had before, which did multiple queries to return the full list of migrations... which grows over time.

But I think it should be stripped down even more. We probably want to reverse the order, find the largest migration number from the module, and then do a .exists() query.

I also think I had it wrong locally: we need to .filter(applied__isnull=False) so that we only count migrations that have actually been applied. We might also, arguably, add app="main" to the filter.
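
A rough sketch of what that leaner check could look like, purely for illustration; the on-disk lookup helper and the memoize import path are my own assumptions, not the code that was merged:

```python
# Illustrative sketch only -- not the code merged in this PR.
import pkgutil

from django.db import connection
from django.db.migrations.recorder import MigrationRecorder

from awx.main import migrations as main_migrations
from awx.main.utils.common import memoize  # assumed import path for the decorator in the diff


def _latest_migration_on_disk():
    # Highest-numbered migration shipped with the installed code; relies on the
    # zero-padded "NNNN_name" naming convention so a plain string max() works.
    names = [name for _, name, _ in pkgutil.iter_modules(main_migrations.__path__) if name[:4].isdigit()]
    return max(names)


@memoize(ttl=20)
def is_migrating():
    # Single EXISTS query: is the newest on-disk migration recorded as applied?
    applied = (
        MigrationRecorder(connection)
        .migration_qs.filter(app='main', name=_latest_migration_on_disk(), applied__isnull=False)
        .exists()
    )
    return not applied
```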

AlanCoding marked this pull request as ready for review on October 13, 2022 01:00
AlanCoding (Member, Author)

Ran the tests and they look good; I'd say this is ready for review!

jbradberry (Contributor)

@AlanCoding I'm not a big fan of what we are doing in the first place. If I remember right, we consider the app to be in a "migrating" state if there are any unapplied migrations, so we can't, for example, temporarily go backwards in the migration stack.

It might be simpler to just set a lock or a single-value table in the database with the pre_migrate signal, and unset it with the post_migrate signal.
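
For illustration, a minimal sketch of that signal-based approach, assuming a cache-backed flag; the key name and receiver names are made up, and a single-value table would work the same way with an insert/delete instead of the cache calls:

```python
# Hypothetical sketch of the pre_migrate/post_migrate flag idea; not merged code.
from django.core.cache import cache
from django.db.models.signals import post_migrate, pre_migrate
from django.dispatch import receiver

MIGRATING_FLAG = 'app_is_migrating'  # hypothetical cache key


@receiver(pre_migrate)
def mark_migrating(sender, **kwargs):
    # Emitted for each app before the migration plan runs; setting the flag
    # repeatedly is harmless. No timeout, so it survives long migration batches.
    cache.set(MIGRATING_FLAG, True, timeout=None)


@receiver(post_migrate)
def unmark_migrating(sender, **kwargs):
    # Emitted for each app after the whole plan has finished.
    cache.delete(MIGRATING_FLAG)


def is_migrating():
    # The middleware check becomes a single cache lookup.
    return bool(cache.get(MIGRATING_FLAG, False))
```

The receivers would have to be registered somewhere that is always imported (e.g. an AppConfig.ready()), and the flag store has to remain reachable while migrations run.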

AlanCoding (Member, Author)

Perhaps part of the intention is to raise the migration screen after the source is updated, but before it gets to the actual migration task?

Also, the approach here only does a local Redis call for most requests, which is much faster than a database query. Nonetheless, a single simple query would be much better than the many complex queries we run now. On every. Single. Request.

I don't have a big issue with your suggested approach; I just want something to get done here, because whenever I look at profiling data from customers, this winds up constituting a significant amount of time. I also don't like that we have this in the first place, which is why I want to get something in place to mitigate it. As with my prior attempt, the best is the enemy of the good.
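
For reference on why most requests stay cheap: the value is memoized with a TTL, so the underlying query only runs once the cached value has expired. Below is a rough in-process sketch of that idea; the project's actual memoize decorator goes through the cache backend (hence the Redis remark above) and may differ in detail.

```python
# Hypothetical TTL memoize decorator; the real awx `memoize` utility may differ.
import functools
import time


def memoize(ttl=60):
    """Cache a zero-argument function's result in-process for `ttl` seconds."""
    def decorator(func):
        cached = {'value': None, 'expires': 0.0}

        @functools.wraps(func)
        def wrapper():
            now = time.monotonic()
            if now >= cached['expires']:
                cached['value'] = func()
                cached['expires'] = now + ttl
            return cached['value']
        return wrapper
    return decorator


@memoize(ttl=20)
def is_migrating():
    ...  # one cheap DB query, at most once per 20 seconds per process
```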

kdelee (Member) left a comment:

🔥

AlanCoding force-pushed the migration_detector_again branch from 7f502e6 to 65add01 on April 13, 2023 14:23
AlanCoding self-assigned this on Apr 13, 2023
AlanCoding force-pushed the migration_detector_again branch from 65add01 to 6c46aeb on October 3, 2023 15:24
dmzoneill force-pushed the migration_detector_again branch from 6c46aeb to e09f682 on February 21, 2024 13:25
dmzoneill merged commit 1ebff23 into ansible:devel on Feb 21, 2024
21 checks passed