
Automatic performance benchmarking #816

Open
juhoinkinen opened this issue Nov 8, 2024 · 0 comments
We could utilize pytest-benchmark, a plugin for pytest, to monitor the performance of some operations.
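As a rough illustration (not part of the original proposal), a pytest-benchmark test could look something like the sketch below. It times NLTK's `word_tokenize`, one of the dependency code paths where past regressions have hit Annif; the `benchmark` fixture is provided by the plugin and calls the wrapped function repeatedly while recording timing statistics.

```python
import nltk


def test_word_tokenize_benchmark(benchmark):
    # Assumes the NLTK "punkt" tokenizer data has already been downloaded.
    text = "Annif is a tool for automated subject indexing. " * 200
    # The benchmark fixture runs the call many times and records the stats.
    tokens = benchmark(nltk.word_tokenize, text)
    assert len(tokens) > 0
```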

Performance regressions affecting Annif can come not only from changes in Annif's own code but also from changes in dependencies, and such regressions are not always easy to notice. In NLTK alone there have been two regressions affecting Annif: nltk/nltk#3013 and nltk/nltk#3299.

Currently there is a check for the CLI startup time (fast CLI startup is important for tab completions to be usable), which is run in GitHub Actions.

There could be unit tests for the performance of the train and suggest operations of backends (or at least some of the most important backends) that would be run in the CI/CD pipeline with the GitHub Action "Continuous Benchmark"; a rough sketch of such tests is included below.
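A very rough sketch, again not from the original proposal: `load_backend` and `sample_documents` below are hypothetical fixtures standing in for whatever Annif's test suite already provides, and the exact `train`/`suggest` signatures are assumptions. pytest-benchmark can emit machine-readable results with `--benchmark-json=output.json`, which the Continuous Benchmark action could then track across commits.

```python
def test_backend_train_benchmark(benchmark, load_backend, sample_documents):
    # load_backend and sample_documents are hypothetical fixtures for this sketch.
    backend = load_backend("tfidf")
    benchmark(backend.train, sample_documents)


def test_backend_suggest_benchmark(benchmark, load_backend):
    backend = load_backend("tfidf")  # assumed to be already trained
    # Assumption: the backend exposes a suggest() method taking document text.
    results = benchmark(backend.suggest, "A short test document about archaeology.")
    assert results is not None
```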
