
Use multiprocessing to speed up processing of input files #9

Merged (2 commits, Feb 8, 2022)

Conversation

hugovk (Collaborator) commented Feb 8, 2022

Use multiprocessing to speed up processing of input files, with the number of worker processes matching the CPU count.

On my old dual-core Mac, the processing step speeds up:

  • 601 files (all of 2021): 2m7s -> 1m20s
  • 2,629 files (my full Strava archive): 10m29s -> 5m9s

Will be a larger gain for machines with more CPUs.

multiprocessing needs a single worker function; this is the new process_file, which decides whether to call process_gpx or process_fit. These also need to be top-level functions to work with multiprocessing (so they can be pickled and sent to the worker processes).
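The dispatcher pattern described above can be sketched roughly as follows. This is a hedged illustration, not the PR's actual code: the process_gpx and process_fit bodies here are stand-ins for the project's real parsers, and process_all is a hypothetical driver.

```python
import multiprocessing
from pathlib import Path

def process_gpx(path: Path):
    # Stand-in for the project's real GPX parser.
    return ("gpx", path.name)

def process_fit(path: Path):
    # Stand-in for the project's real FIT parser.
    return ("fit", path.name)

def process_file(path: Path):
    # Single top-level worker function required by multiprocessing;
    # it dispatches to the right parser based on the file extension.
    if path.suffix.lower() == ".gpx":
        return process_gpx(path)
    return process_fit(path)

def process_all(paths):
    # Pool() defaults to os.cpu_count() worker processes.
    with multiprocessing.Pool() as pool:
        return pool.map(process_file, paths)
```

Nested or lambda functions cannot be used as the worker here, since multiprocessing pickles the worker function by its module-level name.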

I also formatted the file using the popular Black autoformatter:

python -m pip install -U black
black .

I also fixed a bug when using an input path like ~/dir: the ~ needs to be expanded before checking whether the path is a directory (and appending *).

@marcusvolz marcusvolz merged commit 18afe76 into marcusvolz:main Feb 8, 2022
marcusvolz (Owner) commented:

Very cool.

@hugovk hugovk deleted the multiprocessing branch February 9, 2022 07:01