Add automatic calibration #293

Merged 40 commits on Aug 7, 2024

Commits (40)
281e5e5  added RT, MS1 and mobility optimizers for automatic calibration (odespard, Jul 26, 2024)
918bfda  added optimizers for targeted optimization of MS1, MS2, RT and mobility (odespard, Jul 26, 2024)
25ef9d2  added plotting for automatic optimizers (odespard, Jul 26, 2024)
a9eae04  changed calibration and removed now-unused methods (odespard, Jul 26, 2024)
20c91c6  fixed errors (odespard, Jul 26, 2024)
71e98cd  Merge branch 'optimizer_classifier_integration' into additional_optim… (odespard, Jul 29, 2024)
6faa3eb  update to use df (odespard, Jul 29, 2024)
279558e  Merge branch 'optimizer_classifier_integration' into additional_optim… (odespard, Jul 29, 2024)
19dde17  use df for optimization history in automatic optimization (odespard, Jul 29, 2024)
335ec7a  Merge branch 'additional_optimizers' into refactor_calibration (odespard, Jul 29, 2024)
ecbe63a  change check for targeted calibration (odespard, Jul 29, 2024)
de6da9b  temporary_save (odespard, Jul 30, 2024)
00d533d  automatic calibration in peptidecentric.py with required alterations … (odespard, Jul 30, 2024)
3b24324  formatting (odespard, Jul 30, 2024)
09c0d70  changed some TODOs and comments (odespard, Jul 30, 2024)
60b884e  e2e now runs automatic calibration test (odespard, Jul 30, 2024)
1170dcd  fixed bugs with classifier version and workflow.extraction() (odespard, Jul 30, 2024)
599e3a2  make precursor_df consistent (odespard, Jul 30, 2024)
5c4b3da  Merge branch 'development' into add_automatic_calibration (odespard, Jul 30, 2024)
0878a77  formatting (odespard, Jul 30, 2024)
1becbc9  improved logging (odespard, Jul 30, 2024)
b37e70c  remove empty list in completely automatic calibration (odespard, Jul 30, 2024)
0cb800f  enforce minimum training iterations and backtrack for kernel (odespard, Jul 31, 2024)
7d3f930  impose minimum iterations during extraction of optimization data (odespard, Jul 31, 2024)
4542a7e  introduce extra optimization before loop and enforce minimum steps fo… (odespard, Jul 31, 2024)
087a856  check min_steps as part of _check_convergence method (odespard, Jul 31, 2024)
70dba73  add changes to automatic optimizers (odespard, Jul 31, 2024)
e6987c1  improved names and formatting (odespard, Aug 1, 2024)
d91689a  improve docstrings (odespard, Aug 1, 2024)
8d5c989  add unit tests for optimizers (odespard, Aug 1, 2024)
0d9e2eb  merge optimization and calibration_optimization managers (odespard, Aug 1, 2024)
f6d56f1  review comments (odespard, Aug 2, 2024)
157c14a  update unit tests (odespard, Aug 2, 2024)
8c61993  change feature for AutomaticMS1Optimizer (odespard, Aug 2, 2024)
7240857  fixed unit test (odespard, Aug 2, 2024)
84899f8  change e2e test for automatic calibration (odespard, Aug 5, 2024)
a5d01f2  adjust min_steps and optimization_lock_min_steps in default.yaml (odespard, Aug 5, 2024)
4f9b094  formatting (odespard, Aug 5, 2024)
bd55b6c  update unit test (odespard, Aug 5, 2024)
b5ea3bb  modify_e2e_test_for_automatic_calibration (odespard, Aug 7, 2024)
2 changes: 1 addition & 1 deletion .github/workflows/e2e_testing.yml
@@ -15,7 +15,7 @@ jobs:
     strategy:
       matrix:
         # test case name as defined in e2e_test_cases.yaml
-        test_case: [ "basic", "synchropasef", "astral", ]
+        test_case: [ "basic", "synchropasef", "astral", "astral_automatic_calibration", ]
     env:
       RUN_NAME: alphadia-${{github.sha}}-${{github.run_id}}-${{github.run_attempt}}
       BRANCH_NAME: ${{ github.head_ref || github.ref_name }}
15 changes: 9 additions & 6 deletions alphadia/constants/default.yaml
@@ -95,17 +95,20 @@ search_advanced:

 calibration:

-  # minimum number of times (epochs) the updated calibration target has to have been passed
-  min_epochs: 3
+  # minimum number of steps taken during the optimization lock (during which the elution groups used for optimization are extracted)
+  optimization_lock_min_steps: 0

   # Number of precursors searched and scored per batch
   batch_size: 8000

-  # recalibration target for the first epoch. For subsequent epochs, the target will increase by this amount.
-  recalibration_target: 200
+  # minimum number of precursors to be found before search parameter optimization begins
+  optimization_lock_target: 200

-  # TODO: remove as not relevant anymore
-  max_epochs: 20
+  # the maximum number of steps that a given optimizer is permitted to take
+  max_steps: 20
+
+  # the minimum number of steps that a given optimizer must take
+  min_steps: 2

   # TODO: remove this parameter
   final_full_calibration: False
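For context, here is a minimal sketch of how min_steps and max_steps could gate an optimizer's convergence check. The method name _check_convergence comes from the commit log above; the class name and internal logic are hypothetical, not the PR's actual implementation.

    # Hypothetical sketch: min_steps/max_steps gating a convergence check.
    class SketchOptimizer:
        def __init__(self, min_steps: int = 2, max_steps: int = 20):
            self.min_steps = min_steps  # calibration.min_steps in default.yaml
            self.max_steps = max_steps  # calibration.max_steps in default.yaml
            self.history = []  # one tracked metric value per optimization step

        def _check_convergence(self) -> bool:
            # Never report convergence before min_steps steps have been taken,
            # no matter how flat the metric looks early on.
            if len(self.history) < self.min_steps:
                return False
            # Illustrative criterion: stop once the metric no longer improves.
            return self.history[-1] <= self.history[-2]

        def step(self, metric: float) -> bool:
            """Record one step; return True if optimization should stop."""
            self.history.append(metric)
            # Stop on convergence or when the step budget is exhausted.
            return self._check_convergence() or len(self.history) >= self.max_steps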
16 changes: 15 additions & 1 deletion alphadia/workflow/base.py
@@ -95,8 +95,22 @@ def load(
         self._calibration_manager.disable_mobility_calibration()

         # initialize the optimization manager
+        optimization_manager_config = {
+            "ms1_error": self.config["search_initial"]["initial_ms1_tolerance"],
+            "ms2_error": self.config["search_initial"]["initial_ms2_tolerance"],
+            "rt_error": self.config["search_initial"]["initial_rt_tolerance"],
+            "mobility_error": self.config["search_initial"][
+                "initial_mobility_tolerance"
+            ],
+            "column_type": "library",
+            "num_candidates": self.config["search_initial"]["initial_num_candidates"],
+            "classifier_version": -1,
+            "fwhm_rt": self.config["optimization_manager"]["fwhm_rt"],
+            "fwhm_mobility": self.config["optimization_manager"]["fwhm_mobility"],
+            "score_cutoff": self.config["optimization_manager"]["score_cutoff"],
+        }
         self._optimization_manager = manager.OptimizationManager(
-            self.config["optimization_manager"],
+            optimization_manager_config,
Comment on lines +98 to +113

Collaborator:

this seems to me like OptimizationManager-specific logic that is outside of OptimizationManager :-)
suggestion: pass the self.config to the OptimizationManager and do this mapping during initialization. Also, I suggest separating between constant values (probably fwhm_rt does not change?) and dynamic values (e.g. classifier_version)

Collaborator Author:

Yes, that is true. I'll change it.

I think everything in the optimization manager should change – fwhm_rt for instance is updated during calibration.

             path=os.path.join(self.path, self.OPTIMIZATION_MANAGER_PATH),
             load_from_file=self.config["general"]["reuse_calibration"],
             figure_path=os.path.join(self.path, self.FIGURE_PATH),
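For illustration, a minimal sketch of the refactor suggested in the review thread above: pass the full config to the manager and do the mapping inside __init__ rather than at the call site. The class and method names here are hypothetical; the real OptimizationManager API may differ.

    # Hypothetical sketch of the reviewer's suggestion, assuming a dict-based config.
    class OptimizationManagerSketch:
        def __init__(self, config: dict):
            search_initial = config["search_initial"]
            defaults = config["optimization_manager"]
            self._state = {
                # initial values derived from the config; all of them may be
                # tuned later (as the author notes, even fwhm_rt is updated
                # during calibration)
                "ms1_error": search_initial["initial_ms1_tolerance"],
                "ms2_error": search_initial["initial_ms2_tolerance"],
                "rt_error": search_initial["initial_rt_tolerance"],
                "mobility_error": search_initial["initial_mobility_tolerance"],
                "num_candidates": search_initial["initial_num_candidates"],
                "column_type": "library",
                "classifier_version": -1,
                "fwhm_rt": defaults["fwhm_rt"],
                "fwhm_mobility": defaults["fwhm_mobility"],
                "score_cutoff": defaults["score_cutoff"],
            }

        def update(self, **kwargs) -> None:
            # Everything stays mutable so calibration can overwrite any value.
            self._state.update(kwargs)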