feat(api): Mark calibrations as bad when determined they exceed threshold #6918
Conversation
Codecov Report
@@               Coverage Diff                @@
##    chore_release-4.0.0-beta.0    #6918     +/-   ##
======================================================
-  Coverage        73.20%        73.07%    -0.14%
======================================================
   Files              263           244       -19
   Lines            21034         20145      -889
======================================================
-  Hits             15398         14720      -678
+  Misses            5636          5425      -211
Continue to review full report at Codecov.
I think we can make this a little better by not going overboard with using the same function for marking everything bad, since the callers already call different things anyway. Either a data-transformation-only function, or a separate set of functions for each kind of calibration data, would make the mark_bad function itself (or whatever replaces it) a bit nicer to read.
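For what it's worth, the separate-functions variant might look roughly like this. This is only a sketch: the local_types import path, the markedBad field name, and the assumption that status is itself a dataclass field on these types are inferred from this thread, not taken from the PR.

import dataclasses

from opentrons.calibration_storage import types as local_types


def mark_deck_calibration_bad(
        calibration: local_types.DeckCalibration
) -> local_types.DeckCalibration:
    # Pure data transformation: copy the dataclass with only its status
    # swapped to markedBad=True; the caller decides how (and whether) to save.
    new_status = dataclasses.replace(calibration.status, markedBad=True)
    return dataclasses.replace(calibration, status=new_status)


def mark_pipette_offset_bad(
        calibration: local_types.PipetteOffsetCalibration
) -> local_types.PipetteOffsetCalibration:
    new_status = dataclasses.replace(calibration.status, markedBad=True)
    return dataclasses.replace(calibration, status=new_status)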
    calibration: typing.Union[
        local_types.DeckCalibration,
        local_types.PipetteOffsetCalibration,
        typing.Tuple[float, str, 'Labware']],
It's a bit odd that this is a tuple and the other stuff isn't - can we get a nice type for tip length calibration in line with the other kinds?
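Something along these lines would bring it in line with the other kinds. The field names below are guesses at what the tuple members mean (based on the tiprack_hash/tiprack_uri parameters in the nearby diffs), so treat this as a sketch rather than the PR's actual type:

import typing
from dataclasses import dataclass

if typing.TYPE_CHECKING:
    from opentrons.protocol_api.labware import Labware  # import path assumed


@dataclass
class TipLengthCalibration:
    tip_length: float    # the bare float in the tuple
    tiprack_hash: str    # the bare str in the tuple
    tiprack: 'Labware'   # the 'Labware' forward reference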
Also, rather than having a union, I think we should probably type this as a series of overload defs, but that's a personal preference
In fact, since whatever calls this is going to know what kind of calibration it has, it may make a lot more sense to have mark_bad be a function that transforms a calibration type, rather than a function that transforms and saves it. Since the calibration types are all dataclasses, we can star-arg expand them. Something like:
import dataclasses
import typing

@typing.overload
def mark_bad(calibration: local_types.DeckCalibration) -> local_types.DeckCalibration: ...

@typing.overload
def mark_bad(calibration: local_types.PipetteOffsetCalibration) -> local_types.PipetteOffsetCalibration: ...

@typing.overload
def mark_bad(calibration: local_types.TipLengthCalibration) -> local_types.TipLengthCalibration: ...

def mark_bad(calibration):
    caldict = dataclasses.asdict(calibration)
    caldict['status'] = local_types.CalibrationStatus(markedBad=True, ...)
    return type(calibration)(**caldict)
and then the callers do
bad_cal = mark_bad(calibration)
save_deck_calibration(bad_cal)
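One benefit of spelling the overloads out, shown below with hypothetical call sites: mypy keeps the precise type through mark_bad, so handing the result to the wrong save function gets caught. This assumes save_deck_calibration is annotated to take a DeckCalibration, which is not confirmed by the PR itself:

def demo(deck_cal: local_types.DeckCalibration,
         pip_cal: local_types.PipetteOffsetCalibration) -> None:
    bad_deck = mark_bad(deck_cal)    # inferred: local_types.DeckCalibration
    save_deck_calibration(bad_deck)  # fine

    bad_pip = mark_bad(pip_cal)      # inferred: local_types.PipetteOffsetCalibration
    save_deck_calibration(bad_pip)   # mypy would flag this mismatch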
     pip_id: str, mount: Mount,
-    tiprack_hash: str, tiprack_uri: str):
+    tiprack_hash: str, tiprack_uri: str,
+    cal_status: typing.Optional[local_types.CalibrationStatus] = None):
Don't need both Optional and = None, mypy will figure it out.
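For context on this note: mypy's implicit-Optional behavior (its default at the time of this PR) infers Optional from a None default, so the shorter spelling type-checks the same. A hypothetical illustration follows; note that newer mypy versions turn implicit Optional off by default, so the explicit form is what current tooling prefers:

import typing


class CalibrationStatus:
    """Stand-in for local_types.CalibrationStatus."""


# Implicit: mypy infers Optional[CalibrationStatus] from the None default.
def save_a(tiprack_hash: str,
           cal_status: CalibrationStatus = None) -> None:
    ...


# Explicit: the same signature spelled out in full.
def save_b(tiprack_hash: str,
           cal_status: typing.Optional[CalibrationStatus] = None) -> None:
    ...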
@@ -197,20 +203,25 @@ def save_robot_deck_attitude(
     transform: local_types.AttitudeMatrix,
     pip_id: typing.Optional[str],
     lw_hash: typing.Optional[str],
-    source: local_types.SourceType = None):
+    source: local_types.SourceType = None,
+    cal_status: typing.Optional[local_types.CalibrationStatus] = None):
Don't need both Optional and = None, mypy will figure it out.
@@ -106,7 +107,9 @@ def save_labware_calibration(
 def create_tip_length_data(
     definition: 'LabwareDefinition',
     parent: str,
-    length: float) -> 'PipTipLengthCalibration':
+    length: float,
+    cal_status: typing.Optional[local_types.CalibrationStatus] = None
Don't need both Optional and = None, mypy will figure it out.
        not self._is_checking_both_mounts()
    pipette_state = is_second_pipette or only_one_pipette
    if self.current_state == State.comparingTip:
        modify.mark_bad(
Yeah, see here: we're already calling things differently depending on what we know the types to be, so I don't think we need a fully generic function, or at least not a fully generic mark-and-save function.
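To make that concrete, the caller shape being argued for might look like this; session.tip_length_cal and save_tip_length_calibration are illustrative names, not code from the PR:

from opentrons.calibration_storage import modify  # module name taken from the diff above


def mark_and_save_tip_length(session) -> None:
    # Hypothetical caller: the session already knows this is tip-length
    # data, so the narrow, type-specific mark-then-save pair applies.
    bad_cal = modify.mark_bad(session.tip_length_cal)
    modify.save_tip_length_calibration(bad_cal)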
Let's remove that commented assert, but the code looks good to me.
@@ -121,22 +124,25 @@ def create_tip_length_data(
     # assert labware._is_tiprack, \
remove
This commented-out assert is part of the comment block above. I want to keep it in as a reminder of what we previously utilized in that method.
Overview
Closes #6730. Now, when a user completes calibration check, the app will be able to detect calibrations that were marked bad.
Changelog
Review requests
Any other tests you would like? Test on a robot: force your calibrations above the threshold, and you should see that the app picks up the bad calibrations afterwards.
It will only get resolved once you re-do whichever calibration was marked bad.
Risk assessment
Low.