refactor(api): slightly faster simulation #9821
Conversation
Codecov Report
```
@@           Coverage Diff           @@
##             edge    #9821   +/-   ##
=======================================
  Coverage   75.00%   75.00%
=======================================
  Files        2007     2007
  Lines       53286    53286
  Branches     5148     5148
=======================================
  Hits        39967    39967
  Misses      12300    12300
  Partials     1019     1019
```

Flags with carried forward coverage won't be shown.
```python
def plunger_position(
    self, instr: Pipette, ul: float, action: "UlPerMmAction"
) -> float:
    mm = ul / instr.ul_per_mm(ul, action)
    position = mm + instr.config.bottom
    return round(position, 6)

@functools.lru_cache(PLUNGER_CACHE_LIMIT)
def plunger_speed(
    self, instr: Pipette, ul_per_s: float, action: "UlPerMmAction"
) -> float:
    mm_per_s = ul_per_s / instr.ul_per_mm(instr.config.max_volume, action)
```
`ul_per_mm` calls `piecewise_volume_conversion`, which does a decent bit of math. These `plunger_X` methods can be called thousands of times with very small variance in inputs. The config values are immutable, so caching seems like a safe move.
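To illustrate the point, here is a minimal sketch of how `functools.lru_cache` memoizes a pure conversion keyed on hashable arguments. The `ul_to_mm` helper and its numbers are made up for the demo, not the real API:

```python
import functools

calls = {"count": 0}  # track how often the real math runs

# Hypothetical stand-in for the plunger math: a pure function of its
# hashable arguments, so lru_cache can memoize it safely.
@functools.lru_cache(maxsize=1024)
def ul_to_mm(ul: float, ul_per_mm: float, bottom: float) -> float:
    calls["count"] += 1
    return round(ul / ul_per_mm + bottom, 6)

first = ul_to_mm(50.0, 5.0, -8.5)   # computed: 50 / 5 - 8.5 == 1.5
second = ul_to_mm(50.0, 5.0, -8.5)  # identical inputs: served from cache
```

With repeated identical inputs the math runs once; `ul_to_mm.cache_info()` confirms the hit.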
```diff
@@ -61,7 +60,7 @@ def should_dodge_thermocycler(

    Returns True if we need to dodge, False otherwise
    """
    if any([isinstance(item, ThermocyclerGeometry) for item in deck.values()]):
```
This method is called in each `plan_moves`, which is called in each `move_to`: a needless search for a value that can be cached.
`InstrumentHandler` has the same lifetime as the hardware control API object and takes the attached instruments as inputs. That means we would need to clear the caches on the plunger position/speed/flowrate when the attached instruments change.

What might be a better idea is adding an LRU cache on `hardware_control.pipettes.Pipette.ul_per_mm`. Since that has the lifetime of an instrument, and it gets replaced when you change instruments, it won't need cache clearing, and it's where all the work happens.
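A rough sketch of that suggestion (the class and method names here are illustrative, not the real `hardware_control` API): build the memoized `ul_per_mm` per instance, so the cache is garbage-collected along with the instrument object and never needs manual clearing:

```python
import functools

class Pipette:
    """Illustrative only: the cache lives on the instance, so when the
    attached instrument changes and the object is replaced, its cache
    goes away with it -- no explicit cache_clear() required."""

    def __init__(self, slope: float) -> None:
        self._slope = slope
        # Wrap the expensive conversion in a per-instance LRU cache.
        self.ul_per_mm = functools.lru_cache(maxsize=128)(self._ul_per_mm)

    def _ul_per_mm(self, ul: float, action: str) -> float:
        # Stand-in for piecewise_volume_conversion's "decent bit of math".
        return self._slope * (1.0 if action == "aspirate" else 0.9)

pip = Pipette(2.0)
pip.ul_per_mm(10.0, "aspirate")  # computed
pip.ul_per_mm(10.0, "aspirate")  # cache hit on this instance only
```

Building the wrapper in `__init__` (rather than decorating the method at class level) also avoids a class-wide cache holding references to every instrument that was ever attached.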
I also think we could change the cache location for the deck slot key checking, but reasonable people can disagree.
I'm not sure I understand you. I was hoping to do exactly what you suggested, but `lru_cache` requires hashable arguments.
I was wrong.
Caching on a `Deck` method while the state of `Deck` can mutate underneath it makes me a little anxious. Is it necessary given the addition of the `thermocycler_present` property?
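One way to square caching with a mutable `Deck` is to maintain the flag eagerly on every mutation instead of caching a lazy lookup. A minimal sketch, using a plain `dict` subclass and invented names rather than the real opentrons `Deck`:

```python
class Deck(dict):
    """Illustrative sketch: keep the 'thermocycler present' flag in sync
    on every mutation, so it can never go stale."""

    def __init__(self) -> None:
        super().__init__()
        self._thermocycler_present = False

    def __setitem__(self, key, item) -> None:
        super().__setitem__(key, item)
        self._recalculate()

    def __delitem__(self, key) -> None:
        super().__delitem__(key)
        self._recalculate()

    def _recalculate(self) -> None:
        # One linear scan per deck mutation, instead of one per move_to().
        self._thermocycler_present = any(
            getattr(item, "is_thermocycler", False) for item in self.values()
        )

    @property
    def thermocycler_present(self) -> bool:
        return self._thermocycler_present
```

The linear search still happens, but only when the deck changes, which is rare compared to how often `should_dodge_thermocycler` reads the flag.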
```diff
@@ -75,6 +77,7 @@ def _assure_int(key: object) -> int:
        else:
            raise TypeError(type(key))

    @functools.lru_cache(20)
```
Is this safe? It looks like `self.data` will mutate without busting the cache.
```python
def __hash__(self) -> int:
    """A hash function as UserDict is unhashable."""
    return id(self)
```
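To make the concern concrete, here is a small demo (the `count_items` function is hypothetical) of how an identity-based `__hash__` lets `lru_cache` accept the object as a key, but also lets it serve stale results after a mutation:

```python
import functools
from collections import UserDict

class IdHashedDict(UserDict):
    """Same pattern as above: identity hash makes the mapping usable as
    an lru_cache key, but the cache cannot see mutations."""

    def __hash__(self) -> int:
        return id(self)

@functools.lru_cache(maxsize=20)
def count_items(d: IdHashedDict) -> int:
    return len(d)

d = IdHashedDict()
d["a"] = 1
first = count_items(d)  # caches 1 for this object identity
d["b"] = 2              # mutation does not bust the cache
stale = count_items(d)  # still the cached 1, though len(d) is now 2
```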
This feels iffy for the same reason the cache above feels iffy.
I had initially put the lru_cache on
Looks great to me!
* cache thermocycler presence to avoid linear search
* cache check names
* comments. more caching.
* thermocycler_present attribute test and fix.
* constants.
* move lru_cache to ul_per_mm
Overview
TL;DR: use caching to reduce simulation duration. These changes apply to both fast simulation and slow simulation.
While digging into #9503, I considered that maybe we're better off just improving the speed of slow simulation and doing away with fast simulation altogether. Why? Because it seemed agonizing to have the `PipetteDict` produced by fast simulation match what is produced by the HC.

From profiling a protocol that has thousands of aspirates/dispenses, I learned that most of the SynchAdapter price was paid in the `move_to` function. It's called repeatedly. So I attempted to modify our `InstrumentContextSimulation` to have it inherit from `InstrumentContextImplementation` and ONLY override `move_to` — basically, use the backward strategy from how fast sim had worked.

I spent a good deal of time profiling simulation and found sneaky little functions that are called zillions of times for no good reason. Comparing results on the Pi I got:
Pretty good, but not good enough.
So I fixed #9503 and am creating this PR to add the profiler learnings to improve simulation times.
Changelog
* Add a `thermocycler_present` property to `Deck`. `should_dodge_thermocycler` is called zillions of times and does a linear search for a thermocycler in the deck map. We can cache this value.

Review requests
Am I caching stuff that I shouldn't? Are the lru_cache limits reasonable?
I will add real life performance improvement metrics when I can.
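For the question about whether the `lru_cache` limits are reasonable, one low-effort check is to inspect `cache_info()` after a representative simulation run. A toy example of the idea:

```python
import functools

@functools.lru_cache(maxsize=32)
def square(x: int) -> int:
    return x * x

# Simulate a workload with small variance in inputs, like the
# plunger methods described above.
for x in (1, 2, 1, 2, 1):
    square(x)

info = square.cache_info()
# A high hit rate with currsize well under maxsize suggests the
# limit is generous enough; frequent evictions would argue for
# raising it.
```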
Risk assessment
The `thermocycler_present` attribute is somewhat risky, though it is well covered by unit tests. Otherwise: low.