This repository has been archived by the owner on Feb 10, 2021. It is now read-only.

Commit

Drop Adaptive._retire_workers
The `_retire_workers` method largely duplicates the implementation in
`distributed`'s `Adaptive`, so drop our copy.

This was tried before, but failed due to duplicate `retire_workers`
calls to the `Scheduler` from both `Adaptive._retire_workers` and
`DRMAACluster.scale_down`. Now that the behavior of
`DRMAACluster.scale_down` has been corrected, it is safe to drop our
implementation of `Adaptive._retire_workers`, which this commit does.
jakirkham committed May 20, 2018
1 parent 601137f commit 279c3e5
Showing 1 changed file with 0 additions and 17 deletions.
17 changes: 0 additions & 17 deletions dask_drmaa/adaptive.py
@@ -94,20 +94,3 @@ def get_scale_up_kwargs(self):
         logger.info("Starting workers due to resource constraints: %s",
                     kwargs['n'])
         return kwargs
-
-    @gen.coroutine
-    def _retire_workers(self, workers=None):
-        if workers is None:
-            workers = self.workers_to_close()
-        if not workers:
-            raise gen.Return(workers)
-        with log_errors():
-            result = yield self.scheduler.retire_workers(workers,
-                                                         remove=True,
-                                                         close_workers=True)
-        if result:
-            logger.info("Retiring workers {}".format(result))
-            # Diverges from distributed.Adaptive here:
-            # ref c51a15a35a8a64c21c1182bfd9209cb6b7d95380
-            # TODO: can this be reconciled back to base class implementation?
-        raise gen.Return(result)
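For reference, the pattern the dropped coroutine implemented — fall back to `workers_to_close()` when no workers are given, ask the scheduler to retire them, log and return the result — can be sketched as a plain synchronous function. This is an illustration only: `FakeScheduler` and `retire_workers` below are hypothetical stand-ins, not `distributed`'s real `Scheduler` API, and Tornado's `gen.coroutine`/`gen.Return` are omitted.

```python
import logging

logger = logging.getLogger(__name__)


class FakeScheduler:
    """Hypothetical stand-in for a scheduler (illustration only)."""

    def __init__(self, workers):
        self.workers = set(workers)

    def retire_workers(self, workers, remove=True, close_workers=True):
        # Retire only workers the scheduler actually knows about.
        retired = [w for w in workers if w in self.workers]
        if remove:
            self.workers -= set(retired)
        return retired


def retire_workers(scheduler, workers=None, workers_to_close=lambda: []):
    # Mirrors the dropped coroutine's control flow, minus Tornado.
    if workers is None:
        workers = workers_to_close()
    if not workers:
        return workers
    result = scheduler.retire_workers(workers, remove=True,
                                      close_workers=True)
    if result:
        logger.info("Retiring workers %s", result)
    return result


scheduler = FakeScheduler(["tcp://a:1", "tcp://b:2"])
print(retire_workers(scheduler, ["tcp://a:1"]))  # ['tcp://a:1']
```

In `distributed`'s base class the equivalent logic is a coroutine and the scheduler call is awaited; the sketch only shows why duplicating it downstream added no behavior of its own.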
