
Parking cleanups #590

Merged (2 commits into panoptes:develop from pocs-shell-kill-update, Sep 15, 2018)
Conversation

@wtgee (Member) commented Sep 12, 2018:

  • Cleanup of the parked state to make it easier to understand.
  • Ctrl-c while pocs_shell is running will now finish the loop through the
    state machine rather than calling park directly on the mount. This allows
    housekeeping (including uploading) of observations even when the machine
    is stopped manually (a sketch of this behavior follows below).

Edit: Note: in the future it might be worth adding an option so that when you
press Ctrl-c you are prompted to choose between an immediate park and shutdown
or running through the states. I think I would prefer to move this to #592.

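For concreteness, here is a minimal sketch of the new Ctrl-c handling, assuming the shell's run loop is guarded by the `self._running` flag. Only `self._running`, `home_and_park()`, and the `print_warning` messages appear in the actual diff; the loop structure and the `run_once()` call are illustrative assumptions.

```python
# Sketch of the intended Ctrl-c handling in the shell's run command.
# Only self._running, home_and_park() and the print_warning messages come
# from the diff; the surrounding structure and run_once() are assumptions.
def do_run(self, *args):
    try:
        while self._running:
            self.pocs.run_once()  # hypothetical: one pass through the state machine
    except KeyboardInterrupt:
        # Old behavior: skip the remaining states and park the mount right away.
        #   print_warning('POCS interrupted, skipping states and parking')
        #   self.pocs.observatory.mount.home_and_park()
        # New behavior: clear the running flag so the current pass completes,
        # letting the state machine reach housekeeping (and upload observations)
        # before the shell exits.
        print_warning('POCS interrupted, parking')
        self._running = False
```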
@wtgee requested a review from @jamessynge on September 12, 2018 at 00:12.
Diff context (Ctrl-c handling):

    print_warning('POCS interrupted, skipping states and parking')
    self.pocs.observatory.mount.home_and_park()
    self._running = False
    print_warning('POCS interrupted, parking')
@jamessynge (Contributor):
Perhaps add the TODO here for the behavior you suggested in the PR?

@wtgee (Member, Author):
Would prefer to move to #592 actually. Can make a note.

Diff context:

    @@ -4,24 +4,26 @@ def on_enter(event_data):
        pocs = event_data.model
        pocs.say("I'm parked now. Phew.")
@jamessynge (Contributor):

Let's drop the Phew.

pocs.say("Done with retrying loop, going to clean up and sleep!")
pocs.next_state = 'housekeeping'
else:
has_valid_observations = pocs.observatory.scheduler.has_valid_observations
@jamessynge (Contributor):
Looks like this var is only used in one place.

pocs.say("Done retrying for this run, going to clean up and shut down!")
pocs.next_state = 'housekeeping'
else: # This branch will only happen if there is an error causing a shutdown
if has_valid_observations:
@jamessynge (Contributor):
I take it that has_valid_observations being True means that those observations could be taken now, so it isn't yet astronomical dawn, or some such.

@wtgee (Member, Author):
It doesn't capture whether those observations are observable or not; it just captures whether there is even something to try to schedule. This mostly comes into play if targets are removed while running. Actively adding and removing fields while running is not a widely used feature right now.
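
A rough illustration of that distinction (only `pocs.observatory.scheduler.has_valid_observations` appears in the diff above; the second line is a hypothetical example of a check it does not replace):

```python
# Answers "does the scheduler still have targets to even try to schedule?",
# e.g. they were not all removed while the unit was running.
has_targets = pocs.observatory.scheduler.has_valid_observations

# Whether anything can be observed right now (darkness, weather, etc.) is a
# separate question; this call is a hypothetical illustration, not from the diff.
can_observe_now = pocs.is_dark()
```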

@wtgee merged commit abc8455 into panoptes:develop on Sep 15, 2018.
@wtgee deleted the pocs-shell-kill-update branch on September 15, 2018 at 22:53.