Fleet Management: Minimize Docker Definition #1114
umbhau added the feature (Ticket is a feature request / PR introduces a feature) and api (Affects the `api` project) labels on Mar 29, 2018
sfoster1 pushed a commit that referenced this issue on Aug 21, 2018
Overhaul the container configuration and initialization to remove as much as possible from the Dockerfile and the volatile storage, preferring instead for configuration to live in files that are included in the api server wheel and unpacked to /data during initialization. Most configuration and setup scripts are no longer COPYd into /usr/local/bin. Instead, they are packed into the api server's wheel (opentrons) in a /resources subdirectory. This resources subdirectory is recursive-included in the manifest, so putting a file into the directory will include it in the wheel. Because wheels do not run arbitrary code when they are installed, some external tooling gets the files out of wheel/resources and into their eventual home in /data/system.

* Provisioning

compute/find_module_path.py is an executable Python script that uses importlib to find the location of the opentrons module the way Python would (so it always gets the right one) without importing opentrons, which has significant side effects. The opentrons/api/opentrons/resources/scripts/provision-api-resources script uses find_module_path to find the current opentrons module and copy everything in the package's /resources to /data/system (the OT_CONFIG_DIR). This includes all the system configurations and scripts that were previously in compute/.

The provision script is designed to be invoked during updates. This happens during the first boot of the container (more on this later) and during a software update via the update server. In the update-server-driven case, because we assume a user is never updating to 3.3 from 3.0 on a 3.0 Dockerfile, we can assume the provision script is in /data/system/scripts and is therefore on the path. The update server therefore invokes the provision script after installing a new api server; the provision script finds the right (newly installed) opentrons module and updates /data/system with the new scripts.

In the case of the first container boot, we rely on the one piece of initialization left in compute/: compute/container_setup.sh does the first-boot check that setup.sh used to do, and in addition to removing old api server installations and updating the cached container ID, it runs the provision script - this time from the module installed in /usr/local/lib.

* Container Initialization

The Dockerfile CMD has been slightly simplified by symlinking the environment file into /etc/profile.d. This requires bash's -l flag to get a login shell in the docker container. Because the symlinks to the environment must be present when the shell starts, the environment script (opentrons/api/opentrons/resources/ot-environ.sh) is linked into /etc/profile.d by the Dockerfile (the rest of the configuration-file symlinks are made in opentrons/api/opentrons/resources/scripts/setup.sh). It is linked twice: once from /data/system, which will be empty when the container boots for the first time, and once from the system installation of the api server in /usr/local/lib, as a fallback. The one-time environment variables ensure it is only executed once.

Then the container setup script runs. In most cases it doesn't do much, but see above for the behavior during the first container boot. Since /data/system/scripts is in the path, we still call setup.sh and start.sh. In addition to what it used to do, setup.sh now makes symlinks for most of the configuration files we had been COPYing into Docker; these are now in /data/system. start.sh is pretty similar to before.
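The provisioning flow described above (locate the installed opentrons package without importing it, then unpack its bundled resources into OT_CONFIG_DIR) can be sketched roughly as follows. This is a minimal illustration, not the actual compute/find_module_path.py or provision-api-resources scripts; the function names and the /data/system fallback default are assumptions based on the description above.

```python
#!/usr/bin/env python
# Minimal sketch of the provisioning flow described above; not the real
# compute/find_module_path.py or provision-api-resources. Function names and
# the /data/system fallback default are assumptions.
import importlib.util
import os
import shutil
import sys


def find_module_path(name="opentrons"):
    # find_spec resolves the package the way `import` would, but does not
    # execute the package's __init__, which is what has the side effects.
    spec = importlib.util.find_spec(name)
    if spec is None or not spec.submodule_search_locations:
        sys.exit("could not locate package {!r}".format(name))
    return list(spec.submodule_search_locations)[0]


def provision(config_dir=None):
    config_dir = config_dir or os.environ.get("OT_CONFIG_DIR", "/data/system")
    resources = os.path.join(find_module_path(), "resources")
    os.makedirs(config_dir, exist_ok=True)
    # Unpack the configs and scripts shipped inside the wheel into the
    # config dir, replacing any previously provisioned copies.
    for entry in os.listdir(resources):
        src = os.path.join(resources, entry)
        dst = os.path.join(config_dir, entry)
        if os.path.isdir(src):
            shutil.rmtree(dst, ignore_errors=True)
            shutil.copytree(src, dst)
        else:
            shutil.copy2(src, dst)


if __name__ == "__main__":
    provision()
```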
* Container Building

The Dockerfile is now parameterized to make building a local container slightly easier. Invoking docker build with no arguments builds a container for the pi (it has to, since this is what Resin does). Invoking docker build with arguments that change the base image and clear RUNNING_ON_PI will build a container for local machines. To make this easier, there is a new top-level Makefile target, api-local-container, that invokes docker build with the correct arguments. Note that the local container still doesn't 100% work; it needs more love to get over things like not having a dbus socket on all hosts.

* Misc

- Sweet new motd
- Removed some of the nmcli commands used for janitoring the static ipv6 routes
- Removed the nginx root server block and deleted the static files it was serving
- Hardcoded some ports to get away from infinite env vars we can't trust
- Fixed an issue in the update server where it was sending the repr() of tracebacks in 500 messages for /server/update; now it sends the result of traceback.format_tb()

Closes #1114
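The last item in the Misc list is easy to see in miniature: repr() of a traceback object is just a string like '<traceback object at 0x...>', while traceback.format_tb() returns the rendered stack frames. A minimal sketch of the difference, using a hypothetical handler shape rather than the update server's actual code:

```python
# Illustrative only: the handler name and response shape are hypothetical.
# The point is the repr(tb) vs. traceback.format_tb(tb) difference described
# in the Misc notes above.
import json
import sys
import traceback


def build_500_body(exc_value, exc_tb):
    # repr(exc_tb) would yield something like '<traceback object at 0x7f...>',
    # which tells the client nothing. format_tb() returns the rendered stack
    # frames as a list of strings, which is actually useful in a 500 response.
    return json.dumps({
        "error": str(exc_value),
        "traceback": traceback.format_tb(exc_tb),
    })


def handle_update(run_update):
    try:
        run_update()
        return 200, json.dumps({"status": "ok"})
    except Exception:
        _, exc_value, exc_tb = sys.exc_info()
        return 500, build_500_body(exc_value, exc_tb)
```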
b-cooper pushed a commit that referenced this issue on Aug 29, 2018
(#2073) Same commit message as the Aug 21 commit above. Closes #1114
As a user, I would like to be able to update my robot api without connecting the robot to wifi.

Acceptance Criteria

Notes
- Programs installed in the OS
- Version of Python
- Nginx