Use docker and aws #28
base: master
Conversation
- Use variables
- Default to tag "latest"
- Comment on what's going on
- Remove (and revoke) sensitive details of the Docker registry
Usual stuff: use bash variables and add comments to explain what is going on (sketched below).
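A minimal sketch of the "use variables, default to tag latest" idea above; the repository name and argument handling here are assumptions, not the PR's actual script:

```bash
# Hypothetical sketch only - the repo/image names are assumptions, not taken
# from the PR. Take the tag as an optional argument, defaulting to "latest",
# and refer to it through variables rather than repeating literals.
REPO=example/webapp
TAG=${1:-latest}

# Build, tag and push the image using the variables above.
docker build -t "$REPO:$TAG" .
docker push "$REPO:$TAG"
```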
- Correct LOGGING set-up
Motivation:
- Try to keep the Docker container stuff together.
- Keep the project root clean.
- Adjust the paths in supervisor conf
- Stub the "security-updates" ini file (am awaiting details of what is supposed to go in it)
Conflicts:
- deploy/shared/requirements.txt
- fabconfig.py
- fabfile.py
- makefile
- www/deploy/nginx/prod.conf
- www/deploy/nginx/stage.conf
- www/deploy/nginx/test.conf
Which was used for testing nginx config stuff.
Log output to /host/logs
Not quite sure if it's a good idea to have separate requirements files for different envs.
Use the correctly named base image and add usage notes.
Todo:
We don't need them there.
trap "[[ -h Dockerfile ]] && unlink Dockerfile" EXIT | ||
# Ensure script is called correctly
This comment would be better placed above the case statement, I think. usage() is more like help-text.
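For context, a hedged sketch of the structure being discussed - usage() as plain help-text, with the "called correctly" check sitting just above the case statement; the actual sub-commands are assumptions:

```bash
#!/usr/bin/env bash
# Hypothetical sketch - the sub-commands are assumptions. usage() is plain
# help-text; the "ensure script is called correctly" check sits just above
# the case statement that acts on the argument.
usage() {
    echo "Usage: $0 <build|push|run>"
}

# Ensure script is called correctly
if [ "$#" -ne 1 ]; then
    usage
    exit 1
fi

case "$1" in
    build) docker build -t example/webapp . ;;
    push)  docker push example/webapp ;;
    run)   docker run -d example/webapp ;;
    *)     usage; exit 1 ;;
esac
```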
Can't quite get the docker pull to work - seems to struggle to find a ~/.dockercfg file to use. Never mind - will bypass this by pushing Docker deployment into a script called by cron.
This commit removes the attempt to "docker pull" from the user data script and moves all docker container starting into a deploy script which is pulled from S3 into /opt/deploy.sh and run every 5 minutes by cron.

Notes:
- We need to store the S3 bucket URL in /etc/ so it can be looked up by the deploy script.
- This makes the webserver bootstrap script much simpler as we just need to install nginx and put the deploy script in place.
- The deploy script compares the running Docker image to that on S3 and starts a new container if they differ. It doesn't do anything clever like running two containers side by side (yet). In fact, it doesn't stop the old container, but it does update the nginx file to point to the new container.
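Not the script from this commit, but a minimal sketch of how such a cron-driven deploy script could look; every path, filename and port below is an assumption:

```bash
#!/usr/bin/env bash
# Hypothetical sketch of a cron-driven deploy script (all paths, names and
# ports are assumptions, not the actual contents of this commit). Installed
# as /opt/deploy.sh and run by cron, e.g.:
#   */5 * * * * root /opt/deploy.sh >> /var/log/deploy.log 2>&1
set -e

# The S3 bucket URL is stored in /etc/ so the script can look it up.
S3_BUCKET=$(cat /etc/deploy-bucket)

# What image does S3 say we should be running, and what did we last deploy?
aws s3 cp "$S3_BUCKET/current-image-id" /tmp/wanted-image-id
WANTED=$(cat /tmp/wanted-image-id)
DEPLOYED=$(cat /host/deployed-image-id 2>/dev/null || true)

if [ "$WANTED" != "$DEPLOYED" ]; then
    # Fetch and load the new image, then start a container from it on a
    # free host port. The old container is left running for now; only nginx
    # is repointed at the new one.
    aws s3 cp "$S3_BUCKET/images/$WANTED.tar" /tmp/image.tar
    docker load < /tmp/image.tar
    CID=$(docker run -d -P "$WANTED")
    HOST_PORT=$(docker port "$CID" 8000 | cut -d: -f2)

    # Update the nginx vhost to proxy to the new container and reload.
    sed -i "s|proxy_pass http://127.0.0.1:[0-9]*|proxy_pass http://127.0.0.1:$HOST_PORT|" \
        /etc/nginx/sites-enabled/webapp.conf
    service nginx reload

    # Remember what we deployed so the next cron run is a no-op.
    echo "$WANTED" > /host/deployed-image-id
fi
```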
Woohoo - automated deployments now work! Next steps are to look at S3 media/static serving
So no need for adding static serving to the URLs config.
This was quite fiddly to get right. You need to ensure the EC2 instance has a role that allows the appropriate write access to the S3 bucket. The Docker container will automatically pick up the access and secret key - there is no need to specify them. We use custom storage classes to allow prefixes within the bucket for storage.
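A quick way to sanity-check this from the instance; the bucket and key names here are made up. Boto inside the container picks up the same temporary credentials from the instance metadata service, which is why no keys need to be configured:

```bash
# Hypothetical checks (bucket name and paths are assumptions). If the
# instance has a role attached, its name is listed by the metadata service;
# boto reads the temporary credentials from the same place, so no
# access/secret key has to be configured anywhere.
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/

# Confirm the role actually grants write access to the media bucket.
echo probe > /tmp/probe.txt
aws s3 cp /tmp/probe.txt s3://example-media-bucket/media/probe.txt
```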
So they can be pushed without re-tagging.
It's a royal pain to get compressor working. A bug* in Boto means that static URLs contain security tokens, which means the hash changes every time - hence offline compression doesn't work. We work around this by using {{ STATIC_URL }} instead of {% static %}.

* boto/boto#1477
As the EC2 instance gets all its info from the S3 bucket.
We have stripped out the Oscar part and all the deploy stuff now lives in deploy/, not www/deploy/.
This is better handled using S3 and a custom storage for the fields in question. http://tartarus.org/james/diary/2013/07/18/fun-with-django-storage-backends
- Only try to start uWSGI once
- No need to specify a STDERR logfile if we're not going to use it (due to redirect_stderr=true)
We need this when running the container locally as offline compression isn't enabled then.
Note to self: things would be much simpler if we ran nginx inside the container itself rather than on the EC2 host. Why?
But it would remove the option to have any sort of UAT site, maintenance page or quick rollback. Keeping nginx in the host allows the containers to do one thing and do it properly (think microservices), as well as providing us with more options to do fun things. Also, what if we want to be able to run two API containers on the same host? Which one should have nginx? What if that one gets deprecated or moved?
So they don't appear when you run fab -l
After talking to @chrismckinnel, I have changed my mind on this one - it is probably better to keep nginx running on the host. Aside: it would be possible to have nginx running inside the containers and have a QA site running alongside a live site. You could make each container listen to a host port (eg 8000, 8001) and use a load balancer to determine which URL points to which container.
STDOUT gets collected in /var/log/cloud-init-output.log already - we don't need custom files.
This is due to a bug in Boto which includes this parameter even when it's not needed. Doing so causes S3 to respond with an AccessDenied response.
To avoid two containers using the same file.
We now grab the envfile from S3 on container startup. The Docker startup script handles both local and S3-based envfiles to allow release images to be tested locally. This avoids us having any more state within /host/ on the host EC2 instance.
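A hedged sketch of what the startup script's envfile handling could look like; the paths, bucket name and filenames are assumptions:

```bash
# Hypothetical startup snippet (paths, bucket and filenames are assumptions).
# Use a local envfile if one has been supplied (e.g. when testing a release
# image locally); otherwise fetch it from S3 so no extra state needs to live
# under /host/ on the EC2 instance.
if [ -f /app/local.env ]; then
    ENVFILE=/app/local.env
else
    ENVFILE=/tmp/app.env
    aws s3 cp "s3://example-config-bucket/envfiles/prod.env" "$ENVFILE"
fi

# Export everything in the envfile, then hand over to supervisor (which is
# what runs the app processes elsewhere in this PR).
set -a
. "$ENVFILE"
set +a
exec supervisord -n
```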
The test failed when more than one container was running. This change uses grep to provide a more robust test.
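Roughly the kind of check meant here (the image name is an assumption):

```bash
# Hypothetical version of the check (image name is an assumption). Piping
# `docker ps` through grep still works when several containers are listed,
# which is where the previous test fell over.
if docker ps | grep -q "example/webapp"; then
    echo "webapp container is already running"
else
    echo "webapp container is not running"
fi
```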
This builds upon Dyball's initial work (in #27) and cleans things up considerably.