Deploy through CDK #229

Closed
wants to merge 26 commits

Conversation

@devetry-brandon commented Nov 16, 2022

Description

I have attempted to create a pipeline to deploy the LBL Linkage Server and Client to AWS through the AWS CDK. I was not able to move the pipeline to GitHub Actions, but there is a list of commands that can be used to deploy the application from your local machine.

Although deployment is not yet an automatic pipeline, you gain the advantage of all AWS assets being created and connected through the CDK, which means creation and deletion is a simple deploy/destroy away. This is an upgrade over the old process of having to SSH into an EC2 instance and run Docker manually.

Oddities to explain or resolve:

Added modelica node dependencies to server

I was not able to bundle the server with webpack until I added the dependencies that the modelica parser code uses, including packages such as bluebird, fs-extra, winston, and underscore. There may be some other way to accomplish this through webpack custom resolvers, but I did not dig into that.
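One alternative worth noting (not what this PR does) is to mark those packages as webpack externals so the bundler leaves their require() calls alone; that only works if the packages are still installed in the runtime image. A minimal sketch, assuming a webpack.config.ts and an entry point at src/index.ts (both illustrative, not files in this repo):

```typescript
// Illustrative sketch only, not the webpack config in this repo.
// Marks the modelica parser's packages as externals so webpack leaves their
// require() calls untouched; assumes the packages are installed in the
// runtime image (e.g. via npm install in the Dockerfile).
import type { Configuration } from 'webpack';

const config: Configuration = {
  target: 'node',
  entry: './src/index.ts', // assumed entry point
  output: {
    filename: 'server.bundle.js',
  },
  externals: {
    bluebird: 'commonjs bluebird',
    'fs-extra': 'commonjs fs-extra',
    winston: 'commonjs winston',
    underscore: 'commonjs underscore',
  },
};

export default config;
```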

Moving the server Dockerfile

I had to move server/docker/Dockerfile up a level to server/Dockerfile. This is because aws-cdk does not let you specify a separate build context the way the docker-compose.yml files do; it assumes the build context is the directory the Dockerfile is in. With the Dockerfile in server/docker, the build cannot find the files it tries to COPY from the host, because they live in the Dockerfile's parent directory.

I edited the /docker-compose.yml and server/docker/docker-compose.yml files so that they point to the new location of the Dockerfile.

There may be a solution out there to avoid this, but I did not find one. There are no arguments like context to pass to taskImageOptions or ecs.ContainerImage.fromAsset. It looks like the functionality was asked for here, and may have been implemented in some other form, judging by the merged issues referenced on that issue.
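To make the constraint concrete, here is a minimal sketch (not the actual stack code) of how the image asset gets declared. The directory handed to ecs.ContainerImage.fromAsset doubles as the Docker build context, which is why the Dockerfile has to sit at server/Dockerfile. This assumes aws-cdk-lib v2 and the ApplicationLoadBalancedFargateService pattern (which is where taskImageOptions comes from); construct ids and the relative path are placeholders.

```typescript
// Illustrative sketch, assuming aws-cdk-lib v2; ids and paths are placeholders.
import * as cdk from 'aws-cdk-lib';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as ecsPatterns from 'aws-cdk-lib/aws-ecs-patterns';
import { Construct } from 'constructs';

export class LinkageServerStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const cluster = new ecs.Cluster(this, 'LinkageCluster');

    new ecsPatterns.ApplicationLoadBalancedFargateService(this, 'LinkageApi', {
      cluster,
      taskImageOptions: {
        // The directory passed to fromAsset is both the asset source and the
        // Docker build context; there is no separate `context` option, so the
        // Dockerfile has to live at the top of the server directory for its
        // COPY instructions to resolve.
        image: ecs.ContainerImage.fromAsset('../server'),
      },
    });
  }
}
```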

Remove root level docker-compose.yml

You can probably remove the top-level docker-compose.yml before this gets merged in, if we are abandoning the old manual EC2 deploy. I think the developers use the server/docker/docker-compose.yml file for local development.

CDK Environment Configs

You will eventually want a more elegant way to swap out configs per environment for the CDK stack. Right now the stack uses a single variable, set as the first command in the manual deploy steps: export LBL_STAGE=staging. You will need more variables in the future, so you will probably want to figure out the right way to do this.
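As one possible shape (purely illustrative, not code in this PR), the stack could look up a small per-stage config object keyed off LBL_STAGE; CDK context values (cdk deploy -c stage=staging, read via app.node.tryGetContext('stage')) are another common route. A sketch with made-up fields:

```typescript
// Illustrative sketch; only LBL_STAGE exists today, the rest is made up.
interface StageConfig {
  stage: 'staging' | 'production';
  // Future per-environment settings would go here, e.g. domain name,
  // desired task count, account/region targets.
}

const stageConfigs: Record<string, StageConfig> = {
  staging: { stage: 'staging' },
  production: { stage: 'production' },
};

// Mirrors the current manual step: `export LBL_STAGE=staging` before `cdk deploy`.
const stageName = process.env.LBL_STAGE ?? 'staging';
const stageConfig = stageConfigs[stageName];
if (!stageConfig) {
  throw new Error(
    `Unknown LBL_STAGE "${stageName}"; expected one of: ${Object.keys(stageConfigs).join(', ')}`
  );
}
```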

Custom Domain

There is currently no way to add a custom domain to the deployed applications, since there is no custom domain to add yet. When you do get one, you will want to configure it through the CDK stack and add Route 53 records for the bucket and possibly the API as well. I am not well versed in Route 53, but there has to be someone else at dept who is. The dash project uses Route 53 to add custom URLs to their Fargate service, so look to them or their code for assistance.
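For whoever picks this up: my understanding is that the Fargate pattern already has hooks for this. ApplicationLoadBalancedFargateService accepts domainName, domainZone, and certificate, and creates the Route 53 alias record pointing at the load balancer. A rough sketch of the API side once a domain exists (example.com, the subdomain, and all construct ids are placeholders; the client bucket would typically go through CloudFront plus its own alias record instead):

```typescript
// Rough sketch only; 'example.com' and all ids are placeholders. This would
// sit inside the stack constructor alongside the existing Fargate service.
import * as acm from 'aws-cdk-lib/aws-certificatemanager';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as ecsPatterns from 'aws-cdk-lib/aws-ecs-patterns';
import * as route53 from 'aws-cdk-lib/aws-route53';
import { ApplicationProtocol } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

// Look up the (future) hosted zone for the custom domain.
const zone = route53.HostedZone.fromLookup(this, 'Zone', {
  domainName: 'example.com',
});

// DNS-validated certificate for the API subdomain.
const certificate = new acm.Certificate(this, 'ApiCertificate', {
  domainName: 'api.example.com',
  validation: acm.CertificateValidation.fromDns(zone),
});

new ecsPatterns.ApplicationLoadBalancedFargateService(this, 'LinkageApi', {
  taskImageOptions: {
    image: ecs.ContainerImage.fromAsset('../server'),
  },
  // Supplying domainName and domainZone makes the pattern create the Route 53
  // alias record pointing at the load balancer.
  domainName: 'api.example.com',
  domainZone: zone,
  certificate,
  protocol: ApplicationProtocol.HTTPS,
});
```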

Moving to GitHub Actions

The hardest part about moving to GitHub Actions is the dependency on Docker for building the server image, which also produces a templates.json file that is used when building the client. This is an odd step that I was not able to work out how to migrate to GitHub Actions. You can see what we do on these lines:

cd ../server
# Build the server image (separately from the CDK build) to get the templates.json it generates.
docker build -t lbl-api-cdk .
# Create (but don't start) a container from the image so the file can be copied out.
id=$(docker create lbl-api-cdk)
# Copy the generated templates.json into the client source tree.
docker cp $id:/server/scripts/templates.json ../client/src/data/templates.json
# Remove the temporary container and its anonymous volumes.
docker rm -v $id

So we are building this image again (separately from the CDK build) just so we can grab that templates.json file. This could probably be done in a separate process from building the actual server image. I can imagine lines 1-53 of the server Dockerfile being part of a separate build, with the /dependencies and templates.json outputs extracted and shared between the server and client builds.

Testing

Manual testing on Daren and Amit's machines.

@@ -1,6 +1,6 @@
 {
   "name": "lbl-linkage",
-  "version": "0.1.11",
+  "version": "0.1.20",
@devetry-brandon (Author) commented:

You may want to revert this version number... It was updating every time I used the build command, and I was just doing this for testing purposes.

@akapoor66 (Collaborator) commented:

This PR is outdated; it was the base for PR #251, which has been merged into main.

@akapoor66 closed this Dec 20, 2022