
Depend Miner

How to Run Locally

Installation

$ npm install

Secrets

Before the application can be run, the template file secrets.env.template should be renamed to secrets.env and the values populated. The Redis and Neo4J passwords can be any values. The GitHub API access token should be generated in your GitHub account settings (Developer settings → Personal access tokens) with the public_repo scope only.
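
For example, the template can be copied into place and then edited (the variable names to fill in come from the template file itself):

$ cp secrets.env.template secrets.env
$ vi secrets.env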

Making Requests

Processing can be triggered for either a repository or an artifact by making a POST request to the relevant endpoint.

# for a repository
http://localhost:3000/analyse/github/<username>/<repositoryName>

# for an artifact
http://localhost:3000/analyse/maven?artifact=<groupId>:<artifactId>:<version>
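
For example, using curl (the placeholder values are illustrative):

# analyse a GitHub repository
$ curl -X POST "http://localhost:3000/analyse/github/<username>/<repositoryName>"

# analyse a Maven artifact
$ curl -X POST "http://localhost:3000/analyse/maven?artifact=<groupId>:<artifactId>:<version>"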

How to Run in Docker

Development

To build local images (omitting the optional service name builds all services):

$ docker-compose build <optional-service-name>

To start the services defined in docker-compose.yml (omitting the optional service name starts all services):

$ docker-compose up <optional-service-name>
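
For example, to rebuild and start only the worker service (the worker service name is taken from the scaling example later in this README; check docker-compose.yml for the full list of service names):

$ docker-compose build worker
$ docker-compose up worker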

Production

For production, run docker-compose commands specifying the production compose file.

Environment variables also need to be set for the Redis and Neo4J passwords, along with the GitHub API token. The suggestion is to create secrets.prod.env and pass it to docker-compose. (Note: .env files are included in .gitignore to prevent accidental commits of secrets.)

$ docker-compose -f docker-compose.production.yml --env-file=secrets.prod.env <commands...>
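
For example, to build and start all services in detached mode (a sketch; any other docker-compose subcommand can be substituted):

$ docker-compose -f docker-compose.production.yml --env-file=secrets.prod.env build
$ docker-compose -f docker-compose.production.yml --env-file=secrets.prod.env up -d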

AWS

These services could be run on a container orchestration service such as Docker Swarm or Kubernetes. The examples here are for Docker Swarm; for Kubernetes, the kompose tool may be of interest.

Security Group

All AWS instances must be set up as part of the same Security Group to allow networking between Docker Swarm nodes, as well as connections from the administrator.

Inbound Rules

Security Group Inbound Rules

Outbound Rules

Security Group Outbound Rules

The following commands should be executed on an EC2 instance (or similar).

On first instance creation:

$ sudo yum update # update
$ sudo yum install docker # install docker
$ sudo systemctl enable docker # set docker to start on startup
$ sudo systemctl start docker # start docker now

# or as a single command
$ sudo yum update -y && \
sudo yum install docker -y && \
sudo systemctl enable docker && \
sudo systemctl start docker

On the instance that will act as swarm manager:

# create a swarm
$ sudo docker swarm init

This will return the command to run on the other instances so that they join the swarm as workers, e.g.

$ sudo docker swarm join --token <token> <host>
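
If the join command is misplaced, it can be printed again on the manager node:

$ sudo docker swarm join-token worker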

All further swarm level configuration commands should be executed on the manager node.

Optionally, nodes can be labelled to control which services are deployed where. These labels are referenced as placement constraints in the AWS compose file, and can be created by running the following command:

$ sudo docker node update --label-add ec2=<value> <node>

# <value> = t2.small, c5.large etc.
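
The label can then be verified (the node name here is a placeholder):

$ sudo docker node inspect <node> --format '{{ json .Spec.Labels }}'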

Create the AWS compose file.

Currently Docker Swarm does not support the use of a .env file in the same way as docker-compose. For now, the solution is to populate the necessary variables in the provided AWS compose file template. In future, support for Docker Secrets would be preferable; however, as these are not read from the environment, this would require a change to the application code, probably with the addition of a config service.

$ sudo vi docker-compose.yml

Create the AWS secrets file and populate it.

$ sudo vi secrets.aws.env

Create the stack.

$ sudo docker stack deploy -c docker-compose.yml dependTest

Verify that the nodes have joined the swarm and the services have been created.

# list swarm nodes
$ sudo docker node ls

# list services in the stack
$ sudo docker stack services dependTest

On any given node (manager or worker):

# list services in the swarm (manager node only)
$ sudo docker service ls

# list containers on the node
$ sudo docker container ls

# display container logs (omitting --tail will display all lines over the lifetime of the container)
$ sudo docker container logs <container-name> [--tail=<number-of-lines-to-display>]
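
Logs can also be streamed continuously:

# follow the log output
$ sudo docker container logs --follow <container-name>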

Useful Commands

Inspect Container

# Inspect container
$ docker container exec -it <container-name> /bin/sh

# Inspect image
$ docker run -it <image-name> /bin/sh

Worker

Multiple instances of the worker service can be started with docker-compose scaling (see https://stackoverflow.com/questions/39663096/docker-compose-creating-multiple-instances-for-the-same-image):

$ docker-compose up -V --scale worker=3
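
Under Docker Swarm, the equivalent is docker service scale; the service name below assumes the dependTest stack name used above:

$ sudo docker service scale dependTest_worker=3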

Neo4j

Auth

Note that the environment variable NEO4J_AUTH only sets the initial password for the container. To change the password you must delete the container and its associated volume. Note that doing so will remove all other data.

# remove container
$ docker container rm <neo4j_container>

# remove volume
$ docker volume rm <neo4j_volume>
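
The relevant container and volume names can be listed first (the exact names will depend on your compose project):

# list containers and volumes to find the Neo4j ones
$ docker container ls -a
$ docker volume ls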

Cypher Shell

A Cypher shell can be accessed inside the container:

# start bash shell in container
$ sudo docker exec -it <neo4j_container> bash

# start cypher shell
$ cypher-shell -u <neo4j-username> -p <neo4j-password>

# run cypher commands (inside cypher-shell)
MATCH (n) RETURN count(*);          // count all nodes
MATCH (n)-[r]->() RETURN count(r);  // count all relationships
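
A query can also be run non-interactively from the host by piping it into cypher-shell through docker exec (a sketch using the same placeholder credentials):

$ echo "MATCH (n) RETURN count(*);" | sudo docker exec -i <neo4j_container> cypher-shell -u <neo4j-username> -p <neo4j-password>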
