Find Your Road.
_NOTE: This is an internal tool we use at Fugitive Labs. It has an accompanying CLI, installable via npm, to make getting projects up and running easier. It is under active development, but feel free to try it out and contribute!_
======
RECENT UPDATES (9/10) Going forward, Yote is designed to work with Node v4. Until now we hadn't standardized on a Node version.
- Install nvm (node version manager)
curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.26.1/install.sh | bash
- Activate it
. ~/.nvm/nvm.sh
- Install node.
nvm install 4.0.0
If this works, 'node -v' should return v4.0.0. To get Yote to run, I had to manually re-install "node-sass-middleware" with npm, but a plain npm install should usually work.
The .nvmrc file specifies which version of node the project wants to run.
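For reference, the .nvmrc file is just the version string; for the setup above it would contain:

```
4.0.0
```

Running 'nvm use' from the project directory then switches to that version.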
======
A simple, client-agnostic API framework for NodeJS.
- NodeJS version >= 4.0.0
- NPM
- MongoDB
- ExpressJS 4
- ReactJS -- (default web client)
- Redux -- (client store)
- React Router -- (routing)
- Webpack -- (Bundling JS)
- Babel -- (JS compiler)
- Docker -- Deployment containers
To run the application locally:
- Install all dependencies and run mongo.
- Clone the github repo and cd into the directory.
- Run
$ npm install
to install the application's node dependencies (npm is installed alongside node).
- Locate and copy the secrets.js file into the top level directory for the project. This file contains the randomized session keys as well as the application API keys, and is not tracked by Github.
Your folder structure should look something like this:
my-project/
|_ client/
|_ node_modules/
|_ public/
|_ server/
|_ ssl/
.dockerignore
.gitignore
.nvmrc
Dockerfile
logger.js
nodemon.json
package.json
README.md
secrets-sample.js
secrets.js
webpack.config.js
yote.js
Finally, to run the application you'll need to open two separate terminals. (NOTE: this will eventually change to a single command)
In the first terminal, run
$ npm run watch
This runs webpack in watch mode to look for and recompile any changes to bundle.js.
In the second terminal, run
$ nodemon
This runs the node server and watches for changes.
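These two commands are driven by package.json and nodemon.json. A sketch of what the relevant scripts entry might look like (hypothetical; the exact flags vary per project):

```json
{
  "scripts": {
    "watch": "webpack --watch"
  }
}
```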
The server will be listening on http://localhost:3030/ in your browser of choice.
In your terminal simply run $ mongo
to use the built in mongo console.
On the remote instance, you can access the database by running a new mongo container and connecting to the already running database container. In the current deployment, this command would be:
$ (sudo) docker run -it --rm --link mongodb:mongodb library/mongo bash -c 'mongo --host mongodb'
To run the application container for this deployment:
$ (sudo) docker run -p 80:80 -p 443:443 -t -i --link mongodb:mongodb --name gsk-registry --rm -e NODE_ENV=production fugitivelabs/gsk-registry
Development is the default environment. It listens on port 3030 and console.log() logs to the console. It can be run locally with
$ nodemon
or
$ node yote.js
from the top level directory.
To run development environment remotely use the following command:
$ (sudo) docker run -p 80:3030 -t -i --link mongodb:mongodb --name PROJECT_NAME --rm ORG_NAME/PROJECT_NAME
A production environment can be enabled by running
NODE_ENV=production PORT=xxxx node yote.js
where xxxx is your desired port (like 80 on a production server). The PORT=xxxx setting is not necessary; it defaults to 80, but this will break if that port is already in use. Running production will disable all console.log calls on the front end.
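The environment and port selection described above amounts to logic like this (a sketch of the idea, not Yote's actual yote.js startup code):

```javascript
// Sketch of the environment/port selection described above.
// Not Yote's actual startup code -- the real logic lives in yote.js.
function resolveConfig(env) {
  const isProduction = env.NODE_ENV === 'production';
  // Production defaults to port 80; development listens on 3030.
  const port = env.PORT ? parseInt(env.PORT, 10) : (isProduction ? 80 : 3030);
  return { isProduction, port };
}

console.log(resolveConfig(process.env));
```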
To run a production environment remotely use the following command:
$ (sudo) docker run -p 80:3030 -p 443:443 -t -i --link mongodb:mongodb --name PROJECT_NAME --rm -e NODE_ENV=production ORG_NAME/PROJECT_NAME
Deployment to a remote instance is easy. It requires running docker containers for Redis and MongoDB. See docker docs for more info on setting up your local docker environment.
First we need to build the application image locally and push it to your Docker registry.
On your local machine, run:
$ docker build -t ORG_NAME/PROJECT_NAME .
Then:
$ docker push ORG_NAME/PROJECT_NAME
Then, we need to initialize our remote instance.
On the remote server, run the following images and link them.
- Pull the Mongo repository from Docker itself:
$ (sudo) docker pull library/mongo
- Start mongod with flags for smallfiles and local storage
$ (sudo) docker run -d -v ~/data:/data/db --name mongodb library/mongo mongod --smallfiles
- Start yote and link with other containers
$ (sudo) docker run -p 80:3030 -t -i --link mongodb:mongodb --name PROJECT_NAME --rm ORG_NAME/PROJECT_NAME
Note that PROJECT_NAME above should be replaced with your actual project name.
Repeat steps above to build and push changes to docker from your local machine.
On the remote instance we need to pull in the new build, stop and remove the running docker application instance, and then rerun the new build.
Run:
$ (sudo) docker ps
Note the CONTAINER ID of the container running the application
Next run:
$ (sudo) docker pull ORG_NAME/PROJECT_NAME
Then stop the running application container:
$ (sudo) docker stop [CONTAINER ID]
Next remove the running application container:
$ (sudo) docker rm [CONTAINER ID]
Now, simply rerun the application and Docker will use the most recently pulled image.
$ (sudo) docker run -p 80:3030 -t -i --link mongodb:mongodb --name yote --rm ORG_NAME/PROJECT_NAME
To view free disk space on the server, run:
$ df -h
Old Docker images will build up, taking up disk space on the server. To clear dangling images, run:
$ sudo docker rmi $(sudo docker images -q --filter "dangling=true")
# API Documentation
Everything regarding the API is stored in the server/ folder. Every time you run
yote gen resourceName
a controller, model, and router are created for your project based on the resourceName.
- controller - this is where your logic lives; you return a success boolean, a message, or anything else that needs to be returned.
- model - this is where the mongoose db schema for your resource is defined, so that when doing a find() in the controller you can query a specific field.
- router - api-router.js stores the routing to your different resources. In server/router/api you can find resourceName-api.js; this is where you set up the routes for the resource, i.e. incoming POST, GET, PUT calls are sent to the specific function in the controller to run the logic.
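As a sketch of that pattern (plain functions standing in for the Express 4 routing and mongoose models Yote actually generates; the resource and function names here are illustrative, not Yote's generated code):

```javascript
// Illustrative sketch of the generated controller/router split.
// Yote's real files use Express 4 routers and mongoose models; this
// stand-in just shows how a route maps to a controller function.

// resourceName-controller.js -- the logic lives here and returns a
// success boolean plus whatever else the client needs.
const productController = {
  list(query) {
    const products = [{ _id: 1, name: 'widget' }]; // stand-in for Product.find(query)
    return { success: true, products };
  },
  create(data) {
    if (!data.name) return { success: false, message: 'name is required' };
    return { success: true, product: { _id: 2, name: data.name } };
  }
};

// resourceName-api.js -- incoming GET/POST/PUT calls are sent to the
// matching controller function to run the logic.
const routes = {
  'GET /api/products': (req) => productController.list(req.query),
  'POST /api/products': (req) => productController.create(req.body)
};

console.log(routes['POST /api/products']({ body: { name: 'gadget' } }));
```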
# Grant's notes:
TO RUN: (in separate terminal window) 'mongod' 'npm install' 'nodemon'
DEVELOPMENT vs PRODUCTION development is the default environment. it listens on port 3030 and console.log logs to the console. it can be run with the command "nodemon" from the top level directory. production environment can be enabled by running "NODE_ENV=production PORT=xxxx node yote.js", where xxxx is your desired port (like 80 on a production server). The PORT=xxxx call is not necessary; it will default to 80, but this will break if that port is already in use. Running production will disable all console.log calls on the front end, which is really f-ing cool.
DOCKER DEPLOYMENT deployment to a remote instance is easy. it requires running containers for mongodb. on your local machine, run "docker build -t ORG_NAME/PROJECT_NAME .", then "docker push ORG_NAME/PROJECT_NAME". on the remote instance, run "docker pull ORG_NAME/PROJECT_NAME", then:
"docker run -p 80:3030 -t -i --link mongodb:mongodb --name yote --rm ORG_NAME/PROJECT_NAME"
to run the image and link it. more details later.
- pull library/mongo
- start mongod with flags for smallfiles and local storage
"docker run -d -v
/data:/data/db --name mongodb library/mongo mongod --smallfiles" //in future, change "/data" to "~/mongo/data". for time being, changing this will cause loss of old data. - start yote and link with other containers "docker run -p 80:3030 -t -i --link mongodb:mongodb --name yote --rm fugitivelabs/yote"
extras: run mongo console on mongo image "docker run -it --rm --link mongodb:mongodb library/mongo bash -c 'mongo --host mongodb'"
USING HTTPS WITH YOTE Yote comes out of the box with support for SSL. To use, do the following:
- generate the necessary files on your local machine. there are plenty of guides online on how to do this. you will need three files, a .key and 2 .crt.
- create an "ssl" folder in your yote directory and copy these files there.
- in yote.js, change the 3 lines "key: fs.readFileSync('../projectName/ssl/yourSsl.key')" near the bottom so that "projectName" matches your project folder name and "yourSsl" is the name of your key files.
- change the "useHttps" variable to true. Now, once you run Yote in production mode, it will allow users to connect with https. If you want to force users to ONLY connect with https, change the "httpsOptional" variable to false. (todo: put these vars in the config file) (important note: update your Dockerfile when you create a new project that uses https; you will need to change the folder from "/yote/*" to your project name)
TO RUN WITH HTTPS IN PRODUCTION INSTANCE: docker run -p 80:80 -p 443:443 -t -i --link mongodb:mongodb --name NAME -e NODE_ENV=production fugitivelabs/NAME
SENDING EMAILS to send emails, use the "utilities" controller. an example of its use is users controller "requestPasswordReset" method. if you do not have a mandrill api key, the call will still return but will not send an email.
more new notes (add these to yote at some point):
- view free space on instance: "df -h"
- remove all unused images from docker (cleared ~3 gigs of disk space, related to problem with daves): "sudo docker rmi $(sudo docker images -q -f dangling=true)"
BACKING UP THE DATABASE (notes for later, from Grant to Grant)
//access db: docker run -it --rm --link mongodb:mongodb library/mongo bash -c 'mongo --host mongodb'
- CREATE AND SAVE BACKUP FILES
a. on remote, create backup files: docker run -v ~/backup/:/backup/ -it --rm --link mongodb:mongodb library/mongo bash -c 'mongodump -d propatient -o /backup/ --host mongodb'
b. on local, retrieve backup files from instance: gcloud compute copy-files grantfowler@propatient:/home/grant/backup/propatient ./ --zone us-central1-a
- RESTORE BACKUP FILES TO REMOTE INSTANCE
a. on remote, make sure the target folder has correct permissions
b. on local, copy backup files to remote instance: gcloud compute copy-files ~/Desktop/backup/propatient grantfowler@propatient:/home/grant/backup/ --zone us-central1-a
c. on remote, drop the database
d. on remote, restore the db from backup files: docker run -v ~/backup:/backup/ -it --rm --link mongodb:mongodb library/mongo bash -c 'mongorestore -d propatient /backup/propatient/ --host mongodb'
USING THE LOGGER the basic "console.log" functionality has been mostly replaced with winston. the new functionality is:
logger.debug("debug message");
logger.info("info message");
logger.warn("warn message");
logger.error("error message");
logging to a file doesn't want to work on the docker instances. in theory, we should be able to link the ~/logs volume from the host and write our logs there. in practice, i can't get this to work. so, for the time being, in production mode, any messages labeled "info" or "error" will also be saved into the mongo database in a collection named "logs". we can browse through these on the server using the standard mongo command line, or copy them all using the database backup method. while not quite as useful as a big text file, it will still work for our purposes.
using the regular "console.log" is perfectly fine for debugging stuff. for anything that we might want to keep track of, use "logger.info".
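The level filtering described above can be pictured with a plain-Node stand-in (this is not Yote's actual winston-based logger.js, just an illustration of the idea; messages go to an array so the filtering is visible):

```javascript
// Minimal stand-in for the winston-style leveled logger described above.
// In production, "info" and "error" messages would additionally be
// written to the mongo "logs" collection.
const LEVELS = { debug: 0, info: 1, warn: 2, error: 3 };

function makeLogger(minLevel, sink) {
  const logger = {};
  for (const level of Object.keys(LEVELS)) {
    logger[level] = (message) => {
      // Only keep messages at or above the configured level.
      if (LEVELS[level] >= LEVELS[minLevel]) sink.push(`${level}: ${message}`);
    };
  }
  return logger;
}

const sink = [];
const logger = makeLogger('info', sink);
logger.debug('debug message'); // below the threshold, filtered out
logger.info('info message');   // at the threshold, kept
console.log(sink);
```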
LOAD BALANCING
Additional notes on production deployment:
- 2 load balanced web server instances and a separate database instance
- create database instance
a) run with exposed mongo ports: sudo docker run -d -v ~/data:/data/db -p 27017:27017 --name mongodb library/mongo mongod --smallfiles
b) copy the instance's internal ip address
- create server instances and attach to the database IP address
a) sudo docker run -it -p 80:80 -e NODE_ENV=production -e REMOTE_DB=$IP_ADDRESS --name yote fugitivelabs/yote
b) must all be in the same region, but not necessarily the same zone (us-central1-a and us-central1-b is ok)
- configure load balancer
a) create static ip: gcloud compute addresses create $NAME --region $REGION
b) follow the instructions to create and allocate a target pool: https://cloud.google.com/compute/docs/load-balancing/network/example
c) to view after completion, go to Gcloud Console -> Menu -> Networking -> Load Balancing (looks like you can also create from the console if you want to)
d) TODO: configure and test HTTPS
logging in docker
view the logs from a given container from the command line:
docker logs [OPTIONS] CONTAINER_NAME
all logs on instance are stored in /var/log/daemon.log
example:
docker logs --tail 50 --timestamps yote
// show the last 50 log lines and include timestamps
list commands: https://docs.docker.com/engine/reference/commandline/logs/
logging in google compute
this is possible, but in practice I can't get it working. basically, we need to authenticate the gcloud logging api, and then use logger to send all logs to it instead when in production. according to google, the relevant env variables should be defined automatically, but this doesn't seem to be the case. putting off for now. todo: try downloading the service account key manually and storing it in a folder in yote.
https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances https://support.google.com/cloud/answer/6158849?hl=en#serviceaccounts https://cloud.google.com/logging/docs/api/tasks/authorization
TO CHANGE COMMAND LINE PROJECT: gcloud config set project $ProjectName https://cloud.google.com/sdk/gcloud/reference/config/set note: not "Yote", but rather the actual id, which right now is 'norse-augury-508'
TODO: notes on instance creation and auto-deploying containers during instance creation https://cloud.google.com/compute/docs/containers/container_vms
NEW TYPE OF CONTAINERS
- the "container optimized" ones are now obsolete, so going forward we will use the new gci ones
- notes: does not require sudo before commands, different colors and filesystems
- https://cloud.google.com/container-optimized-os/docs/how-to/create-configure-instance