Minor Updates #16

Open · wants to merge 2 commits into `master`
78 changes: 41 additions & 37 deletions README.md
… annotations (a very exciting topic on its own). The emerging serverless
framework begins to look promising for other applications, and may be
interesting to the community.

At a minimum, this serves as a convenient and reliable playground for Docker Swarm, with local
Registry and other cool tools, on your dev box.

At best, this may evolve into a solid serverless framework.

In its current state, it is an experimentation ground, with working examples that
produce food for thought and further experiments.

The sample functions and an example wordcount map-reduce workflow are here with
instructions on how to run them. The bioinformatic part contains some trade-secret
bits, so it is kept private and mixed in to run in production. You
are welcome to explore its orchestration and wiring.

The solution is under active development; your constructive criticism and
contributions are welcome: [@dzimine](https://twitter.com/dzimine) on
Twitter, or as [Github issues](https://github.com/dzimine/serverless-swarm/issues).


# Deploying Serverless Swarm, from 0 to 5.

Follow these step-by-step instructions to set up Docker Swarm, configure the rest of the framework parts,
and run a sample serverless pipeline. It gives you all you need to get a swarm cluster running **conveniently**, per the
[Swarm tutorial](https://docs.docker.com/engine/swarm/swarm-tutorial/).

## Clone the repo
This repo uses submodules; remember to use `--recursive` when cloning:
```
git clone --recursive https://github.com/dzimine/serverless-swarm.git
cd serverless-swarm
git submodule update --init --recursive
```
## Setup
Vagrant is used to create a local dev environment representative of a production one. Ansible is used to deploy the software.
Convenience tricks are inspired by [6 practices for super smooth Ansible experience](http://hakunin.com/six-ansible-practices).
The default config will set up 3 boxes, named `st2.my.dev`, `node1.my.dev`, and `node2.my.dev`, with ssh access
configured for root. Roles are described as code in [`inventory.my.dev`](./inventory.my.dev) for the local Vagrant setup,
and in [`inventory.aws`](./inventory.aws) for AWS deployment. Dah, this proto setup is for play, not for production.
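Roles-as-code in an Ansible inventory generally means hosts grouped under named roles. A hypothetical sketch of the idea (the group names here are made up; the real ones live in `inventory.my.dev` and `inventory.aws`):

```
[manager]
st2.my.dev

[workers]
node1.my.dev
node2.my.dev
```

Because the playbooks target groups rather than individual hosts, the same playbook can drive either the Vagrant or the AWS inventory unchanged.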

| Host | Role |
|---------------|-----------------|
… Clean it up by hand.

#### 2. Deploy Software

This Ansible playbook will create the Swarm cluster, deploy and configure a local private Registry at
`pregistry:5000`, install StackStorm, and do other final config touches, like setting up st2 packs and getting the
visualizer at [http://st2.my.dev:8080](http://st2.my.dev:8080). After a successful run of the command,
you'll have a functional Swarm cluster, StackStorm, and serverless pipelines
ready to go.

```
ansible-playbook playbook-all.yml -vv -i inventory.my.dev
```

In the `st2` vagrant image, check that the action is in place: run `st2 action list --pack=pipeline` and verify
that it returns some actions.

(Note: the default user/pass for st2 is st2admin/st2pass; you can log in with `st2 login <user>`.)

TODO: add commands to validate the setup

**Pat yourself on the back, the infra is done!** We got three nodes with docker,
… all VMs at `/faas/functions`.

Log in to a VM. Any node will do, as docker is installed on all of them.

vagrant ssh node1

1. Build a function:

```
cd /faas/functions/encode/
sudo docker build -t encode .
```
2. Push the function to local docker registry:

```
sudo docker tag encode pregistry:5000/encode
sudo docker push pregistry:5000/encode

# Inspect the repository
sudo curl --cacert /etc/docker/certs.d/pregistry\:5000/registry.crt https://pregistry:5000/v2/_catalog
sudo curl --cacert /etc/docker/certs.d/pregistry\:5000/registry.crt -X GET https://pregistry:5000/v2/encode/tags/list
```
> Note: Registry alias is set as `pregistry:5000` in `/etc/hosts` for brevity and consistency across Vagrant dev and AWS production environments.
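The alias is nothing more than a hosts-file entry that every box shares. A small sketch of how such an entry resolves (the IP below is a made-up placeholder, not the address your Vagrant network actually assigns):

```shell
# Hypothetical /etc/hosts fragment; the real file is provisioned by Ansible
cat > /tmp/hosts.example <<'EOF'
127.0.0.1      localhost
192.168.80.10  pregistry
EOF

# Pick out the address the alias maps to, the way a resolver would
awk '$2 == "pregistry" {print $1}' /tmp/hosts.example
```

Because both the Vagrant and AWS inventories write the same alias, image references like `pregistry:5000/encode` work unchanged in either environment.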

3. Run the function:

```
sudo docker run --rm -v /share:/share \
pregistry:5000/encode -i /share/li.txt -o /share/li.out --delay 1
```
Flags:

* `--rm` to remove container once it exits.
* `-v` maps `/share` of Vagrant VM to `/share` inside the container.
4. Log in to another node and run the container function from there. It will download the image and run the function.

### 2. Swarm is coming to town
Run the job with the swarm command line, in the `st2` vagrant instance:

```
sudo docker service create --name job2 \
--mount type=bind,source=/share,destination=/share \
--restart-condition none pregistry:5000/encode \
-i /share/li.txt -o /share/li.out --delay 20
```

Run it a few times and enjoy seeing the jobs pile up in the [visualizer](http://st2.my.dev:8080);
just be sure to give each run a different job name.

### 3. Now repeat with StackStorm

Similar jobs can be executed via the stackstorm CLI:

```
st2 run -a pipeline.run_job \
...
args="-i","/share/li.txt","-o","/share/test.out","--delay",3
To clean up jobs (we've got a bunch!):

```
sudo docker service rm $(sudo docker service ls | grep "job*" | awk '{print $2}')
```
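The `grep`/`awk` filter can be dry-run against a mocked `docker service ls` listing (the service names here are hypothetical) to see exactly which names it would hand to `docker service rm`:

```shell
# Fake `docker service ls` output: a header row plus three services;
# keep rows mentioning "job", then print the NAME column
printf 'ID    NAME  MODE\nx1    job1  replicated\nx2    job2  replicated\nx9    web   replicated\n' \
  | grep "job" | awk '{print $2}'
```

Only `job1` and `job2` survive the filter. Note that as a regex, `"job*"` actually means "`jo` followed by zero or more `b`s", so the plain pattern `"job"` is the safer spelling.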

### 4. Stitch with Workflow
Use StackStorm UI at [https://st2.my.dev](https://st2.my.dev) to inspect workflow …

## Wordcount Map-Reduce Example

Another awesome example: running the wordcount map-reduce sample on the Swarm cluster.
The `split`, `map`, and `reduce` steps are containerized functions, the `run_job` action
runs them on the Swarm cluster, and a StackStorm workflow orchestrates the
end-to-end process.
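For intuition about what the containerized steps compute, here is the same wordcount logic sketched as a plain-shell pipeline (this is not the code inside the containers, just the idea): tokenize the input into one word per line, then count occurrences of each distinct word.

```shell
# split: one word per line; map + reduce: count each distinct word
printf 'the cat and the hat\n' | tr ' ' '\n' | sort | uniq -c
```

The point of the real workflow is that `split` shards the input so that multiple `map` jobs can count in parallel on different Swarm nodes, with `reduce` merging their partial counts at the end.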

Create containerized functions for map-reduce and push them to the Registry:

```
cd /faas/functions/wordcount
sudo ./docker-build.sh

```

2 changes: 1 addition & 1 deletion Vagrantfile
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
# vbox.customize ["modifyvm", :id, "--nictype2", "virtio"]

box.vm.synced_folder ".", "/vagrant", disabled: true
box.vm.synced_folder "./data", "/data", type: "nfs", create: true
box.vm.synced_folder "./share", "/share", type: "nfs"
box.vm.synced_folder ".", "/faas",
id: "vagrant-faas",
2 changes: 1 addition & 1 deletion pipeline/actions/findgenesb.yaml
---
name: findgenesb
pack: pipeline
description: "Computes findgenesb pipeline"
runner_type: mistral-v2
entry_point: workflows/findgenesb.yaml
parameters: