- macOS, Linux, or WSL2 on Windows
- GCC
- Go version 1.16.0 or higher
- `kubectl` version 1.15 or higher
- Docker CLI
  - on a Debian-based GNU/Linux system: `sudo apt-get install docker`
  - on macOS: `brew install docker`, or alternatively visit Docker for Mac
  - on Windows, visit Docker for Windows
- `watch`
  - on macOS: `brew install watch`
- Clone this repo on your workstation.
- Set up the `.env` environment variable file:
  - From the root of the repository run `make .env`.
  - It is already listed in `.gitignore` so that anything you put in it will not accidentally leak into a public git repo. Refer to `.env.example` in the root of this repo for the mandatory and optional environment variables.
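A `.env` file is a set of plain shell variable assignments. The sketch below creates a hypothetical one (the registry value is a placeholder, and the file is named `.env.demo` here so it does not clobber a real `.env`):

```shell
# Create a hypothetical env file; localhost:5000 is a placeholder registry.
cat > .env.demo <<'EOF'
CTR_REGISTRY=localhost:5000
EOF

# Because the file is plain shell syntax, tooling (and you) can source it:
. ./.env.demo
echo "CTR_REGISTRY=$CTR_REGISTRY"
```

Any of the variables listed in `.env.example` can be set the same way.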
- Provision access to a Kubernetes cluster. Any certified conformant Kubernetes cluster (version 1.15 or higher) can be used. Here are a couple of options:
  - Option 1: Local kind cluster
    - Install kind: `brew install kind` on macOS
    - Provision a local cluster and registry in Docker: `make kind-up`
  - Option 2: Use an already provisioned Kubernetes cluster config, either in the default location (`$HOME/.kube/config`) or referenced by the `$KUBECONFIG` environment variable.
- Set `CTR_REGISTRY` in your `.env` file to a container registry you have permission to push to and pull from.
- Ensure you are logged into the container registry using `docker login <registry url>`.
- We will use images from Docker Hub. Ensure you can pull these containers using `docker pull openservicemesh/osm-controller`.
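As a hedged illustration of why the registry setting matters: images the demo pushes are addressed by a reference that prefixes the image name with the registry. The registry value and tag below are placeholders, not the demo's actual naming scheme:

```shell
# Illustrative only: compose a fully qualified image reference from a
# registry prefix. Both the registry value and the tag are placeholders.
CTR_REGISTRY=localhost:5000
IMAGE="${CTR_REGISTRY}/osm-controller:latest"
echo "$IMAGE"
```

With a real registry you would `docker login` to it before the demo attempts to push images there.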
If you are running the demo on an OpenShift cluster, there are additional prerequisites:

- Install the `oc` CLI.
- Set `DEPLOY_ON_OPENSHIFT=true` in your `.env` file. This enables privileged init containers and links the image pull secrets to the service accounts. Privileged init containers are needed to program iptables on OpenShift.
From the root of this repository execute `./demo/run-osm-demo.sh`.
By default:

- Prometheus is not deployed by the demo script. To enable Prometheus deployment, set the variable `DEPLOY_PROMETHEUS` in your `.env` file to `true`.
- Grafana is not deployed by the demo script. To enable Grafana deployment, set the variable `DEPLOY_GRAFANA` in your `.env` file to `true`.
- Jaeger is not deployed by the demo script. To enable Jaeger deployment, set the variable `DEPLOY_JAEGER` in your `.env` file to `true`. The section on Jaeger below describes tracing with Jaeger.
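The three observability toggles can be added to the env file in one go; this sketch appends them to a hypothetical `.env.demo` rather than a real `.env`:

```shell
# Append the observability toggles (the file name is illustrative).
cat >> .env.demo <<'EOF'
DEPLOY_PROMETHEUS=true
DEPLOY_GRAFANA=true
DEPLOY_JAEGER=true
EOF

# Confirm the flags are picked up when the file is sourced:
. ./.env.demo
echo "prometheus=$DEPLOY_PROMETHEUS grafana=$DEPLOY_GRAFANA jaeger=$DEPLOY_JAEGER"
```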
This script will:

- compile OSM's control plane (`cmd/osm-controller`), create a separate container image, and push it to the workstation's default container registry (see `~/.docker/config.json`)
- build and push the demo application images described below
- create the following topology in Kubernetes: `bookbuyer` and `bookthief` continuously issue HTTP `GET` requests against `bookstore` to buy books, and against github.com to verify egress traffic. `bookstore` is a service backed by two servers: `bookstore-v1` and `bookstore-v2`. Whenever either sells a book, it issues an HTTP `POST` request to the `bookwarehouse` to restock.
- apply SMI traffic policies allowing `bookbuyer` to access `bookstore-v1` and `bookstore-v2`, while preventing `bookthief` from accessing the `bookstore` services
- finally, run a command that indefinitely watches the relevant pods within the Kubernetes cluster
To see the results of deploying the services and the service mesh, run the tailing scripts:

- The scripts will connect to the respective Kubernetes pod and stream its logs.
- The output will be the output of the curl command to the `bookstore` service and the count of books sold, plus the output of the curl command to github.com to demonstrate access to an external service.
- A properly working service mesh will result in an HTTP `200 OK` response code for the `bookstore` service with `./demo/tail-bookbuyer.sh`, along with a monotonically increasing counter appearing in the response headers, while `./demo/tail-bookthief.sh` will result in an HTTP `404 Not Found` response code for the `bookstore` service.
- When egress is enabled, HTTP requests to an out-of-mesh host will result in an HTTP `200 OK` response code for both the `bookbuyer` and `bookthief` services.
- This can be automatically checked with `go run ./ci/cmd/maestro.go`.
When the demo is run with `DEPLOY_JAEGER` set to `true` in your `.env` file, OSM will install a Jaeger pod. To configure all participating Envoys to send spans to this Jaeger instance, you must additionally enable tracing using:

`kubectl patch meshconfig osm-mesh-config -n osm-system -p '{"spec":{"observability":{"tracing":{"enable":true,"address": "jaeger.osm-system.svc.cluster.local","port":9411,"endpoint":"/api/v2/spans"}}}}' --type=merge`
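Since the patch payload is easy to mistype, one way to sanity-check the JSON before handing it to `kubectl` is to parse it locally. This sketch assumes `python3` is available on the workstation:

```shell
# Parse the tracing patch locally and print the fields Envoy will use.
# This only validates the JSON; it does not touch the cluster.
PATCH='{"spec":{"observability":{"tracing":{"enable":true,"address":"jaeger.osm-system.svc.cluster.local","port":9411,"endpoint":"/api/v2/spans"}}}}'
echo "$PATCH" | python3 -c '
import json, sys
t = json.load(sys.stdin)["spec"]["observability"]["tracing"]
print(t["address"], t["port"], t["endpoint"])
'
```

If the JSON is malformed, the parser fails loudly instead of the patch silently merging the wrong structure.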
Jaeger's UI is running on port 16686 and can be viewed by forwarding port 16686 from the Jaeger pod to the local workstation. In the `./scripts` directory we have included a helper script to find the Jaeger pod and forward the port: `./scripts/port-forward-jaeger.sh`. After running this script, navigate to http://localhost:16686/ to examine traces from the various applications.
The Bookstore, Bookbuyer, and Bookthief apps have a simple web UI visualizing the number of requests made between the services.

- To see the UI for Bookbuyer, run `./scripts/port-forward-bookbuyer-ui.sh` and open http://localhost:8080/
- To see the UI for Bookstore v1, run `./scripts/port-forward-bookstore-ui-v1.sh` and open http://localhost:8081/
- To see the UI for Bookstore v2, run `./scripts/port-forward-bookstore-ui-v2.sh` and open http://localhost:8082/
- To see the UI for Bookthief, run `./scripts/port-forward-bookthief-ui.sh` and open http://localhost:8083/
- To see Jaeger, run `./scripts/port-forward-jaeger.sh` and open http://localhost:16686/
- To see Grafana, run `./scripts/port-forward-grafana.sh` and open http://localhost:3000/ (the default username and password for Grafana is `admin`/`admin`)
- The OSM controller has a simple debugging web endpoint: run `./scripts/port-forward-osm-debug.sh` and open http://localhost:9092/debug
To expose the web UI ports of all components of the service mesh to the local workstation, use the following helper script: `./scripts/port-forward-all.sh`
When you are done with the demo and want to clean up your local kind cluster, just run `make kind-reset`.