Heroku

Work-In-Progress! Ephemeral Filesystem Issue

This deployment runs on Heroku using a free dyno and free add-ons.

  1. Sign up for a free account on Heroku and create a new app via the Dashboard. Note: the app name will be used as the hostname in your URL.
  2. Select the Resources tab and search the add-ons for either Memcached Cloud or MemCachier. The environment variables are created automatically by Heroku.
  3. The Heroku filesystem is ephemeral, but RootTheBox needs to persist files to /files. It is not yet clear how to resolve this; it might be possible to use s3fs and s3monkey to mount AWS S3 at /files.
  4. Select the Settings tab and click Reveal Config Vars. Add a key ORIGIN whose value is the websocket URL for the domain, wss://nameofyourapp.herokuapp.com:443. Include port 443 as it will use TLS.
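The same config var can also be set from the Heroku CLI; a quick sketch, with nameofyourapp standing in for your app name:

$ heroku config:set ORIGIN=wss://nameofyourapp.herokuapp.com:443 --app nameofyourapp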
  5. Clone RootTheBox to your local system.
$ git clone https://github.com/moloch--/RootTheBox.git
$ cd RootTheBox
  6. Select the Deploy tab, click Container Registry, and follow the steps for deploying a Docker-based app.
$ heroku login
$ heroku container:login
$ heroku container:push web
$ heroku container:release web
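Once the release finishes, the app can be opened in a browser with the standard CLI command (nameofyourapp as before):

$ heroku open --app nameofyourapp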

Log Check

$ heroku logs --tail

Azure Kubernetes Service (AKS)

Tested and verified by Jaa9 on AKS v1.6.19.

Deployment has been tested using Azure DevOps for the Git repository and pipeline deployment.

  1. Setup (a sketch of provisioning these with the Azure CLI follows this list):
    1. Azure Kubernetes Service cluster with Advanced Networking
    2. Azure DevOps with a service connection to AKS
    3. Azure Container Registry
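The prerequisites can be provisioned roughly as follows; the resource group, cluster, and registry names (rtb-rg, rtb-aks, rtbregistry) are placeholders, and --network-plugin azure selects Advanced Networking (Azure CNI):

$ az group create --name rtb-rg --location westeurope
$ az acr create --resource-group rtb-rg --name rtbregistry --sku Basic
$ az aks create --resource-group rtb-rg --name rtb-aks --network-plugin azure --attach-acr rtbregistry
$ az aks get-credentials --resource-group rtb-rg --name rtb-aks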

The following tasks have been done to make RootTheBox runnable on AKS with a persistent database.

  1. Change the Dockerfile to the following. The image stores the files temporarily in /tmp/rtb so we can copy them to /opt/rtb after the container is running in AKS and we have access to persistent storage (Azure Disk or Azure Storage Account).
FROM python:3

# Stage the application in /tmp/rtb; start.sh copies it to the
# persistent volume mounted at /opt/rtb once the container is running.
ADD . /tmp/rtb

RUN apt-get update && apt-get install -y \
    build-essential zlib1g-dev \
    python3-pycurl sqlite3 libsqlite3-dev

# Install the Python dependencies
ADD ./setup/requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt --upgrade

COPY ./start.sh .
RUN chmod +x start.sh

CMD ["./start.sh"]
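One way to build the image and push it to the Azure Container Registry is az acr build, which builds remotely and pushes in one step (rtbregistry is the placeholder registry name from above):

$ az acr build --registry rtbregistry --image rootthebox:latest .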
  2. Create a shell script start.sh to execute the copy job and start RootTheBox after container startup (place it in the root folder):
#!/bin/sh
# Copy the staged application onto the persistent volume mounted at /opt/rtb
cp -r /tmp/rtb/ /opt/

chmod +x /opt/rtb/rootthebox.py

# Start RootTheBox with the Docker setup and a SQLite database
python3 /opt/rtb/rootthebox.py --setup=docker --sql_dialect=sqlite

# Keep the container alive if the process above returns
tail -f /dev/null
  3. RootTheBox requires memcached for session persistence, so we need to create two containers in AKS for the solution to work. Create a Kubernetes YAML deployment file with the six sections below.
    1. Create a namespace for all resources:

kind: Namespace
apiVersion: v1
metadata:
  name: rootthebox
  labels:
    name: rootthebox
    2. Create a service to expose RootTheBox to the Internet:
apiVersion: v1
kind: Service
metadata:
  name: rootthebox
  namespace: rootthebox
  labels:
    app: rootthebox
spec:
  type: LoadBalancer
  externalTrafficPolicy: "Local"
  ports:
  - port: 80
    targetPort: 8888
  selector:
    app: rootthebox
    tier: frontend
    3. Create a service to expose memcached within the Kubernetes cluster. After deployment we need to find the ClusterIP for the service: kubectl describe service memcached -n rootthebox
apiVersion: v1
kind: Service
metadata:
  name: memcached
  namespace: rootthebox
spec:
  type: ClusterIP
  selector:
    app: rootthebox
    tier: app-tier
  ports:
    - name: memcached-udp
      protocol: UDP
      port: 11211
    - name: memcached-tcp
      protocol: TCP
      port: 11211
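To grab just the ClusterIP, kubectl's jsonpath output is a handy alternative to describe:

$ kubectl get service memcached -n rootthebox -o jsonpath='{.spec.clusterIP}'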
    4. Create a persistent volume claim for database persistence:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sqllite-pv-claim
  namespace: rootthebox
  labels:
    app: rootthebox
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
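After applying, a quick check that the claim is bound:

$ kubectl get pvc -n rootthebox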
    5. Create the rootthebox deployment. Replace the MEMCACHED_SERVERS value X.X.X.X with the memcached ClusterIP, and replace the container image with wherever you are pulling your container image from:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rootthebox
  namespace: rootthebox
spec:  
  replicas: 1
  selector:
      matchLabels:
        app: rootthebox
        tier: frontend
  template:
    metadata:
      labels:
        app: rootthebox
        tier: frontend
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: rootthebox
        image: changethistoyourimagelocation
        env:
        - name: MEMCACHED_SERVERS
          value: "X.X.X.X"
        ports:
        - containerPort: 8888
          name: rootthebox
        imagePullPolicy: Always
        volumeMounts:
        - name: sqllite-persistent-storage
          mountPath: /opt/rtb
      volumes:
      - name: sqllite-persistent-storage
        persistentVolumeClaim:
          claimName: sqllite-pv-claim
    6. Create the memcached deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: memcached
  namespace: rootthebox
spec:  
  replicas: 1
  selector:
      matchLabels:
        app: rootthebox
        tier: app-tier
  template:
    metadata:
      labels:
        app: rootthebox
        tier: app-tier
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: memcached
        image: bitnami/memcached:latest
        ports:
        - containerPort: 11211
          name: memcached
        imagePullPolicy: Always
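Because the rootthebox deployment needs the memcached ClusterIP, one sketch of the rollout (file names are placeholders for however you split the six sections) is to apply the namespace, services, and volume claim first, look up the IP, then apply the two deployments:

$ kubectl apply -f namespace.yaml -f services.yaml -f pvc.yaml
$ kubectl get service memcached -n rootthebox
# update MEMCACHED_SERVERS in the rootthebox manifest, then:
$ kubectl apply -f rootthebox-deploy.yaml -f memcached-deploy.yaml
$ kubectl get pods,svc -n rootthebox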
  4. We have used Azure DevOps to build the RootTheBox Docker image and push it to an Azure Container Registry for AKS to pull from. Be aware that for this to work you need to edit .gitignore and remove the #Alembic section and alembic.ini from the ignore list, or the deployment of the container will not work.
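A minimal sketch of such a build pipeline (azure-pipelines.yml), assuming a Docker registry service connection named rtb-acr-connection already exists in Azure DevOps:

trigger:
- master

pool:
  vmImage: ubuntu-latest

steps:
- task: Docker@2
  displayName: Build and push the RootTheBox image
  inputs:
    containerRegistry: rtb-acr-connection
    repository: rootthebox
    command: buildAndPush
    Dockerfile: '**/Dockerfile'
    tags: latest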

Azure AD

Use Azure Active Directory (Azure AD) instead of the username/password stored in the database.

There is a new option auth that can be set to db (the default) or azuread.

It uses the Microsoft Authentication Library (MSAL) to handle the redirect using the OpenID Connect (OIDC) authorisation code flow.

Entries are still added to the user table, using the object identifier from Azure AD to match the uuid field in the database.

Admin permission is now governed by an AppRole in Azure AD, so admins can be managed centrally. The admin permission is updated in the database on login to ensure consistency with the rest of the application.
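For illustration, an AppRole is defined in the app registration's manifest in Azure AD; the displayName, value, and id below are hypothetical placeholders rather than values mandated by RootTheBox:

"appRoles": [
  {
    "allowedMemberTypes": [ "User" ],
    "description": "RootTheBox administrators",
    "displayName": "Admin",
    "id": "00000000-0000-0000-0000-000000000001",
    "isEnabled": true,
    "value": "admin"
  }
]

Users or groups are then assigned the role via the corresponding enterprise application in Azure AD.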

To configure the application for Azure AD authentication the following options need to be set:

  • auth = azuread
  • client_id = the client id/app id of the application registration in Azure AD
  • tenant_id = the identifier for the Azure AD tenant (a GUID)
  • client_secret = the secret key in Azure AD used to authenticate the application with Azure AD.
  • redirect_url = the fully qualified URL that Azure AD will redirect to after the user signs in.

Note: redirect_url defaults to http://localhost:8888/oidc. The /oidc path is handled by the new CodeFlowHandler in PublicHandlers.py.
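Put together, a sketch of the relevant configuration entries, following the key = value form of the list above (the GUIDs, secret, and domain are placeholders):

auth = azuread
client_id = 00000000-0000-0000-0000-000000000000
tenant_id = 00000000-0000-0000-0000-000000000000
client_secret = <secret from the app registration>
redirect_url = https://ctf.example.com/oidc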

Registration is disabled in azuread mode, and a new 'Join Team' page allows a new user to join an existing team.