This repository holds DFaaS, a novel decentralized FaaS-based architecture designed to automatically and autonomously balance the traffic load across edge nodes belonging to federated Edge Computing ecosystems.
The DFaaS implementation relies on a peer-to-peer overlay network and a distributed control algorithm that makes load-redistribution decisions. Although preliminary, our results confirm the feasibility of the approach, showing that the system can transparently redistribute the load across edge nodes when they become overloaded.
Our prototype is based on OpenFaaS and implements the control logic within Go P2P agents.
This research is conducted by the DatAI (formerly Insid&s) and REDS laboratories of the University of Milan-Bicocca.
If you wish to reuse this source code, please consider citing our article describing the first prototype:
Michele Ciavotta, Davide Motterlini, Marco Savi, Alessandro Tundo
DFaaS: Decentralized Function-as-a-Service for Federated Edge Computing,
2021 IEEE 10th International Conference on Cloud Networking (CloudNet), DOI: 10.1109/CloudNet53349.2021.9657141.
The above figure depicts the considered network scenario. A set of geographically-distributed FaaS-enabled edge nodes (or simply edge nodes) is deployed at the edge of the access network.
Each of these nodes deploys a DFaaS platform for the execution of serverless functions, and is connected to a wireless or wired access point (e.g. a base station, a broadband network gateway, a WiFi access point, etc.).
The edge node can receive function execution requests, in the form of HTTP requests, generated by the users served by the access point.
This prototype relies on HAProxy to implement the proxy component, and on faasd 0.18.6 (a lightweight version of OpenFaaS) to implement the FaaS platform.
We also use Sysbox, a free and open-source container runtime (a specialized "runc") that enhances containers in two key ways:

- it improves container isolation
- it enables containers to run the same workloads as VMs

Thanks to Sysbox, we can run our prototype as a standalone Docker container that executes our agent, HAProxy, and faasd together. This way, we can run several emulated edge nodes by simply executing multiple Docker containers.
- Ubuntu 22.04 LTS
- containerd 1.6.27
- Docker CE 25.0.1
- Sysbox CE 0.6.3
Install Ansible, an agentless automation tool that you install on a single host, referred to as the control node.
Then, using the setup_playbook.yaml file, your Ansible control node can set up the environment to execute DFaaS on the managed node(s) specified in an inventory file.
Here is an example of an inventory.yaml file to set up the environment on a host via an SSH connection:

```yaml
ungrouped:
  hosts:
    <hostname>:
      ansible_port: <port_number>
      ansible_connection: ssh
      ansible_user: <user>
      ansible_password: <password>
```
Run the `ansible-playbook` command on the control node to execute the tasks specified in the playbook, with the following options:

- `-i`: path to an inventory file
- `--extra-vars`: the Sysbox version and shiftfs branch to be installed
- `--tags`: the steps of the playbook to be executed
The following command assumes you are using Ubuntu 22.04 LTS with kernel version 5.15 or 5.16.
```shell
ansible-playbook -i inventory.yaml setup_playbook.yaml --extra-vars "sysbox_ver=0.6.3 shiftfs_ver=k5.16" --tags "installation, deploy"
```
This Ansible playbook installs the required software and runs docker-compose.yml, deploying three DFaaS node containers and a fourth container, called operator, which deploys functions on the DFaaS nodes and starts the specified load tests.
If you have four different VMs, we recommend deploying the entire system using the playbook and configuration files in test_environment.
- Ansible: you can follow the official user guide.
- Docker CE v25.0.1: you can follow the official user guide.
- Sysbox CE 0.6.3: you can follow the official user guide. We do not recommend setting up `sysbox-runc` as your default container runtime, so you can skip that part of the guide. We instead recommend installing shiftfs according to your kernel version, as suggested by the Sysbox CE user guide.
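To pick the shiftfs branch to pass via `--extra-vars`, you can derive it from your kernel version. The sketch below (the helper name is ours) only encodes the mapping used in this README, kernels 5.15/5.16 → branch `k5.16`; for any other kernel, consult the Sysbox user guide:

```shell
# Map a kernel release string to the shiftfs branch expected by the playbook.
# Only the 5.15/5.16 -> k5.16 mapping is taken from this README; everything
# else is reported as unknown and must be looked up in the Sysbox user guide.
pick_shiftfs_branch() {
  case "$(echo "$1" | cut -d. -f1,2)" in
    5.15|5.16) echo "k5.16" ;;
    *) echo "unknown" ;;
  esac
}

pick_shiftfs_branch "$(uname -r)"
```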
This script deploys the same set of functions on each of the nodes by using docker/files/deploy_functions.sh. The deploy_functions.sh script waits for the OpenFaaS gateway to be up (max 20 retries, 10 s delay), then deploys three functions (ocr, shasum, figlet) from the OpenFaaS store.
The script takes three arguments:

- 1st arg: number of nodes (e.g., `3`)
- 2nd arg: node name prefix (e.g., `dfaas-node-`)
- 3rd arg: node name suffix (e.g., `-1`)

The resulting node (container) name will be `dfaas-node-1-1`, that is, the default name you get when using the provided docker-compose.yml file.
```shell
./utils/deploy-functions-to-nodes.sh 3 "dfaas-node-" "-1"
```
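Based on the example above, the script presumably composes each container name as `<prefix><i><suffix>` for `i = 1..count`; a quick sketch of the resulting names:

```shell
# Compose the node (container) names from the three script arguments:
# <count> <prefix> <suffix> -> <prefix><i><suffix> for i = 1..count.
COUNT=3
PREFIX="dfaas-node-"
SUFFIX="-1"
for i in $(seq 1 "$COUNT"); do
  echo "${PREFIX}${i}${SUFFIX}"
done
```

With the default arguments this prints `dfaas-node-1-1`, `dfaas-node-2-1`, and `dfaas-node-3-1`, matching the container names produced by docker-compose.yml.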
Alternatively, you can use the deployment functionality of the operator.
Each node exposes port `808x:80` (e.g., `node-1`'s exposed port is `8081:80`), where port `80` is the HAProxy port.
This example assumes you run DFaaS nodes via Docker Compose with the provided docker-compose.yml file.
You can invoke a function (i.e., via the first node) by simply contacting the proxy at `http://localhost:8081/function/{function_name}`.
```shell
curl http://localhost:8081/function/figlet -d 'Hello DFaaS world!'
```
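Assuming the default docker-compose.yml port mapping described above (node i → host port 808i), you can target every node with a small loop; the curl line is left commented so the sketch also runs without a live deployment:

```shell
# Build the per-node function URLs (node i -> host port 808i) and invoke figlet.
for i in 1 2 3; do
  url="http://localhost:$((8080 + i))/function/figlet"
  echo "POST ${url}"
  # curl "${url}" -d 'Hello DFaaS world!'   # uncomment against a running deployment
done
```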
Execute a workload on a node using vegeta
We provide an example that uses the vegeta HTTP load testing tool to run a workload on a node and demonstrate the load distribution over the federation.
You can install vegeta by executing the following commands:
```shell
wget https://github.com/tsenart/vegeta/releases/download/v12.8.4/vegeta_12.8.4_linux_amd64.tar.gz
tar -xf vegeta_12.8.4_linux_amd64.tar.gz && rm vegeta_12.8.4_linux_amd64.tar.gz
sudo mv vegeta /usr/local/bin/
```
This example uses the vegeta json format and requires jq.
In a nutshell:
- it runs a vegeta attack (duration: `5m`, rate: `50 req/s`) against the `figlet` function on the first node
- it saves the results and produces a report every `200ms`
```shell
# Create the vegeta results directory
mkdir -p vegeta-results
export VEGFOLDER="vegeta-results/$(date +%Y-%m-%d-%H%M%S)"
mkdir -p $VEGFOLDER

jq -ncM '{method: "GET", url: "http://localhost:8081/function/figlet", body: "Hello DFaaS world!" | @base64, header: {"Content-Type": ["text/plain"]}}' | \
  vegeta attack -duration=5m -rate=50 -format=json | \
  tee $VEGFOLDER/results.bin | \
  vegeta report -every=200ms
```
You can also start multiple parallel vegeta attacks using the operator's functionality.
You can produce plots from the vegeta results using the `vegeta plot` command or our plot-results.py script, which is automatically executed after test execution with the operator.
To use our script, you need to install the required Python packages listed in plot-requirements.txt.
```shell
# Encode results as JSON
cat $VEGFOLDER/results.bin | vegeta encode > $VEGFOLDER/results.json

# Create plot with vegeta
cat $VEGFOLDER/results.bin | vegeta plot > $VEGFOLDER/plot.html

# 1st arg: path to results.json
# 2nd arg: path to the output folder
# 3rd arg: rate (req/s) used for the attack (if merged is True, specify rate=0)
# 4th arg: boolean merged (is the input file merged from multiple attacks?)
./operator/docker/files/plot-results.py $VEGFOLDER/results.json $VEGFOLDER/plots 50 False
```
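As a quick sanity check on the encoded results, you can also aggregate them with jq (already required above). In vegeta's JSON encoding each record carries the request's HTTP status in a `code` field; the snippet demonstrates the filter on two inline sample records, so replace the `printf` with `cat $VEGFOLDER/results.json` to run it on real results:

```shell
# Count requests per HTTP status code (the two sample records stand in for
# the real $VEGFOLDER/results.json produced by `vegeta encode`).
printf '{"code":200}\n{"code":502}\n' | \
  jq -sc 'group_by(.code) | map({code: .[0].code, count: length})'
# -> [{"code":200,"count":1},{"code":502,"count":1}]
```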
You can impersonate a malicious node that is not part of the federation by adding the header `Dfaas-Node-Id` with a value that is not a valid peer id of the network (e.g., `Dfaas-Node-Id: malicious-id`).
All of its requests will be rejected.
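For example, the check can be exercised with curl (a sketch assuming a node listening on localhost:8081, as in the Docker Compose setup above; a live node should answer with an error status instead of the function output):

```shell
# Send a request with a forged peer id; -w prints the HTTP status code.
# The fallback echo keeps the sketch runnable when no node is listening.
curl -s -o /dev/null -w '%{http_code}\n' \
  -H 'Dfaas-Node-Id: malicious-id' \
  -d 'Hello DFaaS world!' \
  http://localhost:8081/function/figlet \
  || echo "no node listening on localhost:8081"
```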
```shell
# Substitute the CONTAINER_NAME value with the desired container name
export CONTAINER_NAME="dfaas-node-1-1"
docker exec -it ${CONTAINER_NAME} bash

journalctl --follow --unit dfaasagent # ...or whatever unit you prefer to inspect (e.g., haproxy, faasd, faasd-provider)
```
For a complex setup running several emulated edge nodes with different topologies, see the emulator directory. We provide instructions and examples to execute DFaaS nodes via the Containernet emulator.
We also provide a simulator to test and compare different load balancing techniques. The simulation code is available in the simulation directory. The data gathered by the DFaaS system and used for the simulations are available here.
For more information, read the associated README file.