                                   _     ____
 _ __ _____   _____ _ __ ___  ___| |   | __ )
| '__/ _ \ \ / / _ \ '__/ __|/ _ \ | |  _ \
| | |  __/\ V /  __/ | \__ \  __/ |___| |_) )
|_|  \___| \_/ \___|_| |___/\___|_____|____/
reverselb is an L4 reverse tunnel and load balancer: it creates an encrypted TLS tunnel to an external ingress, which in turn receives requests on the specified port and forwards the traffic back to the client over encrypted, multiplexed sessions. Since it operates at layer 4 (TCP only for now), it can tunnel almost any protocol that runs on top of TCP (plain TCP, HTTP, HTTPS, WS, MQTT, SSH, etc.). It was inspired by services such as ngrok and inlets: those are great services, but their free tiers either impose limits or do not support TCP tunnels (and a Kubernetes LoadBalancer built on them).
There are client and server components. The server is intended to run on a machine/container that has a publicly accessible endpoint (there is an Azure ACI sample template below for quick deployment), while the client runs on the private network and configures the service that needs to be accessible externally.
The client (via the command line application, the library, or a container orchestrator extension) makes a TLS-protected outbound connection to the reverselb server and configures the tunnel endpoint properties. The server then starts listening externally on the port the client requested and, when connections are received on that port, relays the data back and forth between them and the backend connection. As many connections as desired can be established.
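Conceptually, each accepted frontend connection is handled as a plain byte relay. The sketch below illustrates that idea only; it is not the project's code, it dials the backend directly instead of going through the multiplexed TLS tunnel, and the listen and backend addresses are placeholders.
// Conceptual sketch only: accept connections on a frontend port and copy
// bytes in both directions to a backend address. reverselb does the same
// job, but the backend leg travels over the encrypted, multiplexed TLS
// tunnel that the client established outbound.
package main

import (
    "io"
    "log"
    "net"
)

// relay copies bytes in both directions and closes both connections when
// either direction finishes.
func relay(frontend, backend net.Conn) {
    done := make(chan struct{}, 2)
    go func() { _, _ = io.Copy(backend, frontend); done <- struct{}{} }()
    go func() { _, _ = io.Copy(frontend, backend); done <- struct{}{} }()
    <-done
    frontend.Close()
    backend.Close()
}

func main() {
    ln, err := net.Listen("tcp", ":8888") // placeholder frontend port
    if err != nil {
        log.Fatal(err)
    }
    for {
        conn, err := ln.Accept()
        if err != nil {
            log.Fatal(err)
        }
        backend, err := net.Dial("tcp", "localhost:80") // placeholder backend
        if err != nil {
            conn.Close()
            continue
        }
        go relay(conn, backend)
    }
}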
The reverselb server attempts protocol identification so that it can offer SNI/Hostname-style redirection on the same tunnel port (allowing multiple service instances to share one service port). Load balancing is done on service instance names when multiple registrations are made for the same name/port.
The currently supported protocols are:
- HTTP CONNECT protocol
- Custom HAProxy-like protocol (PROXY->[byte_len]instanceName); see the sketch after this list
- HTTP Host header [not yet added]
- TLS ClientHello SNI extension
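As an illustration of the custom preamble, here is a hypothetical client-side sketch. It assumes the PROXY->[byte_len]instanceName notation above means the literal bytes "PROXY->", one byte holding the length of the instance name, and then the name itself; the frontend address and instance name are placeholders.
// Hypothetical client: announce the target instance with the custom preamble,
// then relay stdin/stdout, similar to what the stdinproxy subcommand does.
package main

import (
    "io"
    "log"
    "net"
    "os"
)

func main() {
    frontend := "localhost:8001"   // placeholder frontend address
    instanceName := "instancename" // placeholder instance name (assumed to fit in one byte)

    conn, err := net.Dial("tcp", frontend)
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    // Assumed wire format: "PROXY->", one length byte, then the instance name.
    preamble := append([]byte("PROXY->"), byte(len(instanceName)))
    preamble = append(preamble, instanceName...)
    if _, err := conn.Write(preamble); err != nil {
        log.Fatal(err)
    }

    // From here on the connection carries the tunneled protocol; relay it to
    // stdin/stdout so it could be used, for example, as an ssh ProxyCommand.
    go func() { _, _ = io.Copy(conn, os.Stdin) }()
    _, _ = io.Copy(os.Stdout, conn)
}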
For example, to proxy multiple ssh servers on port 8001:
Using the regular connect command:
ssh dario@instancename -o "ProxyCommand=connect -H localhost:8001 instancename 888"
Using the embedded proxy support (the executable's stdinproxy command):
ssh dario@instancename -o "ProxyCommand=./goreverselb -l debug -t 0 stdinproxy -e localhost:8001"
NAME:
   goreverselb - create tunnel proxies and load balance traffic between them

USAGE:
   goreverselb [global options] command [command options] [arguments...]

COMMANDS:
   server      runs as a server
   tunnel      creates an ingress tunnel
   stdinproxy  creates an stdin/stdout proxy to the endpoint
   help, h     Shows a list of commands or help for one command

GLOBAL OPTIONS:
   --loglevel value, -l value  debug level, one of: info, debug (default: "info") [$LOGLEVEL]
   --token value, -t value     shared secret for authorization [$TOKEN]
   --help, -h                  show help (default: false)

NAME:
   goreverselb server - runs as a server

USAGE:
   goreverselb server [command options] [arguments...]

OPTIONS:
   --port value, -p value                 port for the API endpoint (default: 0) [$PORT]
   --autocertsubjectname value, -s value  subject name for the autogenerated certificate [$AUTO_CERT_SUBJECT_NAME]
   --httpport value                       port for the HTTP rest endpoint (server will be disabled if not provided) (default: 0) [$HTTP_PORT]
   --natsport value                       port for the secure NATS endpoint (server will be disabled if not provided) (default: 0) [$NATS_PORT]
   --dynport value                        dynamic frontend port base (default: 8000) [$DYN_FRONTEND_PORT]
   --dynportcount value                   number of dynamic frontend ports (default: 100) [$DYN_FRONTEND_PORT_COUNT]
   --help, -h                             show help (default: false)

NAME:
   goreverselb tunnel - creates an ingress tunnel

USAGE:
   goreverselb tunnel [command options] [arguments...]

OPTIONS:
   --apiendpoint value, -e value      API endpoint in the form: hostname:port [$LB_API_ENDPOINT]
   --frontendport value, -p value     frontend port where the service is going to be exposed (endpoint will be apiendpoint:serviceport) (default: auto) [$PORT]
   --serviceendpoint value, -b value  backend service address (the local target for the lb: hostname:port) [$SERVICE_ENDPOINT]
   --servicename value, -s value      service name string [$SERVICE_NAME]
   --instancename value               instance name string (for SNI/Host functionality) (default: empty) [$INSTANCE_NAME]
   --insecuretls, -i                  allow skip checking server CA/hostname (default: false) [$INSECURE_TLS]
   --help, -h                         show help (default: false)

NAME:
   goreverselb stdinproxy - creates an stdin/stdout proxy to the endpoint

USAGE:
   goreverselb stdinproxy [command options] [arguments...]

OPTIONS:
   --serviceendpoint value, -e value  backend service address (hostname:port) [$SERVICE_ENDPOINT]
   --help, -h                         show help (default: false)
# from the repository root
export GOPATH=/home/dario/go
go get github.com/rakyll/statik
go generate ./pkg/restapi
GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -v ./cmd/goreverselb
# create a tunnel server on the local machine
./goreverselb -t "0000" server -p 9999 -s "localhost"
# register a forwarder to some random endpoint
./goreverselb -t "0000" tunnel --apiendpoint localhost:9999 --servicename "myendpoint-8888" --serviceendpoint ip.jsontest.com:80 --frontendport 8888 --insecuretls=true
# now hit the frontend on the port defined
curl --header "Host: ip.jsontest.com" http://localhost:8888
{"ip": "173.69.143.190"}
[Note that we need to override the Host header: the tunnel is a plain TCP one, so servers that rely on the Host header to find the target won't work without doing so.]
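The same Host override can be done from code; below is a small sketch using only the Go standard library, assuming the quickstart tunnel above is still running and listening on localhost:8888.
// Sends an HTTP request through the tunnel frontend while overriding the Host
// header, mirroring the curl example above.
package main

import (
    "fmt"
    "io"
    "log"
    "net/http"
)

func main() {
    req, err := http.NewRequest("GET", "http://localhost:8888/", nil)
    if err != nil {
        log.Fatal(err)
    }
    req.Host = "ip.jsontest.com" // overrides the Host header, like curl --header "Host: ..."

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    body, _ := io.ReadAll(resp.Body)
    fmt.Println(string(body))
}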
# from the repository root
export GOPATH=/home/dario/go
go get github.com/rakyll/statik
go generate ./pkg/restapi
GOOS=linux GOARCH=arm GOARM=5 CGO_ENABLED=0 go build -v ./cmd/goreverselb
(running it is the same as on Linux)
# server
docker run -it --rm --net host -e "PORT=9999" -e "TOKEN=0000" -e "AUTO_CERT_SUBJECT_NAME=localhost" dariob/reverselb-alpine:latest ./goreverselb server
INFO[0000] goreverseLB version: latest
INFO[0000] Go Version: go1.13.5
INFO[0000] Go OS/Arch: linux/amd64
INFO[0000] CreateDynamicTlsCertWithKey: creating new tls cert for SN: [localhost]
INFO[0001] tunnel service listening on: tcp => [::]:9999
# client
docker run -it --rm --net host -e "LB_API_ENDPOINT=localhost:9999" -e "TOKEN=0000" -e "SERVICE_NAME=my service" -e "PORT=8888" -e "SERVICE_ENDPOINT=ip.jsontest.com:80" -e "INSECURE_TLS=true" dariob/reverselb-alpine:latest ./goreverselb tunnel
INFO[0000] goreverseLB version: latest
INFO[0000] Go Version: go1.13.5
INFO[0000] Go OS/Arch: linux/amd64
INFO[2020-01-12T18:21:40Z] NewTunnelClient to apiEndpoint [localhost:9999] with tunnel info: [{ my service 0000 {8888} 1 80 [ip.jsontest.com]}]
You can expose your Kubernetes pods externally via a LoadBalancer operator available here:
export GOPATH=/home/dario/go
go get github.com/rakyll/statik
go generate ./pkg/restapi
GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -v ./cmd/goreverselb
sudo docker build -f docker/Dockerfile-alpine.txt -t dariob/reverselb-alpine .
sudo docker tag dariob/reverselb-alpine dariob/reverselb-alpine:0.1
sudo docker tag dariob/reverselb-alpine dariob/reverselb-alpine:latest
sudo docker push dariob/reverselb-alpine:latest
sudo docker push dariob/reverselb-alpine:0.1
export GOPATH=/home/dario/go
go get github.com/rakyll/statik
go generate ./pkg/restapi
GOOS=linux GOARCH=arm GOARM=5 CGO_ENABLED=0 go build -v ./cmd/goreverselb
sudo docker build -f docker/Dockerfile-pi.txt -t dariob/reverselb-pi .
sudo docker tag dariob/reverselb-pi dariob/reverselb-pi:0.1
sudo docker tag dariob/reverselb-pi dariob/reverselb-pi:latest
sudo docker push dariob/reverselb-pi:latest
sudo docker push dariob/reverselb-pi:0.1
You can deploy a cheap entry point using Azure Container Instances (ACI); create a free account to evaluate it if you don't have one. Create a new template deployment using the template below (make sure the dnsNameLabel and autocertsubjectname strings match the name and region where you are deploying).
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"containerGroupName": {
"type": "string",
"defaultValue": "reverselbdefaultname",
"metadata": {
"description": "reverseLB server"
}
},
"containerImageName": {
"type": "string",
"defaultValue": "dariob/reverselb-alpine:latest",
"metadata": {
"description": "reverseLB image"
}
}
,"port": {
"type": "string",
"defaultValue": "9000",
"metadata": {
"description": "API endpoint port"
}
}
,"httpport": {
"type": "string",
"defaultValue": "9001",
"metadata": {
"description": "HTTP endpoint port"
}
}
,"natsport": {
"type": "string",
"defaultValue": "9002",
"metadata": {
"description": "NATS endpoint port"
}
}
,"token": {
"type": "string",
"defaultValue": "",
"metadata": {
"description": "shared secret for authorization"
}
}
,"autocertsubjectname": {
"type": "string",
"defaultValue": "reverselb-123.westus2.azurecontainer.io",
"metadata": {
"description": "subject name for the autogenerated certificate"
}
}
,"dnsNameLabel": {
"type": "string",
"defaultValue": "reverselb-123",
"metadata": {
"description": "Dns name prefix for pod"
}
}
,"loglevel": {
"type": "string",
"defaultValue": "debug",
"metadata": {
"description": "loglevel"
}
}
,"portserv1": {
"type": "string",
"defaultValue": "8888",
"metadata": {
"description": "service port 1"
}
}
,"portserv2": {
"type": "string",
"defaultValue": "8889",
"metadata": {
"description": "service port 1"
}
}
},
"variables": {
"reverselbimage": "dariob/reverselb-alpine:latest"
},
"resources": [
{
"name": "[parameters('containerGroupName')]",
"type": "Microsoft.ContainerInstance/containerGroups",
"apiVersion": "2018-10-01",
"location": "[resourceGroup().location]",
"properties": {
"containers": [
{
"name": "reverselbdefaultname",
"properties": {
"image": "[parameters('containerImageName')]",
"environmentVariables": [
{
"name": "PORT",
"value": "[parameters('port')]"
},
{
"name": "LOGLEVEL",
"value": "[parameters('loglevel')]"
},
{
"name": "TOKEN",
"value": "[parameters('token')]"
},
{
"name": "AUTO_CERT_SUBJECT_NAME",
"value": "[parameters('autocertsubjectname')]"
},
{
"name": "OWN_CONTAINER_ID",
"value": "[resourceId('Microsoft.ContainerInstance/containerGroups', parameters('containerGroupName'))]"
}
],
"resources": {
"requests": {
"cpu": 1,
"memoryInGb": 1
}
},
"ports": [
{
"port": "[parameters('port')]"
}
,{
"port": "[parameters('portserv1')]"
}
,{
"port": "[parameters('portserv2')]"
}
]
}
}
],
"osType": "Linux",
"ipAddress": {
"type": "Public",
"ports": [
{
"protocol": "tcp",
"port": "[parameters('port')]"
}
,{
"protocol": "tcp",
"port": "[parameters('portserv1')]"
}
,{
"protocol": "tcp",
"port": "[parameters('portserv2')]"
}
],
"dnsNameLabel": "[parameters('dnsNameLabel')]"
}
}
}
]
}
The client can be embedded in your app if the backend mappings are dynamic, as in the case of a load balancer controller (see the Kubernetes LoadBalancer repo for more):
...
td := tunnel.TunnelData{
    ServiceName:          "web8888",
    Token:                "1234",
    BackendAcceptBacklog: 1,
    FrontendData: tunnel.FrontendData{
        Port: 8888, // passing 0 will let the frontend choose a port
    },
    TargetPort:      80,
    TargetAddresses: []string{"www.google.com"},
}
tc, err := tunnel.NewMuxTunnelClient("localhost:9999", td)
...
tc.Close()
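A typical pattern is to keep the tunnel client alive for the lifetime of the process and close it on shutdown. The sketch below is a suggestion only; it relies solely on the calls shown above plus the standard library (it assumes os, os/signal, syscall and log are imported).
// Keep the tunnel registered until the process is told to stop, then close it.
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, os.Interrupt, syscall.SIGTERM)

tc, err := tunnel.NewMuxTunnelClient("localhost:9999", td)
if err != nil {
    log.Fatal(err)
}

<-sigs     // block until SIGINT / SIGTERM
tc.Close() // close the tunnel before exiting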