
Networking


Cluster Discovery

Each DKV node periodically pushes its nodeInfo and status updates to a metadata database, hereafter called the discovery server (DS). This gives the DS a complete view of the cluster. A simple representation:

[Diagram: DKV nodes pushing nodeInfo/status updates to the discovery server]
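
A rough sketch of the periodic push is shown below, using a made-up NodeInfo shape and a plain-HTTP endpoint purely for illustration; the real DS speaks gRPC (as the multi:/// resolver scheme in the dkvoy config further down suggests), and DKV's actual status message is defined in its own protobufs.

package main

import (
	"bytes"
	"encoding/json"
	"log"
	"net/http"
	"time"
)

// Hypothetical shape of the status payload; DKV's real nodeInfo
// message lives in its protobuf definitions.
type NodeInfo struct {
	NodeID     string `json:"nodeId"`
	Role       string `json:"role"`  // "MASTER" or "SLAVE"
	Shard      string `json:"shard"` // e.g. "shard0"
	ListenAddr string `json:"listenAddr"`
	Status     string `json:"status"` // e.g. "UP"
}

// pushStatus posts one status update to a hypothetical DS endpoint.
func pushStatus(dsURL string, info NodeInfo) error {
	body, err := json.Marshal(info)
	if err != nil {
		return err
	}
	resp, err := http.Post(dsURL, "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	return nil
}

func main() {
	info := NodeInfo{NodeID: "node-1", Role: "MASTER", Shard: "shard0",
		ListenAddr: "10.1.2.3:8080", Status: "UP"}
	// Push periodically so the DS always holds a fresh view of this node.
	for range time.Tick(5 * time.Second) {
		if err := pushStatus("http://discovery_service_ip1:8082/status", info); err != nil {
			log.Printf("status push failed: %v", err)
		}
	}
}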

When the cluster has more nodes, both masters and replicas, the view looks like this:

[Diagram: a larger cluster of masters and replicas, each reporting to the discovery server]

Traffic Routing

For dynamic cluster changes without affecting clients, we propose that all connections to the DKV cluster be routed via Envoy. Cluster changes are polled from the discovery server and published to Envoy via an xDS control plane (codename: dkvoy). Here is a sample topology using Envoy.

[Diagram: clients connecting to DKV nodes through Envoy, with dkvoy polling the discovery server and pushing route updates to Envoy]
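
dkvoy's source in the dkv repo is the authority on its implementation; the following is only an illustrative sketch of the pattern, using the real github.com/envoyproxy/go-control-plane library (v3 API, recent versions) with a made-up pollDiscovery helper standing in for the discovery-server poll. It publishes a single CDS cluster snapshot; a real dkvoy would also serve LDS for the shard listeners and refresh snapshots as the cluster changes.

package main

import (
	"context"
	"log"
	"net"
	"time"

	clusterv3 "github.com/envoyproxy/go-control-plane/envoy/config/cluster/v3"
	corev3 "github.com/envoyproxy/go-control-plane/envoy/config/core/v3"
	endpointv3 "github.com/envoyproxy/go-control-plane/envoy/config/endpoint/v3"
	clustersvc "github.com/envoyproxy/go-control-plane/envoy/service/cluster/v3"
	"github.com/envoyproxy/go-control-plane/pkg/cache/types"
	cachev3 "github.com/envoyproxy/go-control-plane/pkg/cache/v3"
	resourcev3 "github.com/envoyproxy/go-control-plane/pkg/resource/v3"
	serverv3 "github.com/envoyproxy/go-control-plane/pkg/server/v3"
	"google.golang.org/grpc"
	"google.golang.org/protobuf/types/known/durationpb"
)

// pollDiscovery is hypothetical: it stands in for querying the discovery
// server for the current master endpoint of a shard.
func pollDiscovery() (host string, port uint32) { return "10.1.2.3", 8080 }

// makeCluster builds the Envoy cluster named in the dkvoy config above.
func makeCluster(host string, port uint32) *clusterv3.Cluster {
	return &clusterv3.Cluster{
		Name:                 "shard0-masters",
		ConnectTimeout:       durationpb.New(10 * time.Second),
		ClusterDiscoveryType: &clusterv3.Cluster_Type{Type: clusterv3.Cluster_STATIC},
		LbPolicy:             clusterv3.Cluster_ROUND_ROBIN,
		LoadAssignment: &endpointv3.ClusterLoadAssignment{
			ClusterName: "shard0-masters",
			Endpoints: []*endpointv3.LocalityLbEndpoints{{
				LbEndpoints: []*endpointv3.LbEndpoint{{
					HostIdentifier: &endpointv3.LbEndpoint_Endpoint{
						Endpoint: &endpointv3.Endpoint{
							Address: &corev3.Address{
								Address: &corev3.Address_SocketAddress{
									SocketAddress: &corev3.SocketAddress{
										Address:       host,
										PortSpecifier: &corev3.SocketAddress_PortValue{PortValue: port},
									},
								},
							},
						},
					},
				}},
			}},
		},
	}
}

func main() {
	ctx := context.Background()
	// Node IDs key the snapshots; "local-group" matches node.id in the
	// Envoy config below.
	snapshotCache := cachev3.NewSnapshotCache(false, cachev3.IDHash{}, nil)

	host, port := pollDiscovery()
	snap, err := cachev3.NewSnapshot("v1", map[resourcev3.Type][]types.Resource{
		resourcev3.ClusterType: {makeCluster(host, port)},
	})
	if err != nil {
		log.Fatal(err)
	}
	if err := snapshotCache.SetSnapshot(ctx, "local-group", snap); err != nil {
		log.Fatal(err)
	}

	// Serve CDS on :7979, the port the Envoy config points xds_cluster at.
	lis, err := net.Listen("tcp", ":7979")
	if err != nil {
		log.Fatal(err)
	}
	grpcServer := grpc.NewServer()
	xds := serverv3.NewServer(ctx, snapshotCache, nil)
	clustersvc.RegisterClusterDiscoveryServiceServer(grpcServer, xds)
	log.Fatal(grpcServer.Serve(lis))
}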

Configuring dkvoy

dkvoy runs as a sidecar/cohosted process, usually on the same VMs as the discovery nodes. Here is a sample configuration:

$  cat /etc/dkvoy/config.json
{
    "discoveryServerAddr": "multi:///discovery_service_ip1:8082,discovery_service_ip2:8082,discovery_service_ip3:8082",
    "dc-id": "in-hyderabad-1",
    "database": "default",
    "local.instanceGroups": [
        "local-group"
    ],
    "local.shards": [
        "shard0"
    ],
    "shard0.listener_addr": "0.0.0.0:10000",
    "shard0.clusters": [
        "shard0-masters"
    ],
    "shard0-masters.connect_timeout": "10s",
    "appId": "local"
}
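
Note the flat, dotted key scheme: local.shards names the shards, and the per-shard settings hang off "<shard>." prefixes. A minimal sketch of reading such a file in Go (hypothetical loader, not dkvoy's actual code):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	raw, err := os.ReadFile("/etc/dkvoy/config.json")
	if err != nil {
		panic(err)
	}
	var cfg map[string]any
	if err := json.Unmarshal(raw, &cfg); err != nil {
		panic(err)
	}
	// local.shards lists the shards; per-shard settings are looked up
	// under "<shard>.<setting>" dotted keys.
	for _, s := range cfg["local.shards"].([]any) {
		shard := s.(string)
		fmt.Printf("shard %s listens on %v, clusters %v\n",
			shard, cfg[shard+".listener_addr"], cfg[shard+".clusters"])
	}
}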

Configuring Envoy

Now we need to configure the sidecar Envoy process to fetch its configuration dynamically from the dkvoy process. A sample Envoy configuration looks like the following:

$ cat /etc/envoy/config.yaml
node:
  id: local-group
  cluster: local

admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
      protocol: TCP
      address: 0.0.0.0
      port_value: 9901

dynamic_resources:
  lds_config:
    resource_api_version: V3
    api_config_source:
      api_type: GRPC
      transport_api_version: V3
      grpc_services:
        envoy_grpc:
          cluster_name: xds_cluster
  cds_config:
    resource_api_version: V3
    api_config_source:
      api_type: GRPC
      transport_api_version: V3
      grpc_services:
        envoy_grpc:
          cluster_name: xds_cluster

static_resources:
  clusters:
  - name: xds_cluster
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    http2_protocol_options: {}
    upstream_connection_options:
      tcp_keepalive: {}
    load_assignment:
      cluster_name: xds_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: sandbox.xds.domain
                port_value: 7979

In the above config, sandbox.xds.domain is a placeholder for a DNS entry that resolves to multiple dkvoy IPs, e.g.:

$  dig +short sandbox.xds.dkv.fkcloud.in
10.1.1.1
10.1.1.2
10.1.1.3
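
With this file in place, Envoy can be started against it using its standard -c flag; the node id and cluster are already set in the config, so no extra flags are needed:

$ envoy -c /etc/envoy/config.yaml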

Configuring App

Now that Envoy is running and has dynamically opened the ports for talking to the DKV service, we can use a DKV client configuration like the following to route traffic to masters, replicas, or any combination of them.

{
  "dkvShards": [
    {
      "name": "shard0",
      "topology": {
        "SLAVE": {
          "name": "shard0-slaves",
          "nodes": [
            {
              "host": "127.0.0.1",
              "port": 10001
            }
          ]
        },
        "MASTER": {
          "name": "shard0-masters",
          "nodes": [
            {
              "host": "127.0.0.1",
              "port": 10001
            }
          ]
        }
      }
    }
  ]
}
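
With the listener from the dkvoy config bound at 0.0.0.0:10000, a client simply dials the local Envoy port instead of a DKV node directly. A minimal sketch in Go, assuming the generated gRPC stubs from dkv's serverpb package (verify the exact import path and message shapes against the dkv sources):

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	// Generated stubs from the dkv repo; assumed here, check the repo.
	"github.com/flipkart-incubator/dkv/pkg/serverpb"
)

func main() {
	// Dial the local Envoy listener (shard0.listener_addr), not a DKV
	// node; Envoy routes the call to a current master.
	conn, err := grpc.Dial("127.0.0.1:10000",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := serverpb.NewDKVClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	if _, err := client.Put(ctx, &serverpb.PutRequest{
		Key: []byte("hello"), Value: []byte("world"),
	}); err != nil {
		log.Fatal(err)
	}
}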