consul/connect: sidecar healthchecks are failing when host_networks are defined #9683
Comments
Looks like the same issue is described here.

@AndrewChubatiuk - Have you found a fix? I'm trying to set up Consul Connect by following the tutorial from here and running into this issue.

Nomad version:

Consul version:
So I found the issue: it turns out it was because of the Envoy version that the sidecar was using. Hope this helps!
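The comment above does not say which Envoy version did the trick, but if the sidecar's Envoy image is the culprit, one way to control it is to pin the image explicitly. A minimal sketch, assuming the Docker driver; the tag below is a placeholder, not a version confirmed in this thread:

# Sketch: pin the Envoy image used for the Connect sidecar at the job level.
# The tag is a placeholder, not the version that fixed the issue here.
connect {
  sidecar_service {}

  sidecar_task {
    driver = "docker"

    config {
      image = "envoyproxy/envoy:v1.16.4"
    }
  }
}

Clients can also override the default image through the connect.sidecar_image meta key in their configuration, if pinning per job is not desirable.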
@jsanant
Hi, I am getting the same issue with Nomad 1.0.1 using the countdash example with host_networks defined and network_interface set:

# nomad config
data_dir  = "/opt/nomad/data"
bind_addr = "10.1.1.1"
region    = "europe"

server {
  enabled          = true
  bootstrap_expect = 1
}

client {
  servers           = ["10.1.1.1:4647"]
  enabled           = true
  network_interface = "ens10"

  host_network "public" {
    interface      = "{{ GetPublicInterfaces | limit 1 | attr \"name\" }}"
    cidr           = "<INSTANCE PUBLIC IP>/32"
    reserved_ports = "22,80,443,8080"
  }
}

addresses {
  http = "0.0.0.0"
}

advertise {
  http = "10.1.1.1"
}

When checking the sidecar service in Consul I get the following:

[
{
"Node": {
"ID": "4d863b15-935b-3191-a383-1933d4d334db",
"Node": "vps-de01-dev-001",
"Address": "10.1.1.1",
"Datacenter": "dc1",
"TaggedAddresses": {
"lan": "10.1.1.1",
"lan_ipv4": "10.1.1.1",
"wan": "10.1.1.1",
"wan_ipv4": "10.1.1.1"
},
"Meta": {
"consul-network-segment": ""
},
"CreateIndex": 12,
"ModifyIndex": 13
},
"Service": {
"Kind": "connect-proxy",
"ID": "_nomad-task-839535c0-9640-0b7b-c82d-c274c43c0fb5-group-api-count-api--sidecar-proxy",
"Service": "count-api-sidecar-proxy",
"Tags": [],
"Address": "<INSTANCE PUBLIC IP>",
"TaggedAddresses": {
"lan_ipv4": {
"Address": "<INSTANCE PUBLIC IP>",
"Port": 30628
},
"wan_ipv4": {
"Address": "<INSTANCE PUBLIC IP>",
"Port": 30628
}
},
"Meta": {
"external-source": "nomad"
},
"Port": 30628,
"Weights": {
"Passing": 1,
"Warning": 1
},
"EnableTagOverride": false,
"Proxy": {
"DestinationServiceName": "count-api",
"DestinationServiceID": "_nomad-task-839535c0-9640-0b7b-c82d-c274c43c0fb5-group-api-count-api-",
"LocalServiceAddress": "127.0.0.1",
"Config": {
"bind_address": "0.0.0.0",
"bind_port": 30628
},
"MeshGateway": {},
"Expose": {}
},
"Connect": {},
"CreateIndex": 912,
"ModifyIndex": 912
},
"Checks": [
{
"Node": "vps-de01-dev-001",
"CheckID": "serfHealth",
"Name": "Serf Health Status",
"Status": "passing",
"Notes": "",
"Output": "Agent alive and reachable",
"ServiceID": "",
"ServiceName": "",
"ServiceTags": [],
"Type": "",
"Definition": {},
"CreateIndex": 12,
"ModifyIndex": 12
},
{
"Node": "vps-de01-dev-001",
"CheckID": "service:_nomad-task-839535c0-9640-0b7b-c82d-c274c43c0fb5-group-api-count-api--sidecar-proxy:1",
"Name": "Connect Sidecar Listening",
"Status": "critical",
"Notes": "",
"Output": "dial tcp 127.0.0.1:30628: connect: connection refused",
"ServiceID": "_nomad-task-839535c0-9640-0b7b-c82d-c274c43c0fb5-group-api-count-api--sidecar-proxy",
"ServiceName": "count-api-sidecar-proxy",
"ServiceTags": [],
"Type": "tcp",
"Definition": {},
"CreateIndex": 912,
"ModifyIndex": 941
},
{
"Node": "vps-de01-dev-001",
"CheckID": "service:_nomad-task-839535c0-9640-0b7b-c82d-c274c43c0fb5-group-api-count-api--sidecar-proxy:2",
"Name": "Connect Sidecar Aliasing _nomad-task-839535c0-9640-0b7b-c82d-c274c43c0fb5-group-api-count-api-",
"Status": "passing",
"Notes": "",
"Output": "No checks found.",
"ServiceID": "_nomad-task-839535c0-9640-0b7b-c82d-c274c43c0fb5-group-api-count-api--sidecar-proxy",
"ServiceName": "count-api-sidecar-proxy",
"ServiceTags": [],
"Type": "alias",
"Definition": {},
"CreateIndex": 912,
"ModifyIndex": 912
}
]
}
]

And the job configuration:

job "countdash" {
  datacenters = ["dc1"]

  group "api" {
    network {
      mode = "bridge"
    }

    service {
      name = "count-api"

      connect {
        sidecar_service {}
      }
    }

    task "web" {
      driver = "docker"

      config {
        image = "hashicorpnomad/counter-api:v2"
      }
    }
  }

  group "dashboard" {
    network {
      mode = "bridge"

      port "http" {
        host_network = "public"
        static       = 9002
        to           = 9002
      }
    }

    service {
      name = "count-dashboard"
      port = "9002"

      connect {
        sidecar_service {
          proxy {
            upstreams {
              destination_name = "count-api"
              local_bind_port  = 8080
            }
          }
        }
      }
    }

    task "dashboard" {
      driver = "docker"

      env {
        COUNTING_SERVICE_URL = "http://${NOMAD_UPSTREAM_ADDR_count_api}"
      }

      config {
        image = "hashicorpnomad/counter-dashboard:v2"
      }
    }
  }
}
Making the following change to my Nomad config fixes the healthcheck issue for the sidecars, but creates some undefined/unwanted behaviour:

network_interface = "lo"

host_network "public" {
  interface      = "{{ GetPublicInterfaces | limit 1 | attr \"name\" }}"
  cidr           = "<PUBLIC IP>/32"
  reserved_ports = "22,80,443,8080"
}

host_network "private" {
  interface      = "ens10"
  cidr           = "10.1.1.1/32"
  reserved_ports = "22,80,443,8080"
}
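As a sketch only (an assumption, not something confirmed in this thread), naming both host networks like this also lets job ports opt into a specific network explicitly instead of relying on whatever the client fingerprints as its default; the port name and number below are hypothetical:

# Sketch: a hypothetical port pinned to the "private" 10.1.1.1 network
# defined in the client configuration above.
network {
  mode = "bridge"

  port "metrics" {
    host_network = "private"
    to           = 9100
  }
}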
Also tested with
The result is the same as with network_interface = "lo": sidecar healthchecks are passing, but I still cannot connect to a service that is behind the Consul Connect proxy.
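One thing worth ruling out when the sidecars are healthy but upstream connections still fail (an assumption on my part, not something diagnosed in this thread) is Consul intentions: with default-deny intentions, the dashboard needs an explicit allow to reach count-api. A sketch of a service-intentions config entry, written as Consul HCL and applied with consul config write (requires a Consul version that supports intention config entries):

# Sketch: allow count-dashboard to connect to count-api through the mesh.
# Only relevant if intentions/ACLs default to deny in this cluster.
Kind = "service-intentions"
Name = "count-api"

Sources = [
  {
    Name   = "count-dashboard"
    Action = "allow"
  }
]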
@cpl
Nomad version

v1.0.1

Issue

Sidecar healthchecks for a job with a Connect-enabled service are failing on Nomad clients with host_networks defined, and pass on clients without host_networks. The sidecar's docker inspect logs are given below for both cases.

Job file

Docker inspect output for a sidecar when host_networks are defined

Consul service addresses

Docker inspect output for a sidecar when host_networks are not defined

Consul health status for a sidecar service when host_networks are defined
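For orientation, the client stanza the title refers to looks roughly like this; a minimal sketch with placeholder interface and CIDR values, with the full configurations users reported appearing in the comments above:

client {
  enabled = true

  # Defining any host_network on the client appears to be what triggers the
  # failing "Connect Sidecar Listening" check described in this issue.
  host_network "public" {
    interface = "eth0"            # placeholder
    cidr      = "203.0.113.10/32" # placeholder
  }
}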