Unable to create more than 1 container when using Consul Connect Stanza with Dynamic Ports #6956

Closed
crizstian opened this issue Jan 17, 2020 · 8 comments

@crizstian


Nomad version

Output from nomad version

root@dc1-consul-server:~# nomad version
Nomad v0.10.1 (0d4e5d949fe073c47a947ea36bfef31a3c49224f)

root@dc1-consul-server:~# consul version
Consul v1.6.1
Protocol 2 spoken by default, understands 2 to 3 (agent will automatically use protocol >2 when speaking to compatible agents)

Operating system and Environment details

Linux

Issue

I am unable to create more than 1 container with dynamic ports when using the Consul Connect stanza; it only allows me to create a single container. In prod I will obviously be running more than 1 container, so either this is an issue or I may be misconfiguring the HCL file.

Reproduction steps

nomad plan job.hcl

nomad run job.hcl

Job file (if appropriate)

job "cinemas" {

  datacenters = ["dc1-ncv"]
  region      = "dc1-region"
  type        = "service"

  group "payment-api" {
    count = 2

    task "payment-api" {
      driver = "docker"
      config {
        image = "crizstian/payment-service-go:v0.4"
        port_map = {
          http = 3000
        }
      }

      env {
        DB_SERVERS   = "mongodb1.service.consul:27017,mongodb2.service.consul:27018,mongodb3.service.consul:27019"
        SERVICE_PORT = "3000"
        CONSUL_IP    = "172.20.20.11"
      }

      resources {
        cpu    = 50
        memory = 50
      }
    }

    network {
      mode = "bridge"
      port "http" {}
    }

    service {
      name = "payment-api"
      port = "http"

      connect {
        sidecar_service {}
      }
    }
  }
}

Nomad Client logs (if appropriate)


root@dc1-consul-server:~# nomad status cinemas
ID            = cinemas
Name          = cinemas
Submit Date   = 2020-01-17T17:55:42Z
Type          = service
Priority      = 50
Datacenters   = dc1-ncv
Status        = running
Periodic      = false
Parameterized = false

Summary
Task Group        Queued  Starting  Running  Failed  Complete  Lost
booking-api       0       0         1        0       0         0
notification-api  0       0         1        0       0         0
payment-api       0       0         0        2       0         0

Future Rescheduling Attempts
Task Group   Eval ID   Eval Time
payment-api  7cb95a40  3m54s from now

Latest Deployment
ID          = 67ee6134
Status      = running
Description = Deployment is running

Deployed
Task Group        Desired  Placed  Healthy  Unhealthy  Progress Deadline
booking-api       1        1       1        0          2020-01-17T18:05:54Z
notification-api  1        1       1        0          2020-01-17T18:05:56Z
payment-api       2        2       0        2          2020-01-17T18:05:42Z

Allocations
ID        Node ID   Task Group        Version  Desired  Status   Created  Modified
465ce441  8a2c2a09  payment-api       0        run      failed   17s ago  1s ago
4a6b3f97  8a2c2a09  payment-api       0        run      failed   17s ago  1s ago
5b81c2d7  8a2c2a09  booking-api       0        run      running  17s ago  4s ago
77371af4  8a2c2a09  notification-api  0        run      running  17s ago  3s ago
root@dc1-consul-server:~# nomad alloc status 465ce441
ID                     = 465ce441
Eval ID                = 0fa75599
Name                   = cinemas.payment-api[0]
Node ID                = 8a2c2a09
Node Name              = dc1-consul-server
Job ID                 = cinemas
Job Version            = 0
Client Status          = failed
Client Description     = Failed tasks
Desired Status         = run
Desired Description    = <none>
Created                = 27s ago
Modified               = 11s ago
Deployment ID          = 67ee6134
Deployment Health      = unhealthy
Reschedule Eligibility = 3m44s from now

Allocation Addresses (mode = "bridge")
Label                      Dynamic  Address
http                       yes      10.0.2.15:24820
connect-proxy-payment-api  yes      10.0.2.15:27636 -> 27636

Task "connect-proxy-payment-api" is "dead"
Task Resources
CPU        Memory           Disk     Addresses
3/250 MHz  9.0 MiB/128 MiB  300 MiB

Task Events:
Started At     = 2020-01-17T17:55:46Z
Finished At    = 2020-01-17T17:55:53Z
Total Restarts = 0
Last Restart   = N/A

Recent Events:
Time                  Type                 Description
2020-01-17T17:55:57Z  Killing              Sent interrupt. Waiting 5s before force killing
2020-01-17T17:55:53Z  Killed               Task successfully killed
2020-01-17T17:55:53Z  Terminated           Exit Code: 0
2020-01-17T17:55:53Z  Killing              Sent interrupt. Waiting 5s before force killing
2020-01-17T17:55:53Z  Sibling Task Failed  Task's sibling "payment-api" failed
2020-01-17T17:55:46Z  Started              Task started by client
2020-01-17T17:55:45Z  Task Setup           Building Task Directory
2020-01-17T17:55:42Z  Received             Task received by client

Task "payment-api" is "dead"
Task Resources
CPU     Memory  Disk     Addresses
50 MHz  50 MiB  300 MiB

Task Events:
Started At     = N/A
Finished At    = 2020-01-17T17:55:53Z
Total Restarts = 0
Last Restart   = N/A

Recent Events:
Time                  Type            Description
2020-01-17T17:55:57Z  Killing         Sent interrupt. Waiting 5s before force killing
2020-01-17T17:55:53Z  Killing         Sent interrupt. Waiting 5s before force killing
2020-01-17T17:55:53Z  Not Restarting  Error was unrecoverable
2020-01-17T17:55:53Z  Driver Failure  Failed to create container configuration for image "crizstian/payment-service-go:v0.4" ("sha256:c29d40e80fdbec8aeea0dda2420abb0f421dafa10ff30d417ab616f94cb0faaa"): Trying to map ports but no network interface is available
2020-01-17T17:55:45Z  Driver          Downloading image
2020-01-17T17:55:45Z  Task Setup      Building Task Directory
2020-01-17T17:55:42Z  Received        Task received by client


Nomad Server logs (if appropriate)

    2020-01-17T17:55:53.008Z [ERROR] client.driver_mgr.docker: failed to create container configuration: driver=docker image_name=crizstian/payment-service-go:v0.4 image_id=sha256:c29d40e80fdbec8aeea0dda2420abb0f421dafa10ff30d417ab616f94cb0faaa error="Trying to map ports but no network interface is available"
    2020-01-17T17:55:53.008Z [ERROR] client.alloc_runner.task_runner: running driver failed: alloc_id=465ce441-bb62-7b1f-fb26-072a8b7abfa8 task=payment-api error="Failed to create container configuration for image "crizstian/payment-service-go:v0.4" ("sha256:c29d40e80fdbec8aeea0dda2420abb0f421dafa10ff30d417ab616f94cb0faaa"): Trying to map ports but no network interface is available"
    2020-01-17T17:55:53.008Z [INFO ] client.alloc_runner.task_runner: not restarting task: alloc_id=465ce441-bb62-7b1f-fb26-072a8b7abfa8 task=payment-api reason="Error was unrecoverable"
    2020-01-17T17:55:53.009Z [ERROR] client.driver_mgr.docker: failed to create container configuration: driver=docker image_name=crizstian/payment-service-go:v0.4 image_id=sha256:c29d40e80fdbec8aeea0dda2420abb0f421dafa10ff30d417ab616f94cb0faaa error="Trying to map ports but no network interface is available"
    2020-01-17T17:55:53.010Z [ERROR] client.alloc_runner.task_runner: running driver failed: alloc_id=4a6b3f97-b1f1-530b-f86f-f97fe34c625d task=payment-api error="Failed to create container configuration for image "crizstian/payment-service-go:v0.4" ("sha256:c29d40e80fdbec8aeea0dda2420abb0f421dafa10ff30d417ab616f94cb0faaa"): Trying to map ports but no network interface is available"
@nickethier
Member

Hey @crizstian

When using the bridge networking mode in Nomad, you must map your ports through the network stanza. See this example: https://www.nomadproject.io/docs/job-specification/network.html#bridge-mode
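
For reference, a minimal bridge-mode group along the lines of that docs example looks roughly like this (the group name, port label, and to value below are illustrative):

  group "example" {
    network {
      mode = "bridge"
      port "http" {
        to = 8080  # port the task listens on inside the network namespace
      }
    }

    service {
      name = "example"
      port = "http"

      connect {
        sidecar_service {}
      }
    }
  }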

@nickethier nickethier self-assigned this Jan 17, 2020
@nickethier nickethier added the theme/consul/connect and type/question labels Jan 17, 2020
@crizstian
Author

Hey @nickethier, I have already done that; I have it for another service, like this:

  group "booking-api" {
    count = 1

    task "booking-api" {
      driver = "docker"
      config {
        image   = "crizstian/booking-service-go:v0.4"
      }

      env {
        SERVICE_PORT     = "3002"
        CONSUL_IP        = "172.20.20.11"
        DB_SERVERS       = "mongodb1.query.consul:27017,mongodb2.query.consul:27018,mongodb3.query.consul:27019"
        PAYMENT_URL      = "http://${NOMAD_UPSTREAM_ADDR_payment_api}"
        NOTIFICATION_URL = "http://${NOMAD_UPSTREAM_ADDR_notification_api}"
      }

      resources {
        cpu    = 50
        memory = 50
      }
    }

    network {
      mode = "bridge"
      port "http" {
        static = 3002
        to     = 3002
      }
    }

    service {
      name = "booking-api"
      port = "3002"

      connect {
        sidecar_service {
          proxy {
            upstreams {
               destination_name = "payment-api"
               local_bind_port = 8080
            }
            upstreams {
               destination_name = "notification-api"
               local_bind_port = 8081
            }
          }
        }
      }
    }
  }

but how can I implement dynamic ports, so that I can create more than 1 container and register each of them in Consul with the service stanza?

@nickethier
Member

Currently the to field must be set. We don't yet have a good way to make that dynamic since it's a port inside the network namespace. However, you should still be able to use dynamic ports. Drop the static value on the http port and set the service port to "http" instead of 3002. You still need to tell the sidecar_service which port your service is listening on, which is that 3002 port. To do this you'll need to set sidecar_service { local_service_port = 3002 ... }.
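
A rough sketch of that change applied to the booking-api group above (illustrative only and untested; local_service_port is shown under the proxy block as in the job-spec docs, and its exact placement may differ between Nomad versions):

    network {
      mode = "bridge"
      port "http" {
        to = 3002  # fixed port inside the network namespace; the host port stays dynamic
      }
    }

    service {
      name = "booking-api"
      port = "http"  # reference the port label instead of the literal 3002

      connect {
        sidecar_service {
          proxy {
            local_service_port = 3002  # port the service listens on locally
            # existing upstreams blocks remain unchanged
          }
        }
      }
    }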

@nickethier
Member

Hey @crizstian I just created #6958 that would make it possible to do dynamic port mapping.

@crizstian
Author

Thanks @nickethier, it will be awesome to have this ability within Nomad and its integration with Consul Connect.


stale bot commented Apr 17, 2020

Hey there

Since this issue hasn't had any activity in a while - we're going to automatically close it in 30 days. If you're still seeing this issue with the latest version of Nomad, please respond here and we'll keep this open and take another look at this.

Thanks!


stale bot commented May 17, 2020

This issue will be auto-closed because there hasn't been any activity for a few months. Feel free to open a new one if you still experience this problem 👍

@stale stale bot closed this as completed May 17, 2020

github-actions bot commented Nov 7, 2022

I'm going to lock this issue because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Nov 7, 2022