Count or Dynamic can change volumes or ports #9952

Closed
sikishen opened this issue Feb 3, 2021 · 7 comments

Comments

@sikishen

sikishen commented Feb 3, 2021

Hi There,

I want to know if it's possible to run the same service with different volumes, separating each container's logs or storage into a different folder. For example, one job that deploys a Redis cluster with ports from 6379 to 6381 and volumes from /mnt/redis1 to /mnt/redis3. I'm not sure about the volume part, but for the ports I can do this with docker-compose:

version: '3.3'
services:
  redis:
    image: redis:4
    restart: always
    ports:
      - '6379-6381:6379'

docker-compose scale redis=3

Then I have:

c851c3532ab2        redis:4                 "docker-entrypoint.s…"   3 seconds ago       Up 1 second             0.0.0.0:6380->6379/tcp    jobs_redis_2
895497cea7f6        redis:4                  "docker-entrypoint.s…"   3 seconds ago       Up 1 second             0.0.0.0:6381->6379/tcp    jobs_redis_3
e2b0c82b6f72        redis:4                 "docker-entrypoint.s…"   43 seconds ago      Up 41 seconds           0.0.0.0:6379->6379/tcp    jobs_redis_1

So is it possible to scale with different ports, and can the volumes also be kept separate?

Best Regards,
Siki.
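For context, `docker-compose scale` replicates a single service definition, so every replica gets the same volume mapping; keeping host volumes distinct on the compose side typically means separate service entries. A sketch (service names and paths follow the example above and are illustrative):

```yaml
# Sketch: separate services give each replica its own host volume.
version: '3.3'
services:
  redis-1:
    image: redis:4
    restart: always
    ports:
      - '6379:6379'
    volumes:
      - '/mnt/redis1:/data'
  redis-2:
    image: redis:4
    restart: always
    ports:
      - '6380:6379'
    volumes:
      - '/mnt/redis2:/data'
  redis-3:
    image: redis:4
    restart: always
    ports:
      - '6381:6379'
    volumes:
      - '/mnt/redis3:/data'
```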

@tgross
Member

tgross commented Feb 3, 2021

Hi @sikishen, this is a bit of an open-ended question, but it looks like you want to have separate group blocks for each container.

@sikishen
Author

sikishen commented Feb 4, 2021

Hi @tgross,

Thanks for replying. I want something like this, but it has a lot of duplicated parts. Is there a more convenient way to use count or dynamic to define them in one task?

job

job "redis" {
    datacenters = ["staging"]
    constraint {
        attribute = "${meta.tag_Service}"
        value     = "Redis"
    }
    group "redis" {
        count = 1
        task "redis-1" {
            driver = "docker"
            user   = "root"
            config {
                image = "redis:4"
                ports = ["redis-1"]
                volumes = [
                    "/mnt/redis-1-storage:/data"
                ]
            }
            env {
                REDIS_PORT = "6379"
            }
            logs {
                max_files     = 10
                max_file_size = 10
            }
            service {
                tags = ["redis-1"]
                port = "redis-1"
                name = "redis-1"
                meta {
                    tag_Service = "${meta.tag_Service}"
                }
                check {
                    type = "script"
                    name = "redis-1"
                    port = "redis-1"
                    interval = "10s"
                    timeout  = "2s"
                }
            }
        }
        task "redis-2" {
            driver = "docker"
            user   = "root"
            config {
                image = "redis:4"
                ports = ["redis-2"]
                volumes = [
                    "/mnt/redis-2-storage:/data"
                ]
            }
            env {
                REDIS_PORT = "6380"
            }
            logs {
                max_files     = 10
                max_file_size = 10
            }
            service {
                tags = ["redis-2"]
                port = "redis-2"
                name = "redis-2"
                meta {
                    tag_Service = "${meta.tag_Service}"
                }
                check {
                    type = "script"
                    name = "redis-2"
                    port = "redis-2"
                    interval = "10s"
                    timeout  = "2s"
                }
            }
        }
        task "redis-3" {
            driver = "docker"
            user   = "root"
            config {
                image = "redis:4"
                ports = ["redis-3"]
                volumes = [
                    "/mnt/redis-3-storage:/data"
                ]
            }
            env {
                REDIS_PORT = "6381"
            }
            logs {
                max_files     = 10
                max_file_size = 10
            }
            service {
                tags = ["redis-3"]
                port = "redis-3"
                name = "redis-3"
                meta {
                    tag_Service = "${meta.tag_Service}"
                }
                check {
                    type = "script"
                    name = "redis-3"
                    port = "redis-3"
                    interval = "10s"
                    timeout  = "2s"
                }
            }
        }
        network {
            port "redis-1" {
                to = 6379
            }
            port "redis-2" {
                to = 6380
            }
            port "redis-3" {
                to = 6381
            }
        }

    }
}

@tgross
Member

tgross commented Feb 4, 2021

The right approach depends a bit on what you're trying to deploy. From your use of docker-compose and your worries over port collisions, it sounds like you're trying to deploy all 3 Redis instances to the same host, but the constraints in your jobspec make it sound more like a typical deployment across multiple hosts.

If you want to deploy all 3 on the same host, they should typically be in the same group, as a single group results in a single allocation, which is the unit of deployment. In that case, you might use a dynamic block to generate multiple tasks within a group like this:

jobspec with dynamic tasks (all same host)
locals {
  redis_ports = {
    1 = 6379
    2 = 6380
    3 = 6381
  }
}

job "redis" {
  datacenters = ["staging"]
  constraint {
    attribute = "${meta.tag_Service}"
    value     = "Redis"
  }
  group "redis" {

    network {
      port "redis-1" {
        to = 6379
      }
      port "redis-2" {
        to = 6380
      }
      port "redis-3" {
        to = 6381
      }
    }

    dynamic "task" {
      for_each = local.redis_ports
      labels   = ["redis-${task.key}"]

      content {

        driver = "docker"
        user   = "root"

        config {
          image = "redis:4"
          ports = ["redis-${task.key}"]
          volumes = [
            "/mnt/redis-${task.key}-storage:/data"
          ]
        }
        env {
          REDIS_PORT = task.value
        }
        logs {
          max_files     = 10
          max_file_size = 10
        }
        service {
          tags = ["redis-${task.key}"]
          port = "redis-${task.key}"
          name = "redis-${task.key}"
          meta {
            tag_Service = "${meta.tag_Service}"
          }
          check {
            type     = "tcp"
            name     = "redis-${task.key}"
            port     = "redis-${task.key}"
            interval = "10s"
            timeout  = "2s"
          }
        }
      }
    }
  }
}
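To make the expansion concrete: at parse time the dynamic "task" block above generates one task per entry in local.redis_ports, so for key 1 it is equivalent to writing:

```hcl
# Equivalent expansion for key 1 (task.key = 1, task.value = 6379).
task "redis-1" {
  driver = "docker"
  user   = "root"

  config {
    image = "redis:4"
    ports = ["redis-1"]
    volumes = [
      "/mnt/redis-1-storage:/data"
    ]
  }
  env {
    REDIS_PORT = 6379
  }
  # ... the logs, service, and check blocks expand the same way,
  # with "redis-1" substituted for "redis-${task.key}".
}
```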

But typically you'll want to deploy to multiple hosts, in which case you'll rely on the port mapping to avoid port collisions and just do something like the job below. Note that currently you can't interpolate the allocation index into the volume so I have a distinct_hosts constraint here to make sure there's no conflict over that volume path.

jobspec with count=3
job "redis" {
  datacenters = ["staging"]

  constraint {
    attribute = "${meta.tag_Service}"
    value     = "Redis"
  }

  constraint {
    operator = "distinct_hosts"
    value    = "true"
  }

  group "redis" {

    count = 3

    network {
      port "redis" {
        to = 6379
      }
    }

    task "redis" {

      driver = "docker"
      user   = "root"

      config {
        image = "redis:4"
        ports = ["redis"]
        volumes = [
          "/mnt/redis-storage:/data"
        ]
      }
      env {
        REDIS_PORT = 6379
      }
      logs {
        max_files     = 10
        max_file_size = 10
      }
      service {
        tags = ["redis"]
        port = "redis"
        name = "redis"
        meta {
          tag_Service = "${meta.tag_Service}"
        }
        check {
          type     = "tcp"
          name     = "redis"
          port     = "redis"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
  }
}

But if you need to deploy across multiple hosts with unique volumes, you'll need to wait until #7877 lands, or do something with a dynamic group like the following:

jobspec with unique volumes
locals {
  redis = [1, 2, 3]
}

job "redis" {
  datacenters = ["staging"]
  constraint {
    attribute = "${meta.tag_Service}"
    value     = "Redis"
  }

  dynamic "group" {
    for_each = local.redis
    labels   = ["redis-${group.value}"]

    content {

      network {
        port "redis" {
          to = 6379
        }
      }

      task "redis" {

        driver = "docker"
        user   = "root"

        config {
          image = "redis:4"
          ports = ["redis"]
          volumes = [
            "/mnt/redis-${group.value}-storage:/data"
          ]
        }
        env {
          REDIS_PORT = 6379
        }
        logs {
          max_files     = 10
          max_file_size = 10
        }
        service {
          tags = ["redis-${group.value}"]
          port = "redis"
          name = "redis-${group.value}"
          meta {
            tag_Service = "${meta.tag_Service}"
          }
          check {
            type     = "tcp"
            name     = "redis-${group.value}"
            port     = "redis"
            interval = "10s"
            timeout  = "2s"
          }
        }
      }
    }
  }
}

(Note that I've run nomad job validate for all these jobspecs but haven't run them; there may be minor errors when it comes to on-client configuration.)

@sikishen
Author

sikishen commented Feb 5, 2021

Hi @tgross,

Thanks for replying. The three examples answered my questions exactly: they show how to use dynamic and count in group and task definitions, and even explain how "distinct_hosts" works. Thank you, sir!

I have tried all three of them, but it seems the config block doesn't recognize the variables; the first and third ones fail with:

2021-02-05T02:47:31Z Failed Validation 3 errors occurred:
* failed to parse config:
* Unknown variable: There is no variable named "task".
* Unknown variable: There is no variable named "task".

2021-02-05T03:05:26Z Failed Validation 2 errors occurred:
* failed to parse config:
* Unknown variable: There is no variable named "group".

This config part is exactly what I need. Is there a workaround?

Best Regards,
Siki.

@tgross
Member

tgross commented Feb 5, 2021

This config part is exactly what I need. Is there a workaround?

Sorry about that, you've hit a bug. We fixed it in #9921 which will ship in Nomad 1.0.4. If you build from master you'll have a version with that patch.

@sikishen
Author

sikishen commented Feb 7, 2021

Building from master gave me a nomad-1.0.4-dev binary, and it works.

@sikishen sikishen closed this as completed Feb 7, 2021
@github-actions

I'm going to lock this issue because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Oct 24, 2022