Count or Dynamic can change volumes or ports #9952
Hi @sikishen, this is a bit of an open-ended question, but it looks like you want to have separate …
Hi @tgross, thanks for replying. I want something like this, but it has lots of duplicated parts. Is there a more convenient way to use count or dynamic to set them up in one task?
The right approach depends a bit on what you're trying to deploy. If you want to deploy all 3 on the same host, they should typically be in the same jobspec with dynamic tasks.

jobspec with dynamic tasks (all on the same host):

locals {
redis_ports = {
1 = 6379
2 = 6380
3 = 6381
}
}
job "redis" {
datacenters = ["staging"]
constraint {
attribute = "${meta.tag_Service}"
value = "Redis"
}
group "redis" {
network {
port "redis-1" {
to = 6379
}
port "redis-2" {
to = 6380
}
port "redis-3" {
to = 6381
}
}
dynamic "task" {
for_each = local.redis_ports
labels = ["redis-${task.key}"]
content {
driver = "docker"
user = "root"
config {
image = "redis:4"
ports = ["redis-${task.key}"]
volumes = [
"/mnt/redis-${task.key}-storage:/data"
]
}
env {
REDIS_PORT = task.value
}
logs {
max_files = 10
max_file_size = 10
}
service {
tags = ["redis-${task.key}"]
port = "redis-${task.key}"
name = "redis-${task.key}"
meta {
tag_Service = "${meta.tag_Service}"
}
check {
type = "tcp"
name = "redis-${task.key}"
port = "redis-${task.key}"
interval = "10s"
timeout = "2s"
}
}
}
}
}
}

But typically you'll want to deploy to multiple hosts, in which case you'll rely on the port mapping to avoid port collisions and just do something like the job below. Note that currently you can't interpolate the allocation index into the volume, so I have a jobspec with count=3:

job "redis" {
datacenters = ["staging"]
constraint {
attribute = "${meta.tag_Service}"
value = "Redis"
}
constraint {
operator = "distinct_hosts"
value = "true"
}
group "redis" {
count = 3
network {
port "redis" {
to = 6379
}
}
task "redis" {
driver = "docker"
user = "root"
config {
image = "redis:4"
ports = ["redis"]
volumes = [
"/mnt/redis-storage:/data"
]
}
env {
REDIS_PORT = 6379
}
logs {
max_files = 10
max_file_size = 10
}
service {
tags = ["redis"]
port = "redis"
name = "redis"
meta {
tag_Service = "${meta.tag_Service}"
}
check {
type = "tcp"
name = "redis"
port = "redis"
interval = "10s"
timeout = "2s"
}
}
}
}
}

But if you need to deploy across multiple hosts with unique volumes, you'll need to wait until #7877 lands, or do something with a dynamic group like the following jobspec with unique volumes:

locals {
redis = [1, 2, 3]
}
job "redis" {
datacenters = ["staging"]
constraint {
attribute = "${meta.tag_Service}"
value = "Redis"
}
dynamic "group" {
for_each = local.redis
labels = ["redis-${group.key}"]
content {
network {
port "redis" {
to = 6379
}
}
task "redis" {
driver = "docker"
user = "root"
config {
image = "redis:4"
ports = ["redis"]
volumes = [
"/mnt/redis-${group.key}-storage:/data"
]
}
env {
REDIS_PORT = 6379
}
logs {
max_files = 10
max_file_size = 10
}
service {
tags = ["redis-${group.key}"]
port = "redis"
name = "redis-${group.key}"
meta {
tag_Service = "${meta.tag_Service}"
}
check {
type = "tcp"
name = "redis-${group.key}"
port = "redis"
interval = "10s"
timeout = "2s"
}
}
}
}
}
}

(Note that I've run …)
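One detail worth flagging in the dynamic examples above: when for_each is given a list like [1, 2, 3], the iterator's .key is the zero-based index (0, 1, 2), while .value holds the element itself. If you want names like redis-1 through redis-3, a sketch along the following lines (assuming Nomad's HCL2 dynamic blocks follow the Terraform iterator semantics, and using value rather than key) may be closer to the intent:

```
locals {
  redis = [1, 2, 3]
}

dynamic "group" {
  for_each = local.redis
  # group.value is 1, 2, 3; group.key would be 0, 1, 2
  labels = ["redis-${group.value}"]
  content {
    # ... same group body as above, using group.value in
    # volume paths and service names ...
  }
}
```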
Hi @tgross, thanks for replying. The three examples answered my questions exactly: they explained how to use dynamic and count on group and task definitions, and even how "distinct_hosts" works. Thank you, sir! I have tried all three of them, but it seems the config block wasn't recognizing the variables; the first and third ones fail validation with:

2021-02-05T02:47:31Z  Failed Validation  3 errors occurred:
2021-02-05T03:05:26Z  Failed Validation  2 errors occurred:

This config part is exactly what I need. Is there a workaround? Best Regards,
Sorry about that, you've hit a bug. We fixed it in #9921, which will ship in Nomad 1.0.4. If you build from master, you'll have a version with that patch.
Building from master got me a nomad-1.0.4-dev binary, and it works.
I'm going to lock this issue because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active issues.
Hi There,
I want to know if it's possible to have the same service on different volumes, separating each container's logs or storage into a different folder. For example, use one job to deploy a Redis cluster with ports from 6379 to 6381 and volumes from /mnt/redis1 to /mnt/redis3. I'm not sure about the volume part, but with docker-compose I could do this:
docker-compose scale redis=3
Then I will have:

So is it possible to scale with different ports, and can the volumes also be separate?
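(For readers mapping this onto Nomad: the closest analogue to docker-compose scale redis=3 is a group with count = 3 and a dynamically assigned host port, as the maintainer's count=3 example above shows. A minimal sketch, with illustrative names:

```
group "redis" {
  count = 3  # analogous to docker-compose scale redis=3
  network {
    # no "static" field, so Nomad assigns a distinct host port
    # per allocation and maps it to container port 6379
    port "redis" {
      to = 6379
    }
  }
}
```

Distinct per-replica volumes are the part with no direct equivalent, which is what the rest of this thread is about.)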
Best Regards,
Siki.