
regexp constraint and system scheduler #1549

Closed
itatabitovski opened this issue Aug 9, 2016 · 4 comments

Comments

@itatabitovski

Nomad version

v0.4.0

Operating system and Environment details

debian 8

Issue

In a 5-client cluster, 2 clients are configured with the meta.password attribute:

"client": {
        "enabled": true,
        "meta": {
            "password": "12345"
        }
    }

A system job with the following constraint:

      constraint {
        attribute = "${meta.password}"
        regexp    = ".+"
      }

fails to get scheduled:

nomad plan system.nomad 
+ Job: "whoami-system"
+ Task Group: "whoami-system" (5 create)
  + Task: "whoami-system" (forces create)

Scheduler dry-run:
- WARNING: Failed to place all allocations.
  Task Group "whoami-system" (failed to place 3 allocations):
    * Constraint "${meta.password} regexp .+" filtered 1 nodes

According to the system scheduler documentation, I would have expected this job to run on all nodes that have meta.password defined.
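For context, the `.+` pattern matches any string of one or more characters, and an unset attribute interpolates to the empty string, so the constraint is effectively "meta.password is set and non-empty". A minimal illustration in Python (Nomad evaluates Go regexps, so this is only a sketch of the matching semantics, not Nomad's implementation):

```python
import re

pattern = ".+"  # the constraint's regexp: one or more of any character

# A set meta.password value matches; an unset attribute resolves to the
# empty string, which ".+" does not match, so that node is filtered out.
print(bool(re.search(pattern, "12345")))  # True  -> node is eligible
print(bool(re.search(pattern, "")))       # False -> node is filtered
```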

@dadgar commented Aug 9, 2016

Hey, I believe it did! If you run nomad status whoami-system you should see 2 allocations.

It is saying it failed to place 3: Task Group "whoami-system" (failed to place 3 allocations): and since you said only 2 of the 5 clients have the password, that makes sense.
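The arithmetic here can be sketched as a tiny simulation of the system scheduler's per-node placement (hypothetical node metadata matching this cluster; not Nomad's actual code):

```python
import re

# Hypothetical meta blocks for the 5 clients; only srv4 and srv5 set password.
node_meta = {
    "srv1": {}, "srv2": {}, "srv3": {},
    "srv4": {"password": "12345"},
    "srv5": {"password": "12345"},
}

# A system job tries to place one allocation on every node; the regexp
# constraint filters out nodes whose attribute value does not match.
placed = [name for name, meta in node_meta.items()
          if re.search(".+", meta.get("password", ""))]
failed = len(node_meta) - len(placed)

print(f"placed on {placed}, failed to place {failed} allocations")
# placed on ['srv4', 'srv5'], failed to place 3 allocations
```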

@itatabitovski (Author)

You are correct! The job is running on both servers.

The plan WARNING was a bit confusing, as was the output from nomad run. Is there a reason for the warnings?

u330p ~/Projects/nomad/jobs $ nomad plan system.nomad 
+ Job: "whoami-system"
+ Task Group: "whoami-system" (5 create)
  + Task: "whoami-system" (forces create)

Scheduler dry-run:
- WARNING: Failed to place all allocations.
  Task Group "whoami-system" (failed to place 3 allocations):
    * Constraint "${meta.password} regexp .+" filtered 1 nodes

Job Modify Index: 0
To submit the job with version verification run:

nomad run -check-index 0 system.nomad

When running the job with the check-index flag, the job will only be run if the
server side version matches the job modify index returned. If the index has
changed, another user has modified the job and the plan's results are
potentially invalid.



u330p ~/Projects/nomad/jobs $ nomad run system.nomad 
==> Monitoring evaluation "d4318e05"
    Evaluation triggered by job "whoami-system"
    Allocation "23169599" created: node "bb1fd344", group "whoami-system"
    Allocation "3e2d06c5" created: node "07f10f36", group "whoami-system"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "d4318e05" finished with status "complete" but failed to place all allocations:
    Task Group "whoami-system" (failed to place 3 allocations):
      * Constraint "${meta.password} regexp .+" filtered 1 nodes



u330p ~/Projects/nomad/jobs $ nomad status 
ID             Type    Priority  Status
whoami-system  system  50        running



u330p ~/Projects/nomad/jobs $ nomad status whoami-system
ID          = whoami-system
Name        = whoami-system
Type        = system
Priority    = 50
Datacenters = ovh
Status      = running
Periodic    = false

Allocations
ID        Eval ID   Node ID   Task Group     Desired  Status
23169599  d4318e05  bb1fd344  whoami-system  run      running
3e2d06c5  d4318e05  07f10f36  whoami-system  run      running



u330p ~/Projects/nomad/jobs $ nomad node-status -allocs
ID        DC   Name  Class   Drain  Status  Running Allocs
25e01b17  ovh  srv1  <none>  false  ready   0
f0923779  ovh  srv2  <none>  false  ready   0
1b024b60  ovh  srv3  <none>  false  ready   0
bb1fd344  ovh  srv4  <none>  false  ready   1
07f10f36  ovh  srv5  <none>  false  ready   1

@dadgar commented Aug 12, 2016

This has been improved by #1568

The warning exists in case the operator has put an incorrect constraint in place, or the machines do not have enough resources to run the job.

@dadgar closed this as completed Aug 12, 2016
@github-actions

I'm going to lock this issue because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Dec 20, 2022