🤔[question] Changing the default config path for the determined-agent.service #8891

Closed
samjenks opened this issue Feb 26, 2024 · 5 comments

@samjenks

Describe your question

Due to resource-ownership constraints I am trying to create multiple agent services on an 8-GPU machine (1 GPU per agent). I installed/set up Determined via the Ubuntu services setup. I can create new determined-agent services but am not quite clear on how to set the new config path for those services.

The GitHub repo and the docs imply that this is possible, but I have no innate knowledge of Go. How do I go about doing it?

Checklist

  • Did you search the docs for a solution?
  • Did you search GitHub issues to find if somebody asked this question before?
@ioga
Contributor

ioga commented Feb 26, 2024

hello, as a note, we generally don't support or test such configurations ourselves today, but we'd love to hear about your experience with this.

there's a determined-agent --config-file <PATH> CLI option you can use to specify a config file path. you'd need to hack the services somehow to use it, as we don't provide a way to configure that today.

I believe you'd probably also need to set CUDA_VISIBLE_DEVICES for each agent to compartmentalize their cards appropriately.
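
for illustration, one way to wire both of those up is a systemd drop-in. this is only a rough sketch under assumptions: the per-agent unit name, binary path, and file locations below are made up, so copy the ExecStart line from your installed determined-agent.service and just append the --config-file flag.

```ini
# /etc/systemd/system/determined-agent-1.service.d/override.conf  (hypothetical per-agent unit)
[Service]
# clear the packaged ExecStart, then re-declare it pointing at this agent's own config file;
# keep the same binary path/arguments as your stock determined-agent.service
ExecStart=
ExecStart=/usr/bin/determined-agent --config-file /etc/determined/agent-1.yaml
# expose only one card to this agent so the agents don't overlap
Environment=CUDA_VISIBLE_DEVICES=0
```

after editing, run systemctl daemon-reload and restart the unit.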

I assume on the master side you're going to set up a resource pool for each individual owner. Please note that the open-source edition does not have a way to prevent someone from using the "wrong" resource pool, so there'll still be a need for an honor system of sorts.
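
on the master side, the per-owner pools would go into master.yaml, roughly like this (a sketch; the pool names are placeholders, and each agent then joins its pool from its own config):

```yaml
# master.yaml (sketch) -- one pool per owner, plus a communal one
resource_pools:
  - pool_name: owner-a
  - pool_name: owner-b
  - pool_name: shared
```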

@samjenks
Author

Interesting, that did the trick as far as passing the config; now I need to work out the resulting connection issues where the second agent fails to connect to the master node.

Does the visible-gpus option in the config not isolate the GPUs for each agent? Or will the agent use GPUs not listed in its visible-gpus?

I was planning on creating workspaces for each owner and then binding a number of resources to them, with a shared pool for communal use. Is that what is on the honor system? I assume that means that someone in a workspace can use resources bound to others' workspaces?

@ioga
Contributor

ioga commented Feb 26, 2024

need to work out the resulting connection issues where the second agent fails to connect to the master node.

ah, you may need to explicitly set the agent name as well. by default it comes from the server hostname and we disallow duplicates.

Does the visible-gpus option in the config not isolate the GPUs for each agent? Or will the agent use GPUs not listed in its visible-gpus?

yes that might work.
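
putting that together, each agent's config file would carry its own name, pool, and GPU, something like this sketch (double-check the exact option names against determined-agent --help for your version):

```yaml
# /etc/determined/agent-1.yaml (sketch) -- per-agent config
master_host: <master-address>
master_port: 8080              # default master port
agent_id: agent-gpu-0          # must be unique across agents, or the duplicate is rejected
resource_pool: owner-a         # pool this agent joins
visible_gpus: "0"              # only expose GPU 0 to this agent
```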

I was planning on creating workspaces for each owner and then binding a number of resources to them with a shared pool for communal use. Is that what is on the honor system? I assume that means that someone in a workspace can use resources bound to other's workspaces?

in the OSS edition, a user can go and create their job in the "wrong" workspace to get access to the "extra resources". by "honor system" I mean that you'll need to "educate" your users not to do this.

@samjenks
Author

ah, you may need to explicitly set the agent name as well. by default it comes from the server hostname and we disallow duplicates.

This ended up being the fix.
Last question, can an agent be a part of multiple resource pools?
My previous idea was to create a shared resource pool that larger experiments could pull from, but in attempting this it appears experiments are limited to a single pool of resources at a time. Is that correct?

@ioga
Contributor

ioga commented Feb 26, 2024

that's correct, an agent can only be in one resource pool. it'll indeed be a limitation of this setup.
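
the same applies on the experiment side: each experiment picks a single pool in its config, roughly like this (sketch; the pool name is a placeholder):

```yaml
# experiment config (sketch) -- an experiment draws from exactly one pool
resources:
  resource_pool: shared
```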

I believe to achieve what you are trying to do you could:

  1. switch to kubernetes
  2. in k8s, create namespaces with quotas for individual teams, and an unlimited namespace for the "larger experiments" (see the quota sketch below).
  3. in det, create an RP per namespace.

then you'll end up having both "limited" and "unlimited" RPs you can use with your workspaces.
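
for step 2, the per-team cap could be an ordinary Kubernetes ResourceQuota, along these lines (sketch; the namespace name and GPU count are placeholders):

```yaml
# resourcequota-team-a.yaml (sketch) -- cap GPU requests in one team's namespace
apiVersion: v1
kind: ResourceQuota
metadata:
  name: gpu-quota
  namespace: team-a
spec:
  hard:
    requests.nvidia.com/gpu: "2"
```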
