🤔 model registry - inference with pytorch model #8806

Closed
2 tasks done
Fedege98 opened this issue Feb 6, 2024 · 1 comment
Fedege98 commented Feb 6, 2024

Describe your question

Hello! I have a question about the behavior of the Python library for interacting with determined.ai.
Specifically, I want to download a specific PyTorch model from the determined.ai model registry in order to do real-time inference. From the documentation it is not clear how to do this.

Checklist

  • Did you search the docs for a solution?
  • Did you search GitHub issues to see if somebody has asked this question before?
KevinMusgrave commented Feb 6, 2024

If you have the name of the model in the model registry and the model version number:

from determined.experimental import client

model_name = ...
version_num = ...

# checkpoint_dir is the path that contains the downloaded checkpoint
# By default checkpoint_dir will be checkpoints/<checkpoint_uuid> in the current working directory.
# You can download to a custom path by passing in path=<path> into the download function.
checkpoint_dir = client.get_model(model_name).get_version(version_num).checkpoint.download()
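Once the checkpoint is downloaded, the weights still have to be loaded into a model instance for inference. The exact file name inside checkpoint_dir and the model architecture depend on how the experiment saved its weights, so inspect the directory first; the sketch below is a hedged, self-contained illustration that simulates a downloaded directory containing a plain state_dict file (TinyNet and state_dict.pth are assumptions for the example, not Determined conventions):

```python
import pathlib
import tempfile

import torch
import torch.nn as nn


# Hypothetical model class for illustration; in practice, instantiate the
# same architecture the experiment trained, since a state_dict alone does
# not reconstruct the model for you.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)


# Simulate a downloaded checkpoint directory holding a state_dict file.
# With a real Determined checkpoint, list checkpoint_dir to find the
# actual weight file name before loading.
checkpoint_dir = pathlib.Path(tempfile.mkdtemp())
torch.save(TinyNet().state_dict(), checkpoint_dir / "state_dict.pth")

# Load the weights and switch the model to inference mode.
model = TinyNet()
model.load_state_dict(torch.load(checkpoint_dir / "state_dict.pth"))
model.eval()

# Run a single real-time inference call without tracking gradients.
with torch.no_grad():
    output = model(torch.randn(1, 4))
print(output.shape)  # torch.Size([1, 2])
```

model.eval() matters here because layers such as dropout and batch norm behave differently at inference time than during training.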

@ioga ioga closed this as completed Apr 22, 2024