
OOM looks like a preemption #562

Open
Tracked by #311
eu9ene opened this issue May 3, 2024 · 2 comments
Labels
taskcluster Issues related to the Taskcluster implementation of the training pipeline

Comments

eu9ene (Collaborator) commented May 3, 2024

Sometimes we run into an OOM, and it's hard to tell from the logs that this is the case; it looks like a preemption of a spot instance instead. We should be able to easily identify that a task was terminated because the machine ran out of memory.
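One way a worker could distinguish an OOM kill from a preemption is to check the kernel log after an unexplained termination: on Linux, the OOM killer writes a recognizable message to `dmesg`. Below is a minimal sketch of that idea; the log-line format is the common kernel pattern and the function name is illustrative, not anything the worker actually implements today.

```python
import re

# Kernel OOM-killer lines typically look like (format can vary by kernel
# version; this pattern is an assumption, not taken from this issue):
#   Out of memory: Killed process 4242 (marian) total-vm:158000000kB, ...
OOM_PATTERN = re.compile(
    r"Out of memory: Killed process (?P<pid>\d+) \((?P<name>[^)]+)\)"
)

def find_oom_kills(dmesg_output: str):
    """Return (pid, process name) pairs for OOM kills found in dmesg output."""
    return [
        (int(m.group("pid")), m.group("name"))
        for m in OOM_PATTERN.finditer(dmesg_output)
    ]

# Hypothetical usage: after a task dies without an obvious cause, scan the
# kernel log; a non-empty result suggests OOM rather than preemption.
sample = "[12345.678] Out of memory: Killed process 4242 (marian) total-vm:158000000kB"
print(find_oom_kills(sample))  # [(4242, 'marian')]
```

If the scan comes back empty, the cloud provider's metadata endpoint (e.g. a preemption/termination notice) would be the next place to check before labeling the task as preempted.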

@eu9ene eu9ene added the taskcluster Issues related to the Taskcluster implementation of the training pipeline label May 3, 2024
@eu9ene eu9ene changed the title OOM looks like preemption OOM looks like a preemption May 3, 2024
eu9ene (Collaborator, Author) commented May 8, 2024

Landing #561 and setting up dashboards for the CPU machines can help with that.

bhearsum (Collaborator) commented Jul 9, 2024

I don't think there's anything we can do to make this better in this repo or in taskgraph. This is a worker issue, which has been filed as taskcluster/taskcluster#6894.
