Hi, could we please have a way for ts to schedule multiple jobs on a single GPU automatically?
I understand you didn't like the "free memory" heuristic because of job init time.
May I suggest a "set_max_jobs_per_gpu" kind of flag? If I know that my GPU can fit 3 of my jobs, then I can just set this and things will work, and I'll make better use of my GPUs.
Actually, I just realized I can approximate this by, for example, splitting my queue into three queues, with three ts instances running on three different sockets.
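For reference, a minimal sketch of that workaround, assuming task-spooler's usual behaviour where the TS_SOCKET environment variable selects which server/queue a command talks to and `-S` sets the number of simultaneous jobs per queue. The socket paths and the `python train.py` commands below are placeholders, not anything from ts itself:

```sh
# Three independent ts queues, one per "slot" on the same GPU.
# Each queue runs at most one job at a time, so up to 3 jobs
# share the GPU concurrently.
for i in 1 2 3; do
    TS_SOCKET=/tmp/ts.gpu0.$i ts -S 1   # starts the server for this socket if needed
done

# Enqueue work round-robin across the three queues (placeholder commands).
TS_SOCKET=/tmp/ts.gpu0.1 ts python train.py --run 1
TS_SOCKET=/tmp/ts.gpu0.2 ts python train.py --run 2
TS_SOCKET=/tmp/ts.gpu0.3 ts python train.py --run 3
```

The downside of this approach compared to a built-in per-GPU limit is that the split is static: if one queue drains while another is backed up, the idle slot isn't reused.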