[feature] control "submit"-level resource distribution #262
Hi @mgt16-LANL, that is correct: typically we define the resources once for the executor, and then each function submitted to a given executor uses the same pre-defined set of resources. The background for this is that the executor gets a set of reserved resources. In principle it would be technically possible to assign resources at the submit level, but that is currently not implemented.
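To illustrate the pattern being described, here is a minimal sketch using the standard-library `ProcessPoolExecutor` as a stand-in: resources are fixed when the executor is created, and every submitted function shares that same reservation (the pympipool constructor keywords differ, but the "fixed at creation time" idea is the same):

```python
from concurrent.futures import ProcessPoolExecutor

# Resources are reserved once, here, at executor creation time.
# Every subsequent submit() call reuses this same reservation.
exe = ProcessPoolExecutor(max_workers=2)

futures = [exe.submit(pow, 2, n) for n in range(4)]
print([f.result() for f in futures])  # → [1, 2, 4, 8]
exe.shutdown()
```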
I guess I'm pretty interested in having the resource allocation be dynamically available, especially from the flux/slurm backends, for more dynamic/load-balancing workflows! I'll tag this as a feature request. Is there a reason we couldn't just add these to *args or **kwargs in the BaseExecutor class submit() function, to be handled differently by the FluxPythonInterface bootup() function, for example?
There are two reasons:

- it could lead to confusion with the function arguments, for example if the function has an argument `cores` and pympipool also uses an argument `cores`. My suggestion would be to use something like "runtime cores" or a different nomenclature for the submit-time resource assignment.
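The name clash can be sketched concretely. In the hypothetical example below, `count_primes` happens to take its own `cores` argument, so a submit-time `cores=` keyword for resource assignment would be ambiguous (the function and executor names here are illustrative, not pympipool's API):

```python
def count_primes(cores):
    # Here `cores` is an ordinary function argument, not a resource request:
    # it is simply the upper bound of the range we scan for primes.
    return sum(all(n % d for d in range(2, n)) for n in range(2, cores))

print(count_primes(10))  # → 4  (primes 2, 3, 5, 7)

# A hypothetical executor.submit(count_primes, cores=4) would be ambiguous:
# does cores=4 go to the user function or to the scheduler? A distinct
# keyword such as resource_cores=4 avoids the collision.
```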
I'm not sure I understand this one. From the https://github.com/pyiron/pympipool/blob/main/pympipool/flux/executor.py code:

```python
def bootup(self, command_lst):
    if self._oversubscribe:
        raise ValueError(
            "Oversubscribing is currently not supported for the Flux adapter."
        )
    if self._executor is None:
        self._executor = flux.job.FluxExecutor()
    jobspec = flux.job.JobspecV1.from_command(
        command=command_lst,
        num_tasks=self._cores,
        cores_per_task=self._threads_per_core,
        gpus_per_task=self._gpus_per_core,
        num_nodes=None,
        exclusive=False,
    )
    jobspec.environment = dict(os.environ)
    if self._cwd is not None:
        jobspec.cwd = self._cwd
    self._future = self._executor.submit(jobspec)
```

It would seem like, under a single Python process, you could expose the underlying Jobspec to the user at submission time without requiring an additional Python process?
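The idea of accepting per-call resource keywords in `submit()` can be sketched without a Flux environment. The class and keyword below (`ResourceAwareExecutor`, `resource_cores`) are hypothetical, chosen only to show how submit-time resources could be kept separate from the user function's own kwargs:

```python
from concurrent.futures import Future
from typing import Any, Callable


class ResourceAwareExecutor:
    """Hypothetical sketch: submit() accepts a per-call resource keyword.

    A distinct prefix (resource_*) keeps the scheduler arguments from
    colliding with the submitted function's own keyword arguments. In a
    real Flux backend this is where a JobspecV1 would be built with
    num_tasks=resource_cores before handing off to FluxExecutor.
    """

    def submit(self, fn: Callable, *args: Any,
               resource_cores: int = 1, **kwargs: Any) -> Future:
        fut: Future = Future()
        # Executed inline for the sketch; a real backend would dispatch
        # fn to a reservation of `resource_cores` cores instead.
        fut.set_result((fn(*args, **kwargs), resource_cores))
        return fut


exe = ResourceAwareExecutor()
fut = exe.submit(pow, 2, 5, resource_cores=4)
print(fut.result())  # → (32, 4)
```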
About the second part, the …
Ah! That makes sense. Is the preferred method for getting this type of functionality with pympipool to define a set of executors to use as "queues" with more/fewer resources?
Yes, at least that is how I was using it so far. This allows …
@mgt16-LANL I have an initial draft for this interface available in #293; it would be very interesting to see if this also solves your needs.
Hi @jan-janssen, is there a way to control the distribution of cores/threads/gpus at executor.submit() time (e.g. per "job")? I was looking over the more recent versions, and it seems the interface has moved toward initializing the executors with this information, but I very likely could have missed something.