Current Behavior:
OS Server "projects" are limited to processing one project at a time on a given resource. OS Server currently waits until the last datapoint in the current project has finished computing before the next analysis begins on the next project in the queue.
Desired Behavior:
We would like OS Server to be able (within the constraints of a defined cluster) to run concurrent OSPs in parallel, not just parallelize datapoints within a single OSP. This would also increase the utilization of cluster resources (within resource constraints) when projects are submitted.
Rationale:
This feature will both improve the cost effectiveness of HPC resources and reduce the run times associated with project queuing. It should lead to improved OS Server adoption for application use cases where multiple OSPs are submitted to a single instance of OS Server nearly simultaneously (regardless of the number of datapoints in each project).
Specifically, this feature will make OS Server a more logical deployment choice for OS-based solutions where minimizing "roundtrip" times [for submitting inputs and generating results/analysis] is needed. This is particularly important when displacing current residential savings calculation methodologies, such as spreadsheet-based tools (where speed is valued as much as, if not more than, accuracy), with OS/E+ model-based approaches.
The details of the change are in the PR, but after looking into the queuing design of server, this functionality was already supported but not enabled: the web-background service had one Resque worker by default, which resulted in one project/analysis being run at a time. Simply adding additional web-background / Resque worker jobs increases the number of projects that are processed concurrently. Changes should happen on the k8s/Helm side rather than in server code.
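As a rough illustration of the k8s/Helm-side change described above, the sketch below scales out the web-background (Resque worker) deployment so that multiple analyses can be dequeued at once. The chart structure and key names (`webBackground.replicaCount`) are assumptions for illustration only, not the actual values exposed by the OpenStudio Server Helm chart.

```yaml
# values-override.yaml -- hypothetical Helm values override (key names are assumptions)
# Each web-background replica runs its own Resque worker, so N replicas allow
# up to N projects/analyses to be dequeued and processed concurrently.
webBackground:
  replicaCount: 4   # default of 1 means only one project runs at a time
```

An equivalent ad hoc change could be made with `kubectl scale deployment <web-background-deployment> --replicas=4` (deployment name is a placeholder). Either way, the worker count should stay within the cluster's resource constraints so that datapoint-level parallelism within each OSP is not starved.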